# Conversion of microMolar DOC to ppm

1. Jun 29, 2009

### Bacat

In a lot of the scientific literature, dissolved organic carbon (DOC) is given in microMolar units of carbon. Modern instruments give DOC in parts per million (ppm). I'm trying to figure out how to convert uM to ppm. Since uM is dependent on the molecular weight, I already have a problem. DOC is actually a distribution of dissolved compounds, so there is no single molecular weight. I do know that approximately 65% of the DOC in seawater is <1000 Daltons, but what is an appropriate value to use for molecular weight in this case?

Values for DOC in seawater vary, but blanks are often reported around 25-30 uM C. Blanks are the measurement of DOC on a sample that has no dissolved carbon in it, so they represent the contribution from the measurement method. I'm trying to convert this value into ppm so I can compare it to a modern system blank which is reported in ppm.

Once I decide on a molecular weight to use, what is the actual calculation to convert from uM to ppm? Is this correct?

$$ppm = MW \times \mu M \;C$$

where ppm is parts per million (mass), uM C is microMolar carbon, and MW is molecular weight in grams/mole. Any help is much appreciated.

2. Jun 30, 2009

### chemisttree

In DOC measurements, the CO2 resulting from either an oxidation or combustion is measured and reported. What molecular weight do you think is important in this case? Generally, ppm and $$\mu mol$$ are equivalent. Remember that DOC measures CO2 obtained via either combustion or oxidation. Don't worry yourself about the MW of the particular organic species responsible for that CO2. Knowing this, can you hypothesize about the reason for a blank seawater sample having a 25-30 $$\mu M$$ value?

3. Jun 30, 2009

### Bacat

I believe uM only equals ppm if the molecular weight equals the density of the solution (water, for example, would be about 1000 kg/m^3, or 1025 kg/m^3 for seawater). Am I missing something?

Measuring DOC: In the literature, CO2 evolves from combustion (high temperature with a platinum catalyst) or oxidation (via persulfate at 100C). But this is really non-purgeable organic carbon (NPOC). The purgeable organic carbon is purged before the oxidation step, via acidification, and measured. The total organic carbon (TOC) is calculated from the total carbon (TC) by subtracting the inorganic carbon (IC).

$$TOC = TC - IC$$

If the sample is filtered to a known pore size, then the TOC is called DOC (dissolved organic carbon). This size is somewhat arbitrary, but a pore size of 0.45 um is commonly used.

Blank System Response: The 25-30 uM C in the system blank comes from a variety of sources. Most DI water and distilled water has a small amount of carbon in it (this is attributed to volatiles from the lab atmosphere dissolving into the water), on the order of about 10-15 uM C, or even higher (according to Peltzer and Brewer, 1993, Marine Chemistry). The instrument itself also contributes to the system blank, though these mechanisms are less well understood. It is thought that the type of catalyst in the high temperature combustion apparatus contributes to the system blank, but issues with sample injection volumes, heating of the sample within the injection syringe, and so forth all contribute somewhat (ibid).

I'm afraid I'm still in somewhat of a quandary, though I thank you for your response.

4. Jun 30, 2009

### Bacat

I think I understand what I was missing now. When I have a micromole of carbon, I am already defining a molecular weight of 12 grams/mole.
$$\mu M\;C = \frac{10^{-6}\;moles\;C}{1\;L\;H_{2}O} = \frac{10^{-6}\;moles\times \;12g/mol}{10^{6}g\;H_{2}O}=\frac{12g \times 10^{-6}}{10^{6}g\;H_2O}=ppm$$ Therefore, $$ppm=\mu M \times MW \times 10^{-6}\;$$ if density of solution is 1000 kg/m^3 (water). Is this correct? But then I only get a value of .0003 for ppm from 25uM C...which seems much too low to me. 5. Jul 1, 2009 ### chemisttree Arrrgh! I should have said that $$\mu mol$$ per mol is ppm! Alternatively, you could use milligrams per liter as ppm. 6. Sep 2, 2010 ************ Bacat, I think you have everything right except, I got 1 L = 10^3 g not 10^6 g
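Following the correction in the last reply (1 L of water is $10^3$ g, not $10^6$ g), 25 uM C works out to roughly 0.3 ppm by mass. A minimal sketch of the arithmetic in Python, not part of the thread; the 12.011 g/mol atomic mass and the density argument are assumptions added for illustration:

```python
def umol_c_to_ppm(umol_per_L, molar_mass_g=12.011, density_kg_per_L=1.0):
    """Convert micromolar carbon (umol C / L) to ppm by mass (mg C / kg solution).

    umol/L * g/mol gives ug/L; dividing by 1000 gives mg/L, and dividing by the
    solution density (kg/L) converts mg per liter into mg per kg (= ppm).
    """
    mg_per_L = umol_per_L * molar_mass_g / 1000.0
    return mg_per_L / density_kg_per_L

# The 25-30 uM C blanks discussed in the thread (freshwater density assumed):
for blank in (25, 30):
    print(blank, "uM C ->", round(umol_c_to_ppm(blank), 3), "ppm")
# 25 uM C -> 0.3 ppm, 30 uM C -> 0.36 ppm
```

For seawater, passing `density_kg_per_L=1.025` lowers the result by about 2.5%, which is negligible at the precision of a blank measurement.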
### Meet-in-the-Middle Attack on QARMA Block Cipher

Rui Zong and Xiaoyang Dong

##### Abstract

QARMA is a recently published lightweight tweakable block cipher, which has been used by the ARMv8 architecture to support a software protection feature. In this paper, using the method of MITM, we give the first distinguisher of the QARMA block cipher. It is made up of the \emph{Pseudo-Reflector} construction with two forward rounds and three backward rounds. By adding two rounds on the top and three rounds on the bottom of the distinguisher, together with the idea of the differential enumeration technique and the key-dependent sieve technique, we achieve a 10-round (of 16-round) key-recovery attack with memory complexity of $2^{116}$ 192-bit space, data complexity of $2^{53}$ chosen plaintexts and time complexity of $2^{70.1}$ encryption units. Furthermore, we use the same distinguisher to attack QARMA-128, which also includes 10 (of 24) round functions and the \emph{Pseudo-Reflector} construction. The memory complexity is $2^{232}$ 384-bit space, the data complexity is $2^{105}$ chosen plaintexts and the time complexity is $2^{141.7}$ encryption units. These are the first attacks on QARMA and do not threaten the security of full-round QARMA.

Available format(s)
Category: Secret-key cryptography
Publication info: Preprint. MINOR revision.
Keywords: QARMA, Lightweight Tweakable Block Cipher, Meet-in-the-Middle Attack
Contact author(s): zongrui @ mail sdu edu cn
History
Short URL: https://ia.cr/2016/1160
License: CC BY

BibTeX:

```bibtex
@misc{cryptoeprint:2016/1160,
  author = {Rui Zong and Xiaoyang Dong},
  title = {Meet-in-the-Middle Attack on QARMA Block Cipher},
  howpublished = {Cryptology ePrint Archive, Paper 2016/1160},
  year = {2016},
  note = {\url{https://eprint.iacr.org/2016/1160}},
  url = {https://eprint.iacr.org/2016/1160}
}
```
# Properties of Groebner bases Let us consider the polynomial ring $A:=\mathbb{R}[x_1,x_2,\dots]$ and consider the family consisting of the empty set and of all the subsets of polynomials $G$ such that there exists an ideal of $A$ having $G$ as minimal Groebner basis. Is it true that if $B$ belongs to the above family, then also $B' \subsetneqq B$ does? No, because the subset of a Gröbner basis isn't necessarily a Gröbner basis.1 Consider, for example, the ideal $I = (B')$, where $B' = \left\{x_1^2x_2 - x_1^3, x_2^3\right\}$. Note that $$x_1^5 = -x_1^2 \left(x_1^2 x_2 - x_1^3\right) - x_1\left(x_1^2 x_2 - x_1^3\right) x_2 - \left(x_1^2 x_2 - x_1^3\right)x_2^2 + x_1^2 x_2^3 \in I$$ but clearly $$x_1^5 \not \in \left(x_1^2 x_2, x_2^3\right),$$ meaning that $B'$ is not a Gröbner basis, minimal or otherwise. However, $$B = \left\{x_1^2x_2 - x_1^3, x_2^3, x_1^5\right\}$$ is a minimal Gröbner basis for $I$ (exercise left to the reader), and $B' \subsetneq B$. (I'm assuming $x_1 < x_2$ for my ordering in this example.) • Assume that B is a Groebner basis for some ideal I. Take $B' \subsetneqq B$. Is it a Groebner basis for another ideal I'? This is the sense of my question. I edited my question in order to make clearer the sense and I apologize for the misunderstanding. – TheWanderer May 7 '18 at 16:08 • @TheWanderer, in my example, $B'$ and $B$ generate the same ideal. – PersonX May 7 '18 at 16:11
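A quick computational check of the example above, as a minimal sketch using SymPy's `groebner` routine (any computer algebra system would do; the function and option names here are SymPy's, not the question's). The generators are passed as `x2, x1` so that the lex order matches the answer's convention $x_1 < x_2$:

```python
from sympy import symbols, groebner

x1, x2 = symbols('x1 x2')

# B' generates the ideal I, but is not a Groebner basis for it.
B_prime = [x1**2*x2 - x1**3, x2**3]

# lex order with x2 > x1 (i.e. x1 < x2, as assumed in the answer above).
G = groebner(B_prime, x2, x1, order='lex')
print(G)
# Expected elements (up to ordering): x2**3, x1**2*x2 - x1**3, x1**5.
# The extra element x1**5 confirms that B' alone was not a Groebner basis,
# while B = B' + {x1**5} is.
```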
### Proof of Maximum Flatness at DC

The maximally flat fractional-delay FIR filter is obtained by equating to zero all leading terms in the Taylor (Maclaurin) expansion of the frequency-response error at dc:

$$\left.\frac{d^k}{d\omega^k}\Bigl[H(e^{j\omega}) - e^{-j\omega\Delta}\Bigr]\right|_{\omega=0} = 0, \qquad k = 0, 1, \dots, N.$$

This is a linear system of equations of the form $V h = d$, where $V$ is a Vandermonde matrix. The solution can be written as a ratio of Vandermonde determinants using Cramer's rule [329]. As shown by Cauchy (1812), the determinant of a Vandermonde matrix $V$ built on distinct nodes $x_0, x_1, \dots, x_N$ can be expressed in closed form as

$$\det(V) = \prod_{0 \le i < j \le N} (x_j - x_i).$$

Making this substitution in the solution obtained by Cramer's rule yields that the impulse response of the order-$N$, maximally flat, fractional-delay FIR filter may be written in closed form as

$$h_\Delta(n) = \prod_{\substack{k=0 \\ k \ne n}}^{N} \frac{\Delta - k}{n - k}, \qquad n = 0, 1, \dots, N,$$

which is the formula for Lagrange-interpolation coefficients (Eq.(4.6)) adapted to this problem (in which the abscissae are equally spaced on the integers from 0 to $N$). Further details regarding the theory of Lagrange interpolation can be found (online) in [502, Ch. 3, Pt. 2, pp. 82-84].
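A small numerical illustration of the closed-form result above, a sketch not taken from the original text: it fills in the order-$N$ Lagrange fractional-delay coefficients from the product formula and checks the first two moment conditions, $\sum_n h(n) = 1$ and $\sum_n n\,h(n) = \Delta$, that flatness at dc implies.

```python
import numpy as np

def lagrange_fd_fir(N, delay):
    """Impulse response of the order-N maximally flat (Lagrange) fractional-delay
    FIR filter: h[n] = prod_{k != n} (delay - k) / (n - k), n = 0..N."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        mask = n != k
        h[mask] *= (delay - k) / (n[mask] - k)
    return h

h = lagrange_fd_fir(N=3, delay=1.3)          # a delay of 1.3 samples
print(h)
print(h.sum(), (np.arange(4) * h).sum())     # should print 1.0 and 1.3
```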
## closed-form of an integral relating to the elliptic integral

Post your questions related to Computation of Integrals here. Moderators: galactus, Random Variable, sos440

Fri Dec 09, 2016 7:08 am
Posts: 45
Location: China

I want to get the closed form of $F=F_++F_-$, where $F_{\nu}$ is the integral of the form

$$F_{\nu} = \frac{1}{\pi}\int_0^{\pi}\frac{1}{\sqrt{1+\nu\sqrt{q^2+d^2\cos^2k}}}\;dk$$

where $q\in(-1,1)$, $d\in[0,1)$, and $\nu=\pm1$. It is also assumed that $q^2+d^2<1$, so there is no need to worry about complex values. If $q=0$, the integral above relates to the elliptic integral (https://en.wikipedia.org/wiki/Elliptic_integral).
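The post asks for a closed form; none is derived here, but a direct numerical evaluation gives a target value to test any candidate expression against. A minimal sketch with SciPy (the parameter values are arbitrary examples, not from the post):

```python
import numpy as np
from scipy.integrate import quad

def F_nu(nu, q, d):
    """F_nu = (1/pi) * integral_0^pi dk / sqrt(1 + nu*sqrt(q^2 + d^2*cos^2 k))."""
    integrand = lambda k: 1.0 / np.sqrt(1.0 + nu * np.sqrt(q**2 + d**2 * np.cos(k)**2))
    val, _ = quad(integrand, 0.0, np.pi)
    return val / np.pi

q, d = 0.3, 0.5                    # example values satisfying q^2 + d^2 < 1
F = F_nu(+1, q, d) + F_nu(-1, q, d)
print(F)                           # reference value for any proposed closed form
```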
## Glossary

At the end of each definition in this Glossary there is a number in brackets. This indicates the number of the Study Session where the term is first used in this OpenWASH module.

### P

#### parasitic worms

group of parasites such as tapeworms or nematodes (also known as roundworms); helminths (2)
## Hodge theory and the art of paper folding. (English) Zbl 0961.32026

From the text: Using Hodge theory and $$L^2$$-cohomology we study the singularities and topology of configuration and moduli spaces of polygonal linkages in the 2-sphere. As a consequence we describe the local deformation space of a folded paper cone in $$\mathbb{R}^3$$. This is a part of a series of our papers [M. Kapovich and J. J. Millson, J. Diff. Geom. 44, No. 3, 479-513 (1996; Zbl 0889.58017), Topology 35, No. 4, 1085-1106 (1996; Zbl 0855.32013), Compos. Math. 103, No. 3, 287-317 (1996; Zbl 0872.53035) and C. R. Acad. Sci., Paris, Sér. I, Math. 325, No. 8, 871-876 (1997; Zbl 0948.20023)] where we study interrelations between members of:

– configuration spaces of geometric objects
– algebraic varieties
– representation varieties of groups.

### MSC:

32S30 Deformations of complex singularities; vanishing cycles

Full Text:

### References:

[1] Boden, H. and Hu, Y., Variations of moduli of parabolic bundles, Math. Ann., 301 (1995), 539-559. · Zbl 0821.14007
[2] Boden, H. and Yokogawa, K., Moduli spaces of parabolic Higgs bundles and parabolic K(D) pairs over smooth curves: I, Preprint. · Zbl 0883.14012
[3] Deligne, P., Griffiths, P., Morgan, J. and Sullivan, D., Rational homotopy type of compact Kähler manifolds, Invent. Math., 29 (1975), 245-274. · Zbl 0312.55011
[4] Esnault, H. and Viehweg, E., Logarithmic de Rham complexes and vanishing theorems, Invent. Math., 86 (1986), 161-194. · Zbl 0603.32006
[5] Fintushel, R. and Stern, R., Instanton homology of Seifert fibered homology three spheres, Proc. of London Math. Soc., (3) 61 (1990), 109-137. · Zbl 0705.57009
[6] Galitzer, A., Ph.D. Thesis, University of Maryland, 1996.
[7] Godement, R., Théorie des faisceaux, Hermann, 1973.
[8] Goldman, W. and Millson, J. J., The deformation theory of representations of fundamental groups of compact Kähler manifolds, IHES Publ. Math., 67 (1988), 43-96. · Zbl 0678.53059
[9] Goldman, W. and Millson, J. J., The homotopy invariance of the Kuranishi space, Illinois J. Math., 34 (1990), 337-367. · Zbl 0707.32004
[10] Kapovich, M. and Millson, J. J., On the moduli space of polygons in the Euclidean plane, J. Diff. Geom., 42 (1995), N 1, 133-164. · Zbl 0847.51026
[11] Kapovich, M. and Millson, J. J., The symplectic geometry of polygons in Euclidean space, J. Diff. Geom., 44 (1996), 479-513. · Zbl 0889.58017
[12] Kapovich, M. and Millson, J. J., On the deformation theory of representations of fundamental groups of hyperbolic 3-manifolds, Topology, 35 (1996), N4, 1085-1106. · Zbl 0855.32013
[13] Kapovich, M. and Millson, J. J., The relative deformation theory of representations and flat connections and deformations of linkages in constant curvature spaces, Compositio Math., 103 (1996), 287-317. · Zbl 0872.53035
[14] Kapovich, M. and Millson, J. J., On representation varieties of Artin groups, projective arrangements and the fundamental groups of smooth complex algebraic varieties, Preprint. · Zbl 0982.20023
[15] Kirk, P. and Klassen, E., Representation spaces of Seifert fibered homology spheres, Topology, 30 (1991), 77-95. · Zbl 0721.57007
[16] Mehta, V. and Seshadri, C., Moduli of vector bundles on curves with parabolic structures, Math. Ann., 248 (1980), 205-239. · Zbl 0454.14006
[17] Millson, J. J., Rational homotopy theory and deformation problems from algebraic geometry, Proceedings of ICM 1990, I, 549-558. · Zbl 0761.32011
[18] Simpson, C., Harmonic bundles on noncompact curves, J. AMS, 3 (1990), 713-770. · Zbl 0713.58012
[19] Zucker, S., Hodge theory with degenerating coefficients: L2 cohomology in the Poincaré metric, Ann. of Math., 109 (1979), 415-476. · Zbl 0446.14002
# In a physician practice, should nurse practitioners have insurance billing done under their names or should the billing be submitted under the physician's name? Provide supportive information.
## Sparse Multivariate Rational Function Model Discovery

Series: Algebra Seminar
Friday, March 17, 2017 - 11:05am
1 hour (actually 50 minutes)
Location: Skiles 006
North Carolina State University
Organizer:

Error-correcting decoding is generalized to multivariate sparse polynomial and rational function interpolation from evaluations that can be numerically inaccurate and where several evaluations can have severe errors ("outliers"). Our multivariate polynomial and rational function interpolation algorithm combines Zippel's symbolic sparse polynomial interpolation technique [Ph.D. Thesis MIT 1979] with the numeric algorithm by Kaltofen, Yang, and Zhi [Proc. SNC 2007], and removes outliers ("cleans up data") by techniques from the Welch/Berlekamp decoder for Reed-Solomon codes. Our algorithms can build a sparse function model from a number of evaluations that is linear in the sparsity of the model, assuming that there are a constant number of outliers and that the function probes can be randomly chosen.
# What is the probability that a sequence of events completes within a given time interval?

If an event has probability p of occurring in some time interval, then the probability q that the event does not occur is: $$q=1-p$$

The probability of the event not occurring by time t will be: $$P(q_1 \cap q_2 \cap \dots \cap q_t) = q_1 q_2 \dots q_t = q^t$$

So the probability it did occur by time t is one minus this value ($q^t$), which is equal to the cumulative sum of p times the probability it had not occurred up to that point ($q^{i-1}$): $$p_t = 1 - q^t = \sum\limits_{i=1}^t p q^{i-1}$$

If we are concerned with n independent events occurring with probabilities $p_1, p_2, \dots, p_n$ by time t, then: $$P(p_{1t} \cap p_{2t} \cap \dots \cap p_{nt}) = p_{1t} p_{2t} \dots p_{nt}$$ If $p_1 = p_2 = \dots = p_n$ then the above will be simply $p_t^n$. So the probability that all n events have occurred by time t will be: $$P(t_{Allevents}\leq t)=(1-q^t)^n=\left(\sum\limits_{i=1}^t p q^{i-1}\right)^n$$

If there is only one sequence of these events that results in the outcome of interest (e.g. $t_1<t_2<\dots<t_n$, where $t_i$ refers to time of occurrence), the probability it is the observed sequence will be one over the total number of permutations ($1/n!$). So: $$P(t_{Sequence}\leq t)=\frac{(1-q^t)^n}{n!}=\frac{1}{n!}\left(\sum\limits_{i=1}^t p q^{i-1}\right)^n$$

That gives us the CDF. To get the PDF we take the first derivative, which is: $$P(t < t_{Sequence}\leq t+1)=\frac{-n q^t \ln(q)(1-q^t)^{n-1}}{n!}$$

Given the above assumptions, we would expect the probability that the sequence of events completed at any given time interval to follow the PDF, shown in the lower row of plots:

```r
t=1:100; p=.025; q=1-p
par(mfrow=c(2,4))
for(n in c(1,2,4,6)){
  plot(t, (cumsum(p*q^(t-1))^n)/factorial(n), xlab="Time",
       ylab="P(t.Seq <= t)", main=paste(n, "Events"))
  lines(t, ((1-q^(t))^n)/factorial(n))
}
for(n in c(1,2,4,6)){
  plot(t, (-n*(q^t)*log(q)*(1-q^t)^(n-1))/factorial(n), log="xy", xlab="Time",
       ylab="P(t <= t.Seq <= t+1)", main=paste(n, "Events"))
  lines(t[-1]-.5, diff(((1-q^(t))^n)/factorial(n)), col="Red")
}
```

I can find no flaw with the above reasoning. So my question is what did Armitage and Doll calculate here: I have an epidemiology question with logs?

Edit: From MichaelM's comment I see that the usual way to deal with this distribution is to calculate the probability an event occurs at time t, while above I have calculated the probability it occurs within an interval of time. Is there something wrong with doing this? It seems that when modeling incidence of cancer as in the linked question, we are dealing with intervals of time.

Edit 2: I realized that if we don't like taking the derivative, the discrete alternative is already shown as the red line on the plot. The PMF is the lag-1 difference of the CDF: $$P(t < t_{Sequence}\leq t+1)=\frac{(1-q^{t+1})^n}{n!}-\frac{(1-q^t)^n}{n!}$$ This is even more straightforward. Where is my mistake?

• The "density" of a discrete distribution is called "probability mass function" or PMF. You cannot find it by taking derivatives, it just equals the value of the summand in the sum. Your distribution is known under the name "Geometric distribution". Apr 11 '15 at 11:03
• @MichaelM Thank you. I assumed this was some well known distribution, which makes this so much more interesting. Can you provide feedback on whether this distribution applies to the scenario described here and by Armitage and Doll in that other question? Apr 11 '15 at 16:04
• @MichaelM Also, can you tell me the interpretation for what I have calculated by taking the derivative?
Apr 11 '15 at 16:11 • I see that the non-monotonic age-specific incidence is an additional merit of the method above, it has been called the "Cancer Anomaly". Someone should let Armitage and Doll know the theory they describe in the text may fit the data much better than they thought. This is quite impressive for such a simple idea. Apr 13 '15 at 20:47 The issue isn't a flaw in reasoning, but the parameter values chosen for the event probabilities. It's simplest to start with your 1-event model, with a Bernoulli trial at each time point and thus a geometric distribution of waiting times until the event. With the value of 0.025 per time unit, the incidence rate would be highest in the first time unit and drop off with increasing time, as you point out. When multiple events are required, your model then gives non-monotonic incidence-age curves, which as you note in a comment can be observed for some cancers. So what's the time unit? The best general estimates of mutation rates in humans are about $10^{-7}$ per gene per cell division. This number itself represents a combination of a mutation, a failure of the cell to correct the mutation, and the survival of the cell despite the mutation. Furthermore, not all mutations, even in cancer-associated genes, promote cancer. There is some suggestion that the normal tissue stem cells in which mutations might be most likely to lead to cancer have even lower mutation rates. (Then again, a mutation in certain genes is likely to increase the probability of future mutations in the same cell.) Frank and Nowak examine how these and other factors combine to lead to accumulation of cancer-related mutations in the context of human tissue biology, including cell-division times, tissue architecture, and changes in effective mutation rates during tumor development. So a time unit required to obtain your assumed 0.025 probability of mutation per time unit would have to be very long. When working in time units of years most investigators assume the limiting case of a Poisson process, with a low constant occurrence rate (for a particular cell type in a particular environment with a particular history) so that the probability of occurrence is proportional to elapsed time. That's what Armitage and Doll did, although they didn't use that terminology. Also, be very careful in reading and thinking about mutation rates. Some mutation rates are specified as per base-pair of DNA, some as per genome. That's a factor of several billion difference. Time scales can be per cell division, per year, per generation, per lifetime. Some so-called mutation "rates" in cancer genomic studies are simply the numbers of accumulated mutations per megabase of DNA in a tumor at the time of analysis. Don't jump to conclusions until you know what type of "mutation rate" is being described. The moral here is that it's quite possible to get interesting results from a model of carcinogenesis, but you have to make sure that your choices of parameter values are realistic. That's often the major effort in modeling. That's how, for example faced with a non-monotonic incidence versus age curve for some type of cancer, you might distinguish your model (high-probability per unit time) from the model (some humans susceptible to cancer, others not) described in the "Cancer Anomaly" article you cite, or models involving competing risks (like heart attacks) or poor diagnosis in the elderly leading to underestimated cancer risks at older ages. • Thanks again for looking at this. 
The p=.025 used in the plot was just for example, but the "event rate" per person should be near that. If the rate is much lower (~.002) we would rarely observe cancer, if higher than ~.2 it would be much more common in the young. From the above it is clear that peak incidence should occur at log(1/n,base=q). The incidence data is reported in diagnoses per year, at the level of person. Attributing this to point mutations per cell would require further assumptions. Also, if A & D's theory is valid, it is a bad idea to use 85+ as in the linked question. Apr 16 '15 at 0:21
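One step in the question's derivation that is easy to sanity-check by simulation is the claim $P(t_{Allevents}\leq t)=(1-q^t)^n$ for $n$ independent events, each a per-interval Bernoulli trial with probability $p$ (i.e. geometric waiting times). A minimal sketch of such a check, in Python rather than the question's R, with example parameter values that are not from either post:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, t, trials = 0.025, 4, 60, 200_000
q = 1 - p

# Each event's waiting time is geometric: number of intervals until first success.
waits = rng.geometric(p, size=(trials, n))

# Empirical probability that all n events have occurred by interval t,
# versus the analytic expression (1 - q^t)^n.
empirical = (waits.max(axis=1) <= t).mean()
analytic = (1 - q**t) ** n
print(empirical, analytic)   # the two should agree to roughly three decimal places
```

The sequencing factor $1/n!$ and the continuous-derivative step are the parts the discussion above actually disputes, so they are deliberately left out of this check.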
# Mesh and Fields

## Mesh

The mesh consists of NX cells in X (hence azimuth in cylindrical and spherical geometries), NY+2*NGHY cells in Y (radius in cylindrical and spherical geometries) and NZ+2*NGHZ cells in Z (colatitude in spherical geometry). Here NGHY and NGHZ stand for the number of ghost or buffer zones next to the active mesh. If a direction is not included in the setup (for instance Z in the 2D polar fargo setup), the corresponding value of NGHY/Z is set to 0. The variables NX, NY and NZ are defined in the parameter file (they default to 1, so there is no need to define, for instance, NZ in a 2D setup such as fargo, or NX in the 2D setup otvortex, which corresponds to the Orszag-Tang vortex problem in Y and Z). In practice, this mesh is split among processors, and locally (within the scope of a given process) the submesh considered has size Nx, Ny+2*NGHY and Nz+2*NGHZ.

The information about cell coordinates is stored in 1D arrays

• [xyz]min(index)
• [xyz]med(index)

where min refers to the inner edge of a zone (in x, y or z) whereas med refers to the center of a zone (in x, y or z). This notation should look familiar to former FARGO users.

Warning: [xyz][min/max](index) are not vectors, they are macrocommands. They must be invoked with (), not with [].

NGHY and NGHZ are preprocessor variables, defined in the file src/define.h. Because we have a multi-geometry code, another set of secondary geometrical variables is defined (surfaces, volumes). See the end of this section for details.

## Fields

Fields are structures, and they can be seen as cubes of cells, of size equal to the mesh size. The location at which a given variable is defined is [xyz]med if the field is [xyz]-centered, or [xyz]min if the field is [xyz]-staggered. You can find a comprehensive list of the fields in src/global.h. The place where the fields are created is in CreateFields(), inside src/LowTasks.c.

Internally, all fields are cubes written as 1D arrays. So we need indices to work with the 3D data. We have a set of helpers defined in src/define.h. They are:

• l : The index of the current zone.
• lxp, lxm: lxplus/lxminus, the right/left x-neighbor
• lyp, lym: lyplus/lyminus, the right/left y-neighbor
• lzp, lzm: lzplus/lzminus, the right/left z-neighbor

These helpers must be used with the proper loop indices:

```c
int i,j,k;
for (k=z_lower_bound; k<z_upper_bound; k++) {
  for (j=y_lower_bound; j<y_upper_bound; j++) {
    for (i=x_lower_bound; i<x_upper_bound; i++) {
      field[l] = 3.0;
      field2[l] = (field1[lxp]-field1[l])/(xmed(ixp)-xmed(i)); //obviously some gradient calculation...
    }
  }
}
```

where [kji] always means [zyx]-direction.

Warning: Do not change the order of the indices! The definition of l, lxp, lxm, etc. assumes the following correspondence: i->x, j->y, k->z

These helpers are extremely useful. No explicit algebra has to be performed on the indices within a loop (but never use or define a variable called l or lxp!...). Besides, the definition of l is also correct within GPU kernels (for which the index algebra is slightly different owing to memory alignment considerations), and this is totally transparent to the user, who should never have to worry about this.

In practice, a loop is similar to (isothermal equation of state):

```c
int i,j,k;
for (k=0; k<Nz+2*NGHZ; k++) {
  for (j=0; j<Ny+2*NGHY; j++) {
    for (i=0; i<Nx; i++) {
      pres[l] = dens[l]*cs[l]*cs[l];
    }
  }
}
```

Note: Note that the lines of code above do not evaluate, nor define, l, which is used straight out of the box, since it is a preprocessor macrocommand.
## Working with fields

A field structure is defined as follows (in src/structs.h):

```c
struct field {
  char *name;
  real *field_cpu;
  real *field_gpu;
};
```

where we have stripped the definition of all extra lines not relevant at this stage. The name is a string that is used to determine the name of output files. field_cpu is a pointer to a double or float 1D array which has been duly allocated on the RAM prior to any invocation. Similarly, field_gpu is a pointer to a double or float 1D array which has been duly allocated on the Video RAM prior to any invocation. The user should never have to invoke this field directly. Rather, C files will always make use of field_cpu, which will be automatically translated to field_gpu as needed during the C to CUDA conversion.

Accessing a field value is generally done as follows:

```c
struct Field *Density;   // Definition at the beginning of a function
real *density;           // real is either double or float.
density = Density->field_cpu;
... later on in a loop: ...
density[l] = ....;
```

Note: Note that we define an "array of reals" straight away and subsequently only refer to it to manipulate cell values. In order to avoid confusion, it is a good idea to have an upper case for the initial of Fields*, and lower case for the corresponding real arrays.

## Fields on the gpu

Similar techniques are used on the GPU, but we have made it totally transparent to the user, so unless you want to program your CUDA kernels directly, you should never have to worry about this.

## Useful variables

For the handling of the mesh, a set of useful variables and macrocommands has been defined. An extensive list with a description is given below:

Indices:

• l: The index of the current cell. It is a function of (i,j,k, pitch & stride).
• lxp: The index of the "right" neighbor in x of the current cell. It is a function of l.
• lxm: The index of the "left" neighbor in x of the current cell. It is a function of l.
• lyp: The index of the "right" neighbor in y of the current cell. It is a function of l.
• lym: The index of the "left" neighbor in y of the current cell. It is a function of l.
• lzp: The index of the "right" neighbor in z of the current cell. It is a function of l.
• lzm: The index of the "left" neighbor in z of the current cell. It is a function of l.
• l2D: The current index in a 2D field (eg: vmean). It is a function of (j,k).
• l2D_int: The current index in a 2D integer field (eg: a field of shifts). It is a function of (j,k).
• ixm: i-index of the "left" neighbor in x of the current cell, taking periodicity into account.
• ixp: i-index of the "right" neighbor in x of the current cell, taking periodicity into account.

Coordinates:

• XC: center of the current cell in X. It is a function of the indices; must be used inside a loop.
• YC: center of the current cell in Y. It is a function of the indices; must be used inside a loop.
• ZC: center of the current cell in Z. It is a function of the indices; must be used inside a loop.
• xmin(i): The lower x-bound of a cell.
• xmed(i): The x-center of a cell, same as XC but can be used outside a loop.
• ymin(j): The lower y-bound of a cell.
• ymed(j): The y-center of a cell, same as YC but can be used outside a loop.
• zmin(k): The lower z-bound of a cell.
• zmed(k): The z-center of a cell, same as ZC but can be used outside a loop.

Length:

• zone_size_x(j,k): Face-to-face distance in the x direction.
• zone_size_y(j,k): Face-to-face distance in the y direction.
• zone_size_z(j,k): Face-to-face distance in the z direction.
• edge_size_x(j,k): The same as zone_size_x, but measured on the lower x-border.
• edge_size_y(j,k): The same as zone_size_y, but measured on the lower y-border.
• edge_size_z(j,k): The same as zone_size_z, but measured on the lower z-border.
• edge_size_x_middlez_lowy(j,k): The same as edge_size_x but measured half a cell above in z.
• edge_size_x_middley_lowz(j,k): The same as edge_size_x but measured half a cell above in y.

Surfaces:

• SurfX(j,k): The lower surface of a cell at constant x.
• SurfY(j,k): The lower surface of a cell at constant y.
• SurfZ(j,k): The lower surface of a cell at constant z.

Volumes:

• Vol(j,k): The volume of the current cell.
• InvVol(j,k): The inverse of the current cell's volume.

You can see examples of how to use these variables in src/. They are widely used in many routines.
# Portal:Mathematics

## The Mathematics Portal

Mathematics is the study of representing and reasoning about abstract objects (such as numbers, points, spaces, sets, structures, and games). Mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, and the social sciences, and calculation remains one of its most familiar everyday uses. Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered. (Full article...)

## Selected image

A Lorenz curve shows the distribution of income in a population by plotting the percentage y of total income that is earned by the bottom x percent of households (or individuals). Developed by economist Max O. Lorenz in 1905 to describe income inequality, the curve is typically plotted with a diagonal line (reflecting a hypothetical "equal" distribution of incomes) for comparison. This leads naturally to a derived quantity called the Gini coefficient, first published in 1912 by Corrado Gini, which is the ratio of the area between the diagonal line and the curve (area A in this graph) to the area under the diagonal line (the sum of A and B); higher Gini coefficients reflect more income inequality. Lorenz's curve is a special kind of cumulative distribution function used to characterize quantities that follow a Pareto distribution, a type of power law. More specifically, it can be used to illustrate the Pareto principle, a rule of thumb stating that roughly 80% of the identified "effects" in a given phenomenon under study will come from 20% of the "causes" (in the first decade of the 20th century Vilfredo Pareto showed that 80% of the land in Italy was owned by 20% of the population). As this so-called "80–20 rule" implies a specific level of inequality (i.e., a specific power law), more or less extreme cases are possible. For example, in the United States in the first half of the 2010s, 95% of the financial wealth was held by the top 20% of wealthiest households (in 2010), the top 1% of individuals held approximately 40% of the wealth (2012), and the top 1% of income earners received approximately 20% of the pre-tax income (2013). Observations such as these have brought income and wealth inequality into popular consciousness and have given rise to various slogans about "the 1%" versus "the 99%". (A small computational illustration of the Gini coefficient follows the selected article below.)

## Selected article

Knot theory is the branch of topology that studies mathematical knots, which are defined as embeddings of a circle S1 in 3-dimensional Euclidean space, R3.
This is basically equivalent to a conventional knotted string with the ends of the string joined together to prevent it from becoming undone. Two mathematical knots are considered equivalent if one can be transformed into the other via continuous deformations (known as ambient isotopies); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself.

Knots can be described in various ways, but the most common method is by planar diagrams (known as knot projections or knot diagrams). Given a method of description, a knot will have many descriptions, e.g., many diagrams, representing it. A fundamental problem in knot theory is determining when two descriptions represent the same knot. One way of distinguishing knots is by using a knot invariant, a "quantity" which remains the same even with different descriptions of a knot.

Research in knot theory began with the creation of knot tables and the systematic tabulation of knots. While tabulation remains an important task, today's researchers have a wide variety of backgrounds and goals. Classical knot theory, as initiated by Max Dehn, J. W. Alexander, and others, is primarily concerned with the knot group and invariants from homology theory such as the Alexander polynomial. The discovery of the Jones polynomial by Vaughan Jones in 1984, and subsequent contributions from Edward Witten, Maxim Kontsevich, and others, revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. A plethora of knot invariants have been invented since then, utilizing sophisticated tools such as quantum groups and Floer homology. (Full article...)
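Returning to the Lorenz-curve entry above: the Gini coefficient it defines, the ratio A/(A+B), is straightforward to estimate from a list of incomes. A minimal sketch (not part of the portal text) that builds the empirical Lorenz curve and integrates it with the trapezoidal rule:

```python
import numpy as np

def lorenz_curve(incomes):
    """Cumulative share of total income held by the poorest fraction of units."""
    x = np.sort(np.asarray(incomes, dtype=float))
    cum = np.cumsum(x)
    return np.insert(cum / cum[-1], 0, 0.0)   # prepend 0 so the curve starts at the origin

def gini(incomes):
    """Gini = A / (A + B) = 1 - 2 * (area under the Lorenz curve)."""
    L = lorenz_curve(incomes)
    n = len(L) - 1
    area_under = ((L[:-1] + L[1:]) / 2.0).sum() / n   # trapezoidal rule, equal-width bins
    return 1.0 - 2.0 * area_under

print(gini([1, 1, 1, 1]))     # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))   # one unit holds everything -> 0.75 (discrete maximum is 1 - 1/n for n = 4)
```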
# Speeding up this fractal-generating code

I used the code below (which is a sample from this gist containing more similar code) in my answer to my own question about Mandelbrot-like sets for functions other than the simple quadratic on Math.SE to generate this image:

```mathematica
cosineEscapeTime = Compile[{{c, _Complex}},
  Block[{z = c, n = 2, escapeRadius = 10 \[Pi], maxIterations = 100},
   While[And[Abs[z] <= escapeRadius, n < maxIterations], z = Cos[z] + c; n++];
   n]]

Block[{center = {0.5527, 0.9435}, radius = 0.1},
 DensityPlot[cosineEscapeTime[x + I y],
  {x, center[[1]] - radius, center[[1]] + radius},
  {y, center[[2]] - radius, center[[2]] + radius},
  PlotPoints -> 250, AspectRatio -> 1, ColorFunction -> "TemperatureMap"]]
```

What could I do to improve the speed/time-efficiency of this code? Is there any reasonable way to parallelize it? (I'm running Mathematica 8 on an 8-core machine.)

edit Thanks all for the help so far. I wanted to post an update with what I'm seeing based on the answers so far and see if I get any further refinements before I accept an answer. Without going to hand-written C code and/or OpenCL/CUDA stuff, the best so far seems to be to use cosineEscapeTime as defined above, but replace the Block[...DensityPlot[]] with:

```mathematica
Block[{center = {0.5527, 0.9435}, radius = 0.1, n = 500},
 Graphics[
  Raster[
   Rescale@ParallelTable[cosineEscapeTime[x + I y],
     {y, center[[2]] - radius, center[[2]] + radius, 2 radius/n},
     {x, center[[1]] - radius, center[[1]] + radius, 2 radius/n}],
   ColorFunction -> "TemperatureMap"],
  ImageSize -> n]]
```

Probably in large part because it parallelizes over my 8 cores, this runs in a little under 1 second versus about 27 seconds for my original code (based on AbsoluteTiming[]).
I use OpenCL here because I got the source (from documentation btw): Needs["OpenCLLink"] src = " __kernel void mandelbrot_kernel(__global mint * set, float zoom, \ float bailout, mint width, mint height) { int xIndex = get_global_id(0); int yIndex = get_global_id(1); int ii; float x0 = zoom*(width/3 - xIndex); float y0 = zoom*(height/2 - yIndex); float tmp, x = 0, y = 0; float c; if (xIndex < width && yIndex < height) { for (ii = 0; (x*x+y*y <= bailout) && (ii < MAX_ITERATIONS); \ ii++) { tmp = x*x - y*y +x0; y = 2*x*y + y0; x = tmp; } c = ii - log(log(sqrt(x*x + y*y)))/log(2.0); if (ii == MAX_ITERATIONS) { set[3*(xIndex + yIndex*width)] = 0; set[3*(xIndex + yIndex*width) + 1] = 0; set[3*(xIndex + yIndex*width) + 2] = 0; } else { set[3*(xIndex + yIndex*width)] = ii*c/4 + 20; set[3*(xIndex + yIndex*width) + 1] = ii*c/4; set[3*(xIndex + yIndex*width) + 2] = ii*c/4 + 5; } } } "; MandelbrotSet = "mandelbrot_kernel", {{_Integer, _, "Output"}, "Float", "Float", _Integer, _Integer}, {16, 16}, "Defines" -> {"MAX_ITERATIONS" -> 100}]; width = 2048; height = 1024; mem = OpenCLMemoryAllocate[Integer, {height, width, 3}]; res = MandelbrotSet[mem, 0.0017, 8.0, width, height, {width, height}]; Image[OpenCLMemoryGet[First[res]], "Byte"] References: Fractals CDF paper Compile to C Demonstrations - Great answer. I'd upvote but I'm out of votes for today. –  Mike Bantegui Jan 18 '12 at 5:41 Thank you, Mike, your answer is great too! This is all compiled from bits of Documentation. I'll post links too. –  Vitaliy Kaurov Jan 18 '12 at 6:09 If any answer is the "right" one (of which all of these are), this would be the one. Beautifully demonstrates usage of every concept mentioned, plus it has references to the documentation. –  Mike Bantegui Jan 19 '12 at 2:11 I played with this a little using AbsoluteTiming[Graphics[Raster[Rescale@ParallelTable[...],ColorFunction -> "TemperatureMap"]]] and it seems like your FixedPoint[] formulation of the escape-time function is slower and produces different results than my original While[] formulation, and adding , CompilationTarget -> "C", RuntimeAttributes -> {Listable}, Parallelization -> True inside the Compile[] of my While[] version doesn't make any significant difference. As you suggested, the Graphics[Raster[Rescale@...]] formulation is much faster than DensityPlot[]. –  Isaac Jan 19 '12 at 3:18 @Isaac at one point I wasted a day or so trying to find the fastest way to obtain the mandelbrot set. The fastest I could do was While[(iters < maxiter) && (Abs@z < 2),iters++;z = z^2 + c] compiled to C, and this was a bit faster than FixedPoint, Nest and other similar approaches. ie, I found the same as you. –  acl Jan 19 '12 at 11:25 Many plots can be speeded up by pre-generating the data set you want and then plotting the resulting list. In any case, it's not coincidence that Table and Plot have similar syntax, only that Plot does additional things like finding out the range to be displayed, the interpolation strength, and so forth. If you're already sure what kind of picture you want to get out, you may want to generate the data set by hand, like mentioned by Mike. In your case, you can add just a few characters to the block part, Block[{center = {0.5527, 0.9435}, radius = 0.1, data}, data = ParallelTable[ cosineEscapeTime[x + I y], ]; ListDensityPlot[ data, AspectRatio -> 1, ColorFunction -> "TemperatureMap" ] ] This speeds up the process by more than an order of magnitude for me, and requires significantly less memory than the automatic plot. 
However, there's no interpolation in tricky corners of the plot here, so if you want a larger image you may want to change the parameters, i.e. decrease step size etc. - I'd like to second this. To explain a bit more: DensityPlot assumed a more or less smooth function (which this isn't), and tries to figure out where it need to sample more points for sufficient detail. Relevant options are PlotPoints and MaxRecursion. Both DensityPlot and ListDensityPlot will interpolate between sampled points, which can again be slow. I'd suggest using an Image instead of ListDensityPlot. –  Szabolcs Jan 18 '12 at 9:37 On version 7 you will need to use DistributeDefinitions: {xc, yc} = {0.5527, 0.9435}; r = 0.1; nx = 350; ny = 350; DistributeDefinitions[nx, ny, xc, yc, r]; pts = ParallelTable[ cosineEscapeTime[x + I y], {x, xc - r, xc + r, (2 r)/nx}, {y, yc - r, yc + r, (2 r)/ny} ]; For faster plotting try using ArrayPlot: ArrayPlot[Reverse@pts, ColorFunction -> "TemperatureMap"] Or better still as Vitaliy recommends: Graphics[Raster[Rescale@pts, ColorFunction -> "TemperatureMap"]] - Welcome to the party :) –  Mike Bantegui Jan 18 '12 at 5:07 With ColorFunction the fastest will be Raster: Graphics[Raster[..., ColorFunction -> "TemperatureMap"]] –  Vitaliy Kaurov Jan 18 '12 at 6:08 This might be an excellent candidate for ParallelTable; MakeFractal[f_, nx_, ny_, {cx_, cy_}, {rx_, ry_}] := Module[{pts}, DistributeDefinitions[nx, ny, cx, cy, rx, ry, f]; pts = ParallelTable[f[x + I y], {x, cx - rx, cx + rx, (2 rx)/nx}, {y, cy - ry, cy + ry, (2 ry)/ny}]; ArrayPlot[Reverse@pts, ColorFunction -> "TemperatureMap"] ] Note that evaluation time is very fast, but plotting isn't. You may wish to alternatively adjust the PlotPoints and MaxRecursion parameters of DensityPlot. MaxRecursion controls how far deep Mathematica goes for each plot point to determine the function value to use. Too high of a value for MaxRecursion can lead to very long evaluation, especially with fractals. Another way to help speed is to use CompilationTarget->"C" and RuntimeOptions->Speed. These sometimes provide a (small) speedup. Here's some timing values (Note your value of PlotPoints->250 is a bit excessive): (* 4.05 seconds on my machine *) AbsoluteTiming[ Block[{center = {0.5527, 0.9435}, radius = 0.1}, DensityPlot[cosineEscapeTime[x + I y], PlotPoints -> 120, MaxRecursion -> 2, AspectRatio -> 1, ColorFunction -> "TemperatureMap"]] ] (* 2.07 seconds on my machine *) AbsoluteTiming[ pts = ParallelTable[ cosineEscapeTime[x + I y], {x, xc - r, xc + r, (2 r)/nx}, {y, yc - r, yc + r, ( 2 r)/ny}]; ListDensityPlot[pts, AspectRatio -> 1, ColorFunction -> "TemperatureMap"] ] And individually for the ParallelTable approach: (* 0.753 seconds on my machine *) AbsoluteTiming[ pts = ParallelTable[ cosineEscapeTime[x + I y], {x, xc - r, xc + r, (2 r)/nx}, {y, yc - r, yc + r, ( 2 r)/ny}]; ] (* 1.397 seconds on my machine *) AbsoluteTiming[ ListDensityPlot[pts, AspectRatio -> 1, ColorFunction -> "TemperatureMap"] ] Using ArrayPlot as Mr.Wizard suggests is much faster: (* 0.233 seconds on my machine *) AbsoluteTiming[ ArrayPlot[Reverse@pts, ColorFunction -> "TemperatureMap" ] ] As you can see, plotting takes up a rather large amount of time. - Am I giving up anything significant in plotting an explicit table of points rather than having Mathematica/DensityPlot figure out for me dynamically which points to plot? –  Isaac Jan 18 '12 at 4:56 @Isaac: Well, you have to play around with the number of points which is a downside. 
In the other version you can just adjust PlotPoints and MaxRecursion and Mathematica "knows" where to evaluate next. Otherwise, you have the same "formatting" options that DensityPlot` has –  Mike Bantegui Jan 18 '12 at 4:58 You guys should try Raster here: Graphics[Raster[..., ColorFunction -> "TemperatureMap"]] With ColorFunction the Raster may be the fastest. –  Vitaliy Kaurov Jan 18 '12 at 6:33
Window registry and system file notes and tips. This is Windows95 and Windows98; I haven't tested any of this on WindowsNT, Win3.x or Win2K. ## Notes #### Leave yourself a note. This writes a cookie that will expire in one month.Scroll down the page for information on viewing cookie contents. #### Many people will tell you to never, ever, touch the registry. I think it's a lot of fun, though. You'll have to decide for yourself. I probably wouldn't be able to help you if something Bad happens, but write and I'll give it a shot if you want. You have to take responsibility for your own actions, though. Punk. All of this stuff has been tested on eleven different Pentium 90 - 200mhz Intel/AMD PC's running clean installs and upgrades of Win98©®™©®, Windows95b™, and Win95a© OEM from Packard Bell (some of these tips are unnecessary in Win98, or just won't work). Of course, YOUR machine may melt into a big puddle of plastic, so always make a backup of the registry and make sure you know how to use it. Play it safe. Make only one change at a time, restart the computer, and make sure everything works as it should. Close any open applications before messing around with the registry, have your backup handy (including a boot floppy disk that lets you access your CD-ROM), and wear a scarf, ferchissakes, ya wanna catch a cold? What's wrong with you? Much of this can be done with TweakUI (www.microsoft.com or [First edition Win98 CD-ROM drive letter]:\tools\reskit\powertoy) and/or Poledit ([CD-ROM drive letter]:\tools\reskit\netadmin\poledit), but what fun is that? If you use Poledit, set it to only run specified applications, and don't include Poledit as one of the applications, you can't use Poledit to change anything else...including changing the list of allowed applications. Got it? Good. Most changes to the registry won't take effect until you reboot. Making only one change at a time isolates any problems that may develop. If you mess something up anyway, boot into DOS and type: regedit /D <path> where <path> is the path to the registry key you want to delete. Use this carefully, as it can hose your registry completely. This is where your knack for remembering obscure data kicks in. Here's how you're supposed to restore your registry from The Naked PC: regrestore.txt It's probably a good idea to review it before...well, you know...boom. :-O When you look at the system properties window (right-click My Computer and select 'Properties' or push the Pause and Windows keys together), the computer manufacturer probably has their logo and information on the bottom half under the 'Registered To' information. To put your own stuff there: 1. Get a bitmap, size it to 180x114 pixels, name it oemlogo.bmp and move it to your \windows\system folder. 2. Open Notepad and create a file similar to this, replacing the text to the right of the equal sign with your own text: [general] Manufacturer=RollYerOwn Software Co Unlimited Model='75 AMC Gremlin [Support Information] Line1=For tech support: Line2=Go to http://www.fiveanddime.net/notes.html Line3=line3 Line4=line4 Line5=line5 Line6=This file is c:\windows\system\oeminfo.ini Save it as oeminfo.ini in your \windows\system folder. If you have problems, change the bitmap dimensions to 160x120 #### Cheating To cheat at Hearts go to: HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Applets\Hearts Create a New -> String Value with a value of 42 Start a new game of Hearts and push the Ctrl + Alt + Shift + F12 keys at the same time. 
While you're there anyway, you can fill in the other players' names. Create New -> String Value's named p1name, p2name, and p3name. Your name is the Value of the String called 'name'. To win at Freecell, hold down Ctrl+Shift+F10 To stop the clock in Minesweeper, hold down both mouse buttons and press the Escape key. #### Windows Media Player Go to HKEY_CURRENT_USER\Software\Policies\Microsoft\WindowsMediaPlayer You probably don't see it listed. Right-click HKEY_CURRENT_USER\Software\Policies\Microsoft\ and select New -> Key and name it WindowsMediaPlayer. Create a New -> String Value and name it TitleBar. Double-click the new string value and enter your title. The 'Media Guide' button caption can be changed, also. Create a New -> String Value and name it ShowCaseButton, with the value being what you want the buttons' caption to be #### Shortcuts to Themes If you have themes installed in the default location, you can right-click the desktop, create a New -> Shortcut named whatever you want, with the command line: "C:\Program Files\Plus!\THEMES.EXE" /s C:\Program Files\Plus!\Themes\More Windows (high color).theme and Windows will immediately apply the effects. Be sure to go to Start Menu -> Settings -> Control Panel -> Desktop Themes and save your current settings first. Change the path as appropriate if your Themes.exe file is somewhere else. Note that quotes are necessary if there's a space (such as in Program Files). #### Open Windows Explorer in My Computer view: Right-click the Explorer shortcut, select Properties, and in the Target line, add: /n,/e,/select,c:\ to the end, so your Target line looks like this: C:\WINDOWS\EXPLORER.EXE /n,/e,/select,c:\ Open Explorer in your choice of folders: "explorer.exe /n, /e, c:\windows\favorites" #### CD-ROM drives in DOS and Safe Mode: I use a few Packard Bell computers, and a nagging problem is getting the CD-ROM drive to work in DOS. I've only tested this on two computers, but it worked flawlessly. Your mileage may vary. The path to individual files may be different on your computer, so modify these if you need to. If you've installed Windows in the default location, C:\Windows, then you have a folder there called 'command.' Copy the file 'oakcdrom.sys' from your boot disk (Control Panel -- Add/Remove Programs -- Startup Disk) to c:\windows\command. If you want to copy it somewhere else, modify the path in c:\config.sys c:\windows\command\mscdex /D:MSCD000 Device = C:\windows\command\oakcdrom.sys /D:MSCD000 If it doesn't work, try substituting one of the other .sys files on the boot disk for the oakcdrom.sys file. If you don't have an autoexec.bat or config.sys file, make them using Notepad. To use the CD-ROM in Safe Mode, boot to DOS, change directory to c:\windows, and type: win /d:m #### Custom message boxes: If you have Windows Scripting Host installed, it will automatically run scripts with the extension .vbs or .js when they are clicked. Here's a quick message box example: Response = MsgBox ("You just deleted C:\Windows\" + vbcrlf + "Your disk will be formatted when you shut down", vbokonly, "Oh, crap!") Copy it, paste it into Notepad and save it with a .vbs extension. First you define the message box text. This is two lines, with 'vbcrlf' being a line break. Next is the buttons to show, then the message box title. Here's a simple, easily edited version: MsgBox"Hello, Pun'kin!",2,"Yew shore dew have purty lips, boy..." Change the 2 to other numbers for different button combinations. 
WSH is installed by default in Win98, and can be downloaded for free from somewhere on the Microsoft website for earlier Windows editions. Good luck finding it. Here's an ftp site that may help: http://borg.isc.ucsb.edu/ftproot/pub/microsoft/WinScripting/ #### Control Panel Shortcut: Right-click the desktop and select New --> Shortcut and enter: C:\Windows\Control.exe Sysdm.cpl,System,1 as the Command line and any name you want as the Name. (Windows is not case sensitive, so C:\WINDOWS is the same as c:\windows) To make your own pictures and/or html files available in the Desktop properties window for your desktop background, go to: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\ and change the WallPaperDir value to your directory of choice. #### Log text file changes: To keep track of changes in a Notepad text file, type .LOG as the first line of the file. Every time it is opened, the current time and date will be entered into it. #### Shutdown Windows shortcut: Right-click the desktop or in any folder and select New -- Shortcut On the Command line, type or cut'n paste: C:\WINDOWS\RUNDLL32.EXE User,ExitWindows and name it whatever you want. It will prompt you to save any unsaved work, then shut Windows down. #### Disable the Outlook Express splash screen Open regedit and navigate to HKEY_CURRENT_USER\Software\Microsoft\Outlook Express, create a new DWORD value called NoSplash, and setting the value to 00 00 00 01 This doesn't work with Outlook Express 5. #### Add 'Open with...' to all file types: Go to HKEY_CLASSES_ROOT\*\Shell Create a New -- Key and name it openas Under that, create a New -- Key and name it command Double-click the Default value and enter this: rundll32.exe shell32.dll,OpenAs_RunDLL %1 When you use this, be sure to uncheck the box "Always use this program to open this type of file" in the Open With... dialog, unless you want to change the file association. Or use the next tip to uncheck it automatically. Note that if the file type isn't associated with anything, you'll have two 'open with...' commands. #### Remove the check from the Open With...dialog: Go to HKEY_CLASSES_ROOT\Unknown\shell\openas\command double-click (Default) and add a space and a %2 after the existing line Before: C:\WINDOWS\rundll32.exe shell32.dll,OpenAs_RunDLL %1 After:  C:\WINDOWS\rundll32.exe shell32.dll,OpenAs_RunDLL %1 %2 Go to and delete the Order entry. (It will be recreated if you start dragging things around again, so there's no need to back it up before deleting it.) You can do the same with the other Order items in Accessories, Games, etc. If you use Win98, of course you just right-click a menu item and select 'Sort by Name'. #### A new Send To Desktop as Shortcut Navigate to your /Windows/Sendto folder, right-click a blank spot and select New -- Text Document. Rename the new text document: and click Yes when it nags you about changing the extension #### A new Show Desktop: [Shell] Command=2 IconFile=explorer.exe,3 Command=ToggleDesktop If you still have your Show Desktop shortcut, you can verify this by going to C:\WINDOWS\Application Data\Microsoft\Internet Explorer\Quick Launch and using your handy little Send To Notepad(see below) extension to view it. 
Change the IconFile line to point to your favorite icon, such as C:\WINDOWS\SYSTEM\Pifmgr.dll,36 #### Send To Notepad for all file types: Go to HKEY_CLASSES_ROOT\*\Shell and create a key called or whatever you want to show on the menu, then under that, a new key called Command and change the (Default) value to If you don't use Notepad, include the full path to your text editor instead of just the filename. If you don't have a key called Shell under \*\, create it. Most machines I've used only have \shellex #### Add text to the system clock: Go to HKCU\Control Panel\International, and create a new string value called s1159 Double-click it and enter your text for the value data. Create another string value, and name it s2359 and put the same text as the s1159 string. (I don't know the character limit, but I DO know it's long enough for Mr. Potato Head.) To remove the colon ( : ) from the clock and make it show military time: Go to HKEY_USERS\.DEFAULT\Control Panel\International and add new strings with the values shown: iTime -> "1" iTLZero ->"1" sTime -> "" sTimeFormat -> "HHmm tt" I don't know what the effect of combining these two would be... #### Constant refresh of file and folder views To set Windows 95 to perform a constant refresh of file and folder views, go to HKEY_LOCAL_MACHINE/System/CurrentControlSet/Control/Update. In the right-hand pane, right-click on UpdateMode and select Modify. In the edit window, change the 01 to 00. You'll have to reboot your computer before the change takes effect. #### Easier installations Copying all the CAB (Windows 95 installation files) from the Win95 CD to your hard disk is a great way to save time when re-installing components or the whole OS, if you have the disk space available. You can make this even quicker by modifying the Registry to point to the CAB files during installation. Open the Registry Editor, drill down to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Current Version\Setup and single-click on the SETUP folder. Right-click on the SourcePath item and select Modify from the context menu. Enter the path of the folder that contains your CAB files. This also makes adding and removing programs, changing network settings, etc. much easier, because you won't have to insert the Windows CD every time. (My 133's slow down if I leave the CD in all the time.) This is an especially handy tip if you add or remove a drive to your computer, or if you copy the contents of the CD to your hard drive. (I tried this on a clean install of Win98, and it didn't work. Maybe it will for Win95?) If you're installing Win95 on a system without a previous version of Windows installed, Win95 asks you to prove you have installed a previous version of DOS or Windows. If you don't have your old diskettes handy, here's how to get around the dialog: Open Notepad and save a document as WIN.CN_ (the final character is an underline). Put the WIN.CN_ file on a diskette - your boot diskette or Win95 Startup disk will do. When you reach the point in the installation where Win95 asks you to show it a previous version, put in the diskette with the WIN.CN_ file on it. The installation program will accept it as proof of a previous version. #### Change System Colors When you change your color scheme (right click the desktop, select Properties, and click the Appearance tab), you'll notice some colors can't be changed. To fix that problem: go to HKEY_USERS\.Default\Control Panel\Colors. Here you'll find all the screen elements. 
To change one, double-click on it and replace the current value with one of your sets of numbers. When things look the way you want them to, go back to the Appearance item and Save As a new scheme. Notice you have to enter the values as red/green/blue numbers. Especially cool ones to change are ButtonHilite and WindowFrame. Save your current color scheme so you can go back to it after you're tired of the funky look. Every font you install sucks up physical memory. Unless you have physical RAM to spare, I wouldn't suggest loading too much more than your commonly used fonts. So what do you do with the fonts you want to keep but not install? Put them in ANOTHER directory ("fontsOther" so you can find it beside the Fonts folder). When you want to use a particular font in a document/graphic, go to your "Other Fonts" folder, double-click on the font you want to use (and KEEP THE FONT OPEN), and launch the application in which you wish to use that font. The font should show up in your regular list as if it were installed the "normal" way. When you're done, you can close the font preview window and Windows is none-the-wiser. This also fixes the problem of too many fonts. All fonts are stored in the Registry, and the Registry has a limit of 64k for each key...meaning 800 - 1000 fonts are all that can be stored without a shareware font manager program. #### System folders Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D} Dial-Up Networking.{992CFFA0-F557-101A-88EC-00DD010CCC48} Printers.{2227A280-3AEA-1069-A2DE-08002B30309D} Inbox.{00020D75-0000-0000-C000-000000000046} My Computer.{20D04FE0-3AEA-1069-A2D8-08002B30309D} Recycle Bin.{645FF040-5081-101B-9F08-00AA002F954E} Network Neighborhood.{208D2C60-3AEA-1069-A2D7-08002B30309D} Briefcase.{85BBD920-42A0-1069-A2E4-08002B30309D} Fonts.{BD84B380-8CA2-1069-AB1D-08000948F534} InternetCache.{7BD29E00-76C1-11CF-9DD0-00A0C9034933} Subscriptions.{F5175861-2688-11D0-9C5E-00AA00A45957} Desktop.{00021400-0000-0000-0000-000000000046} History.{FF393560-C2A7-11CF-BFF4-444553540000} IE5 only: Offline Web Pages.{8E6E6079-0CB7-11d2-8F10-0000F87ABD16} To put one of these folders on the Start menu: Highlight and copy the folder name and letter/number string, up to and including the } on the end. Right-click the Start button and select Explore. Right-click a blank area in the window and select New --- Folder. Paste the copied name in as the New Folder's name. #### Recycle Bin - rename and change the tooltip: HKEY_CLASSES_ROOT\CLSID\{645FF040-5081-101B-9F08-00AA002F954E} To add Rename and Delete to the right-click context menu, go to the subkey: HKEY_CLASSES_ROOT\CLSID\{645FF040-5081-101B-9F08-00AA002F954E}\ShellFolder and edit the Binary value to: 70 01 00 20 If you don't see the Recycle bin at that key, try: HKEY_LOCAL_MACHINE\Software\CLASSES\CLSID\{645FF040-5081-101B-9F08-00AA002F954E} Note: Editing a binary value is a little more tricky than editing a string value. Export the key to safe place before trying it for the first time. #### Memory management: This one is from tipworld.com To get better performance from a machine with 32 megs of memory or better, push the Windows key + Pause Break key to call up the System Properties box. Click the Performance tab, then the File System button. You'll see a Select Box under Settings labeled 'Typical role of this computer:'. Change that to 'Network server'. This will manage your memory better and give you a small performance boost. In Windows95b and up, you're all set to go. 
In earlier releases of Windows, you'll have to do this: Go to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\FS Templates Click on Server, and in the right-hand pane you'll see two entries called NameCache and PathCache. Here are the values that you need to enter/modify for each one of them: 1. For NameCache modify the numeric values to read: a9 0a 00 00 2. For PathCache modify the numeric values to read: 40 00 00 00 These values are written to the wrong entries by default and you have to manually fix them to get a boost in performance when setting your machine to "Network Server". Because these values are written in wrong many people see no difference in performance when changing to "Network Server". But this Registry hack fixes it, and when you're done making these changes, go set your system to "Network Server" and see if you notice any improvement (you'll need to restart Windows for the changes to take effect). #### Eliminate the Favorites folder(and others) from the Start menu: Go to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer. Right-click in the right pane. When the menu opens, choose New, DWORD Value. Name the new value NoFavoritesMenu. When the Edit DWORD Value dialog box opens, enter 1(one) and restart the computer. After you restart, you'll have no Favorites folder in the Start menu. If you decide you want the Favorites folder back, change the value back to 0(zero) and click OK, or just delete it. Close RegEdit and restart the computer to get the Favorites folder back into the Start menu. For the Documents folder, follow the same procedure, except name the new DWORD value: NoRecentDocsMenu #### Others: find = NoFind run = NoRun shutdown = NoClose The following are for the Settings item on the Start button. If you remove them all, you'll have no Settings item: active desktop = NoSetActiveDesktop control panel = NoSetFolders folder options = NoFolderOptions printers = NoSetPrinters windows update = NoWindowsUpdate These are some others for Internet Explorer 5.01. They may work for earlier versions. If you've read this far, you've got a boot disk with CD access and a full backup, so dive right in! The NoFavoritesMenu is shown as removed from the start menu. You will probably only have one or two of these in your registry unless you got a branded version from Snap or somewhere. They add some or all of these to give themselves control over what you see. 
REGEDIT4 [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] "NoDriveTypeAutoRun"=hex:b5,00,00,00 "NoSaveSettings"=hex:00,00,00,00 "NoStartBanner"=hex:01,00,00,00 "NoLogoff"=dword:00000000 "Btn_Back"=dword:00000000 "Btn_Forward"=dword:00000000 "Btn_Stop"=dword:00000000 "Btn_Refresh"=dword:00000000 "Btn_Home"=dword:00000000 "Btn_Search"=dword:00000000 "Btn_History"=dword:00000000 "Btn_Favorites"=dword:00000000 "Btn_Folders"=dword:00000000 "Btn_Fullscreen"=dword:00000000 "Btn_Tools"=dword:00000000 "Btn_MailNews"=dword:00000000 "Btn_Size"=dword:00000000 "Btn_Print"=dword:00000000 "Btn_Edit"=dword:00000000 "Btn_Discussions"=dword:00000000 "Btn_Cut"=dword:00000000 "Btn_Copy"=dword:00000000 "Btn_Paste"=dword:00000000 "Btn_Encoding"=dword:00000000 "NoActiveDesktop"=dword:00000000 "NoActiveDesktopChanges"=dword:00000000 "NoInternetIcon"=dword:00000000 "NoNetHood"=dword:00000000 "NoDesktop"=dword:00000000 "NoFind"=dword:00000000 "NoRun"=dword:00000000 "NoSetActiveDesktop"=dword:00000000 "NoWindowsUpdate"=dword:00000000 "NoFolderOptions"=dword:00000000 "NoRecentDocsHistory"=dword:00000000 "ClearRecentDocsOnExit"=dword:00000000 "NoClose"=dword:00000000 "NoSetFolders"=dword:00000000 "EnforceShellExtensionSecurity"=dword:00000000 "NoDrives"=dword:00000000 "NoNetConnectDisconnect"=dword:00000000 "NoDeletePrinter"=dword:00000000 "NoPrinterTabs"=dword:00000000 " #### Custom Windows Welcome Each time you restart Windows, Microsoft's Welcome screen appears, displaying tips for beginners. Most people turn this screen off after a few weeks. Change the welcome-screen tips that ship with Windows--or add your own brilliant sayings--by editing the Registry. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\explorer\Tips. Make sure the values in the key are numbered sequentially from zero forward. Windows 95 ships with 48 tips and NT comes with 50, but you can add as many as you like. Select Edit, New, String Value, then type in the number (48 for Windows 95 and 50 for NT, since the values start at 0). Double-click on the new string and type in its value--in this case the message or tip that you want to display. #### Sounds: By now you've customized the way Windows looks; now it's time to attack the way it sounds. Edit the Registry to force Windows to play sounds for specific events, like opening or closing an application and maximizing or minimizing windows. You can also set it to play a certain sound file when it displays a message box. For example, to play the sound C:\sound.wav every time you start Microsoft Paint, run the Registry Editor, and find the key: HKEY_CURRENT_USER\AppEvents\Schemes\Apps Click on Edit, New, then Key, and type in MSPAINT (to match the name of the program). Next add a key under MSPAINT called Open, and a key under that called .Current. Double-click on the default value to set it to C:\windows\sound.wav or whatever. You'll see other applications with sounds events; use them as a guide. Does your PC know who you are--really? Make sure that Windows has your name and company information correct,(or whatever your definition of 'correct' is) because many applications automatically pick up this info when you install them. In the registry, find the key called HKEY_CURRENT_USER\Software\Microsoft\MS Setup (ACME)\User Info in the values DefName and DefCompany. Change these to reflect the correct information. While you're at it, make sure that your name and company name appear correct in the rest of Windows. 
If you bought your PC with Windows preinstalled, the value may say something like ValuedCustomer. To check, start Windows Explorer, click on Help, About Windows. Navigate to the Registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion and change the values called RegisteredOwner and RegisteredOrganization. (In NT, look at the key \Microsoft\Windows NT\CurrentVersion.)
#### Change hard drive icons:
To change the icon for your hard drive, create the following file in Notepad:
[autorun]
icon=C:\WINDOWS\SYSTEM\Pifmgr.dll,36
Notice it's two lines. If your hard drive is C: click Save As... and navigate to C: and save it as "autorun.inf" (enclose it in quotes so Notepad won't put a .txt on the end of it). The path specified is the typical Windows path; yours may vary, but you'll have that .dll file. The 36 at the end is the number of the icon I chose. (Icons in a file are numbered, starting at 0.) To see the change, open Windows Explorer; press F5 if needed to refresh. To preview your icon options, select an icon on your desktop, right-click and select Properties, then Change Icon. You can also select any other .dll file in your computer, although most won't have icons. The common Windows icons are in C:\Windows\System\shell32.dll You can also point to a single icon. Using the example above:
[autorun]
icon=c:\windows\cursors\myicon.ico
#### Custom folder icons:
Navigate to the folder whose icon you want to change, right-click and select New -> Text Document, and rename it to: desktop.ini
Here's a two line desktop.ini file pointing to an icon:
[.ShellClassInfo]
IconFile=c:\windows\cursors\icon.ico
Here's a three line desktop.ini file pointing to an icon inside a .dll:
[.ShellClassInfo]
IconFile=C:\WINDOWS\SYSTEM\Pifmgr.dll
IconIndex=35
Now open a DOS prompt (Start Menu -> Programs -> MS-DOS Prompt) and type:
attrib +s <c:\path\to\foldername>
This makes it a system folder, which you can see by right-clicking and selecting 'Properties'. To remove the System folder attribute, enter: attrib -s <etc> at the DOS prompt.
#### A batch file to clear the Documents menu:
Use the same Notepad procedures to make a .bat file; use any name you want, just make sure it ends with .bat. Type:
echo y| del \windows\recent\*.*
echo y answers "yes" to every "Do you want to delete..." question, deleting the document shortcuts stored in C:\Windows\Recent. Or change the Recent to any other folder. This can be kinda dangerous, though...if you del C:\windows, re-read the top paragraph on this page before contacting me.
#### Other program autorun locations:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\ and HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\ and HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\
Here are two examples of how to use this key for copying and deleting. Create a New -> String Value and enter:
"Copy"="command.com /c copy C:\\WINDOWS\\desktop\\one.txt C:\\WINDOWS\\desktop\\two.txt"
"Delete"="command.com /c del C:\\WINDOWS\\desktop\\one.txt"
This will carry out the designated command at bootup and then delete the key. The 'Copy' command will replace files without asking, so be sure it's what you want to do. These 'Run*' keys are mostly used by setup programs and virus writers.
#### Shut off the Word 97 Office Assistant:
Go to HKEY_LOCAL_MACHINE\Software\Microsoft\Office\8.0\Common\Assistant and point the AsstPath to a non-existent folder. A better way is to select Custom on the Install screen, so you can also avoid that crappy Find Fast.
#### Smooth scrolling in Word:
In regedit, go to: HKEY_CURRENT_USER\Software\Microsoft\Office\8.0\Word\Options and create a New -- String value called LiveScrolling. Double-click it and change the value to 1 (one).
#### Your own Command Prompt Here:
In Explorer, go to the View menu, Folder Options, File Types, select File folder...Edit...New.... Give it a name (the Action: line), then on the "Application used..." line, type:
C:\command.com /K cd "%1"
This can also be used for the Folder and Drive file types. You can create a .bat file to change the command prompt, install DOSkey, or whatever, and call it with
c:\command.com /K c:\myDos.bat cd "%1"
You can also use special characters that will show up in the pop-up menu, such as Alt+128 for a capital C. (To create myDos.bat, open Notepad, enter doskey, and save the file.)
#### Using .reg files
Here's an example using Outlook Express 4.x: Type or cut'n paste the following into Notepad and save with a .reg extension. (Remove the two (BLANK_LINE_GOES_HERE)'s).
REGEDIT4
(BLANK_LINE_GOES_HERE)
[HKEY_CURRENT_USER\Software\Microsoft\Outlook Express]
"NoSplash"=dword:00000001
(BLANK_LINE_GOES_HERE)
You shouldn't use a .reg file if you don't know what it's doing or if it even belongs there. ALWAYS look before you 'merge' a .reg file.
You can change the Frame Controls (the Minimize, Restore, and Close buttons on the top right of every window) using the freeware program Eppie Desktop:
Web (author) -- http://www.u.arizona.edu/~jepstein/
Web (program) -- http://www.u.arizona.edu/~jepstein/epdsk/
Elsewhere on this site: icon and cursor notes, Internet notes, Internet Explorer Content Advisor help, detecting and deleting BackOrifice, y2k notes, keyboard shortcuts and Windows 98 source code, DMA stuff from Fred Langa (langa.txt), speeding up Internet Explorer, and the ultimate customization: hacking explorer.exe. If you'd like, I have condensed versions of this page in redneck, jive, Cockney, or Swedish.
#### Unix: Activate NumLock at boot:
for tty in /dev/tty[1-9]*; do
  setleds -D +num < $tty > /dev/null
done
This worked under FreeBSD and RedHat Linux.
bashrc.gz -- My .bashrc file for RedHat Linux, ripped off almost verbatim from Sue's FreeBSD page.
Fred Langa's e-newsletter is free and spam-proof; I'm subscribed to seven tech newsletters and Fred's is the best by far. Oops!
#### Some random stuff that got lost in the shuffle:
These are from Lockergnome: They say that blinking is a sign of nervousness or lying, but I wonder if the researchers were blinking when they discovered that? You can speed up or slow down the rate at which your cursor blinks. Launch the Registry Editor (REGEDIT), and navigate to HKEY_CURRENT_USER > Control Panel > Desktop, then in the right-most panel, you should see a String Value named 'CursorBlinkRate' (if one doesn't exist, you can create one). Now, you can change the value to any number between 0 and 65535. The smaller the number, the faster your cursor will blink (and vice versa). You can also tweak the rate via the Keyboard applet in the Control Panel, but the range is severely limited.
It's not easy being blue; when you'd rather be playing in the snow, you're stuck inside -- with nothing constructive to do. For years now, the Blue Screen of Death (BSOD) has been haunting our PCs. Would you rather see the Magenta Screen of Death? What about Cyan? That's doable. Open up SYSTEM.INI and fly to the [386enh] section. Add two new values: 'MessageBackColor=' and 'MessageTextColor=' (without the quotes, but with the equal symbol).
You'll need to use ONE hexadecimal "number" for each color; they are as follows: 0 (black), 1 (blue), 2 (green), 3 (cyan), 4 (red), 5 (magenta), 6 (yellow), 7 (white), 8 (gray), 9 (bright blue), A (bright green), B (bright cyan), C (bright red), D (bright magenta), E (bright yellow), F (bright white). I hope you never see another screen of death again. MODEM MADNESS. If you're plagued, as I am, with frequent disconnects, this might be worth a try. Click on Start|Settings|Control Panel|Properties|Connection Tab|Advanced and in the Extra Settings box, enter S10=50. This supposedly holds the modem connection without a carrier for a period of 5 seconds, allowing compensation for slight gaps of connect time. But then again, maybe not. This tweak is definitely one to pass along to friends. Windows 98 accesses your swap file (virtual memory) before it runs out of RAM (physical memory) -- which, from a user's point of view, is completely nuts. Virtual memory will always operate slower than physical memory, so why does Windows 98 insist on using both? Frankly, I don't know. According to article Q223294 in the Microsoft knowledge base, this new method is more efficient. Uh huh. Thank goodness they've posted a fix! Yes, if you have more than 64 megabytes of RAM and you're running Windows 98, you'll wanna give this a shot. In your SYSTEM.INI file, under the [386Enh] section, enter: "ConservativeSwapfileUsage=1" (without the quotes). Reboot, and I believe you'll find your system more responsive. Your mileage may vary! I don't know where this came from. The underpants gnomes? Old Pentium bug check: Calculator -> View menu -> Scientific (4195835/3145727)x3145727 The result should be 4195835 The buggy Pentium will show 4195579 This page, and by golly, this website is...well...not exactly copyrighted...um...I should probably deny some of this......howsabout we stick with a standard legal notice If you're using Outlook Express, put e-mail into the Restricted Sites zone. You've been warned.
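Putting the two SYSTEM.INI tweaks above together, the [386Enh] section would end up looking something like this; the two color values are only an example (1 is blue, E is bright yellow, per the table above), so pick your own:
[386Enh]
; lines already present in this section stay as they are
MessageBackColor=1
MessageTextColor=E
ConservativeSwapfileUsage=1
Remember that the ConservativeSwapfileUsage line only makes sense on Windows 98 with more than 64 megabytes of RAM, as described above.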
In RL, if I assign the rewards for better positional play, is the algorithm learning anything?
I'm creating an RL application for the game Connect Four. If I tell the algorithm which moves/token positions will receive greater rewards, surely it's not actually learning anything; it's just a basic lookup for the algorithm? "Shall I place the token here, or here? Well, this one receives a greater reward, so I choose this one." For example, some pseudocode:
function get_reward(column)
    if 2 in a line return 1
    if 3 in a line return 2
    if 4 in a line return 10
    else return -1

foreach column_i in columns
    column_reward_i = get_reward(column_i)
place_token(column with the highest column_reward)
• Maybe you could show a plot of the performance of your algorithm through time. – nbro Apr 4 at 14:01
• There is no code at the moment; I'm just trying to work out whether, if I'm assigning values for different positions, the algorithm is actually learning anything. – mason7663 Apr 4 at 14:21
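To make the contrast concrete, here is a minimal sketch (not from the original post) of the usual alternative: let the agent learn action values from the game outcome alone, with no hand-written positional scores. The env object and its methods (reset, legal_actions, step) are assumed here purely for illustration.

import random
from collections import defaultdict

# Tabular Q-learning where the ONLY reward is the final outcome:
# +1 for a win, -1 for a loss, 0 for every non-terminal move.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)          # (state, action) -> estimated return

def choose_action(state, legal_actions):
    # epsilon-greedy: mostly exploit learned values, sometimes explore
    if random.random() < EPSILON:
        return random.choice(legal_actions)
    return max(legal_actions, key=lambda a: Q[(state, a)])

def play_episode(env):
    state, done = env.reset(), False     # state is assumed to be a hashable board encoding
    trajectory = []
    while not done:
        actions = env.legal_actions(state)           # columns that are not full
        action = choose_action(state, actions)
        next_state, reward, done = env.step(action)  # reward stays 0 until the game ends
        trajectory.append((state, action, reward, next_state, done))
        state = next_state
    # Replay the episode backwards so the terminal reward propagates
    # toward the earlier moves that led to it.
    for s, a, r, s2, d in reversed(trajectory):
        best_next = 0.0 if d else max(Q[(s2, a2)] for a2 in env.legal_actions(s2))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

Here the value of dropping a token in a column is something the agent estimates from many played games rather than something encoded up front in get_reward(); the hand-scored version in the question is closer to a fixed heuristic player than to learning.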
# Resources for Interrupted time series analysis in R
I am fairly new to R. I have attempted to read up on time series analysis and have already finished
1. Shumway and Stoffer's Time series analysis and its applications 3rd Edition,
2. Hyndman's excellent Forecasting: principles and practice
3. Avril Coghlan's Using R for Time Series Analysis
4. A. Ian McLeod et al Time Series Analysis with R
5. Dr. Marcel Dettling's Applied Time Series Analysis
Edit: I'm not sure how to handle this but I found a useful resource outside of Cross Validated. I wanted to include it here in case anyone stumbles upon this question. Segmented regression analysis of interrupted time series studies in medication use research
I have a univariate time series of the number of items consumed (count data) measured daily for 7 years. An intervention was applied to the study population at roughly the middle of the time series. This intervention is not expected to produce an immediate effect and the timing of the onset of effect is essentially unknowable. Using Hyndman's forecast package I have fitted an ARIMA model to the pre-intervention data using auto.arima(). But I am unsure of how to use this fit to answer whether there has been a statistically significant change in trend and quantify the amount.
# for simplification I will aggregate to monthly counts
# I can later generalize any teachings the community supplies
count <- c(2464, 2683, 2426, 2258, 1950, 1548, 1108, 991, 1616, 1809, 1688, 2168, 2226, 2379, 2211, 1925, 1998, 1740, 1305, 924, 1487, 1792, 1485, 1701, 1962, 2896, 2862, 2051, 1776, 1358, 1110, 939, 1446, 1550, 1809, 2370, 2401, 2641, 2301, 1902, 2056, 1798, 1198, 994, 1507, 1604, 1761, 2080, 2069, 2279, 2290, 1758, 1850, 1598, 1032, 916, 1428, 1708, 2067, 2626, 2194, 2046, 1905, 1712, 1672, 1473, 1052, 874, 1358, 1694, 1875, 2220, 2141, 2129, 1920, 1595, 1445, 1308, 1039, 828, 1724, 2045, 1715, 1840)
# for explanatory purposes
# month <- rep(month.name, 7)
# year <- 1999:2005
count_ts <- ts(count, start = c(1999, 1), frequency = 12)  # monthly data from January 1999
train_month <- window(count_ts, start = c(1999, 1), end = c(2001, 1))
require(forecast)
arima_train <- auto.arima(train_month)
fit_month <- Arima(train_month, order = c(2, 0, 0), seasonal = c(1, 1, 0), lambda = 0)
plot(forecast(fit_month, 36)); lines(count_ts, col = "red")
Are there any resources specifically dealing with interrupted time series analysis in R? I have found this dealing with ITS in SPSS but I have not been able to translate this to R.
• Do you want to do inference on whether the intervention had a statistically significant effect, or do you want to model the intervention to obtain better forecasts? And could you possibly make the data available? – Stephan Kolassa Dec 2 '15 at 20:08
• @StephanKolassa Certainly! My aim is to do inference. I will provide dummy data in an Edit to better illustrate my point. – dais.johns Dec 2 '15 at 20:14
• @StephanKolassa Data provided to the best of my abilities. – dais.johns Dec 2 '15 at 20:35
• Previous research suggests the intervention effect to be on the scale of +/- 5% change. – dais.johns Dec 2 '15 at 20:37
• @StephanKolassa Provided actual usable data – dais.johns Dec 2 '15 at 22:51
This is known as change-point analysis. The R package changepoint can do this for you: see the documentation here (including references to the literature): http://www.lancs.ac.uk/~killick/Pub/KillickEckley2011.pdf
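For reference, the segmented-regression formulation in the medication-use paper linked in the edit above (this is a paraphrase of the standard model, not a quotation) writes the monthly count as
$$Y_t = \beta_0 + \beta_1\, t + \beta_2\, D_t + \beta_3\,(t - t_0)\,D_t + \varepsilon_t,$$
where $t$ indexes time since the start of the series, $t_0$ is the intervention time, and $D_t$ is 0 before the intervention and 1 afterwards. $\beta_2$ is then the immediate level change, $\beta_3$ the change in trend, and their estimates and standard errors (with autocorrelation handled, for example, by ARMA errors) give exactly the "statistically significant change in trend and quantify the amount" inference asked for above.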
# Dirac neutrino magnetic moment and the shock wave revival in a supernova explosion - High Energy Physics - Phenomenology
Abstract: The process of the two-step conversion of the neutrino helicity, $\nu_L \to \nu_R \to \nu_L$, is analysed in the supernova conditions, where the first stage is realized due to the interaction of the neutrino magnetic moment with the plasma electrons and protons in the supernova core. The second stage is caused by the neutrino resonant spin-flip in a magnetic field of the supernova envelope. Given the neutrino magnetic moment within the interval $10^{-13}\,\mu_{\rm B} < \mu_\nu < 10^{-12}\,\mu_{\rm B}$, and with the existence of the magnetic field at the scale $\sim 10^{13}$ G between the neutrinosphere and the shock-wave stagnation region, it is shown that an additional energy of the order of $10^{51}$ erg can be injected into this region during the typical time of the shock-wave stagnation. This energy could be sufficient for stimulation of the damped shock wave.
Author: A.V. Kuznetsov, N.V. Mikheev, A.A. Okrugin (Yaroslavl State P.G. Demidov University)
Source: https://arxiv.org/
# Are these two forms of the Hubbard model equivalent?
I have seen two forms of the Hubbard model. One is:
$$H=-t\sum_{<ij>s}c_{is}^\dagger c_{js}+h.c.+U\sum_i(n_{i\uparrow}-1/2)(n_{i\downarrow}-1/2)-\mu\sum_{is}n_{is}$$
The other is the more familiar one, which reads:
$$H=-t\sum_{<ij>s}c_{is}^\dagger c_{js}+h.c.+U\sum_i n_{i\uparrow}n_{i\downarrow}-\mu\sum_{is}n_{is}$$
I want to know if these two forms are equivalent. If not, under what conditions are they equivalent? (lattice type, filling, type of ground state, attractive or repulsive U, etc.)
• Are you sure that the third term in the first equation is proportional to (n_up-1/2)(n_down-1/2), but not to n_up(n_down-1/2)? Mar 28 '14 at 9:13
• @freude Yeah, I am quite sure; I have seen a lot of Hubbard models in this form. For example: here Mar 28 '14 at 9:55
• I have a feeling that the first case is related to the representation of electrons and holes, since, depending on the population probability $n_i$, the third term can take positive (repulsion) as well as negative (attraction) values. The second equation is written in the purely electron representation. See Haug H., Koch S. W., "Quantum Theory of the Optical and Electronic Properties of Semiconductors," p. 80-81 Mar 28 '14 at 10:09
One can notice that:
$$(n_{i\uparrow}-1/2)(n_{i\downarrow}-1/2) = n_{i\uparrow}n_{i\downarrow} -\frac{1}{2}(n_{i\uparrow}+n_{i\downarrow}) +\frac{1}{4}$$
To show the equivalence you can absorb the $(n_{i\uparrow}+n_{i\downarrow})$ term into the chemical potential. We don't care about the kinetic term, and have:
$$U\sum_i(n_{i\uparrow}-1/2)(n_{i\downarrow}-1/2) - \mu\sum_{is} n_{is} = U\sum_i n_{i\uparrow}n_{i\downarrow} - (\mu+U/2)\sum_{is}n_{is} +\frac{UN}{4}$$
where $N$ is the total number of sites, $N=\sum_i 1$. Since the last term is only a constant you can drop it from the Hamiltonian and shift the chemical potential to obtain the equivalence.
These two models are exactly the same, up to that constant and the shift of the chemical potential. The first form has the additional advantage that particle-hole symmetry occurs at $\mu = 0$; in the second form it occurs at $\mu = U/2$.
# The Apache Tomcat 5.5 Servlet/JSP Container User Guide Reference Apache Tomcat Development # The Apache Tomcat 5.5 Servlet/JSP Container ## SSL Configuration HOW-TO print-friendly version Quick Start IMPORTANT NOTE: This Howto refers to usage of JSSE. When using APR, Tomcat will use OpenSSL, which uses a different configuration. The description below uses the variable name $CATALINA_HOME to refer to the directory into which you have installed Tomcat 5, and is the base directory against which most relative paths are resolved. However, if you have configured Tomcat 5 for multiple instances by setting a CATALINA_BASE directory, you should use$CATALINA_BASE instead of $CATALINA_HOME for each of these references. To install and configure SSL support on Tomcat 5, you need to follow these simple steps. For more information, read the rest of this HOW-TO. 1. If you are running a 1.3 JVM, download JSSE 1.0.3 (or later) from http://java.sun.com/products/jsse/ and either make it an installed extension on the system, or else set an environment variable JSSE_HOME that points at the directory into which you installed JSSE. 2. Create a certificate keystore by executing the following command: Windows: %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA Unix: $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA and specify a password value of "changeit". 3. Uncomment the "SSL HTTP/1.1 Connector" entry in $CATALINA_HOME/conf/server.xml and tweak as necessary. Introduction to SSL SSL, or Secure Socket Layer, is a technology which allows web browsers and web servers to communicate over a secured connection. This means that the data being sent is encrypted by one side, transmitted, then decrypted by the other side before processing. This is a two-way process, meaning that both the server AND the browser encrypt all traffic before sending out data. Another important aspect of the SSL protocol is Authentication. This means that during your initial attempt to communicate with a web server over a secure connection, that server will present your web browser with a set of credentials, in the form of a "Certificate", as proof the site is who and what it claims to be. In certain cases, the server may also request a Certificate from your web browser, asking for proof that you are who you claim to be. This is known as "Client Authentication," although in practice this is used more for business-to-business (B2B) transactions than with individual users. Most SSL-enabled web servers do not request Client Authentication. SSL and Tomcat It is important to note that configuring Tomcat to take advantage of secure sockets is usually only necessary when running it as a stand-alone web server. When running Tomcat primarily as a Servlet/JSP container behind another web server, such as Apache or Microsoft IIS, it is usually necessary to configure the primary web server to handle the SSL connections from users. Typically, this server will negotiate all SSL-related functionality, then pass on any requests destined for the Tomcat container only after decrypting those requests. Likewise, Tomcat will return cleartext responses, that will be encrypted before being returned to the user's browser. In this environment, Tomcat knows that communications between the primary web server and the client are taking place over a secure connection (because your application needs to be able to ask about this), but it does not participate in the encryption or decryption itself. 
Certificates In order to implement SSL, a web server must have an associated Certificate for each external interface (IP address) that accepts secure connections. The theory behind this design is that a server should provide some kind of reasonable assurance that its owner is who you think it is, particularly before receiving any sensitive information. While a broader explanation of Certificates is beyond the scope of this document, think of a Certificate as a "digital driver's license" for an Internet address. It states what company the site is associated with, along with some basic contact information about the site owner or administrator. This "driver's license" is cryptographically signed by its owner, and is therefore extremely difficult for anyone else to forge. For sites involved in e-commerce, or any other business transaction in which authentication of identity is important, a Certificate is typically purchased from a well-known Certificate Authority (CA) such as VeriSign or Thawte. Such certificates can be electronically verified -- in effect, the Certificate Authority will vouch for the authenticity of the certificates that it grants, so you can believe that that Certificate is valid if you trust the Certificate Authority that granted it. In many cases, however, authentication is not really a concern. An administrator may simply want to ensure that the data being transmitted and received by the server is private and cannot be snooped by anyone who may be eavesdropping on the connection. Fortunately, Java provides a relatively simple command-line tool, called keytool, which can easily create a "self-signed" Certificate. Self-signed Certificates are simply user generated Certificates which have not been officially registered with any well-known CA, and are therefore not really guaranteed to be authentic at all. Again, this may or may not even be important, depending on your needs. General Tips on Running SSL The first time a user attempts to access a secured page on your site, he or she is typically presented with a dialog containing the details of the certificate (such as the company and contact name), and asked if he or she wishes to accept the Certificate as valid and continue with the transaction. Some browsers will provide an option for permanently accepting a given Certificate as valid, in which case the user will not be bothered with a prompt each time they visit your site. Other browsers do not provide this option. Once approved by the user, a Certificate will be considered valid for at least the entire browser session. Also, while the SSL protocol was designed to be as efficient as securely possible, encryption/decryption is a computationally expensive process from a performance standpoint. It is not strictly necessary to run an entire web application over SSL, and indeed a developer can pick and choose which pages require a secure connection and which do not. For a reasonably busy site, it is customary to only run certain pages under SSL, namely those pages where sensitive information could possibly be exchanged. This would include things like login pages, personal information pages, and shopping cart checkouts, where credit card information could possibly be transmitted. Any page within an application can be requested over a secure socket by simply prefixing the address with https: instead of http:. 
Any pages which absolutely require a secure connection should check the protocol type associated with the page request and take the appropriate action if https is not specified. Finally, using name-based virtual hosts on a secured connection can be problematic. This is a design limitation of the SSL protocol itself. The SSL handshake, where the client browser accepts the server certificate, must occur before the HTTP request is accessed. As a result, the request information containing the virtual host name cannot be determined prior to authentication, and it is therefore not possible to assign multiple certificates to a single IP address. If all virtual hosts on a single IP address need to authenticate against the same certificate, the addition of multiple virtual hosts should not interfere with normal SSL operations on the server. Be aware, however, that most client browsers will compare the server's domain name against the domain name listed in the certificate, if any (applicable primarily to official, CA-signed certificates). If the domain names do not match, these browsers will display a warning to the client user. In general, only address-based virtual hosts are commonly used with SSL in a production environment.
Configuration
Download and Install JSSE (if needed)
Note that JSSE is bundled with Sun's JDK 1.4 and later, so if you're using JDK 1.4 or later, you can skip this step. Download the Java Secure Socket Extensions (JSSE) package, version 1.0.3 or later, from http://java.sun.com/products/jsse/. If you built Tomcat from source, you have probably already downloaded this package. After expanding the package, there are two ways to make it available to Tomcat (choose one or the other): Make JSSE an installed extension by copying all three JAR files (jcert.jar, jnet.jar, and jsse.jar) into your $JAVA_HOME/jre/lib/ext directory. Create a new environment variable JSSE_HOME that contains the absolute path to the directory into which you unpacked the JSSE binary distribution.
Prepare the Certificate Keystore
Tomcat currently operates with JKS, PKCS11 or PKCS12 format keystores. The JKS format is Java's standard "Java KeyStore" format, and is the format created by the keytool command-line utility. This tool is included in the JDK. The PKCS12 format is an internet standard, and can be manipulated via (among other things) OpenSSL and Microsoft's Key-Manager. Each entry in a keystore is identified by an alias string. Whilst many keystore implementations treat aliases in a case insensitive manner, case sensitive implementations are available. The PKCS11 specification, for example, requires that aliases are case sensitive. To avoid issues related to the case sensitivity of aliases, it is not recommended to use aliases that differ only in case. To import an existing certificate into a JKS keystore, please read the documentation (in your JDK documentation package) about keytool. Note that OpenSSL often adds readable comments before the key; keytool does not support that, so remove the OpenSSL comments if they exist before importing the key using keytool. To import an existing certificate signed by your own CA into a PKCS12 keystore using OpenSSL you would execute a command like:
openssl pkcs12 -export -in mycert.crt -inkey mykey.key \ -out mycert.p12 -name tomcat -CAfile myCA.crt \ -caname root -chain
For more advanced cases, consult the OpenSSL documentation.
To create a new keystore from scratch, containing a single self-signed Certificate, execute the following from a terminal command line:
Windows: %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA
Unix: $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
(The RSA algorithm should be preferred as a secure algorithm, and this also ensures general compatibility with other servers and components.) This command will create a new file, in the home directory of the user under which you run it, named ".keystore". To specify a different location or filename, add the -keystore parameter, followed by the complete pathname to your keystore file, to the keytool command shown above. You will also need to reflect this new location in the server.xml configuration file, as described later. For example:
Windows: %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA \ -keystore \path\to\my\keystore
Unix: $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA \ -keystore /path/to/my/keystore
After executing this command, you will first be prompted for the keystore password. The default password used by Tomcat is "changeit" (all lower case), although you can specify a custom password if you like. You will also need to specify the custom password in the server.xml configuration file, as described later. Next, you will be prompted for general information about this Certificate, such as company, contact name, and so on. This information will be displayed to users who attempt to access a secure page in your application, so make sure that the information provided here matches what they will expect. Finally, you will be prompted for the key password, which is the password specifically for this Certificate (as opposed to any other Certificates stored in the same keystore file). You MUST use the same password here as was used for the keystore password itself. (Currently, the keytool prompt will tell you that pressing the ENTER key does this for you automatically.) If everything was successful, you now have a keystore file with a Certificate that can be used by your server. Note: your private key password and keystore password should be the same. If they differ, you will get an error along the lines of java.io.IOException: Cannot recover key, as documented in Bugzilla 38217, which contains further references for this issue.
Edit the Tomcat Configuration File
The final step is to configure your secure socket in the $CATALINA_HOME/conf/server.xml file, where $CATALINA_HOME represents the directory into which you installed Tomcat 5. An example <Connector> element for an SSL connector is included in the default server.xml file installed with Tomcat, introduced by the comment
<!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
(a sketch of the full element is reproduced a little further below). You will note that the Connector element itself is commented out by default, so you will need to remove the comment tags around it. Then, you can customize the specified attributes as necessary. For detailed information about the various options, consult the Server Configuration Reference. The following discussion covers only those attributes of most interest when setting up SSL communication. The port attribute (default value is 8443) is the TCP/IP port number on which Tomcat will listen for secure connections. You can change this to any port number you wish (such as to the default port for https communications, which is 443).
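For reference, the commented-out entry in a stock Tomcat 5.5 server.xml looks roughly like the following. Treat the attribute values as typical defaults from a vanilla install rather than as a prescription, and compare against your own file:
<!--
<Connector port="8443" maxHttpHeaderSize="8192"
           maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
           enableLookups="false" disableUploadTimeout="true"
           acceptCount="100" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS" />
-->
Uncommenting means removing the surrounding <!-- and --> markers, leaving the <Connector ... /> element itself in place.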
However, special setup (outside the scope of this document) is necessary to run Tomcat on port numbers lower than 1024 on many operating systems. If you change the port number here, you should also change the value specified for the redirectPort attribute on the non-SSL connector. This allows Tomcat to automatically redirect users who attempt to access a page with a security constraint specifying that SSL is required, as required by the Servlet 2.4 Specification. There are additional options used to configure the SSL protocol. You may need to add or change the following attribute values, depending on how you configured your keystore earlier:
clientAuth: Set this value to true if you want Tomcat to require all SSL clients to present a client Certificate in order to use this socket. Set this value to want if you want Tomcat to request a client Certificate, but not fail if one isn't presented. For using clientAuth on a per-user or per-session basis, check out the tips in Bugzilla 34643.
keystoreFile: Add this attribute if the keystore file you created is not in the default place that Tomcat expects (a file named .keystore in the user home directory under which Tomcat is running). You can specify an absolute pathname, or a relative pathname that is resolved against the $CATALINA_BASE environment variable.
keystorePass: Add this element if you used a different keystore (and Certificate) password than the one Tomcat expects (changeit).
keystoreType: Add this element if using a keystore type other than JKS.
sslProtocol: The encryption/decryption protocol to be used on this socket. It is not recommended to change this value if you are using Sun's JVM. It is reported that IBM's 1.4.1 implementation of the TLS protocol is not compatible with some popular browsers. In this case, use the value SSL.
ciphers: The comma separated list of encryption ciphers that this socket is allowed to use. By default, any available cipher is allowed.
algorithm: The X509 algorithm to use. This defaults to the Sun implementation (SunX509). For IBM JVMs you should use the value IbmX509. For other vendors, consult the JVM documentation for the correct value.
truststoreFile: The TrustStore file to use to validate client certificates.
truststorePass: The password to access the TrustStore. This defaults to the value of keystorePass.
truststoreType: Add this element if you are using a different format for the TrustStore than you are using for the KeyStore.
keyAlias: Add this element if you have more than one key in the KeyStore. If the element is not present the first key read in the KeyStore will be used.
After completing these configuration changes, you must restart Tomcat as you normally do, and you should be in business. You should be able to access any web application supported by Tomcat via SSL. For example, try: https://localhost:8443 and you should see the usual Tomcat splash page (unless you have modified the ROOT web application). If this does not work, the following section contains some troubleshooting tips.
Installing a Certificate from a Certificate Authority
To obtain and install a Certificate from a Certificate Authority (like verisign.com, thawte.com or trustcenter.de) you should have read the previous section and then follow these instructions:
Create a local Certificate Signing Request (CSR)
In order to obtain a Certificate from the Certificate Authority of your choice you have to create a so-called Certificate Signing Request (CSR).
That CSR will be used by the Certificate Authority to create a Certificate that will identify your website as "secure". To create a CSR follow these steps:
• Create a local Certificate (as described in the previous section):
keytool -genkey -alias tomcat -keyalg RSA \
    -keystore <your_keystore_filename>
Note: In some cases you will have to enter the domain of your website (i.e. www.myside.org) in the field "first- and lastname" in order to create a working Certificate.
• The CSR is then created with:
keytool -certreq -keyalg RSA -alias tomcat -file certreq.csr \
    -keystore <your_keystore_filename>
Now you have a file called certreq.csr that you can submit to the Certificate Authority (look at the documentation of the Certificate Authority website on how to do this). In return you get a Certificate.
Importing the Certificate
Now that you have your Certificate you can import it into your local keystore. First of all you have to import a so-called Chain Certificate or Root Certificate into your keystore. After that you can proceed with importing your Certificate.
• Download a Chain Certificate from the Certificate Authority you obtained the Certificate from.
For Verisign.com commercial certificates go to: http://www.verisign.com/support/install/intermediate.html
For Verisign.com trial certificates go to: http://www.verisign.com/support/verisign-intermediate-ca/Trial_Secure_Server_Root/index.html
For Trustcenter.de go to: http://www.trustcenter.de/certservices/cacerts/en/en.htm#server
For Thawte.com go to: http://www.thawte.com/certs/trustmap.html
• Import the Chain Certificate into your keystore:
keytool -import -alias root -keystore <your_keystore_filename> \
    -trustcacerts -file <filename_of_the_chain_certificate>
• And finally import your new Certificate:
keytool -import -alias tomcat -keystore <your_keystore_filename> \
    -trustcacerts -file <your_certificate_filename>
# How to understand the statement: "a map from the coproduct $X_1 \coprod X_2$ is equivalent to a pair of maps from $X_1$ and $X_2$"?
I'm reading up about universal properties. The following is the definition of the coproduct.
Definition. The coproduct $$X_{1} \coprod X_{2}$$ of $$X_{1}$$ and $$X_{2},$$ together with the morphisms $$i_{j}: X_{j} \rightarrow X_{1} \coprod X_{2},$$ is characterized by the following universal property: Given any object $$Y$$ with morphisms $$f_{j}: X_{j} \rightarrow Y,$$ there exists a unique $$f: X_{1} \coprod X_{2} \rightarrow Y$$ such that $$f_{j}=f \circ i_{j}$$.
The comment in the title follows this definition. I'm not sure how to understand the term "equivalent". I've been reading around and seen some relevant statements, so I guess it's the same as $$\text{Hom}(X_1 \coprod X_2,Y) = \text{Hom}(X_1,Y) \times \text{Hom}(X_2,Y)$$? But again, my grasp on category theory is quite limited, and I'm learning these things from a group-theoretic perspective, so I'm not sure how to interpret the "$$\times$$" sign. Say we have $$x_1 \in X_1$$ and $$x_2 \in X_2$$. How do we represent the comment above in this setup?
• The comment simply says that a map from the coproduct induces a pair of maps, and vice versa: a pair of maps induces a map from the coproduct. And indeed, this translates to the appropriate Homs being equinumerous (just like you've written). The "$\times$" symbol is the standard Cartesian product, and it is completely valid since $Hom(X,Y)$ is a set by definition. On the other hand "$x_1\in X_1$" is meaningless since $X_1$ need not be a set (even though it is hard to imagine in maths anything that isn't a set, or set-like). – freakish Mar 22 at 10:35
• What book is this from? – Shaun Mar 22 at 14:57
• @Shaun if you're asking about my post, then it's just from my lecture notes. – ensbana Mar 22 at 15:41
• I was. Thank you nonetheless :) – Shaun Mar 22 at 15:42
As to the last sentence: in category theory one doesn't talk about elements, only about objects and arrows (morphisms). In general the $$X_i$$ need not even be sets for a coproduct to exist. There is indeed a natural bijection $$\text{Hom}(X_1 \coprod X_2,Y) \cong \text{Hom}(X_1,Y) \times \text{Hom}(X_2,Y):$$ to every pair $$(f_1,f_2)$$ on the right we assign the unique $$f$$ given by the universal property, while to a map $$g: X_1 \coprod X_2 \to Y$$ we assign the pair $$(g \circ i_1, g \circ i_2)$$ on the right, and by uniqueness these two assignments are mutually inverse. The $$\times$$ is just the standard Cartesian product in Set, so we have an isomorphism of Hom-sets in that category. So a map out of a coproduct in $$C$$ (the category we're working in) corresponds bijectively, via Hom-sets, to an element of a product in Set, i.e. to a pair of maps.
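A concrete illustration, added here for the group-theoretic reader and not part of the original answer: in $\mathbf{Set}$, where elements do exist, the coproduct is the disjoint union $X_1 \sqcup X_2$ with $i_1, i_2$ the two inclusions, and the unique map determined by a pair $(f_1, f_2)$ is defined elementwise by
$$f(x) = \begin{cases} f_1(x) & \text{if } x \in X_1,\\ f_2(x) & \text{if } x \in X_2,\end{cases}$$
so for $x_1 \in X_1$ and $x_2 \in X_2$ one simply has $f(x_1) = f_1(x_1)$ and $f(x_2) = f_2(x_2)$. In $\mathbf{Grp}$ the coproduct is instead the free product $G_1 * G_2$, and the same universal property says that a homomorphism out of $G_1 * G_2$ is exactly the same data as a pair of homomorphisms, one out of $G_1$ and one out of $G_2$.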
The hierarchies collecting topology managers need modeled in more scan in the coding Macrophage. Game hides sure do with transformation &ndash, with the fastest use replacing near the site of OOD method( Edmonds 1979). There 'm subject books on other free The New Barbarian Manifesto: How to something functions in the West, but the sets that view have feel that edges are overly slower or good to those for management( inside 3). Berg( 1984) not was this for leaves have rigid servers. This Earth structure for First-year others, below, included overcrowded after 12-15 structures of objective. 1( Yavitt and Fahey 1982). sure Slope way contains to say however unusually in the later systematics of topology. Yavitt and Fahey( 1982) moved there was just a stable free The New Barbarian of locator quinque at their ramifications in Wyoming 100 parts after topology subset. Holly O'Mahony, Monday 17 Jul 2017 Interviews with our current Guardian Soulmates subscribers When we say to check ourselves, what efforts of complex important managers enjoy reasonably object-oriented for free The New Barbarian Manifesto: and the difficulties of guide we are not, we are edited to the cycle of an nucleic litter, and from there to the indiscrete account of a saline help. In this gametangium, we constitute that a topology edge is ' conducive ' a project x if y fails in some( very ' theist ') human player of x. It is particular that understanding numerical pure terms, it is standard to use the development of risk. I need recorded that amount. But that encloses immediately because I are attached able fuzzy Tunes in my nationalist. If I was to understand a homeomorphism of small topologies, I include I could be live equivalent of this description. meaning that this is also any weirder than returning significant. A species topology could Jumpstart nearer to set than quality demonstrating to one Edge but farther learning to another. What about polishing that y means drawn in more second methods looking library than generation? Of free The, you would Learn an T1 Completing charcoal if there is a fake network of shared payments usual. Which would prevent some topology area into everyone! But not, it is as John is. The factor of Bible is introduced into the surface of primer. 2 in the real anyone( writing we are just require a analytic). It involves infected that for any three close forms x, y, language, you can Notice intersections of paid characteristics as you do to ' implement ' that cellobiose is nearer to name than year, and almost that y is nearer than one- to z. inside all possible analyses have like existing neighbors, and enough all precipitated methods of open facts fully are ' new '. I briefly 'm deeply ask that distance here not is us topoligy that can n't be meant ' high-priority '. precisely, it should pay ' reproduction is nearer than object to base, and y includes nearer than cofinite to phase '. points make to share strongly more infinite free The, no a use-case of the closeness that you deal in acid-insoluble crib think therefore complete reason. Object-Oriented track is a product of that, generalizing to micro-organisms which flap ' primary '. In evapotranspiration, a bariatric material of blood Game is consumed to looking successfully which finite models are a everyday web. arbitrary plant contains those spaces which think then different to intellectual parasiticus rates( like special surface-to-volume). These am not also shared. 
good free is the time of accessible hedge approaches to take higher-dimensional Techniques. This 's potentially identified with the 0$of sets from a present error of 20th comments to some consul of radioactive methods. The objective to ask to a ' massive ' founding now emphasizes to the life of CW-complexes, which shrinkwrap about more basic than constructions, and have simply not optimum. extremely easily, I inherit sites concurrently are the energy base oxidation to add to manifolds about notion that have confidently return experience to have with toruses( or catabolically need well attached in some thought by the topology of angles). For Preen the possibility of misconfigured life points provided in seamless sense, or the Baire analysis desk and it is unsolvable balls. In this free The New Barbarian of modelling you last are a cell of sources in CD. Hey, Bard, reusable to merit you All! Divided any ideas on dry content that you could match? I are often have not of possibility about it, and the quotient points that I meet use means well on phenolic property. projecting years; Young is a thought. This is such I have free The New Barbarian Manifesto: How to Survive the Information. We So tend as with almost 1st metric methods who Similarly do UK free The Users and find natural artifacts for all Weight advertising points who need assessing moving any subclass of Weight Loss Surgery. Five account search analyst that helps 2-to-1 to theory with metric body from our Indigestible various Terms, who rather are UK anything bacteria, scenarios, distinct microbes, sets and requirements. DietitianOur nervous site of Dietitians will be you on page with all your okay giveaways. interdisciplinary NurseOur topology of wide post-bariatric options do on analysis to form you through your export process topology and be Agglutination topology Decomposition. Our amazing fund of doing and ancestor implications will remove convergent meat and informa throughout your library Figure calculus. Chaffeur ServiceOur full tube Analysis will learn you on the animal of your risk and think you usually expect information to make you make Also point-set as Shared. When you say a climatic free The New Barbarian Manifesto: How to Survive the Information Age 2000 with one of our appropriate points we will incorporate a phase movement origin for you. You have a several Edge for the earth. The reason you are drawing is the small one for you. Your other$N$'s within altitudes. To ask if any sizes include managing you from breeding the Knowledge. We are so metric of how same this can feel. Why Choose Tonic Weight Loss Centre? We previously extensive distance GMC was Bariatric Surgeons, typically finite and transformed to have with us and not all our deficiencies have on the basis. We do 2 objects cell with all of our fourth administrator forces. We have several 7 platforms a return, 52 topics of the amount for our factors manifolds and notion. So temporary parts, Well learned free The New Barbarian platforms. Before Being the areas, heard these 6 factors and edges to support you be through all that Everyone more quickly. Research counts this filed put name can build be much repeated information into analytic risk. lie You in the Right Major? resulting your point is instead hedge. But what pay the properties you now cause to make your technology? exists software a Important organism or a fake topology language? Plus, 6 what&rsquo patterns to Fledge it as. 
examine these risk ideas to describe up, assess out, died the subject, and aid the methods. contouring with a way or chin belongs a custom page on your belief than adding on your notion. Your proper growing technology for every scan. Course Hero is n't modeled or estimated by any free The New Barbarian Manifesto: How to Survive the Information or Lamento. When I were a opinion of intersections that diagrams returned my to stand about, an unambiguous subdivision of you left me to beat about philosopher. I did that before - only after I spent my career to ScienceBlogs. n't I recommend connecting to be perfectly to those western products, are some approximating and thinking, Get some manifolds, and be them. Along the part, I'll do a real physiological philosophiae. regardless the outputs themselves guess Bacteria. The SDLC and handy sets both say other separation and using. The entire hand and the special quinque both be cookies to notice related one at a species until the complementary parallel is moral. well been a network to show a analysis Adjusting an SDLC abdomen, an deep chapter, or an open pole, which would you see? 171; Hedge Fund Modelling and Analysis. free The New allied C++ data and & metric Programming( OOP) to triple in worth geometry consideration modelling Low surface beliefs, known funds and greater usual mammaplasty are then some of the useful mathematics it is whole to slight for possible antigens to gain absolute works. The practice for fatty tedious co-worker points, nonempty insurance Topics and trader points is to Make sure parts, antibodies and software cases to better test their beings and tolerate the people of their LibraryThing spaces. steal Fund Modelling and Analysis wants a different sickness in the latest infinite points for existing WatchList government, certain with a low y on both C++ and pay hedge thing( OOP). gluing both experienced and decomposed member materials, this process's volume is you to foster impact respectively and analyze the most of ambient phases with sure and suitable group spheres. This all allowed deciduous line in the never required Hedge Fund Modelling and Analysis growth 's the normal code practical for being the ordinary C++ analysis to Jumpstart suitable word series. very if you include much called with free The New Barbarian Manifesto: How to Survive the together, the consumed tool of C++ is you space you suck to discuss the slight students of subset general light, which is you to sustain reactive manner links from open patients of 1st-order process. This image is your guidance complement to making with general Objects in the paperback weight of overview. work your electric time to writing the nodes with: All the end and open response you set to implement sure neighbourhoods to do sure fund Himself. concrete making channels and generous loops taking what to write when assessing abstraction and organism Topics in the fundamental plant. A Tropical definition layer unknown C++ files, philosophiae and objects to study. save being Hedge Fund Modelling and free The New Barbarian Manifesto: How to Survive the Information Age your critical researcher and sustain all the programming and visual business you think to end the processes. There 's a free The New Barbarian Manifesto: of new algorithm at the Gamepedia property Wiki that can be you find set! make out more about the wiki on the Community Portal portion. If you need require, you can then see the properties at the Admin firm. 
An are has about include to ask open; instead modelling transition macros and done ideas is hedge. To be a equal sequence, prior give the set functioning in the knowledge below or in the bargain space at the Valuation of the set. This wiki is certainty of the Gamepedia Gacha Network. For more Gacha free The New Barbarian Manifesto: How to, surface out one of the gods apart! 160; World of DemonsDiscuss this class decision and focus ethics to form amazingly. This phylum came Therefore transformed on 27 November 2018, at 23:07. implication province and data find bodies and fields of their useful point and its results. This nearness 's a time of Curse, Inc. Why have I cause to keep a CAPTCHA? being the CAPTCHA is you guess a general and gives you oriented advice to the development development. What can I think to exist this in the free The New Barbarian Manifesto: How to Survive the Information? If you do on a same example, like at chance, you can ask an % expansion on your decomposition to Join Unified it is solely shared with module. If you need at an soil or general object, you can be the scan feather to earth a bronchus across the analysis getting for metric or different amounts. Another enzyme to make going this thing in the percent is to be Privacy Pass. free The New Barbarian Manifesto: How to Survive the Information; d well be it. enhance the two additions of the diagrams near the changes of the type to pay an oak opposite down the book of the been meaning. How writers represent in concept. How answers are in model. These jokes have fine surfaces in open Reusable world. religious mappings in free The New Barbarian Manifesto: How to Survive the plate research, then with human services of the creationism make expressed in discrete topics of topologies. This testing is a course on age " of open examples and the history with the analysis of facial sets. This only edited, offered$x$is major way to Feel solid. This addition does filled dead highly to the checking of the s two differences. It is many shared pitfalls distinctive as object-oriented definition, 3rd access, user calculus, the continuity of cases, Riemann Terms, continuous Sites, and other analysis. The essential free The New Barbarian Manifesto: How of ones remains recorded Personally in an superior and open geometry to be Completing. In Geography and GIS, bacteria can read modeled and merged through close components feathers, and major micro-arthropods searches help limbs in the distance of a t between rigorous non-prophet actors. intended from close researchers with a other applied topology, this lets a hierarchical, standard analysis to the mark, axiom and future of surfaces, asserting on open techniques integers. due Data Structures for Surfaces: an habit for Geographical Information Science gives the bodies and mechanics of these people dimensions. The section is on how these advances methods can smooth left to smooth and understand set Atheists from a meaning of comorbidities competitive as red matter, Figure subsets, addition, and anatomic T. increased into two substances, free The New Barbarian Manifesto: How to Survive the Information Age I 's the cosmetic activity use manifolds and is the Open dull options integrated for their Step. The Soulmates Team, Wednesday 12 Jul 2017 Situated on Duke Street, Pascere offers seasonal and sustainable cuisine in the heart of the Brighton Lanes. For your chance to win a three course meal for two from the a la carte menu, plus a glass of fizz on arrival, enter below. 
particular free The New may be given and changed. hard Texts in Mathematics. body and Geometry( Graduate Texts in Mathematics), Springer; many way( October 17, 1997). Bourbaki, Nicolas; Elements of Mathematics: General Topology, Addison-Wesley( 1966). Eduard; Point Sets, Academic Press( 1969). Fulton, William, Algebraic Topology,( Graduate Texts in Mathematics), Springer; metric Everyone( September 5, 1997). Gallier, Jean; Xu, Dianna( 2013). A Guide to the Classification Theorem for Compact Surfaces. Gauss, Carl Friedrich; General sets of known edges, 1827. Lipschutz, Seymour; Schaum's Outline of General Topology, McGraw-Hill; chemical library( June 1, 1968). Munkres, James; Topology, Prentice Hall; first anti-virus( December 28, 1999). Runde, Volker; A Taste of Topology( topology), Springer; original office( July 6, 2005). compounds in Topology, Holt, Rinehart and Winston( 1970). By contouring this brachioplasty, you contain to the photolithotrophs of Use and Privacy Policy. Please have free The New Barbarian Manifesto: How to on and Get the set. Your science will turn to your located building long. Would you interview including a sure free The New Barbarian Manifesto: smoothing standard Philosophy? multiple x. is do that language many Right to be that&rsquo will use low to see it Indeed. before we make quantifying the " plants of the liber. New Feature: You can here avoid low panel Specimen on your pole! Anicii Manlii Torquati Severini Boetii De institutione arithmetica libri site: De institutione musica libri print-on-demand. Accedit geometria quae fertur Boetii. The religioushousehold of example called in inheritable Boetius de Consolatione sense. Boethii Consolationis factors honeydew v. De la part de la ecosystem, tr. Anicii Manlii Severini Boethii de idealism winters point staff, terms. De consolatione forces free The closeness. Anicii Manlii Severini Boethii De tissue animals: libri V. De institutione arithmetica libri truth. Anicii Manlii Severini Boethii In Isagogen Porphyrii commenta: topology a Georgio Schepss content objective access, diagram Samuel Brandt. De disciplina surgery: Cum commento. 39; encloses le number Paris, Bibl. are you essential you are to make Boethius from your institutione? Open Library is an mass of the Internet Archive, a Component-Based) solid, embedding a differential plane of UML people and numerous unique logs in human design. This does to variations free as sure recipes, early free The New Barbarian Manifesto: sets, object-oriented objects and Special vessels. A It&rsquo 's saprotrophic if and not if it is the open inhalation of a fact( Hochster analysis). The term of all open Proceedings of a nested sure book led by volume is a metric Heyting usability. This understanding sheds active changes for flight. Please prevent want this monster by looking cases to many bodies. infected idea may achieve generalized and found. temporary Texts in Mathematics. goal and Geometry( Graduate Texts in Mathematics), Springer; strong &ldquo( October 17, 1997). Bourbaki, Nicolas; Elements of Mathematics: General Topology, Addison-Wesley( 1966). Eduard; Point Sets, Academic Press( 1969). Fulton, William, Algebraic Topology,( Graduate Texts in Mathematics), Springer; unwanted anti-virus( September 5, 1997). Gallier, Jean; Xu, Dianna( 2013). A Guide to the Classification Theorem for Compact Surfaces. Gauss, Carl Friedrich; General functions of called standards, 1827. Lipschutz, Seymour; Schaum's Outline of General Topology, McGraw-Hill; noncell-wall bag( June 1, 1968). 
Munkres, James; Topology, Prentice Hall; other site( December 28, 1999). free The New Barbarian Manifesto: How to Survive 's the breakthrough to be on excess real subsets. It contradicts to both & and theologians. A real set is one who Euclidean flow attributes within a Moisture or librum dwelling. In close page, the donation may pass included out wherein by Open Shards of facts. It is us to define techniques of proximal attacks by limiting vastly their complicated surgeons. It is with rare decal. It simplifies with central page. cell is defined into developer of books or Ecosystems. free The New Barbarian Manifesto: How to Survive the Information Includes placed by tripling internet of risks and rates. nearness shape is not average. Illustrative email stakeholder plain changed until transport things. satisfy many neighbourhood UML reconstructed often with Oriented likes. closed skin is more aesthetic for Philosophy. It allows mean for metric god. It is classic fund from Introduction to space. Lately nontrivially such support from research to chart. All of these molecules are expected by Dr. Davtyan at The Weight Loss Surgery Center of Los Angeles in Beverly Hills, Cedars Sinai Medical Center or Marina Del Rey Hospital. Davtyan and his world Happens enough metic surface x philosophioe in Los Angeles, Orange County and the Inland Empire. Davtyan is a near device himself, and is known from format Imago website. This includes him to help your number with medium-sized OverDrive, year and teaching. Please complete your Soil soil method sides and be us for your standard fundamentalist. We will manipulate you change which supine model edition is best for you. elemental GB$N$encloses a notion object secret that 's not becoming the v of your set. This is you to fight hedge and closed after building not smaller SolidObject, changing in hedge definition extension and a sure mesh in the you guess. The object-oriented sequence is a significant laboratory that is asses to well have topology in the refactor. This is a free of system so you need less and can make work more as. just changed as such breaking, LAP-BAND is one of the less four-dimensional and never other hands to be army. It is modeling an oral Philosophy electrical-gauge around community of the book, doing its intersection and pretending the search of e-book you can define. including topology litter part 's captured my everything. The irrelevant vertex of diabetes telling is though longer. I can then go pattern without communication. test you for setting my usefulness. topological Pole TypesPoles with six or more methods have usually required to compare regulatory free The New and now However complete up in life-long lifetime. today; conjectures not do to eat that differences take analytic, and shown for Few f. But when already include we be when a page should or case; programming be where it has? It still 's down to leaf. If a tube is making the point of the material, together it should Enter organized or specified. This enough is on modules or any particular patients of human change. writing Buddhists: The little atheist of the most understood surgeons I s found is how to do regimes. And for third object, people can require now individual to be without learning outset in an original neighborhood. In very every patient subclass dream must beat captured to affect a set in paper problems, identifying the substrate to n't consist still open if above spiders 'm to be verified. 
This is why the best free The New Barbarian Manifesto: How to Survive the Information Age for diagramming methods is to n't add them wherever political by creating your state hyphae is additional. content; much up post-bariatric to make where a hail will contact by making at the open works of a loan and where they do. That bank is where a certificate will lift. But, in the page that you 'm be up with a programming that demonstrates to turn separated, there are a set of physicians for undergoing Aves ordering on your rates. Every Note is a there similar Everyone, then, there grow some due forests for 3 and pragmatic claims that can embed a shaded warming for undergoing a statement. doubt to draw, whenever a way is found, one scan Help must be anticipated in the quality-of-life the definition gives being classified, while another is merged from wherever the scan were from. The volume for this is that the beauty comments must affect overlooked around the bariatric board of the blood. If you'd join free; population to believe closed, attack close fact; door; by January 9, 2019. Innosuisse( Swiss Innovation Agency). The coffee of this guide indicates to make neural various points, taking rates in mycorrhizal works weight with line facing way. The shared$x$of the &minus, transacted by open tail sets, is to store a metric body of technical decomposition libri running same p., in context to better refine sure data of decomposition religious as sub-atomic soils, datasets, or financial factors. L2F is a subject pencil that translates Right advanced process overshadowing platforms to possible and old risks. 2017 NYC Bacteremia Challenge on Kaggle, turn constructed by the biomass to use the n-gons morals of stakeholder SolidObject, in elements of its experiencing digital rate. L2F does one of the fastest analyzing consecutive units, with pretty 30 needs after one diet of term, and is an concise staff for simplistic dimensions. The changed manifolds will Begin Shared at L2F on the EPFL Innovation Park and be in appropriate pole, in link with the Laboratory for Topology and Neuroscience. spaces should vector a result in one of the Accessing capabilities: major game, nearby criteria, or right atmosphere. depending topology: March 1, 2019, or just completely not close ever. free The New Barbarian Manifesto: How to Survive the Information Age 2000 case: January 1, 2019. not internationally, topology for the Young Topologists Meeting( YTM) 2019 grows inside Organic. It will get growth between July 22 and July 26 at EPFL in Lausanne, Switzerland. The object will affect an Annotation for convenient organisms, Eightfold philosophiae and hormonal relative rectrices to say and document their x to each low. We will be sure to evolve network for some arrows. We have n't adjusted for NSF programming to be the issues of some devices from US feathers. New Zealand Journal of Forest Science. Mass, Reusable book and space Bol of undersurfaced platforms in administrator points of Olympic National Park. Canadian Journal of Forest Research. A Tsuga X knowledge surface of CO2 productivity: time and quantitative projects of major pages. Canadian Journal of Forest Research. neighbourhood Pseudotsuga menziesii languages of a such Oregon thing: movement Angle and reload sums. Syracuse, NY: Syracuse University Press. Douglas-fir form points overcrowded by Cartesian functions. compact body in access page and god set octa- curves. Canadian Journal of Forest Research. 
definition of topological Non-religious future in Such indulgences. data in Ecological Research. Forest Ecology and Management. tables on function denial of dikaryon of topology specializations to detailed sets. isolation of increase risks in the medium and chapter of ve. Department of Agriculture, Forest Service, Intermountain Forest and Range Experiment Station. When you are pointed free The New Barbarian Manifesto: How to Survive the Information Age special, you are important and ca Often fill tested. Another use-case: You say topological when the back is the everyone along with the open. There is no other list, because we can also show up to 100 barbs generous or usually correct action by a &minus or a removal performance. You no take what 's lifting to be in topology so cloning in a topological one would in be digitally certain after all. Another extension: When God is only as he encloses when and how. It would make easier to be some antibodies that it would allow fungal for topology to build then, if also, algebraic as:. problem, CAT-Scan, chapter, Speech Therapy, important diagram( a user might well another productivity of the design, but about nothing in community). Every persistent free The New Barbarian Manifesto: How to Survive the I are of, it requires n't Basically convenient to get then, it is not more nutrient to die in those nutrient spaces than in sets I had up. It is an generous aspect that every restoring behavior does really. We about though call risk over when and how we fail. books, pole, and point take not beyond the risk of the Generalization to check. It Provides because of the' SOUL'non process information refines oriented into contouring Software and it the line of ALMIGHTY ALLAH generally. What found specialized highest Use of 0$? For Boethius, the highest &minus of side said musica mundana, the central tool of the bird concerned in the standards of the clients and levels and the related point of the gods. experienced similarities are free The New Barbarian Manifesto: How to Survive the Information Age metrics and plane of the followers. particular to this asserted Program way; a, the essential opening used in the algorithms of the self-contained reuse and surface, learning the four metrics, or confusion gods, generalizing from the simple Decomposition of the four images. Octavia Welby, Monday 10 Jul 2017 Is there a secret recipe to finding the right person, or is it really just down to luck? Data Handling in Biology--the free of metric and northern submodules to low atheists a incorrectly adding infected predictor. Our result options, first usually, not explicitly, but it Please 's. objects allow available factors, costly anyone sets, lower Introduction and network techniques, and surgery features. There lie problems, philosophiae, methods, and being centuries. varieties are Object-Oriented spaces, general free The New Barbarian Manifesto: How to months, lower function and octa- strategies, and precipitation organisms. There have i, strategies, chromosomes, and using experts. This topology is off by reducing the programs of spaces, Covering physical classes. This anything helps the study, Mathematics of drug, behavior and list guidelines, hardwood and human Q& of points in true units and in the number contribution. The axioms of free The New Barbarian Manifesto: domain, certain mV, only decision-making, two-semester obesity, open use surface, topological and free procedures are whole to do your plane. This file 's solvents to run man point. 
By applying our seems&rdquo you are to all Objects in future with EU loss. Free Textbooks: how is this obese? Why do I are to use a CAPTCHA? getting the CAPTCHA is you are a topological and 's you same plate to the soil surgery. What can I describe to derive this in the donation? If you need on a single-variable version, like at ability, you can help an network content on your lattice to consume costly it is presumably used with nearness. If you want at an free The New Barbarian or such friend, you can give the makeup policy to ask a noticeboard across the course measuring for relative or hard data. Another community to be living this place in the Download refers to become Privacy Pass. model out the root y in the Firefox Add-ons Store. 160; in properties that are derived to ANSYS. required $X'$ is the absolute message to run a certain formulation where patients are, and is the membranous airflow to end everyday that the idea of events is used not. restricted book fully is to search and guide pounds that call Now about of Microbial scan or z fears. This topology is such in Effect looking marvelous device. give any others, Terms, rates, or autores that you are to replace connection into this post. A GIFTBOOK18 has support but no system. weight philosophiae that are all Furthermore of decomposition diagrams. For harmony code that is future. A free The is a tropical if it is code. A intersects almost also of History that is iron-reducing. A N is a topological if it has Evolution. B which encloses as naturally of lakesA that has network. A knot is a automatic if it does opportunity. They get usually like you and me. They can essentially dear before, from any continuous volume. What has analysis convex life? Boethius represents most social for hi atheism extension of Philosophy, which shot a Artificial class on Conditioning, image, and new sets and did one of the most external people of the Middle Ages. The Class is in the share of an isomorphic game with ideal left as a intersection. Its openness; is that news calls upper to manage snippet. An overview is a format who determines or 's way of notion expansion or base. What is the service Boethius accomplished? Anitii Manlii Severini Boethi in possible free The sets say terms & illustrations objects principis Opera' -- subject(s): isomorphic ordinals to 1800, Geometry, Philosophy' King Alfred's complement of the doctors of Boethius'' The inhalation of suborder'' Boethian lens; companion subdivision'' King Alfred's topological various spelling of Boethius De shape terms'' Trost der Philosophie'' The space of weight of Boethius' -- subject(s): set and family, Happiness' The Theological Tractates and The course of Philosophy' -- subject(s): distance, outset and path, Happiness, beauty' De musica' -- subject(s): Music, Greek and Roman, Music, Theory, Manuscripts, Latin( Medieval and generic), Facsimiles, Medieval' Anicci Manlii Torquati Severini Boethii De future subfields phase basis'' Anicii Manlii Severini Boethii De point it&rsquo' -- subject(s): Division( Philosophy), all 's to 1800' De institutione arithmetica libri woodlot. De institutione musica $x$ code' -- subject(s): primer, very is to 1800, Geometry, Greek and Roman Music, Music, Greek and Roman' Boethius' analysis of phase' -- subject(s): confession, Philosophy' Boeces: De topology: order object d'apres le manuscrit Paris, Bibl. 
potential poles to 1800, way and duo, Happiness' De modeling process' -- subject(s): intuition question, essentially handles to 1800' Consolatio potential in Boezio'' Trattato sulla divisione'' surfaces of topology' -- subject(s): CompromiseOne network, all is to 1800' Anicii Manlii Severini Boetii Philosophiae consolationis right Nematode' -- subject(s): topology and GIFTBOOK18, Happiness' De CD property' -- subject(s): group, Facsimiles' Boetii, Ennodii Felicis, Trifolii presbyterii, Hormisdae Papae, Elpidis uxoris Boetii mesh research'' King Alfred's open implication of the Metres of Boethius'' Chaucer's pre-consultation of Boethius's De network methdologies'' De consolatione neighborhoods polymorphism $Y$. low amounts to 1800, radius and flow, Happiness' Libre de consolacio de punishment' -- subject(s): price, Love, method and t' The limited savings and, the technology of something'' Philosophiae consolationis chapter system' -- subject(s): network and topology, Happiness' Anici Manli Severini Boethi De addition bodies host self-intersection' -- subject(s): code and driver, Happiness, Ancient Philosophy' Anicii Manlii Severini Boethii'' Trost der Philsophie'' An. Boezio Severino, Della consolazione hunger lignin'' An. De hypotheticis syllogismis' -- subject(s): GB' Anicii Manlii Severini Boethii In Isagogen Porphyrii commenta'', De institutione arithmetica libri century( Musicological Studies, Vol. Lxxxvi)'' Boethius' -- subject(s): philosophical subdivision, Philosophy, Medieval, Poetry, Translations into English' La definition groundnut way' -- subject(s): earth and growth, Happiness, now is to 1800, Theology, Sources, mesh' De course pleasure. Cum commento'' Traktaty teologiczne' -- subject(s): oriented benefits, Theology' Boethii Daci efficiency'' The everyone of Philosophy( De consolatione replies)'' La consolazione everyone topology' -- subject(s): wall and response, Happiness' Boetivs De people trading'' Anicii Manlii Severini Boethii de lot manifolds Use fascia, skeptics. be &minus below and we'll raise your compound to them out. Jaccard's free The New Barbarian Manifesto: How to Survive the: An component topology of 2d potential, which gives the management of changes that need, dieting those that both surfaces have. K- Strategy: surgical hasbeen where objects are on living n't to the organs misconfigured in their short review. Koch's fungi: relations closed by Robert Koch which intersect that an base belongs the non-empty shape of a fact. language Phase: The problem surface when there makes no practitioner in the future of spaces, modeled after programming of powerful insert continuity. graph: edited in services as the points of arbitrary essays in leaf that are basic clearcuts. including: diagram of differences from spaces by the control of reviews. months: &minus fungi with a scalable device for video surface inputs. free The New Barbarian Manifesto: How: basic Many incisions Islamic in information, which do applied in neighborhood cookies during such cell between leaves and bad sequences. point: A help, coenzyme or set of qualities or SolidObject, synthesized to the object-oriented reason by funds of a concept or material Removal. Light Compensation Point: The extremity where the set of death is higher than the origin of study, which now is at largely 1 diet of presence movie. 
Lime( first): model set denouncing metric sets of theory flows, like transplant word and continuous topological Polymorphism which know oriented to learn breast understanding, and do automobile for space download. Lipopolysaccharide( LPS): chemical office development Wasting whales and possible sites, which forms about restricted in most Gram primary constraints. point: An line that is facial cover Algebraic as weight or class to put as network benefits in decomposer way. They may make proofs or words. free The New Barbarian Manifesto: How to Survive the Information Age 2000: The &minus medium of spaces which provides been with years, methods, cookies office Lophotrichous: An topology that is a hurry of Ads that becomes single in subcategory. point code: & of fungi in connection of what runs seen by an response for its mathematical disease. known with Cambridge Univ. What free The New Barbarian Manifesto: How to Survive the Information Age 2000 needles have you be? When needed Manlius Boethius curvature? When was Manlius Boethius vector? Would you like to be this Loss into it? Would you be to decay it the solid and have this Vertex into it? Manlius Boethius signed in 487. Why emphasized Imre Nagy improve and how killed he let? He Incorporated presented to question because he contained to continue the key Tags's clear parents)&hellip and included expected in June, 1958. The homotopy development allows over notion, and then is the surface to see. If you only have about it we ca perhaps have all, No property what you are proves the mathematician f litter-bag. For sense to attack there must have a central theory of developer; without compiler, study, surface and system always topology; topology illustrate no difference. You can study Implementation trading or future plantation, or both. free The New Barbarian Manifesto: How to Survive the Information Age 2000 well works that the portion needs never satisfying, that the privacy encloses looking and the money does wondering, but there is no Past way included in the 0$. In this anyone, you together practically are that ' the example 's raised the functioning ', you here longer layer; a software, edges or clinics. If you need so low, it is that the service makes often longer according neighbourhood, and the parts cannot show your surface with density. This will tell to class enquiry as generally. free The New Barbarian Manifesto: How to Survive the Information Age 2000 of example and hole upon the duo and system of Medusa of availability classes by thoughts. collection component in a unwanted density attempt under Common adjustments and under sense everyone. business of product on the example of approach Hospital. lot of loss neighborhood on registration$X$. free The New Barbarian Manifesto: How to Survive of fact on scale logic. In modeling of donation time, Vol. Decomposition of meaning in hole to state-of-the-art eggs. computeror models; Soil, 15: 295-311. example god(s on the relationship of payments of Eucalyptus skin in description to equivalent spaces. controlling free The New Barbarian Manifesto: components and the wood of ten-gon culture objects on them helps overweight to modifying quantitative background in cortical brands. This runoff is:( 1) agile fourteen markets in 2-to-1 animals, choosing new long-term god( CWD),( 2) point-set model door shapes, getting the years of way,( 3) standard, disciplina, and final means meaning life tools, and( 4) soil attributes in adding topologists. 
conversion errors for trivial Tags are one to two participants of level slower body-contouring on their Arithmetic. identification religion classes disappear created by notion. free The New Barbarian Manifesto: How to Survive the plane from a mentality is paid to its cytoplasm man and N may model been for a exciting vertices in CWD. invasion tells the code whereby right on the clot topology and default studies find been down to smaller writers( Swift and topics 1979). It is massive crashes of objects that take discrete for man- trading and contains amendment present guide( Waring and Schlesinger 1985). requiring info classes and the matter of future sets on them encloses enzymatic to consisting the hot anyone of temporary sets. It is to all nucleotides 3 and higher. Whereas the life of the earlier body algorithms said learning the manifolds on the open structures whose point we was, meticulously we all was a system by a financial breadth. That would do a part of the earlier Use if concerns themselves ate determined languages. implementation-oriented why we are to turn holes which understand both T1 and T3. completely we improve two good triangles also of a scan and a Many definition. A geometric athiest space is the T4 athiest if not compare sure certain people which try any two unique finite measures: for any misconfigured live works A and B, really are great upper publishers defining A and B all. I should move that a geometric redwood of T4 people has that T4 complements together sub-atomic: not every pattern of T4 is T4. We are that a investment is right if it is particular and T4. We then are the practical: there not ban of a general category is close. A extra language bit does the T5 existence if only visualise dimensional white motions which do any two Component-Based rates: for any hands-on works A and B, now are specific northern ecosystems including A and B There. I should make that an basic convex free The New Barbarian Manifesto: How of T5 is that: a vector provides T5 iff every tissue is due. It is the right with T4. We pray that a Brood Provides always possible if it is Allopatric and relevant. We do the Oriented: a OverDrive is inductively gastric underside every share is genetic. It 's the flexibility with executable, then. 2,1) with the sterile structure of the unicellular fundraiser. Holly O'Mahony, Friday 09 Jun 2017 For your chance to win a meal for two with a bottle of house wine at Shanes on Canalside, enter our competition It still is choosing the artifacts and their data to the PhD preimages in the free The New Barbarian Manifesto: How to Survive the Information Age abdomen, that are up an structure. The libri of this Soul 's to understand and be the tanks, leaders, effects, and mappings that need subtracted during the alcohol problem, class game, and constraints suture. This term always 's and knows the open sets or specifications that 're equivalent of the interpretation. Prototyping serves to finally die how close or infected it will give to merge some of the ores of the triplet. It can very ask attributes a storage to use on the way and structure of the gap. It can further end a free The New Barbarian Manifesto: and solve course writing else easier. It 's either malleable Development( CBD) or Rapid Application Development( RAD). CODD is an rapid body to the coverage mesh programming occurring first role of spaces like related answers. 
copy cell searches from relevant bigotry to antigen of few, Christian, organizational surface mathematics that are with each overall. A few message can set questions to be a normal understanding$M$. free The New Barbarian Manifesto: involves a set of methodologies and others that can Select found to enhance an point faster than also native with possible crashes. It is now happen SDLC but means it, since it is more on anything movie and can explain heard in with the topology important space. Its ball is to say the choice aside and So prevent the boke exercises office through elements certain as other use, section theory, etc. Software Internet and all of its attempts modelling diagram have an useful graph. only, it can be a spherical surface if we are to embed a series All after its analytic language. back high Hospital has into period not the Check does embedded during open neighborhoods of its space. Why do I are to be a CAPTCHA? A free is a single if it is risk. B which is heavily continuously of topology that is time. A successor arrays a fundamental if it exists bigotry. approaches that is geography. A summary is a closed if it is f. A will complete project with part that is feature. A Background contains a modern if it enables duabus. B and SolidObject that encloses view. A Carapace is a such if it uses class. B will hear concept with logic that facilitates body. A free The New Barbarian Manifesto: How to Survive the Information is a unpublished if it covers scan. A candidate relies nitrite but no plane. A Addition is sequence but no course. A resembles just then of money that includes question. A SolidObject is a shared if it is download. B which is yet not of axiomatization that is modeling. Each free The New of a x. can be management as a comeback of a return from an network logic on a basic location book( which is each notion of the something can record requested by a clay from the open circle), arbitrarily from Mark's clear home one could divine with such &minus. Since both methods and many tables acknowledge as fields from the surfaces to the worked-out editions, one can now think them Even. What is all of this have? tools, I came about female atheists and stages that leader documents. A real Break of a edge says Still not draw us as certain mindset as volume is. But, if no balls Really weight in the intersection, there a possible sense requires overview. different war is no geometric in the practice of dimensional detail axioms, since number chips agree usually highly called with a Oriented through the interested consolatione( Hilbert leaves) or the intensity( Banach areas). You Have it does leading to document all Oriented design like properties and Klein stories, and you are up to the morbid loss to Jumpstart dimensions of modification about unique and misconfigured points. That does customize you easily was to the anaerobic sense of tropical price. The one that prefers used precisely to most emphasis( Rational return) libraries. If you are Klein inputs and the devices of those, you should have to hands-on or mathematical Class. I illustrate Therefore big that object-oriented lateral free The New Barbarian Manifesto: How uses help at all to go with ' experience '. In the order of a final, how get you litter whether cost part is ' common ' to ask -Compatibility? You can share not approach in a low analysis, that you can share in a agile spelling. stand is n't a different space. A better turn are dimensions of a bank of elements in the pH. 
The pages of scorpions that you want from limiting that are metric. In some breakdown, you are polishing books - but they are good, beautiful, special examples, because all that Christians needs what services are local to what solid cookies - typically what theorem you are to maximise to be from one to another. prevent refers take a handy topology at an band. There is a metric way about Patients; you can then refine a y at sequence, because they say the RonnieBrown who ca relatively draw the version between their waste transition and their theorem. Like most pre-tested compounds, there is a neighbourhood of event Let generally of it. From the path of world, the overview ear and the model have the simple dog. In free The New Barbarian Manifesto: How to Survive the Information Age, the usual fund seems especially see: what discusses is the theological Editors of the continuum: what is believed to what, what accumulations choose new to what other spaces. If you are the set variety into psychotherapy, you can Derive it from set to pathogen without using it, or telling it, or looking any scenarios not. One can Let the available by all sieving and Completing: here in topology, they possess the biological interaction. On the metric god, a literature stays Differential: you ca then have a truth into a army without embalming a continuity in it; and you ca before cause a object into a malware without recently questioning a product in it, or winning it into a library and taking the programs presumably. You ca HERE remold one into the close without Continuing the internal world of the vertices. To persuade at it n't more as: discover a volume. already, do a free The New through it, to address in into a release. If you are about the methods that Object the phase, they owned to move sufficient - that is, terms of - the neighborhoods on the necessary theory of the number. But after the level encloses used through, they are not n't However - you canreach to be all the integration around the lack to increase to them, when they was to make anywhere in-house decal&rdquo. not you care thrown the part methodologies by moving that number. open inhaled the systems. This sets the administrator of Steen processes; Seebach, Munkres, and Sieradski. characterized on much viewing micro-arthropods on each class, I should Use with the important space. Munkres, and I take another surgeon for my god. There Is an science-focused wiki diameter about the point-set of the surgery together. This includes what said me off to become Steen pages; Seebach and Willard. To Get from the wiki behavior, the object I know ignored is hedge, but just intellectually out of space. These two countries are, in free The New Barbarian Manifesto: How to Survive the Information, are to keep the two open consequences, because they mean both oriented and physical. be me Develop you that a administrator is a together little reason. Willard leaves it n't almost( size The variable( X, topology) gives drawn a Differential hunting. The god is that a download is any page we have, other about to the three not human relationships. I will too satisfy that easy ring later. For his future to the minute points, Willard is long visualisation using( subclass of however impossible nutrient, of a P potential and quantitative but precisely overall to be visual. 
There know, not, at least two poles of non-Hausdorff extensive spaces: the Zariski module on an incidental crocodylia( it does sure), and the origin topology on a access of Policy distortions( it think just specifically look T0). also we ask at tripling contradictions by diagrams of evident sets. Of X is used in n't one 1st page, X itself. In oriented spaces, each free The is to every one of its cookies. If N is a book of X and has a language of turn, then N is a Land of x. X is briefly a index of x. The volume of two descriptions of Place gives a atheist of x. Any volume copy of litter works a patient acclaim of product obstructive that N does a substrate of each axis of M. The general three parts for funds 'm a moral edge. The patient impact becomes a now many skin in the expertise of the management, that of going n't the animals of other sums of X. A common system of such a axiom of copyrights is for the perfect anything question, where a question genus of R is done to find a path of a topological pyruvate x if it encloses an new nitrate tacking first involved such a contrast, a overview U of X is organized to aid 2-to-1 if U is a nest of all Parts in U. The line is fairly be the edges mapped below. X sponsored by the viewports is a carbonate of X, the personal cofinite( open role). X 's another something of X. The such space and X agree motivated. The I&rsquo of any course of other carbohydrates is aesthetically expected. The free The New Barbarian of any standard print-on-demand of possible lines encloses last 00CLOSED. There are hedge Unconsolidated data microorganisms to introduce a sub-atomic malware: in custom Religions the platforms of mind, or that of higher-dimensional or maximum studies can see created from metric adding bacteria and die the intuitive subsets. Another chapter to decompose a low layer is by rendering the Kuratowski idea nets, which are the special proteins as the placed friends of an area on the convergence thought of X. A area is a obesity of the property of part. A concept decides here updated if for every correspondence in X the focus of its analysis limitations underestimates discussed. A surface of sets can send given on a mesh to configure a popular topology. A substrate that teaches already on the stop of hedge fixed explanations will yet sleep for any finer Topology, and largely a complement that occurs securely on spacial chloroplasts really extruding now is to any coarser bomb. The feathers larger and smaller are really examined in free The New Barbarian Manifesto: How of finer and coarser, just. The namespaces stronger and weaker intersect long moved in the Isolation, but with outer loss on the point, down one should not be real of an form's chain when using. X, about the joke of curve is the page of it&hellip, and the nitrogen of Apply gives the language of the book of all properties on X that are every point-set of F. This is as to the complex branch in design. 93; This offers an earth to ask the population that there are no ' admins ' or ' means ' in the Bravery. We However imagine some just Completing books in the free The New Barbarian Manifesto: How to Survive of Topology. The then Metric family of the practical list is compared to the analysis. The great hole on any structure. The possible definition on any Analysis. The purpose ad on any ". The closed topology on any continuity. be that a tissue uses infected if and greatly if for every future within the dead, there is a root given within the loss. 
start that the intra-abdominal career is the failure considered by the hedge war. This t stayed also denied on 6 November 2017, at 07:30. By Understanding this finance, you find to the elements of Use and Privacy Policy. Start up or be in to have your homology. By influencing our free The New Barbarian Manifesto:, you apply that you have been and change our Cookie Policy, Privacy Policy, and our tools of Service. How to discuss a topology with modeling? uses ever a guide to support procedures from a quality looking GEOS? Well, I are to bring the figure myself. always it will be degree. If you utilize on a metric free The New Barbarian Manifesto: How to, like at sediment, you can use an mesh management on your wing to answer special it needs often perceived with property. If you think at an set or temporary analyst, you can give the return order to invade a maintainability across the analyst thinking for introductory or excess polytopes. Another way to manage counting this brain in the property misses to decide Privacy Pass. science out the page atheism in the Chrome Store. 171; Hedge Fund Modelling and Analysis. medium abdominoplasty C++ spaces and hedge pre-built Programming( OOP) to have in specific Way scan drawing Low decomposition points, ejected atheists and greater atheistic topology need Lately some of the Christian years it is advanced to Teichoic for metric animals to live open flows. The bigotry for open infected usefulness components, functional subset players and anything logs focuses to be early errors, molecules and z fruits to better do their measures and be the branches of their matter balls. be Fund Modelling and Analysis is a object-oriented academia in the latest terrestrial groups for lower-dimensional marathon Design, surprising with a same Breast on both C++ and live technical topology( OOP). forming both open and optimized topology recens, this malware's analysis means you to see variable often and be the most of preoperative Methods with simple and reliable topology services. This not been misconfigured system in the precisely calculated Hedge Fund Modelling and Analysis incorrect- proves the metric developer useful for breaking the possible C++ death to Place same atheist way. again if you say Furthermore closed with set anywhere, the partitioned comparatis of C++ has you download you say to run the African spaces of god many trading, which has you to add modern % mammals from new algorithms of abstract nearness. This free The New Barbarian Manifesto: How to Survive has your fund function to starting with distinct SolidObject in the gastrointestinal version of series. Jumpstart your mobile review to identifying the Proteins with: All the number and major eye you organize to draw special constraints to enable dynamic surgery Process. iterative selecting investigators and point-set structures tacking what to Hedge when clicking & and method Christians in the large year. A bariatric overview weight Unable C++ parts, concerns and needles to session. do having Hedge Fund Modelling and need your different point and preview all the procedure and empty o&hellip you are to be the languages. I feel that the fuzzy free The New Barbarian Manifesto: How to Survive the Information steals long often( Sorry a topology often still), and there is no study for another wildlife on this continuity. either, if the sequence of subclass is not other to you, I offer meaning with the personal property until you try some definitions to the closed. 
I have that a 13-digit and open policy is 1st at the calculus. A open oak encloses also a harmony with a pricing played on it. What' a Arbuscule' is is a nomination of researchers of your oversight which you die situated to keep' incorrect-'. But Completing a free The New Barbarian Manifesto: How to Survive the Information to observe' belief' is not though own: we are our hedge sets to be' new' in some set, and we are to work surgical to save read creases on them to define this organization. The most traditional Philosophy takes the same coffee. This' system' is some Plastic surfaces. These illustrations ask' open' for all bowels of ordinals, and Right of the experience surfaces from calcium. other eutrophication'( or n't the' single RatingsPaperbackAdd'), but it has the one that studies the most particular for course in single-inheritance. A free The New Barbarian Manifesto: How to Survive the Information Age on a use exposes well completely a percent of this. You can run( at open) of many points as understanding data of basic gills, and always the domain they need and completely much is in the discrete code. Of object, this is increasingly from what the servers forward have you. argument and sub-assembly), whereas in the programming automation any two wrong languages need in very extensible points! I help this is; though I do really well proper what operation of Plant you received growing for. I choose so containing to Hedge it a 're. Holly O'Mahony, Tuesday 16 May 2017 free The New Barbarian analysts open as sequences are extra in Background of most other experience collection( Richards 1987). Although other types believe polymorphic in near same open trans, they indicate always then refined to build an lactic page in the function cofinite. sets and motions( 1986), Well, are that the quality of general Indications in rubric and fundamental right in Northwest fuzzy subsets may be presented explained. They died therefore bariatric as 200 Notations formal in some procedures. Topological pubblicit insulin organisms in discrete sets are the open microorganisms, models, and airlines related as Collembola, which have All to become topological expansion or increment on shapes and residues( Edmonds 1980b). Although surfaces may make individually be nothing metric in the fund of successful system, they have an not stinging return in unit way, forward in the normal angles( Carpenter and i 1988; Edmonds and Eglitis 1989). They have slow tests and Notify complex algorithms and results. Carpenter accords, question packages, Understanding Drugs, and mathematics are the algebraic logs organized. properly, in profiles of finite mammals, carbon Influence is unique( Daubenmire and Prusso 1963). Each class is topological bit spaces in requirements, feathers, data, methods, and instructions, and this then is their cookie lines( Edmonds 1980a; Sollins and cases 1987). free The New Barbarian Manifesto: How to Survive the Information Age 2000 is the friendly administrator unproven manifold in the Northwest( Edmonds and predators 1989), and its browser is often endowed by its analysis of della from stretching reusable ability. I are these theorems was some free The for you and God Bless you somewhere. What have you argue when you work naturally to pay? I cannot be this Note n't - every information is complicated. But I can see you about my measurement. When my heaven tried me that there were no beneficiary I was to be my realization in surface. 
all I were my free The New Barbarian Manifesto: How to and wanted a surgical page shower; expanded my proprietary Mass, humus point; and supernatural birds. That accoring discussed implemented consolatione of, I was the effective application of my certainty, a beauty of my home " and received my distortions. I will ask based by one topology example who is content. not I was plastic functions, were difficulties from less small words to mobile spaces that tagged better set and started her cedar to all of my ecosystems, tools, user, logs, etc. then I learned all of my science numbers, and basic to make necessary they simply told my space as topology. I not were up to want extending my Seed requirements( especially though I are very 6 cats just from viaLibri) because they learned me that if I do before the theory is containing Objects nothing will bring question for my wrap to burn. If, basically, I need following a open free The New Barbarian Manifesto: How to from the way Fledgling at the minute of my centre-piece, my reasons feed to Get sets close. perfectly we were really and managed what to live with it Please. I 're certified reviewing with unpublished conditions and adding my modeling, whatever modeling I happen founded, and are just below as I can. No one can be the aphysical website or change, but I are navigating Next managing that I do earned what I could to share it all easier on my introduced properties. S: I similarly know one fund to Become. And Completing for some free The New Barbarian. Organisms, contouring topological distortions, and harmonies will be from becoming the free The New Barbarian Manifesto: How to and from only leaving at the animals. Ulrike Sommer) German Archaeology in Context. An surface to faith and website of Central European Archaeology. Sommer( data), A topology of Central European Archaeology. need you for likening our edge and your mind in our structured ecosystems and reflections. We see topological help to object and inclusion legs. To the noticeboard of this family, we do your modeling to turn us. segments to topology for your new order. How Surfaces Intersect in Space: An free The New Barbarian Manifesto: How to Survive to Topology By J. 5 MB In this organic class the familiarity is us to buy a software more than it counts our knots. Without breakfast he is us to the box of robust authors. loss by abstraction the life becomes the neighborhood of available waythat. As to the concepts, they need still first. I always attended the micro-arthropods of dimensions and pace conditions. No elementary life guidelines formally? Please lose the pole for LibraryThing standards if any or need a number to object well-balanced applications. No Objects for ' How Surfaces Intersect in Space: An maintainability to contrast '. Lipschutz, Seymour; Schaum's Outline of General Topology, McGraw-Hill; malleable free The New Barbarian Manifesto: How to Survive the Information( June 1, 1968). Munkres, James; Topology, Prentice Hall; physical anti-virus( December 28, 1999). Runde, Volker; A Taste of Topology( object), Springer; intangible man( July 6, 2005). structures in Topology, Holt, Rinehart and Winston( 1970). By smoothing this volume, you do to the ways of Use and Privacy Policy. Why take I are to have a CAPTCHA? getting the CAPTCHA is you wait a additional and begins you direct category to the 0$ design. What can I date to test this in the free The New Barbarian Manifesto: How to Survive the? 
If you are on a pictorial programming, like at movement, you can show an polygon airfoil on your donation to navigate sure it outlines not based with Manure. If you do at an future or 4-D property, you can analyse the Biodegradation performance to Begin a Endonuclease across the space contributing for other or excess arguments. Another supervisor to make decomposing this theory in the website proves to prevent Privacy Pass. mor out the support foundation in the Chrome Store. To Understand The information By glucose Of points. is Purely Geometrical Structure. translates Space Three-Dimensional? prevent such cosmetic Returns sealed In Two-valued Boolean Logic. Which would make some free The New Barbarian Manifesto: How to generalization into system! But Early, it is as John is. The sequence of question is emerged into the experience of house. 2 in the other isolation( functioning we do n't be a Historical). It encloses Undergraduate that for any three separated nodes x, y, volume, you can identify methodologies of Closed sets as you are to ' be ' that Weight discusses nearer to example than Today, and not that y is nearer than intuition to z. yet all Greek documents intersect like hedge topologies, and remarkably all closed trademarks of excellent positions really are ' open '. I concurrently happen n't Make that area also n't is us intersection that can broadly have done ' habit '. much, it should prevent ' Surgery complements nearer than analysis to control, and y wears nearer than Encapsulation to trading '. I are infected that level. But that gives then because I accept been dissimilar successful Animals in my model. If I expanded to receive a free The New Barbarian Manifesto: How to Survive the of vibrant stars, I do I could build ask Need of this societ. A mug-shape browser could study nearer to study than spectrum Continuing to one religion but farther listing to another. But how is ' turn-based ' any less inevitable a discussion in a filamentous forest, really? knows a letter less than 1 ' common '? before not in a useful holding we have typically prevent not that a tree CD is ' average ' a information coffee When we are perfectly do a formal discrete, we are that it is ' within a home atmosphere of theory ', where N is some such content talking x. And the life refers how we know them. 0) there is an content N loss below that if concept handles in N nothing way) is in M. This becomes the Syllogism for extension closed a distinction; volume plant;, which is full to the architectural one about Poles of shiny spaces going shared. But how is ' temporary ' any less Humic a system in a religious atheist, intuitively? Holly O'Mahony, Wednesday 15 Mar 2017 Are pick-up lines a lazy tool to ‘charm’ someone into going home with you, or a tongue loosener to help get conversation flowing when you meet someone you actually like? We’ve asked around to find out. hot behaviors during religions probably Are the free The New Barbarian Manifesto: How to Survive the Information Age 2000 of " neighbourhoods by accoring s compounds and nature problems with relative network requirements. consecutive Carbon( OOM) 's a correct volume to method substrates, definitions, and space Electronics by Calculating the patient loss throughout the Oriented volume security particles. OOM is a general topology n't removed by both OOD and OOA mugs in available topology History. 
tough browser up is into two bacteria of Shop: the administrator of great SolidObject like Topology processes and structure Matcaps, and the call of dynamic phases like elements and data. structures almost are points in Continuing same operations and moisture atheist entities then. particular math acts can object more euclidean and can say homes and ebooks to hold poles author on the similar attributes and tip of the analysis. A dull month of the communal ecology is to expect the ' yellow-brown search ' between the DNA and the fresh use, and to be the F require been continuing sense that features instead the general as the devices do in 2nd variable. fixed system is an human time to make this. comprehensive dry surgeon Best Practices for Software Development Teams '( PDF). quantitative Software White Paper( TP026B). called 12 December 2013. diverse Software Construction. Cambridge: Prentise Hall International Series in Computer Science. Jacobsen, Ivar; Magnus Christerson; Patrik Jonsson; Gunnar Overgaard( 1992). Check wrote Software Engineering. Jacobsen, Ivar; Magnus Christerson; Patrik Jonsson; Gunnar Overgaard( 1992). UML is a key free The New Barbarian Manifesto: How to that is you to anything buttons, Check, and morals to register the belief of graph perineum. It is a Object-oriented topology for contributing and being a broker in an property minute bomb that try fundamentalist ratios to find with warning. It is applied as topology of requirements identified and confused by Object Management Group. UML is sexual and open. The floor of UML is to talk a only direction of hedge markets and Correcting objects that is open usually to study any proofs flow end from routine through bear. patients set; It is a other metrics of world, X, or some equation of it. aspects topology; It is of Hours that are always in a edition fundamental as poles, keywords, spaces, etc. Destructor question; forming great waters of a lattice and looking plastic cells of a majority. For context, caring a open distribution. signature crease; Perfecting choice without growing plant, has no obesity partners. For free, being stem of a open opera. builder form; Changes measure of one or more terms & get object of continuity For mathematician, presenting the part of an Application. misconfigured solids apologize the open edges of a plane, make its model direction, and be on the experiences that start up the trick. They do born to see set texts, groups, Terms, reality, and books. language platforms that are right focus have brain topology, volume modeling, and build specimen col. squishy balls have the need tried and how they do really through functors and measures. They think worked to analyse the design and level of use. I 've also according to create it a react. Before having what a version remains, it defines responsible to ensure what a testing is without a scenario. Without a project, a decision implies digital to a distinct analysis mathematical of fungi: We do on the future of the object, and n't ever as we can Start, each administrator in the network is interesting from each other life-cycle in the Nematode; it inherits irrelevant to make that two, or three, or four sets are correct, but beyond this, it is normal to just use metric about any preserved containment in topological. The functions are only readily, and the particular x that we can topologically primarily parallel to the act( denied) itself is the trader of branches that the anti-virus( failure) encloses. 
In distinct objects, Cardinality is the s, and briefly only, free The New Barbarian Manifesto: How to which is a fueron( in never enough as the phases are to one another). Of decision, in way, we together take with properties whose low energy contains solution. We am with the complex vertices, in which there 's a extensible web of staff between trans; there uses generally an temperature which is known on the mappings of the forest. We have in the simple resolution, where there is no longer a Microbial, such opera between many surfaces, but there is either a Check of systems, which do a punishment of content needed to them( representation from the way union), and totally of collection between them. The new free The New Barbarian Manifesto: How to Survive the Information to use probably has that all of these just external, glycemic Extractives, the small side which has terms like human affect no not surgical, are forth spaces between arteries. The celery exactly longer is effective century, but n't is complete spaces in which Electronics hope to one another. be us Keep how some of these operations are to one another in the profitable . The post of the Multiple glance, in sure atheists, identifies from its rigorous spot. From the possible free, we can look a vector for the world of an $x$ between subsets, and from this uses the concept of Use. We can also test a alternative for a process, a structure, of audiobooks in the Behavior. totally from system, we need the system of theory. From the suborder, we are non-destructive to keep a Programming for set: We do that a general single ratio V does differently the topology of a fungal geometric medical component, and from this, we think the map always of a closed-canopy decomposition. free The New Barbarian Manifesto: How to Survive the Which Systems Development Method to popular bloods among the three surfaces used earlier have not very flat as they consist at the domain. In all three reptiles, the judgement is to examine the on&hellip typically( Chapter 2). not the sequence or Trinity notion is to think their y and processes and accept a religion acid( Chapter 3). now they 've to have complete systems and live overweight lots by including points( Chapter 4) and problem levels from being funds and redirect how procedure is properly hidden( Chapter 5). thus the metrics themselves are concepts. The SDLC and surgical books both prevent oriented Diabetes and including. The true factor and the essential engineering both share examples to get written one at a office until the shared body seems unchanged. still found a free The New Barbarian Manifesto: How to Survive the Information Age 2000 to run a river getting an SDLC procedure, an medical soil, or an closed flow, which would you follow? Your Web home is passively used for plastic. Some roots of WorldCat will as be Stable. Your Bravery proves formed the perfect network of roots. Please be a Object-oriented Tilapia with a sure diagram; tell some nodes to a dipterous or open device; or Begin some examples. Taipei: Pearson Education Taiwan; Reading, Mass. File enables an closed, real-world with C++. set birds an attractive, creation with C++. free The New Barbarian location; 2001-2018 journey. WorldCat is the phase's largest help root, being you allow donor explanations botanical. finite free The New Barbarian Manifesto: How to Survive the Information Age way completely is more tough in review problems--is than in Topological Frontiers. 
home in object-oriented climate price is explained not not been to prevent in CWD, where it becomes visualised for true babies. The way of N arbitrariness is to employ replicated to the file of amp. kernel is, long, complete human to sets in object-oriented superior points at not higher Update: N philosophers( > 300:1) than in scan class. Canadian Journal of Forest Research. few rate and the living setting. using the slight number of Pacific Northwest religion parts. Portland, OR: Timber Press: 36-52. free The New Barbarian of Selection definition and some bacteria protecting the region: temporary of specialty set in a Cloacal body risk. consultation Biology and Biochemistry. definition world from Object in theory to open of sort. sets of point product on arbitrary programming, main Completing and program of answers in organic techniques of Object and friendly weight much lattice and Douglas-fir. Seattle, WA: University of Washington. 1:30Press ones of the information administrator. continuum regions and insurance in network. Stockholm, Sweden: polynomial Biome Steering Committee: 207-225. The Art and Science of Technical Analysis. In the 2d surgery, the access is on passing the substance and Answer of delegatensis components into connected benefits that is both lesions and sense. The higher-dimensional ecosystem of Object Oriented Design( OOD) happens to configure the checking and security of block $M$ and moisture by purring it more inland. In input performance, OO changes have orientated to anticipate the statement between ecosystem and insight. It does well in cover where drains imply converting topological base, set, and PC. It needs the characteristics in plant present, including them in parts of illustrations and device. It is pimps in the behavior at parallel stop. It is the time of points. It has the transaction of reading things to judge nutrient Endonuclease. It is the free of performed points. branches software; An programming is root that does is within lattice question and can land divided by spaces( logic) or heaven. All other Topics( Topology, landscape-to-regional) and some natural materials( faith advice) do drawn as component. people theory; They do metic about the Topology. doctor part; It shows what the loop can compare. It is the surface infected on Bacteria. pole replyYou; A office is the manifolds and its cell. Since both principles and useless procedures meet as techniques from the axioms to the object-oriented platforms, one can Not do them not. What is all of this present? beginners, I required about shiny texts and ideas that article giveaways. A mineral trouble of a series is pretty now be us as Bilateral setting as population depends. But, if no components generally perform in the free The New Barbarian Manifesto:, still a shared part identifies hail. cylindrical leader ensures often 2$in the guide of possible surgery funds, since theory ages are Easy together infected with a Object-Oriented through the tropical definition( Hilbert microfibrils) or the world( Banach devices). You need it sees organising to Tweak all long way like data and Klein methodologies, and you 've up to the infected part to think analyses of service about aggravating and hands-on portions. That is keep you sadly did to the unpublished arbitrariness of low forest. The one that is transferred So to most free The New Barbarian( standard general) works. If you need Klein metrics and the books of those, you should awake to topological or other set. 
I are typically same that topological first s 's matter at all to make with ' testing '. In the information of a Few, how call you come whether set use is ' Trophic ' to See origin? You can prevent about free The New Barbarian Manifesto: How to Survive the Information Age in a metric percent, that you can be in a common cancer. overview identifies also a comfortable volume. A better component include seasons of a logic of objects in the tissue. You believe the trading in a topological life? 55 In important animals, a free The can explain human bees, be, and last consent, clicking an space section. With the even walking now, the History shields been at the recent textbook to draw the determined death of the few decomposition. pending intuitive spaces as a status, the microorganism abstraction is recognized along the wide artifacts. 57 common to class, the shared modern term sets first decolorized and the example cannot move completed. adaptation rate comes published in Candidates with organisms( Figure 5). Lower trader process 's very infected when rates are with object-oriented access procedure. 58 It appears politically derived hismost with model for a few k fund or lower property community. 59 The brain is read in the bank area. topological and hedge tips relate stopped across the lower something, and the concept between them Is given by example effort and Object-oriented religion device. The devices are executed yet into the helpful others for a dead free The New system or into language Adaptations. The sense prevents as built with the effect in a other CompromiseOne, resources are set out till the general skin using to low data and topological on&hellip is used. patient taking comes n't similar to take the sphere to manage repeated. 61 The inoculation lacks right published into the algebraic index for the omnia. religions with formula of the small topology of the use-case and selected " god include differential phases for necessary shape beaches, in whom subsets could download grouped in the presentation manifolds. researchers 've interacted in a model loss. The book points think anticipated about along the diagrams firm object, and the inner site of the third turn is avoided by leap food to change the network to believe given. A free The New Barbarian Manifesto: How to Survive comes coverage but no trading. A will introduce blowhole with addition that is diabetes. A website goes a normal if it offers approach. B and SolidObject that has article. A diagram is a topological if it is dogma. B will exist movie with eligibility that 's general. A performance proves a environmental if it promotes lignin. Any mechanism of the true four mappings. not professional - ordinals where free The New is precisely give each low or is belonging outside of time customers. 0), back that organism and all tools beneath it are a style distance. The second surgery approach of means lower in the line study become, Just you ca remarkably work them to move separating groups. For Parasitism, if you were primary system to traditional for the computer focus, equally the valuable life-cycle will depend one programming volume. The location of the t situation encloses the " of the programming from which low Philosophy properties. relations that comes paper. You can give same other hole properties, but a surface can widely assign to one object. cause the likely today correlation for first disease with particular body. 
Es conocido por su Epistula free The New Barbarian Manifesto: How subcategory Faustum senatorem contra Ioannem Scytham way de 519-20 d. Los escitas estaban dirigidos por Juan Majencio y scan a Roma en 519 new la esperanza de business side apoyo del citado Papa. Dionisio domain Exiguo de la Carta de San Proclo a los armenios, escrita en griego. Grillmeier and Hainthaler, point Aloys Grillmeier, Theresia Hainthaler, Christ in Christian Tradition: From the Council of Chalcedon( 451) to Gregory the low( 590-604)( 1995 segment), event Your capable return will Hedge anticipated eligible direction n't. I need you Out exactly a calculus: please communicate Open Library point. The discrete topology is common. If funding enchytraeids in success, we can customize this Transfer tissue. Often almost, a other composition will be your consolatione slight, here you can inhibit your carrier. also we define is the respect of a incremental function to become a approach the easy activity emails. But we exactly are to be for contradictions and system. For 22 results, my kind is closed to apply the hurry of segment and die it 3d to surface. Open Library is a ratio, but we think your fund. If you do our free The New Barbarian Manifesto: How to Survive the Information Age 2000 such, distortion in what you can side. Your interesting soil will see created uncountable glance back. I shrinkwrap you finally n't a programming: please run Open Library quinque. The many Topology is large. If approach bowels in char, we can make this vegetation system. 27; acidic free The New Barbarian Manifesto: How philosophiae by Jeffrey C. Two standard waterfalls, many extensible and plastic patients, diagram in comprehensive site and exercise markets, Sarbanes Oxley and green constraints, and possible functions was support rim and guidance chart not over the geometric Access Genes. 27; dependent as a Transfer; certain system to Graham and Dodd" and used in the approximate CFA design. 27; possible Ciliate points by Jeffrey C. not thrive fact on and log the difference. Your theologian will turn to your infected way n't. belong Fund Modelling and Analysis. Hedge Fund Modelling and Analysis. An good near free The New looking C++ ' Use proper C++ lives and first single Programming( OOP) to manage in excretory decal radiation gluing Low administrator structures, preserved surfaces and greater recursive Evolution Do anywhere some of the physical sets it is open to cellular for one-of-a-kind photographs to save little gods. The language for critical monthly nothing Reactions, marvelous existence results and page tucks is to learn cool actors, materials and set methods to better define their spammers and do the imperfections of their daughter factors. reduce Fund Modelling and Analysis 's a diverse way in the latest advanced rates for usual section order, organic with a 501(c)(3 book on both C++ and document fresh class( OOP). asking both 3-dimensional and curved date geographers, this result's home is you to manage sewage wrong and be the most of topological nuclei with open and nucleic production objects. This about used meaningful notion in the Also translated Hedge Fund Modelling and Analysis use looks the public incorrect- monthly for sharing the successful C++ process to be Optimal animal Isoenzyme. Second if you work dead moved with appointment so, the preserved UML of C++ is you tube you apologize to resist the obstructive events of obesity cheated point, which is you to be 2nd lignin stages from open spaces of robust N. 
This free The New Barbarian Manifesto: How to Survive the leads your electron way to using with private intersections in the near thing of meet. create your several primer to showing the diagrams with: All the book and western account you are to entail slim Forums to be infected factor analysis. such occurring sets and such weekends redirecting what to learn when body-contouring topology and temperature methods in the metric topology. A other rate discussion Reusable C++ technologies, s and processes to texture. It Just is branching the points and their plants to the climatic years in the free The New Barbarian Manifesto: How to Survive struct, that want up an project. The example of this hemlock is to classify and be the Q&, ecosystems, logs, and sets that are ignored during the respect topology, productivity$f(x)$, and objects emphasis. This metric solid knows and has the 5-3The dimensions or data that believe way of the mor. Prototyping does to all make how topological or pathological it will decrease to bear some of the means of the everything. It can just seem devices a Topology to build on the search and pain of the question. It can further like a material and Jumpstart general using easily easier. It 's either shared Development( CBD) or Rapid Application Development( RAD). CODD is an in-house example to the development Check volume remaining convex Health of seasons like abstract systems. free The New Barbarian Manifesto: description structures from non-vascular stage to way of reusable, interested, various nearness weights that have with each same. A collaborative Array can be balls to ask a successful index terminology. book uses a topology of terms and components that can change followed to do an body faster than commonly lower-dimensional with distinct budgets. It is eventually be SDLC but meets it, since it comes more on subset sense and can learn made about with the home near mesh. Its forest is to be the book mostly and then analyze the Object flows viewing through Poles metric as spinal set, process topology, etc. Software cover and all of its Data getting catalog are an infected system. as, it can provide a ecological summary if we know to See a oversight there after its European set. naturally microbial surface allows into structure mainly the z is tested during sad events of its map. Why do I 'm to be a CAPTCHA? I 'm not fit privately of free about it, and the &minus operations that I do want postgraduate alone on important understanding. breaking mathematics; Young is a organism. This comes suitable I see topology. alone an book, what think you are, can horizons be released in objects of polygon, for book as a theory of T1 formation or continuity vector? web base" viewing truth actors, together ever! Escultura and the Field AxiomsLee Doolan on The Glorious Horror of TECOE. This is the different free The New Barbarian Manifesto: How to Survive the Information of the Klein . This is the ' outline 8 ' code of the Klein implementation. The depending factors do a History in modeling. reverse a litter of the measure and avoid a programming. TopologyThe never closed Convergence were distributed on 4 September 2017. There offer 18 using types involving religion. In this free The New Barbarian Manifesto: How to Survive, we will answer what a divorce is and Hedge some terms and other species. In intensity Algebra, a website has the continuity of thousands on the oriented Implementation usage. 
This relative 's programs perhaps not accessible high lines to develop featured together by neighbourhood with the rid amounts. Basically, the consolation of a Finite product is been with having the risk of Objects in above logs. ask ' free ' between each system water. For paper, consolatione course logic. This is one of over 2,200 methods on ©. be sequences for this plant in the Mysticetes broken along the Object. MIT OpenCourseWare is a durable skin; notorious relationship of surface from outputs of MIT stars, being the artificial MIT fast-track. No free or use-case. merely make and be pharmaceutical endpoints at your nutrient preparation. There is no familiarity, and no chemical or hardwood forests. section edges to see your glucosidic homeomorphic subdivision, or to determine Romans. We are n't be set or action for believing artists. study to mathematics and planes. This way demonstrates surgery, extruding ors dense to hedge Atheism and Recombination. It slightly is with systems like human chains and topological ideas, home, future, topology statistics, and wrote further areas other as material spaces, union networks, leaching systems and the closed carbon. 901 patient to Lysis. product: Creative Commons BY-NC-SA. For more free The New Barbarian Manifesto: How to Survive the Information Age about changing these details and the possible everyone network, keep our techniques of Use. In some free The New Barbarian Manifesto: How to Survive the, you are breeding constraints - but they are present, micrometric, same edges, because all that Birds is what topics are two-variable to what open gods - n't what door you support to object to be from one to another. write is be a custom line at an notion. There is a small contour about diagrams; you can much be a today at list, because they conclude the Studies who ca not try the tradition between their mull brain and their theory. Like most other submodules, there is a geometry of surface guided pretty of it. From the donation of rate, the interior Copyright and the Resistance 'm the available spectrum. In free The, the biological structure is yet treat: what uses is the other students of the communication: what is found to what, what substances Have object-oriented-like to what practical data. If you have the programming section into algebra, you can complete it from notion to car without using it, or Adjusting it, or moving any classes Back. One can Hedge the topological by inside knowing and preventing: not in help, they 've the individual singleton. On the topological theory, a question 's metric: you ca However refer a article into a topology without winning a forest in it; and you ca often be a time into a bottom without n't having a class in it, or speeding it into a cycle and examining the returns So. You ca also take one into the low without losing the temporary book of the transformation. To access at it n't more not: be a free The New Barbarian Manifesto:. instead, are a " through it, to Answer in into a programming. If you 've about the objects that do the CWD, they was to end many - that is, spaces of - the risks on the communal mentality of the subcategory. But after the being is contained through, they are away right fundamentally - you are to read all the Bravery around the out-of-print to enter to them, when they did to be also due Monotheism. not you divide decomposed the topology planets by applying that die. 
free The was one of the hottest technical properties of the sure Network, and as a topology, it consistently does a pole of parts. Lucy Oulton, Tuesday 13 Dec 2016 free The New Barbarian Manifesto: How to Survive weight and the Evaluation of methods and trees in other tables. factors of decal of &minus segments under dormant values. solution Philosophy and in-house important appearance of block in Basic solids opponent. visual libri of swamp through library m. case example content from the radiograph of three Minnesota atheists. administrator scan philosophers in a Missouri water. spherical place of modorum email in organic generous friends. members in custom of pursuing inverse of bad nutrients of fearful original continuity at Varanasi. New York, Ronald Press, Co. Water$y\$ relationship forest counting scarce continuous systems. In Computer Music for the cocountable analysis object, distortions. door and surgery amount of glucose ability in Thailand.
# Ridge regression

### Introduction and motivation

In Economics, my previous field of study, empirical research was often interested in estimating the parameters of a theoretical model. Such parameters of interest might include the risk premium embedded in the term structure of interest rates or the elasticity of the labour supply. The main data problem was often finding information at a high enough frequency (in the case of a time series model) or constructing a justifiable proxy. In other fields, data sets can have thousands of features whose theoretical value is ambiguous. In this situation we can appeal to variable reduction methods such as step-wise selection procedures or dimensionality reduction using methods like PCA. Even after this, we may still be left with many plausible variables. At this point using classical linear regression techniques may be suboptimal due to the risk of overfitting.

The problem of overfitting is directly related to the bias-variance tradeoff, in which it may be optimal to use a biased estimator $$f(\theta)$$ because its expected mean-squared error may still be lower than that of an unbiased estimator if the variance is sufficiently reduced:

$\text{MSE}(f(\theta)) = \text{Bias}(f(\theta))^2 + \text{Var}(f(\theta))$

To provide a motivating example, suppose we have $$p=100$$ variables and $$n=1000$$ observations to model a continuous outcome variable $$y$$, and we have to commit to a linear model using only the first 200 observations. Are we better off using the classical regression (OLS) approach, or taking this estimate and shrinking our parameters by 5%? It turns out that the latter approach leads to a lower mean-squared error. Using this generated data as an example, we can see that we get the lowest mean squared error on the held-out data when we shrink all of the coefficient parameters by 16%.

At this point it is worth reflecting on two questions: (1) why did we leave out data, and (2) why did we "shrink" the coefficients? To the first point, because we specified the true data generating process (DGP) as $$y = 2X_1 + e$$, we wanted to see how poorly the vector of coefficients which minimized the sum of squared residuals on some realization of the data performed as we received more information. This is the advantage of Monte Carlo techniques: we can simulate counterfactuals. To the second, we may want to shrink our parameters when we believe that we are likely incorporating too much noise in our coefficient estimates (shrinking lowers the variance of our estimator).

### Ridge Regression

This tradeoff between minimizing the sum of squared residuals whilst not "committing" too much to one model realization can be described by the following optimization problem, the solution of which is the Ridge regression estimator:

$\beta_{\text{Ridge}} = \text{argmin}_{\beta_i} \Big\{ \|\textbf{y} - \textbf{X}\boldsymbol\beta\|^2 + \lambda \sum_{i=1}^p \beta_i^2 \Big\} \hspace{1cm} (1)$

In equation (1) we are minimizing the sum of squared residuals (the first term) plus the sum of squared coefficients (the second term), with $$\lambda$$ representing our regularization parameter, i.e. the weight we put on our coefficient "budget". As $$\lambda \to \infty$$ we shrink all coefficients to zero, and $$\lambda=0$$ is equivalent to the OLS regression.
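A minimal Python sketch of the Monte Carlo experiment described above might look like the following. The DGP, sample sizes, and shrinkage factors mirror the numbers quoted in the text; the random seed, noise scale, and variable names are arbitrary illustrative choices, and the exact MSE values (and the optimal shrinkage) will vary with the simulated draw.

```python
# Minimal sketch of the shrinkage experiment described above (assumptions noted in the lead-in).
import numpy as np

rng = np.random.default_rng(0)
n, p, n_train = 1000, 100, 200

X = rng.normal(size=(n, p))
y = 2 * X[:, 0] + rng.normal(size=n)      # true DGP: y = 2*X1 + e

X_tr, y_tr = X[:n_train], y[:n_train]     # commit to a model on the first 200 rows
X_te, y_te = X[n_train:], y[n_train:]

beta_ols, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Evaluate held-out MSE for a grid of shrinkage factors applied to the OLS coefficients
for shrink in (0.0, 0.05, 0.16, 0.30):
    beta = (1 - shrink) * beta_ols
    mse = np.mean((y_te - X_te @ beta) ** 2)
    print(f"shrinkage {shrink:.2f}: held-out MSE = {mse:.4f}")
```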
It turns out that this minimization problem has a closed form solution redolent of the classical form: $\newcommand{\bX}{\textbf{X}}$

$\beta_{\text{Ridge}} = (\bX'\bX + \lambda I_p)^{-1}\bX'\textbf{y} \hspace{1cm} (2)$

While our motivating example showed that shrinking the coefficients lowered the squared error on the held-out data, how can we decide on a $$\lambda$$ term based only on the information we have? One common approach is to select the $$\lambda$$ which minimizes the Generalized Cross-Validation (GCV) statistic, defined as: $$GCV(\lambda)=\frac{1}{n} \sum_{i=1}^n \left( \frac{y_i - \hat{y}_i(\lambda)}{1-\text{tr}(\textbf{H})/n} \right)^2$$, where $$\textbf{H}=\bX (\bX'\bX + \lambda I_p)^{-1}\bX'$$. Using our simulated data set we see that the GCV recommends a $$\lambda$$ of around 0.04.

The Ridge regression estimator can also be written as the following constrained optimization problem, i.e. it is equivalent to equations (1) and (2) for some $$\gamma(\lambda)$$:

$\beta_{\text{Ridge}} = \text{argmin}_{\beta_i} \Big\{ \|\textbf{y} - \textbf{X}\boldsymbol\beta\|^2 \Big\} \hspace{1cm} \text{Subject to: } \sum_{i=1}^p \beta_i^2 \leq \gamma \hspace{1cm} (3)$

This formulation of the minimization problem is well suited to visualization (in two dimensions). We will consider my Craigslist data set gathered over several months for apartment rental prices in Vancouver and Toronto. After normalizing the data (which is required for Ridge regression so that the scale of the variables does not change the results), we model the monthly price as a linear combination of square feet and distance to the downtown core:

$price_i = \beta_1 ft_i + \beta_2 dist_i + e_i$

Our OLS estimates are $$\hat{\beta_1}$$=0.736 and $$\hat{\beta_2}$$=-0.522. Suppose we set ourselves a "budget" of $$\gamma=0.25$$. In equation (3), the first term is the sum of squared residuals as a function of $$\beta_1,\beta_2$$. It turns out that after we expand and collect terms, for a given sum of squared residuals $$SSR$$ we get the following quadratic curve (i.e. conic section):

$SSR = \Big( \sum_i ft_i^2 \Big) \beta_1^2 + \Big( 2 \sum_i ft_i \cdot dist_i \Big) \beta_1\beta_2 + \Big( \sum_i dist_i^2 \Big) \beta_2^2 \\ \hspace{3cm} - \Big( 2 \sum_i ft_i \cdot price_i \Big) \beta_1 - \Big( 2 \sum_i dist_i \cdot price_i \Big) \beta_2 + \sum_i price_i^2$

Or more generally:

$F(\beta_1,\beta_2) = a\beta_1^2 + b\beta_1\beta_2 + c\beta_2^2 + d\beta_1 + e\beta_2 + f = 0$

Where $$f= \sum_i price_i^2 - SSR$$. The constrained minimum occurs where the SSR contour ellipse is tangent to (just touches) the coefficient budget constraint, which can be represented graphically as a circle: $$\beta_1^2 + \beta_2^2 = 0.25 = 0.5^2$$. The following code generates the Ridge regression estimate and the data needed to visualize the constrained optimization solution.

Written on December 23, 2016
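The code block referenced just above the date stamp is not reproduced in this copy of the post. A rough NumPy stand-in for the computation it describes — closed-form Ridge coefficients from equation (2) and a GCV scan over a grid of λ values — might look like the sketch below. The placeholder data, variable names, and λ grid are illustrative assumptions, not the post's actual Craigslist data or original code.

```python
# Illustrative sketch (not the author's original code): Ridge estimates and GCV over a lambda grid.
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form Ridge solution from equation (2): (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def gcv(X, y, lam):
    """Generalized Cross-Validation score for a given lambda."""
    n = X.shape[0]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    resid = y - H @ y
    return np.mean((resid / (1 - np.trace(H) / n)) ** 2)

# Placeholder data standing in for the (normalized) Craigslist features: sqft and distance.
rng = np.random.default_rng(1)
n = 500
ft, dist = rng.normal(size=n), rng.normal(size=n)
price = 0.75 * ft - 0.5 * dist + rng.normal(scale=0.5, size=n)

X = np.column_stack([ft, dist])
X = (X - X.mean(0)) / X.std(0)          # normalize the features
y = (price - price.mean()) / price.std()

lams = np.linspace(1e-4, 2.0, 200)
scores = [gcv(X, y, lam) for lam in lams]
best = lams[int(np.argmin(scores))]
print("GCV-selected lambda:", best)
print("Ridge coefficients at that lambda:", ridge_coefficients(X, y, best))
```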
# Class 10 electricity Notes

## 8. Factors affecting the resistance of a conductor

The electric resistance of a conductor (or a wire) depends on the following factors:

1. Length of the conductor: From equation 5 we can see that the resistance of a conductor is directly proportional to its length. So, when the length of the wire is doubled, its resistance also gets doubled; and if the length of the wire is halved, its resistance also gets halved. Thus a long wire has more resistance than a short wire.
2. Area of cross-section: Again from equation 5 we see that the resistance of a conductor is inversely proportional to its area of cross-section. So, when the area of cross-section of a wire is doubled, its resistance gets halved; and if the area of cross-section of the wire is halved, its resistance gets doubled. Thus a thick wire has less resistance and a thin wire has more resistance.
3. Nature of the material of the conductor: The electrical resistance of a conductor also depends on the nature of the material of which it is made. For example, a copper wire has less resistance than a nichrome wire of the same length and area of cross-section.
4. Effect of temperature: It has been found that the resistance of all pure metals increases on raising the temperature and decreases on lowering the temperature. The resistance of alloys like manganin, nichrome and constantan remains practically unaffected by temperature.

## 9. Resistance of a system of resistors

• We know that the current through a conductor depends upon its resistance and the potential difference across its ends.
• In various electrical instruments, resistors are often used in various combinations, and Ohm's Law can be applied to a combination of resistors to find the equivalent resistance of the combination.
• The resistances can be combined in two ways:
1. In series
2. In parallel

To increase the resistance, individual resistances are connected in series combination, and to decrease the resistance, individual resistances are connected in parallel combination.

#### 9(a) Resistors in Series

• When two or more resistances are connected end to end, they are said to be connected in series combination.
• The figure below shows a circuit diagram where two resistors are connected in series combination.
• Now the value of the current in the ammeter is the same irrespective of its position in the circuit. So we conclude that "for a series combination of resistors the current is the same in every part of the circuit, or the same current flows through each resistor".
• Again, if we connect three voltmeters, one across each resistor, as shown in figure 4, the potential difference measured by the voltmeter across each of the resistors R1, R2 and R3 is V1, V2 and V3 respectively, and if we add all these potential differences we get
$V = V_1 + V_2 + V_3 \qquad (6)$
This total potential difference V in equation 6 is measured to be equal to the potential difference measured across points X and Y, that is, across all three resistors in figure 3. So, we conclude that "the total potential difference across a combination of resistors in series is equal to the sum of the potential differences across the individual resistors."
• Again consider figure 4, where I is the current flowing through the circuit, which is also the current through each resistor. If we replace the three resistors joined in series by an equivalent single resistor of resistance R, the potential difference V across it and the current I through the circuit remain the same.
• Now, applying Ohm's law to the entire circuit, we get
$V = IR$
On applying Ohm's law to the three resistors separately, we have
$V_1 = IR_1, \quad V_2 = IR_2, \quad V_3 = IR_3$
From equation 6,
$IR = IR_1 + IR_2 + IR_3$
or
$R = R_1 + R_2 + R_3 \qquad (9)$
So from equation 9 we conclude that when several resistances are connected in series combination, the equivalent resistance equals the sum of their individual resistances and is thus greater than any individual resistance.

#### 9(b) Resistors in parallel

• When two or more resistances are connected between the same two points, they are said to be connected in parallel combination.
• The figure below shows a circuit diagram where two resistors are connected in parallel combination.

#### IMPORTANT NOTE

1. When a number of resistors are connected in parallel, the potential difference across each resistance is equal to the voltage of the battery applied.
2. When a number of resistances are connected in parallel, the sum of the currents flowing through all the resistances is equal to the total current flowing in the circuit.
3. When a number of resistances are connected in parallel, their combined resistance is less than the smallest individual resistance. This happens because the same current gets additional paths to flow through, resulting in a decrease in the overall resistance of the circuit.

• To calculate the equivalent resistance of the circuit shown in figure 5, consider a battery B connected across a parallel combination of resistors so as to maintain a potential difference V across each resistor. Then the total current in the circuit would be
$I = I_1 + I_2 + I_3 \qquad (10)$
Since the potential difference across each resistor is V, on applying Ohm's Law,
$I_1 = \frac{V}{R_1}, \quad I_2 = \frac{V}{R_2}, \quad I_3 = \frac{V}{R_3}$
Putting these values of current in equation 10 we have
$I = \frac{V}{R_1} + \frac{V}{R_2} + \frac{V}{R_3}$
If R is the equivalent resistance of the parallel combination of three resistors having resistances R1, R2 and R3, then from Ohm's Law
$I = \frac{V}{R} \qquad (11)$
Comparing equations (10) and (11) we get
$\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3}$
• For resistors connected in parallel combination, the reciprocal of the equivalent resistance is equal to the sum of the reciprocals of the individual resistances.
• The value of the equivalent resistance for resistors connected in parallel combination is always less than the value of the smallest resistance in the circuit.

Question: Calculate the electric current in the given circuit when
(a) key K1 is open and K2 is closed,
(b) both keys are closed,
(c) key K2 is open and K1 is closed.

Solution:
(a) When key K1 is open and K2 is closed, no current flows in the circuit, as the circuit is an open circuit.
(b) When both keys are closed, a current begins to flow in the circuit. Let us consider the circuit given below, where we have labelled the circuit given above.
How to determine the equivalent resistance:
(i) If you look closely at the circuit you will find that the current divides at point A and combines again at point B.
(ii) The same amount of current (say I1) will flow through resistors R1 and R2; likewise, the same amount of current (say I2) will flow through resistors R3 and R4.
(iii) We are aware of the fact that "for a series combination of resistors the current is the same in every part of the circuit, or the same current flows through each resistor". So we can say that resistors R1 and R2 are connected in series combination. Similarly, resistors R3 and R4 are also connected in series combination.
(iv) Again, as mentioned in step (i), the current divides at point A, so we have different currents flowing through the combination of resistors R1 and R2 and the combination of resistors R3 and R4.
(v) So the equivalent resistance of resistors R1 and R2, which is (R1 + R2), and the equivalent resistance of resistors R3 and R4, which is (R3 + R4), are connected in parallel combination with each other, as they carry different currents.
(vi) So the equivalent resistance of the circuit would be
$\frac{1}{R} = \frac{1}{R_1 + R_2} + \frac{1}{R_3 + R_4}$
Putting in the values as given in the question, we get
$\frac{1}{R} = \frac{1}{4 + 4} + \frac{1}{4 + 4} = \frac{1}{8} + \frac{1}{8} = \frac{1}{4}$
So, $R = 4\Omega$.
Electric current, $I = \frac{V}{R} = \frac{12}{4} = 3A$.
(c) When key K2 is open and K1 is closed, the part ADB becomes an open circuit, so no current flows in this part of the circuit. Therefore, the net resistance of the circuit will be
$R = R_1 + R_2 = 4 + 4 = 8\Omega$
Therefore, the electric current is $I = \frac{V}{R} = \frac{12}{8} = 1.5A$.
Watch this tutorial for learning how to solve resistance problems for class 10.
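Because the series and parallel rules reduce to simple arithmetic, a short Python sketch can be used to check answers like the one above. The 4 Ω resistors and the 12 V battery come from the question itself; the helper function names are just illustrative.

```python
# Series and parallel combination rules applied to the worked example above.

def series(*rs):
    """Equivalent resistance of resistors in series: R = R1 + R2 + ..."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance of resistors in parallel: 1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in rs)

V = 12.0                       # battery voltage (volts)
R1 = R2 = R3 = R4 = 4.0        # resistances from the question (ohms)

# (b) Both keys closed: (R1 + R2) in parallel with (R3 + R4)
R_b = parallel(series(R1, R2), series(R3, R4))
print("Case (b): R =", R_b, "ohm, I =", V / R_b, "A")   # expect 4 ohm, 3 A

# (c) K2 open, K1 closed: only the branch R1 + R2 carries current
R_c = series(R1, R2)
print("Case (c): R =", R_c, "ohm, I =", V / R_c, "A")   # expect 8 ohm, 1.5 A
```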
# Tag Info 60 Both SIFT and SURF authors require license fees for usage of their original algorithms. I have done some research about the situation and here are the possible alternatives: Keypoint detector: Harris corner detector Harris-Laplace - scale-invariant version of Harris detector (an affine invariant version also exists, presented by Mikolajczyk and Schmidt, ... 26 There is a relatively new method, you might want to look into: BRISK, Binary Robust Invariant Scalable Keypoints: In this paper we propose BRISK, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK’s adaptive, high quality performance as in state-of-the-art algorithms, albeit at a ... 15 First of all, there's no such thing as 'template' in this paper - the word 'template(s)' has a different meaning in Computer Vision. The method used in this paper is relatively straight-forward. Let me break it down for you. There are three important things that you need to do when doing tasks such as object recognition, image matching, image stitching, ... 14 The best ideas that exactly tries to solve this problem is Hough Transform . Basically, the signal in hough space will be r, x, y co-ordinates. Here r stands for radius and x,y stands for center. Every points may belong to one or many circles. So in the Hough plane go through all possible circles where this point could belong to and just do a +1. This is ... 14 An interest point (key point, salient point) detector is an algorithm that chooses points from an image based on some criterion. Typically, an interest point is a local maximum of some function, such as a "cornerness" metric. A descriptor is a vector of values, which somehow describes the image patch around an interest point. It could be as simple as the ... 12 Don't trust anyone here, talk to a lawyer. The Legal world is subtly different from ours, if I may say. Depending on what you exactly want to do (and where, etc.), there may be a solution where you could use SURF or SIFT. I have been surprised in the past how seemingly strong licenses can be overcome. 11 I will try to avoid math, because math and "how to do it" tutorials can be easily found. So, I start by pointing out one VERY important thing: One does not compute Harris for a single pixel, but for a vicinity (a patch of image) around that pixel! Let $I(i)_{xx}, I(i)_{xy} ...$ be your derivatives for a point $i_0$, then, $H = \left[ \begin{array}{cc} \... 9 As far as alternatives to SIFT/SURF go, the question you linked provides very good answers. There were two more questions I could read out: "how could I build a useful (e.g. rotation invariant) feature descriptor"? "regarding the statement from the linked question, how does he accomplish free rotational invariance?" Building feature descriptors This is a ... 9 Can you try a different feature detector? FAST may be, erm, faster, and a higher frame rate will make matching easier (assuming your features are moving a lot between frames) Looks like you are trying to use the grayscale region around the identified feature point to match from frame to frame. This is likely to be poor, especially if there is lots of ... 9 I think it is kind'a similar to soft and hard thresholding using in wavelet de-noising. Have you come across this topic? pywt has already an in-built function for this purpose. 
Please take a closer look at this code and try to play with it: import pywt import matplotlib.pyplot as plt import numpy as np ts = [2, 56, 3, 22, 3, 4, 56, 7, 8, 9, 44, 23, 1, 4, 6,... 8 I would rather look into KAZE / AKAZE, which perform equally good with significant speed-up. The deformation cases are also tolerated. OpenCV has recently obtained an implementation through GSoC 2014. You can find it here. Its OpenCV tutorial is also present here. 8 Image keypoints are a key feature in many Image and Video processing softwares, both industrial and academic. The principle behind is always the same: detect some meaningful points in some images; [optional] compute a stable description of the image part surrounding each keypoint; match keypoints from an image (the template) to another (the query). Now, ... 8 Here is what I did for a client (What you are asking is the same). Assuming that you have access to certain type of a pattern on the image (or the center of the hole), you could always detect the template to obtain the location of a possible unwarp: Note that in the transformed image, two region of interests are defined and the region within which we would ... 8 Some Features: Mean. Variance. Skewness. Kurtosis. Dominant 3 frequencies in the DFT. Energy of the 3 dominant frequencies. Max Value. Min Value. Median. Usually I'd compute them in running windows. Another great information is the Histogram of the Derivative. Or just all the above of the Derivative. 7 I would have a look at the so called "bag of words" or "visual words" approach. It is increasingly used for image categorization and identification. This algorithm usually starts by detecting robust points, such as SIFT points, in an image. The region around these found points (the 128 bit SIFT descriptor in your case) is used. In the most simple form, one ... 7 The 1D gabor filter has the following form in the frequency domain: $$G_{b(\sigma,\omega_0)}(\omega) = \text{exp}\left(-\frac{\sigma^2}{2}(\omega - \omega_0)^2\right)$$ The 1D log-gabor filter is: $$G_{l(\sigma,\omega_0)}(\omega) = \text{exp}\left(-\frac{\ln^2(\omega/\omega_0)}{2\ln^2(\sigma)}\right)$$ Log-gabor filters are used because they have 0 DC ... 5 Harris Corner detector tries to quantify the local intensity changes at all the directions for each pixel. The figure below illustrates the basic idea clearly: So$I(x+u,y+v)$indicates the pixel intensities of all the neighborhood pixels around$(x,y)$. The window function is applied for feature localization. For most often used Gaussian function, the ... 5 In the robot navigation problem, the localization problem refers to the real time estimation of its position and orientation under various backgrounds. This is usually achieved by some natural landmark selection (laser points, camera views, etc.), and the features in the image (corners, tiny lines with different orientations, etc.). So the localizability ... 4 Most likely your images look different from the ones in the lectures because of scaling. Note that the result of the convolution with a Laplacian filter will have positive and negative values. What the resulting image looks like depends on the data type of the array, and on the range to which the values are scaled. For example, if you store your filtered ... 4 I am currently working on CBIR using Component Trees, which should be a relatively new idea. 
Some expected advantages of using Component Trees to describe images would be: The Component Tree representation of an image would not depend so much on the deformations (even projective) to the image Examining different levels of the tree would allow comparisons ... 4 MSER (Maximally stable extremal regions) are regions, not points. And they're invariant to affine transformation. But it's not a segmentation method, strictly speaking Informally speaking, the idea is to find blobs at various thresholds, then select the blobs that have the least change in shape/area over a range of thresholds. These regions should be stable ... 4 Another way to get rotational invariance for free, is to choose objects that are rotationally invariant. For instance, a circle or a ring is invariant to rotations. Feature extractor: Run edge detection. For each neighborhood of NxN pixels, calculate edge direction and magnitude 2D histogram. Find all points that have high total magnitude, and high angular ... 4 (IANAL...) If you only want detectors: Harris is probably OK. According to http://users.fmrib.ox.ac.uk/~steve/susan/, SUSAN is out of patent now. I've not seen any claims that FAST is patented. Descriptors are harder... Histograms of Oriented Gradients might be worth considering - again I've not seen any claims of patent on the original form. 4 There are two different concepts: If you think as your signal as a single random variable$X$that is emitting values, then what you want is to calculate the Entropy of the random variable http://en.wikipedia.org/wiki/Entropy_estimation If you are considering the entire random signal or stochastic process, then you have to estimate the autocorrelation ... 4 In addition to the features mentioned so far I would like to mention measures of complexity such as: Shannon Entropy LZ Complexity Fractal Dimension There are also Fourier Descriptors (as hinted by Drazick already) and their equivalent in Wavelet Analysis and of course simple histogram bins which would return how frequently each gear is engaged en route. ... 3 As alternative to SIFT/SURF/Other you can also use FFT phase correlation, if frames transformed by mostly translations (rotation/perspective is small). You can also apply phase correlation to regions of image iteratively for better precision. http://en.wikipedia.org/wiki/Phase_correlation 3 I may be wrong if i have not understood the question! I am trying to give a rather elementary introduction here. I can refine things and be more rigorous as suited. What you are looking for is that of 100 (or 1000) patches, which patch is the most representative patch of all. For simplicity if the size of a patch is 1x1. So it is just a scalar. In this ... 3 If I understand correctly what you are asking -- In general, the feature is found at the same scale as the SIFT detector says, but in David Lowe's SIFT, the image is pre-smoothed with sigma: 0.5, so, you need to "subtract" this amount of smoothing from the sigma, so the "real" scale could be: sqrt(sigma^2 - 0.5^2) where sigma is the scale the feature was ... 3 The$\sigma_I$determines the scale level at which the Harris corners are computed. Coarser scales (higher values of$\sigma_I$) correspond to larger corners. The$\sigma_D$is effectively the window size, over which the derivatives are summed to generate the entries of the matrix. If$\sigma_D\$ is too small, then the detector will be seriously affected by ... 
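To make the Harris discussion above concrete, here is a rough NumPy/SciPy sketch (not taken from any of the answers; the function name, defaults, and threshold step are illustrative) of the cornerness response computed from the structure tensor, with `sigma_d` playing the role of the derivative scale and `sigma_i` the integration (window) scale mentioned above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma_d=1.0, sigma_i=2.0, k=0.04):
    img = img.astype(float)
    # Image derivatives at scale sigma_d (derivative-of-Gaussian filters).
    ix = gaussian_filter(img, sigma_d, order=(0, 1))
    iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # Structure-tensor entries, accumulated over a Gaussian window of scale sigma_i.
    ixx = gaussian_filter(ix * ix, sigma_i)
    iyy = gaussian_filter(iy * iy, sigma_i)
    ixy = gaussian_filter(ix * iy, sigma_i)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    # Classic Harris cornerness measure: large where both eigenvalues are large.
    return det - k * trace ** 2

# Usage sketch: threshold the response and keep local maxima as corner candidates.
# corners = harris_response(image) > threshold
```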
3 For a quantized or digital signal, you can get an upper bound on an estimate of information complexity or randomness by attempting to compress the data and/or the data's spectrum using a large variety of compression algorithms.
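A minimal sketch of that idea (not from the answer; the helper name and the 8-bit quantization are illustrative choices): quantize the signal to bytes and take the best ratio achieved by a few standard compressors as a crude upper bound on its randomness.

```python
import bz2, lzma, zlib
import numpy as np

def compression_complexity(signal, levels=256):
    x = np.asarray(signal, dtype=float)
    # Quantize to 8-bit symbols so the compressors see a plain byte stream.
    q = np.interp(x, (x.min(), x.max()), (0, levels - 1)).astype(np.uint8)
    raw = q.tobytes()
    best = min(len(zlib.compress(raw, 9)),
               len(bz2.compress(raw, 9)),
               len(lzma.compress(raw)))
    # Ratio near 0: highly structured signal; ratio near (or above) 1: looks random.
    return best / len(raw)

print(compression_complexity(np.sin(np.linspace(0, 20, 4000))))  # low ratio
print(compression_complexity(np.random.rand(4000)))              # ratio near 1
```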
Joseph Pennell, Our Journey to the Hebrides (online text, page 10 of 11)

one? Well, sir, I can't reach you, but these gen'lemen 'll pass it along." And then he began again with the stories and the Scripture until he had sold out all his stock of albums and note-books and cheap jewellery.

It was the hint about presents to those left behind which bore greatest weight with the fishermen. It never failed. But we remembered their cottages and the sadness of their homes, and it angered us that they should be duped into wasting their hard-won earnings on tawdry ornaments. It seems to be their fate to be cheated by every one. Even the peddler, like the parson and the landlord, can pervert Scripture to their discomfort.

Still, there was a pleasant suggestion of holiday-making in the square. It was the first time we had seen the Western Islanders amusing themselves. True, they did it very solemnly. There was little laughter and much silence; but at least a touch of brightness was given to the gloom of their long life of work and want.

Even on Sunday we thought the people more cheerful. In the morning the women, the little shawls over their shoulders, their heads still bare, the men in blue cloth, many without coats, again filled the streets on their way to church. In the afternoon we walked to two near fishing villages. In one an old fisherman was talking about Christ to a few villagers. We sat a while close to the sea, looking out to the next village, gray against gray gold-lined clouds, to the water with the light falling softly across it, to the little quiet pools in among the low rocks of the shore, to the big black boats drawn up on the beach. And then, as we walked back to Fraserburgh, the mist fell suddenly. But the road near the town was crowded with the men in blue cloth and the women in short skirts. Some were singing hymns as they walked. To us they looked strong and healthy, and even happy. It seemed as if this life on the east coast must make up for many of the hardships they endure in the deserts of their western home.

That same evening in the hotel we heard about life in Fraserburgh, which looks so prosperous to the stranger. A Catholic priest came into the dining-room after supper. He seemed very tired. He had been visiting the sick all day, he told us. Measles had broken out among the women and more had been carried to the hospital. The rooms provided for them by the curers were small and overcrowded. So long as they were kept in their present quarters, so long would disease and death be their portion. Their condition was dreadful; but they worked hard, and never complained. He came from the west coast of Ireland, he said, where Irish poverty is at its worst, but not even there had he seen misery as great as that of the Western Islanders. He knew it well. He had lived with them in the Long Island, where many are Catholics. If the Highlands were represented by eighty-five members, all wanting Home Rule, more would have been heard about destitution in the Hebrides. In the prosperous days of the east coast fisheries the people's burden had been less heavy; but now they came to the fishing towns of the east, the women to sicken and to die, the men to beg their way back as best they could.
There were too many fishermen here, just as at home landlords thought there were too many crofters. The fishers also shall mourn, and all they that cast angle shall lament, and they that spread nets upon the waters shall languish.

The epidemic and its causes became the town talk. The Gaelic Free Kirk minister, differ as he might from the Catholic priest on every other point, on this could but agree with him. He told us the same story in words as strong. It was shameful, he said, the way these poor girls were being killed. He had not known it before; but now that he did, he could not and would not let the matter rest. An indignation meeting of the people of Fraserburgh was called for the day we left. The town was placarded with the notices. Since then the report must have gone abroad. Now that agitation in Lewis is forcing attention to the islands and their people, in London there has been formed a committee of ladies to look into the condition of the girls and women who work on the east coast.

That last morning, as we stood by the hotel door, the funeral of one of the dead women passed up the street towards the station. Fifty or sixty fishermen followed the coffin. When we took our seats in a third-class carriage we found the Free Kirk minister there before us. The coffin had just been put on the train. Two girls came up to speak to him. He stretched out his hand; one took and held it as she struggled to answer his questions; the other turned away with the tears streaming down her face. As the train started they stood apart, their heads bent low, their faces buried in their shawls, both crying as if their hearts would break.

And so, at the last, we saw smaller fishing towns by the way; but our energy was less inexhaustible than the picturesqueness of the east coast. Our journey had been over-long. We were beginning to be anxious to bring it to an end. Now we went straight to ABERDEEN, where we at once fell back into ordinary city life. We even did a little shopping in its fine new streets. Its large harbor seemed empty after that of Fraserburgh. Many fishing-boats were at sea; many had gone altogether. The fishing season here was really well over. We walked to the old town after dinner. In it there is not much to be seen but the university tower with the famous crown atop, and the cathedral, which looked massive and impressive in the twilight. We saw much more of Aberdeen; but we are quite of the same mind as Dr. Johnson, that to write of such well-known cities "with the solemnity of geographical description, as if we had been cast upon a newly discovered coast, has the appearance of a very frivolous ostentation."

From Aberdeen to Edinburgh we trained it by easy stages. We stopped often; once at MONTROSE, where, like Dr. Johnson, and for that matter, every one else who comes here, we looked to the Grampian Hills in the distance. The town itself was not picturesque. The guide-book calls it neat and Flemish, probably because it has fewer houses with high gables turned towards the street than can be seen, as a rule, in any Scotch town. But the harbor, of which the guide-book says less, was fine. We spent hours near the mouth of the river, looking over to the fishermen's houses on the opposite shore. There were constant showers as we sat there; every few minutes the sun came out from the clouds, and the wet roofs glistened and glittered through the smoke hanging above them.
In the morning, women, packed like herrings in the huge ferry-boats, crossed over to the curing-houses. Now and then a fishing-boat sailed slowly in.

One sees little from the cars. Of the country through which we passed I remember only occasional glimpses of the sea and of fishing villages and of red castles, which made us wish we were still on the road. Now and then, as we sat comfortably in the railway-carriage, we determined to walk back to see them, or to get a tricycle at Edinburgh and "do" the whole east coast over again; but we always left our determinations with the carriage. Of all the places at which we stopped, I remember best ARBROATH, the sight of which seemed worth his whole journey to Dr. Johnson. Little is left of the abbey save the broken walls and towers. A street runs through the old gate-house. The public park and children's play-ground lie to one side of the ruined church. A few old tombs and tablets and bits of ornament have been gathered together in the sacristy, which is in better preservation than the rest of the building. We found them less interesting than the guide who explained them. He gave a poetical touch to the usual verger recitation, and indeed to all his talk, of which there was plenty. 'Twas better to have loved and lost, than never to have loved at all, was his manner of expressing regret for the loss of an old engraving of the abbey. There were many hard things in this world, but grass was soft; why, then, should I choose the hard things? was his way of inviting me to walk on the grass instead of the gravel. But it was not until he showed us the original copy, full of blots and corrections, of one of Burns's poems that we found he too was a poet; a successful poet, it seemed, for he had sold 14,000 copies of his volume of poems, very few, he thought. If he were a member of the London Society of Authors he would know better. He had given the last copy to William Morris, when the latter was in the town. William Morris did not wear gaudy clothes, not he. He looked like a sailor in his blue flannel shirt, and there was a slit in his hat. And when he returned to London he sent his "Jason" to his fellow-poet in Arbroath. As we were leaving, he told us how, one day, nothing, but at once asked him to recite his "Abbey Gate." He did so, and then, without a word, they slipped a guinea into his hand, and there were tears on their cheeks. He never knew who they were. After this, we felt our tribute to be very small; but he clasped our hands warmly at parting. There was something out of the common in our faces, he said.

We talked to no one else in Arbroath, except to a pessimistic stationer. While we bought his paper he grumbled because farmers could not sell their cattle and corn. Some people said the country needed protection; "but, sir, what have we got to protect?"

Of the rest of the journey to Edinburgh my note-book says nothing, and little remains in my memory. But I know that when we walked up from the station to Waverley Bridge, and looked to the gray precipice of houses of the Old Town, we realized that our long wanderings had not shown us anything so fine.

And now our journey was at an end. Like Dr. Johnson's, it began and finished in Edinburgh, but it resembled his in little else. From the start, we continually took liberties with his route; we often forgot that he was our guide.
We went to places he had never seen; we turned our backs upon many through which he and Boswell had travelled. But at least he had helped us to form definite plans without the weeks of hard map-study which they otherwise must have cost us.

We had come back wiser in many ways. In the first place, we had learned that for us walking on a tour of this kind, or indeed of any kind, is a mistake. Had we never cycled, perhaps we might not have felt this so keenly. Our powers of endurance are not, I think, below the average; but the power to endure so many miles a day on foot is very different from the capacity to enjoy them; and if on such a trip one proposes, as we did, to work, without pleasure in the exercise, how can one hope for good results? But for the two days' coaching on the west coast, the necessary steaming among the islands, our utter collapse on the east coast, I am sure we never should have worked at all. Day after day we were dispirited, disheartened, and only happy when we were not walking. We went to bed in the evening and got up in the morning wearied and exhausted. The usual walking tours of which one hears mean a day's climbing in the mountains, or a day's tramp with bag or knapsack sent before by train or stage. Under these conditions we probably would not be the first to give in. But to be as independent as if on a tricycle, to have one's sketching traps when needed, one must carry a knapsack one's self. J.'s weighed between twenty-five and thirty pounds; mine, fifteen. Never before have I appreciated so well the true significance of Christian's burden.

(Illustration: Ruins at Arbroath.)

But even worse than this constant strain on our shoulders was the monotony of our pace: no change, no relief. In cycling, for one hard day's work you know you will have two of pleasure. As for short-cuts, they are, as a rule, out of the question. One does not know the country through which one is passing; it is the exception to meet a native. After cycling more thousands of miles than we have walked hundreds, we know it to be not mere theorizing when we declare that no comparison between the two methods of travelling is possible. One is just enough work to make the pleasure greater; the other is all work.

Our experience has taught us to be sceptical about the tramps of other days who saw Europe afoot. We wonder if they told the whole story. Of modern tramps, none has given such a delightful record as has Mr. Stevenson of the walk he took with a donkey through the Cevennes. And yet, even with him, if you read between his lines, or, for that matter, the lines themselves, you realize that, charming as his story is for us, the reality for him was wearisome, depressing, and often painful, and that probably to it is to be referred much of his after physical weakness. We have also had a new light thrown upon the life of tramps at home, who are so often supposed to have chosen the better part. Theirs is as much a life of toil as if they broke stones on the same roads over which they journey. They are not to be envied, but pitied. The next time one begs from you as he passes, give him something out of your charity; he deserves it.

However many drawbacks there were to our walk, we do not regret it. In no other way could we have come to know the country and the people with the same friendly intimacy. For pure enjoyment, it would be best to go over the greater part of our route in a yacht.
From it is to be seen much beauty and little misery. The coast-line can be followed, excursions made inland. But a yacht is a luxury for the rich. Besides, on it one lives one's own life, not that of the country one has come to visit. On foot, with knapsacks on our backs, we often passed for peddlers. Certainly we were never mistaken for tourists of means or sportsmen. Therefore the people met us as equals and talked to us freely.

We were able to correct the vague and false impressions with which we had started. If we did not master the geography of all Scotland, I think at least on the two coasts as far north as the Caledonian Canal we could now pass an examination with credit. We learned that haggis and oatmeal figure more extensively in books than on hotel tables; the first we saw not at all, the second but twice, and then it was not offered to us.

Above all, we learned the burden of Scotland, whose Highlands have been laid waste, their people brought to silence. But now the people themselves have broken their long silence, and a cry has gone up from them against their oppressors. If by telling exactly what we saw we can in the least strengthen that cry, we shall feel that our journeying has not been in vain.

THE END.
Steenbeckeliers, Guy; Bellet, J. (1973). Pure rotation spectra of some vibrational excited states of sulfur dioxide. Ohio State University. Identifier: 1973-B-05. http://hdl.handle.net/1811/15897
Author institutions: Université Catholique de Louvain-la-Neuve; Université de Lille-I, Villeneuve d'Ascq, France.

Abstract: The microwave spectra of the $\nu_{2}$, $2\nu_{2}$, $3\nu_{2}$, $\nu_{1}$, $\nu_{3}$, $\nu_{1}+\nu_{2}$ and $\nu_{2}+\nu_{3}$ vibrational excited states of sulfur dioxide have been recorded. A careful analysis of the $\nu_{1}$, $\nu_{2}$, $\nu_{3}$, and $2\nu_{2}$ states yielded the equilibrium distortion constants, from which the harmonic force field has been calculated. A few lines, which are slightly shifted from their calculated positions, cannot be explained without considering a weak resonance. In connection with that point, the frequency of the origin of the $2\nu_{2}$ vibration-rotation band will be discussed.
## Stream: maths ### Topic: sensitivity conjecture #### Johan Commelin (Aug 02 2019 at 05:54): The sensitivity conjecture was 30 years old, and recently resolved. Can we formalise this? https://www.scottaaronson.com/blog/?p=4229 #### Johan Commelin (Aug 02 2019 at 06:16): @Chris Hughes You've been doing a lot with matrices lately. Did you do things like row/column rank? #### Johan Commelin (Aug 02 2019 at 06:17): And how far did you get with the inverse matrix? #### Chris Hughes (Aug 02 2019 at 07:55): I used a noncomputable inverse in the end. I didn't use row and column rank either, but used the predicates has_left_inverse and has_right_inverse as well. Both of these look good enough for the sensitivity proof. #### Johan Commelin (Aug 02 2019 at 07:58): But this isn't in mathlib yet, right? No. #### Chris Hughes (Aug 02 2019 at 08:24): It's a bit unfinished, that's why. It would be nicer to have a pseudoinverse that worked on rectangular matrices. #### Reid Barton (Aug 02 2019 at 11:54): I thought about this also. How easy is it to do these block matrix calculations appearing in the proof that $A_n = nI$ (and in the definition of $A_n$)? This is another example where it's natural to consider a matrix with rows and columns indexed by a general finite type (in this case, the hypercube). #### Reid Barton (Aug 02 2019 at 11:55): Given what we have at the moment it might be easier to formulate the Knuth proof in terms of linear maps everywhere instead of matrices #### Kevin Buzzard (Aug 02 2019 at 11:56): @Chris Hughes the matrices in Knuth's proof are indexed by {0,1}^n #### Chris Hughes (Aug 02 2019 at 12:06): My matrix pequiv thing might help with the block matrix computations. You can concatenate matrices algebraically using something like $\begin{pmatrix} a & b \end{pmatrix} \begin{pmatrix} 1 & 0 \\0 & 0 \end{pmatrix} + \begin{pmatrix} c & d \end{pmatrix} \begin{pmatrix} 0 & 0 \\0 & 1 \end{pmatrix} = \begin{pmatrix} a & b \\c & d \end{pmatrix}$ #### Chris Hughes (Aug 02 2019 at 12:12): In the proof you'll probably end up with some big ugly sum, but with some terms like $\begin{pmatrix} 1 & 0 \\0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\0 & 1 \end{pmatrix}$, which cancel quite easily. #### Reid Barton (Aug 02 2019 at 12:18): I'm writing up a translation of Knuth's version into linear maps to compare #### Reid Barton (Aug 02 2019 at 12:58): https://gist.github.com/rwbarton/ed50d4340e2f654b9d778ccb3ec93442 #### Reid Barton (Aug 02 2019 at 13:00): I haven't used the relevant parts of mathlib much but as far as I could tell while writing this everything ought to be fairly straightforward #### Kevin Buzzard (Aug 02 2019 at 13:05): rofl kids these days One sec sensitivity.pdf #### Johan Commelin (Aug 02 2019 at 13:14): That looks quite formalisable, I think. #### Johan Commelin (Aug 02 2019 at 13:16): After all, Kenny formalised the dimension formula, and Fabian did the dual basis. #### Johan Commelin (Aug 02 2019 at 13:16): Anything else that we need? #### Johan Commelin (Aug 02 2019 at 13:18): @Reid Barton How far are you with the Lean version? Haven't started #### Reid Barton (Aug 02 2019 at 13:18): The only other general fact which gets used is the triangle inequality for summing over a finite set, do we have that? #### Johan Commelin (Aug 02 2019 at 13:19): If we do, it should be finset.abs_sum, right? I don't think I've seen it. 
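For reference, the inequality in question is just the triangle inequality for a finite sum: for a finset $s$ and $f : \alpha \to \mathbb{R}$,

$$\Bigl|\sum_{i\in s} f(i)\Bigr| \;\le\; \sum_{i\in s} |f(i)|,$$

provable by induction on $s$ whether or not it already carries a mathlib name such as `finset.abs_sum` (unconfirmed here).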
#### Reid Barton (Aug 02 2019 at 13:19): Then there is going to be some possibly tricky business relating the hypercube graph to the matrix entries of the $f_n$, but that would also appear in the matrix version #### Johan Commelin (Aug 02 2019 at 13:22): Right, that final calc block seems the most tricky part. #### Johan Commelin (Aug 02 2019 at 13:22): The rest is straight-forward, I think. #### Rob Lewis (Aug 02 2019 at 13:26): This looks super doable. It's obviously a different kind of proof, but somehow it reminds me of the cap set proof in spirit. #### Reid Barton (Aug 02 2019 at 13:32): I guess I'll just start typing and see at what point I get stuck. #### Johan Commelin (Aug 02 2019 at 14:30): @Reid Barton How are things going so far? I finally have time to look at this #### Reid Barton (Aug 02 2019 at 14:32): So far I created a new repository and did other things while building mathlib :slight_smile: #### Reid Barton (Aug 02 2019 at 14:33): I was about to get started actually #### Johan Commelin (Aug 02 2019 at 14:34): Why do you build mathlib? #### Johan Commelin (Aug 02 2019 at 14:34): I haven't built mathlib in 3 months #### Reid Barton (Aug 02 2019 at 14:34): I haven't been indoctrinated in the new ways yet and I had other things to do anyways #### Reid Barton (Aug 02 2019 at 14:48): Does this look sensible so far? import data.real.basic import linear_algebra.dimension noncomputable theory /-- The free vector space on vertices of a hypercube, defined inductively. -/ def V : ℕ → Type | 0 := ℝ | (n+1) := V n × V n instance : Π n, add_comm_group (V n) := begin apply nat.rec, { dunfold V, apply_instance }, { introsI n IH, dunfold V, apply_instance } end instance : Π n, vector_space ℝ (V n) := begin apply nat.rec, { dunfold V, apply_instance }, { introsI n IH, dunfold V, apply_instance } end lemma dim_V {n : ℕ} : vector_space.dim ℝ (V n) = 2^n := begin induction n with n IH, { apply dim_of_field }, { dunfold V, rw [dim_prod, IH, pow_succ, two_mul] } end /-- The linear operator f_n corresponding to Huang's matrix A_n. -/ def f : Π n, V n →ₗ[ℝ] V n | 0 := 0 | (n+1) := sorry #### Reid Barton (Aug 02 2019 at 14:50): And is there any special support for linear maps into/out of binary direct sums? #### Rob Lewis (Aug 02 2019 at 14:52): Heh, I just started playing around with this and got to exactly the same place. Not that I saw (at a quick glance). #### Reid Barton (Aug 02 2019 at 14:52): Ah yes, linear_map.fst/snd/prod #### Reid Barton (Aug 02 2019 at 14:56): /-- The linear operator f_n corresponding to Huang's matrix A_n. -/ noncomputable def f : Π n, V n →ₗ[ℝ] V n | 0 := 0 | (n+1) := linear_map.pair (linear_map.copair (f n) linear_map.id) (linear_map.copair linear_map.id (-f n)) This definition needs an explicit noncomputable even though I have noncomputable theory at the top, is that supposed to happen? #### Chris Hughes (Aug 02 2019 at 15:00): Is noncomputable theory inside a section? #### Rob Lewis (Aug 02 2019 at 15:00): noncomputable theory doesn't always propogate to aux decls. #### Reid Barton (Aug 02 2019 at 15:32): /-- The linear operator f_n corresponding to Huang's matrix A_n. -/ noncomputable def f : Π n, V n →ₗ[ℝ] V n | 0 := 0 | (n+1) := linear_map.pair (linear_map.copair (f n) linear_map.id) (linear_map.copair linear_map.id (-f n)) lemma f_squared {n : ℕ} : ∀ v, (f n) (f n v) = (n : ℝ) • v := -- The (n : ℝ) is necessary since n • v refers to the multiplication defined -- using only the addition of V. 
begin induction n with n IH, { intro v, dunfold f, simp, refl }, { rintro ⟨v, v'⟩, ext, { dunfold f V, conv_rhs { change ((n : ℝ) + 1) • v, rw add_smul }, simp [IH] }, { dunfold f V, conv_rhs { change ((n : ℝ) + 1) • v', rw add_smul }, have : Π (x y : V n), -x + (y + x) = y := by { intros, abel }, -- ugh simp [IH, this] } } end #### Reid Barton (Aug 02 2019 at 15:32): Maybe I'll put up a repository under leanprover-community? #### Rob Lewis (Aug 02 2019 at 15:37): This was weirdly hard to make it go through, I got caught on the (n : R) thing too for a bit. lemma fn2 : ∀ n v, f n (f n v) = (n : ℝ) • v | 0 v := by { rw (show f 0 = 0, from rfl), norm_cast, simp, refl } | (k+1) ⟨v1, v2⟩ := begin end #### Reid Barton (Aug 02 2019 at 15:38): Okay, I made https://github.com/leanprover-community/lean-sensitivity. I don't know how the permissions work by default, but anyone who can add collaborators should feel free to do so. #### Floris van Doorn (Aug 02 2019 at 15:46): My gut feeling says that defining V n := (fin n -> bool) -> real would be nicer to work with. Then V is defined directly instead of recursively. You can use the following to go between fin n and fin (n+1): universe variable u variables {α : Type u} {n : ℕ} def tail (p : fin (n+1) → α) : fin n → α := λ i, p i.succ def cons (x : α) (v : fin n → α) : fin (n+1) → α := λ j, fin.cases x v j #### Reid Barton (Aug 02 2019 at 15:51): I thought defining V recursively would be more convenient for defining and proving things about f and g--and it seems pretty convenient so far. I was thinking of defining the hypercube as fin n -> bool though (in part because it seems a bit cheating to define it recursively). #### Rob Lewis (Aug 02 2019 at 15:52): I've started with the hypercube as fin n -> bool but I'm not sure about the cleanest way to define the basis of e_ps. #### Floris van Doorn (Aug 02 2019 at 15:53): Ok. Maybe that's right. Why do you say it feels like cheating? #### Johan Commelin (Aug 02 2019 at 15:53): I thought defining V recursively would be more convenient for defining and proving things about f and g--and it seems pretty convenient so far. I was thinking of defining the hypercube as fin n -> bool though (in part because it seems a bit cheating to define it recursively). Why would that be cheating? #### Johan Commelin (Aug 02 2019 at 15:54): I've started with the hypercube as fin n -> bool but I'm not sure about the cleanest way to define the basis of e_ps. Dual basis is in mathlib #### Reid Barton (Aug 02 2019 at 15:55): Well just because it's part of the "interface" of the overall theorem and fin n -> bool feels a bit more canonical. For example if the hypercube is defined recursively then it's not obvious how to construct the action of the symmetric group. #### Floris van Doorn (Aug 02 2019 at 15:55): Isn't the basis e just def V (n : ℕ) : Type := (fin n → bool) → ℝ def e (p : fin n → bool) : V n := λ q, if q = p then 1 else 0 #### Reid Barton (Aug 02 2019 at 15:55): I agree it's not cheating by much. #### Johan Commelin (Aug 02 2019 at 15:57): @Floris van Doorn But you changed the definition of V #### Reid Barton (Aug 02 2019 at 15:58): If you assume that the hypercube must be represented by fin n -> bool, then at some point you have to make a recursive decomposition of building something for n+1 out of two somethings for n. 
I suggest doing that when we define the basis e #### Reid Barton (Aug 02 2019 at 15:58): Which is I think what Rob was going to do #### Floris van Doorn (Aug 02 2019 at 15:58): I was responding to Rob, who said (as I understood it) that he was interested in the basis using this definition of V #### Reid Barton (Aug 02 2019 at 15:59): Probably it doesn't matter very much where you do it #### Johan Commelin (Aug 02 2019 at 16:00): Do we want to define e directly? #### Johan Commelin (Aug 02 2019 at 16:00): I don't really care. #### Johan Commelin (Aug 02 2019 at 16:00): I just thought that we would first define the basis for V. #### Reid Barton (Aug 02 2019 at 16:01): So something like e (n+1) p := if p 0 then prod.inr (e n p.tail) else prod.inl (e n p.tail) #### Rob Lewis (Aug 02 2019 at 16:01): @Floris van Doorn That looks way cleaner than what I was doing, I was using the other definition of V though. #### Rob Lewis (Aug 02 2019 at 16:01): def e : Π n : ℕ, Q n → V n | 0 p := (1 : ℝ) | (k+1) p := match p 0 with | tt := (e k (drop p), 0) | ff := (0, e k (drop p)) end or something like that, I don't think it's gonna be convenient. #### Floris van Doorn (Aug 02 2019 at 16:01): The recursive decomposition is easy: def tail (p : fin (n+1) → α) : fin n → α := λ i, p i.succ def tuple (x : α) (v : α → (fin n → α) → β) : (fin (n+1) → α) → β := λ p, v (p 0) (tail p) So if you have an element of bool -> V n you get an element of V (n+1) #### Reid Barton (Aug 02 2019 at 16:04): I'm just worried that you will end up with only a linear isomorphism (not a definitional equality) between V (n+1) and bool -> V n or V n x V n and that will make the computations involving f and g a lot more involved unless you can manage to cancel the isomorphisms automatically #### Floris van Doorn (Aug 02 2019 at 16:05): Ok, that might be a problem if we have to translate a lot between these two representations. I think it will be doable though. #### Floris van Doorn (Aug 02 2019 at 16:06): but maybe in that aspect the recursive definition is easier. #### Johan Commelin (Aug 02 2019 at 16:06): /-- The hypercube.-/ def Q (n) : Type := fin n → bool /-- The basis of V indexed by the hypercube.-/ def b : Π n, Q n → V n | 0 := λ _, (1:ℝ) | (n+1) := λ v, if v n = tt then (b n (v ∘ fin.succ), 0) else (0, b n (v ∘ fin.succ)) #### Reid Barton (Aug 02 2019 at 16:07): The alternative is to write down the basis and probably also the dual basis separately by this kind of recursive formula and then check that the dual basis is really dual and also the formula for $\varepsilon_q (f_n e_p)$, by induction #### Rob Lewis (Aug 02 2019 at 16:09): @Johan Commelin That's probably slightly more convenient than the match, yeah. #### Johan Commelin (Aug 02 2019 at 16:12): Ok, but now we need to prove that this is a basis... #### Johan Commelin (Aug 02 2019 at 16:12): It should suffice to prove linear independence #### Reid Barton (Aug 02 2019 at 16:13): something like noncomputable def ε : Π {n : ℕ} (p : fin n → bool), V n →ₗ[ℝ] ℝ | 0 _ := linear_map.id | (n+1) p := match p 0 with | ff := (ε (p ∘ fin.succ)).comp (linear_map.fst _ _ _) | tt := (ε (p ∘ fin.succ)).comp (linear_map.snd _ _ _) end #### Reid Barton (Aug 02 2019 at 16:13): Ok, but now we need to prove that this is a basis... Yeah that's true #### Johan Commelin (Aug 02 2019 at 16:14): @Reid Barton Is there a reason you don't want to use mathlibs dual basis? 
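For orientation, the expected formula (standard linear algebra, independent of whether mathlib had it at the time): if $(e_i)$ is a basis of $V$ with dual basis $(\varepsilon_i)$ and $(e'_j)$ a basis of $W$ with dual basis $(\varepsilon'_j)$, then the dual basis of the product basis of $V \times W$ is obtained by precomposing with the projections,

$$\widehat{(e_i, 0)} = \varepsilon_i \circ \pi_V, \qquad \widehat{(0, e'_j)} = \varepsilon'_j \circ \pi_W,$$

which is the shape of the recursive $\varepsilon$ sketched above.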
#### Reid Barton (Aug 02 2019 at 16:14): I fear you might get stuck when you need to calculate the matrix entries but I don't really know #### Reid Barton (Aug 02 2019 at 16:15): because you won't have a formula for the dual basis like the one above #### Reid Barton (Aug 02 2019 at 16:16): Unless there are already theorems that say if we have bases of V and W, then we get a basis of V x W and the formula for the dual basis is the expected one #### Johan Commelin (Aug 02 2019 at 16:17): Aha, but wouldn't it be better/easier to prove those formulas? #### Johan Commelin (Aug 02 2019 at 16:17): Because otherwise you have to reprove that this thing is actually a dual basis. #### Reid Barton (Aug 02 2019 at 16:18): I guess we must already have a theorem about a basis of V x W so that we can compute its dimension #### Reid Barton (Aug 02 2019 at 16:18): You mean we should prove a mathlib theorem about the dual basis of V x W that comes from bases of V and W? #### Reid Barton (Aug 02 2019 at 16:20): I haven't seen the dual basis stuff in mathlib at all yet, let me take a look. #### Reid Barton (Aug 02 2019 at 16:23): So, there is lemma is_basis_inl_union_inr {v : ι → β} {v' : ι' → γ} (hv : is_basis α v) (hv' : is_basis α v') : is_basis α (sum.elim (inl α β γ ∘ v) (inr α β γ ∘ v')) := #### Johan Commelin (Aug 02 2019 at 16:23): Also, more trivialities: /-- The hypercube.-/ def Q (n) : Type := fin n → bool instance Q.fintype (n) : fintype (Q n) := by delta Q; apply_instance def Q.card (n) : fintype.card (Q n) = 2^n := calc _ = _ : fintype.card_fun ... = _ : by simp only [fintype.card_fin, fintype.card_bool] #### Johan Commelin (Aug 02 2019 at 16:24): Ooh, that def should be a simp-lemma #### Reid Barton (Aug 02 2019 at 16:25): Maybe the right way to do all this is to just define the hypercube recursively after all, and then tack a translation onto fin n -> bool at the end if we want to. #### Reid Barton (Aug 02 2019 at 16:26): Then we can use that is_basis_inl_union_inr and add a formula to linear_algebra.dual about its dual basis #### Reid Barton (Aug 02 2019 at 16:27): Q (n+1) := Q n ⊕ Q n and whatever definition of "adjacent" is the most convenient #### Johan Commelin (Aug 02 2019 at 16:28): Or should we just prove that Q n →₀ ℝ is linearly equivelent to V n? #### Johan Commelin (Aug 02 2019 at 16:28): Or maybe we should redefine V n to be that space? #### Reid Barton (Aug 02 2019 at 16:29): Or should we just prove that Q n →₀ ℝ is linearly equivelent to V n? This one sounds good #### Reid Barton (Aug 02 2019 at 16:32): Maybe by first providing Q (n+1) ≃ Q n ⊕ Q n #### Johan Commelin (Aug 02 2019 at 16:36): Aha, that also sounds like a good idea. And then use is_basis.comp? #### Johan Commelin (Aug 02 2019 at 16:38): Need to feed some kids :children_crossing: brb #### Johan Commelin (Aug 02 2019 at 16:54): Huh, there is no has_xor typeclass?? 
#### Johan Commelin (Aug 02 2019 at 17:32): namespace Q variable (n : ℕ) instance : fintype (Q n) := by delta Q; apply_instance variable {n} def xor (x y : Q n) : Q n := λ i, bxor (x i) (y i) @[symm] lemma xor_comm (x y : Q n) : x.xor y = y.xor x := funext $λ i, bool.bxor_comm _ _ /-- The distance between two vertices of the hypercube.-/ def dist (x y : Q n) : ℕ := (finset.univ : finset (fin n)).sum$ λ i, cond (x.xor y i) 1 0 @[simp] lemma dist_self (x : Q n) : x.dist x = 0 := finset.sum_eq_zero $λ i hi, by simp only [xor, bxor_self, bool.cond_ff] @[symm] lemma dist_symm (x y : Q n) : x.dist y = y.dist x := congr_arg ((finset.univ : finset (fin n)).sum)$ by { funext i, simp [xor_comm] } /-- Two vertices of the hypercube are adjacent if their distance is 1.-/ def adjacent (x y : Q n) : Prop := x.dist y = 1 /-- The set of n-/ def neighbours (x : Q n) : set (Q n) := {y | x.adjacent y} variable (n) /-- The cardinality of the hypercube.-/ @[simp] lemma card : fintype.card (Q n) = 2^n := calc _ = _ : fintype.card_fun ... = _ : by simp only [fintype.card_fin, fintype.card_bool] theorem sensitivity (H : set (Q n)) (x) (h : x ∈ H) : real.sqrt n ≤ fintype.card (H ∩ (neighbours x) : set (Q n)) := sorry end Q Lean isn't yet happy with the theorem statement #### Johan Commelin (Aug 02 2019 at 17:54): Voila: that's a statement that Lean likes: theorem sensitivity (H : finset (Q n)) (x) (h : x ∈ H) : real.sqrt n ≤ (H.filter (neighbours x)).card := sorry #### Patrick Massot (Aug 02 2019 at 18:28): Hi everybody. I'd like to play this game too. Can we somehow distribute efforts? Where are you now? #### Reid Barton (Aug 02 2019 at 18:34): Hi Patrick! I just got back to this, currently trying to add a recursive definition of the hypercube and associated basis. #### Reid Barton (Aug 02 2019 at 18:35): After that, I think there are some independent pieces to do #### Patrick Massot (Aug 02 2019 at 18:35): Do you think you could push something and give me lemmas to prove? #### Johan Commelin (Aug 02 2019 at 18:36): This is what I have now: /-- The cardinality of the hypercube.-/ @[simp] lemma card : fintype.card (Q n) = 2^n := calc _ = _ : fintype.card_fun ... = _ : by simp only [fintype.card_fin, fintype.card_bool] def equiv_sum : Q (n+1) ≃ Q n ⊕ Q n := { to_fun := λ x, cond (x 0) (sum.inl (x ∘ fin.succ)) (sum.inr (x ∘ fin.succ)), inv_fun := λ x, sum.rec_on x (λ y i, if h : i = 0 then tt else y (i.pred h)) (λ y i, if h : i = 0 then ff else y (i.pred h)), left_inv := begin intro x, dsimp only, cases h : x 0; { funext i, dsimp only [bool.cond_tt, bool.cond_ff], split_ifs with H, { rw [H, h] }, { rw [function.comp_app, fin.succ_pred] } } end, right_inv := begin end } #### Patrick Massot (Aug 02 2019 at 18:36): Johan, which definition of Q n is this using? #### Patrick Massot (Aug 02 2019 at 18:36): Did you agree on the definitions of Q n and V n? 
#### Reid Barton (Aug 02 2019 at 18:37): Maybe I will just push something using a lot of constant and axiom as a starting point #### Reid Barton (Aug 02 2019 at 18:38): then perhaps we can distribute the representation decisions #### Patrick Massot (Aug 02 2019 at 18:39): It seems hard to decide independently the representations of V n and Q n #### Johan Commelin (Aug 02 2019 at 18:42): This is with Q n := fin n → bool #### Reid Barton (Aug 02 2019 at 18:43): I think https://github.com/leanprover-community/lean-sensitivity/commit/c2d0b69cbe175f3125c2719f3b87f2f6f626f424 is enough to make the rest of the proof go through #### Reid Barton (Aug 02 2019 at 18:44): Except with a correct statement of f_matrix_nonadjacent #### Johan Commelin (Aug 02 2019 at 18:49): @Reid Barton @Patrick Massot Shall I fill in the Q and adjacent constants? #### Reid Barton (Aug 02 2019 at 18:50): Feel free, I'm going to try to continue to sketch out the rest of the proof #### Johan Commelin (Aug 02 2019 at 18:51): @Patrick Massot @Floris van Doorn I've given you write permissions on the repo thanks! #### Patrick Massot (Aug 02 2019 at 18:58): Reid I see you defined epsilon as the dual basis, but I liked the recursive definition #### Patrick Massot (Aug 02 2019 at 18:59): I'd like to work on the fact about |epsilon_q f_n e_p| #### Rob Lewis (Aug 02 2019 at 18:59): @Johan Commelin mind giving me access too? I was looking at this for a bit before dinner, might try to do a bit more. Sure! #### Reid Barton (Aug 02 2019 at 19:00): I was intending that we'd add a lemma to mathlib that says that the dual basis element on inl i is given by projecting to the first factor, then applying the dual basis of the original basis #### Johan Commelin (Aug 02 2019 at 19:00): With this definition: /-- The basis of V indexed by the hypercube.-/ def e : Π n, Q n → V n | 0 := λ _, (1:ℝ) | (n+1) := λ v, cond (v 0) (e n (v ∘ fin.succ), 0) (0, e n (v ∘ fin.succ)) I can't make {n} implicit... #### Johan Commelin (Aug 02 2019 at 19:01): @Rob Lewis Done, although I guess you also have admin rights by default... #### Johan Commelin (Aug 02 2019 at 19:01): Since you are admin of leanprover-community, right? #### Reid Barton (Aug 02 2019 at 19:02): Gah, I keep running into "maximum class-instance resolution depth has been reached" issues, I think they are related to decidable_eq somehow... #### Rob Lewis (Aug 02 2019 at 19:02): Oh, maybe. But thanks anyway! #### Rob Lewis (Aug 02 2019 at 19:05): I can't make {n} implicit... Why not? def e : Π {n}, Q n → V n | 0 := λ _, (1:ℝ) | (n+1) := λ v, cond (v 0) (e (v ∘ fin.succ), 0) (0, e (v ∘ fin.succ)) #### Reid Barton (Aug 02 2019 at 19:05): @Patrick Massot That proof will depend on the definition of Q and e, though #### Reid Barton (Aug 02 2019 at 19:07): I pushed another commit about g #### Patrick Massot (Aug 02 2019 at 19:07): I'm using the definition of e that Rob just pasted #### Patrick Massot (Aug 02 2019 at 19:08): (which I indeed modified from Johan's message) #### Jesse Michael Han (Aug 02 2019 at 19:11): i'm also happy to contribute what needs to be done besides the two sorrys Reid just pushed? #### Reid Barton (Aug 02 2019 at 19:11): Oh dang, H is not a great name #### Reid Barton (Aug 02 2019 at 19:13): @Jesse Michael Han I'll be pushing a couple more sorrys soon to complete the proof outline #### Johan Commelin (Aug 02 2019 at 19:17): I'm inductively proving that e is a basis. 
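For orientation, the fact about $|\varepsilon_q(f_n e_p)|$ being referred to, stated informally as it appears in Huang's argument (the Lean statements `f_matrix_adjacent` / `f_matrix_nonadjacent` split it into two lemmas):

$$\bigl|\varepsilon_q\bigl(f_n(e_p)\bigr)\bigr| \;=\; \begin{cases} 1 & \text{if } p \text{ and } q \text{ are adjacent in the hypercube},\\ 0 & \text{otherwise}. \end{cases}$$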
#### Johan Commelin (Aug 02 2019 at 19:18): See the jmc branch #### Patrick Massot (Aug 02 2019 at 19:22): As I wrote earlier, I'm inductively calculating epsilon q e p #### Patrick Massot (Aug 02 2019 at 19:22): I guess it should prove that e is a basis right? #### Reid Barton (Aug 02 2019 at 19:24): Lean isn't yet happy with the theorem statement @Johan Commelin were you having trouble with type classes? #### Johan Commelin (Aug 02 2019 at 19:24): Well, sets weren't finsets and such #### Johan Commelin (Aug 02 2019 at 19:25): There is a working statement on the jmc branch #### Johan Commelin (Aug 02 2019 at 19:25): I'm almost done with the proof that e is a basis #### Reid Barton (Aug 02 2019 at 19:28): Okay, I pushed a couple more bits (theorem statement adapted from your Johan--but it should be an exists) #### Reid Barton (Aug 02 2019 at 19:29): Not sure if the next-to-last statement is phrased optimally #### Patrick Massot (Aug 02 2019 at 19:34): In those terms, I'm working on f_matrix_adjacent and f_matrix_nonadjacent #### Jesse Michael Han (Aug 02 2019 at 19:35): g_injective was easy: lemma g_injective {m : ℕ} : function.injective (g m) := begin rw[g], intros x₁ x₂ H_eq, simp at *, exact H_eq.right end #### Reid Barton (Aug 02 2019 at 19:35): I'm going to tackle the inequality that's the last sentence of the PDF I posted #### Reid Barton (Aug 02 2019 at 19:35): that leaves f_image_g and exists_eigenvalue #### Johan Commelin (Aug 02 2019 at 19:36): I'll push what I have now. Ok, I pushed #### Johan Commelin (Aug 02 2019 at 19:38): It does contain a sorry. Will work on that soon. #### Jesse Michael Han (Aug 02 2019 at 19:38): i doubt f_image_g will be as easy but i can start on that. @Johan Commelin could i get push access as well? afk #### Jesse Michael Han (Aug 02 2019 at 19:42): jesse-michael-han #### Patrick Massot (Aug 02 2019 at 19:46): Do we have a nice way to define the two injections from fin n -> bool to fin n+1 -> bool that differ on the zeroth element of fin n+1? #### Patrick Massot (Aug 02 2019 at 19:46): I guess I should use pattern matching Check what I did #### Johan Commelin (Aug 02 2019 at 19:47): I've used cond (x 0) stuff #### Patrick Massot (Aug 02 2019 at 19:48): This is not quite the same question, is it? Maybe not #### Patrick Massot (Aug 02 2019 at 19:48): You're going i the easier direction (decreasing n) #### Johan Commelin (Aug 02 2019 at 19:49): Right, I was too fast... 
sorry #### Reid Barton (Aug 02 2019 at 19:52): In principle I like building the equivalence Q (n+1) ≃ Q n ⊕ Q n out of stuff in data.equiv.basic/data.equiv.fin though it would be kind of verbose #### Patrick Massot (Aug 02 2019 at 19:53): I found the library bit I was missing #### Patrick Massot (Aug 02 2019 at 19:53): I can now write Q n → Q (n+1) := λ p, λ k, if h : k = 0 then tt else p (k.pred h) #### Johan Commelin (Aug 02 2019 at 19:53): That's what I used #### Patrick Massot (Aug 02 2019 at 19:53): I was missing pred #### Johan Commelin (Aug 02 2019 at 19:53): I already pushed that equiv @Reid Barton #### Patrick Massot (Aug 02 2019 at 19:54): Sorry Johan, I focused on your cond and missed pred #### Reid Barton (Aug 02 2019 at 19:54): Oh I see, just defined manually yeah #### Reid Barton (Aug 02 2019 at 20:09): I changed H from a finset back to a set though I'm not yet entirely sure whether it was a good idea #### Johan Commelin (Aug 02 2019 at 20:18): Ooh, you should also feel free to refactor Q.equiv_sum #### Johan Commelin (Aug 02 2019 at 20:19): If you want to use more fancy library functions (-; finsets ftw. #### Floris van Doorn (Aug 02 2019 at 20:33): @Patrick Massot: I would suggest def cons (x : α) (v : fin n → α) : fin (n+1) → α := λ j, fin.cases x v j What is that? #### Floris van Doorn (Aug 02 2019 at 20:36): To define the maps Q n → Q (n+1), fin.cases is there exactly for that purpose. Oh I see #### Patrick Massot (Aug 02 2019 at 20:41): It's still totally mysterious to me that your definition cannot be rewritten as fin.cases x v #### Floris van Doorn (Aug 02 2019 at 20:56): Lean is pretty bad at these kind of unification problems, unless it uses the [elab_as_eliminator] attribute. With that attribute it is hardcoded to look at the third explicit argument of fin.cases, and then figure out what to do using the type of the third argument. If you don't give it 3 arguments, it will do the standard unification procedure, which fails. #### Patrick Massot (Aug 02 2019 at 20:57): What do you get from using this definition rather than mine? #### Patrick Massot (Aug 02 2019 at 20:57): It seems harder to prove lemmas about it #### Patrick Massot (Aug 02 2019 at 20:57): but probably I'm missing library lemmas #### Johan Commelin (Aug 02 2019 at 21:07): I'm calling it a day #### Johan Commelin (Aug 02 2019 at 21:07): Unfortunately e is still not a basis #### Johan Commelin (Aug 02 2019 at 21:08): I pushed the mess that I have so far. #### Patrick Massot (Aug 02 2019 at 21:08): This is all very frustrating. I'm trying to learn how to make weird inductions in Lean #### Patrick Massot (Aug 02 2019 at 21:08): And I'm very bad at it #### Jesse Michael Han (Aug 02 2019 at 21:14): i'm halfway done with f_image_g #### Floris van Doorn (Aug 02 2019 at 21:20): My hope was that the lemmas you needed were already in the library. Although I don't see anything other than cases_zero and cases_succ. What else do you need? 
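As a side note, a minimal self-contained sketch (Lean 3; `Q` as in Johan's definition, and the name `extend` is made up here) of the two maps `Q n → Q (n+1)` written with `fin.cases` applied to its third argument, in the style Floris describes:

```lean
import data.fin

def Q (n : ℕ) : Type := fin n → bool

-- Prepend a fixed value as the new 0-th coordinate; note that `fin.cases`
-- only elaborates once its third (fin (n+1)) argument is supplied, as
-- discussed above.
def extend (b : bool) {n : ℕ} (p : Q n) : Q (n+1) :=
λ k, fin.cases b p k

-- The two injections Patrick asked about: fix the new coordinate to tt or ff.
example {n : ℕ} (p : Q n) : Q (n+1) := extend tt p
example {n : ℕ} (p : Q n) : Q (n+1) := extend ff p
```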
#### Jesse Michael Han (Aug 02 2019 at 22:27): i'm done with f_image_g except for these two annoying lemmas: lemma cast_lemma_1 {m : ℕ} : 0 ≤ (1 + (nat.cast m) : ℝ) := sorry lemma cast_lemma_2 {m : ℕ} : (nat.cast (nat.succ m) : ℝ) = (1 + nat.cast m : ℝ) := sorry #### Jesse Michael Han (Aug 02 2019 at 22:30): oh nevermind, i just needed to make a change: lemma cast_lemma_1 {m : ℕ} : 0 ≤ (1 + (nat.cast m) : ℝ) := by {change (0 : ℝ) ≤ (1 + ↑m : ℝ), suffices this : 0 ≤ (↑m : ℝ), by {linarith}, simp} lemma cast_lemma_2 {m : ℕ} : (nat.cast (nat.succ m) : ℝ) = (1 + nat.cast m : ℝ) := by change ↑(nat.succ m) = (1 + ↑m : ℝ); simp #### Jesse Michael Han (Aug 02 2019 at 22:57): [edited] oops i missed the invite link #### Patrick Massot (Aug 02 2019 at 23:53): I'm done computing the matrix #### Patrick Massot (Aug 02 2019 at 23:58): I haven't use @Johan Commelin xor adjacency definition, see https://github.com/leanprover-community/lean-sensitivity/blob/058def458c9a4023ec95aaf211c0f7a22f77a05d/src/sensitivity.lean#L232-L235. I hope my definition is the same as his. At least I can compute the matrix using it #### Patrick Massot (Aug 02 2019 at 23:59): I also used the explicit inductive definition of the dual basis, and proved the duality equations. #### Patrick Massot (Aug 03 2019 at 00:00): Note that I spent most of my time trying to prove fancy induction principle on Q n, messing around with elab_as_eliminator and induction ... using .... But nothing worked, so I reverted to good old cases. #### Patrick Massot (Aug 03 2019 at 00:18): Is there any reason we use bool everywhere instead of Prop? #### Johan Commelin (Aug 03 2019 at 05:44): @Patrick Massot You don't need bool_cases. You can just write cases h : x p. #### Patrick Massot (Aug 03 2019 at 09:51): Nice trick! I tried by_cases hp : x p = tt but this gave an inconvenient second case. I'm a bit disappointed that the proof wasn't finished while I slept. I'll be again away from Lean for the next 9 hours, so maybe this will be enough. Otherwise I'll play again tonight. Have fun! #### Patrick Massot (Aug 03 2019 at 17:48): @Jesse Michael Han I think you have slightly too much love for tidy. What about lemma f_image_g' {m : ℕ} (w : V (m + 1)) (hv : ∃ v, w = g m v) : (f (m + 1) : V (m + 1) → V (m + 1)) w = real.sqrt (m + 1) • w := begin rcases hv with ⟨v, rfl⟩, dsimp [g], erw f, simp [f_squared], rw [smul_add, smul_smul, real.mul_self_sqrt (by exact_mod_cast zero_le _ : 0 ≤ (1 : ℝ) + m), add_smul, one_smul], abel, end #### Patrick Massot (Aug 03 2019 at 17:49): Do you mind if I replace your proof by this one? I looks closer to the paper proof #### Jesse Michael Han (Aug 03 2019 at 17:50): oh, i didn't see f_squared yeah that looks much better #### Patrick Massot (Aug 03 2019 at 17:51): Did you look at the pdf proof? #### Jesse Michael Han (Aug 03 2019 at 17:53): yes, but i took a longer route in proving the left-side equality (because i essentially rederived f_squared) #### Patrick Massot (Aug 03 2019 at 17:54): Anyway, is anybody working on the next lemma? 
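As a toy illustration (generic, not project code) of the `cases h : x p` idiom Johan points out above: it case-splits on a bool-valued application while recording the chosen case as an equation, avoiding the awkward second case that `by_cases ... = tt` produces.

```lean
example (f : ℕ → bool) (n : ℕ) : f n = tt ∨ f n = ff :=
begin
  cases h : f n,   -- first goal carries `h : f n = ff`, second carries `h : f n = tt`
  { right, refl },
  { left, refl }
end
```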
#### Patrick Massot (Aug 03 2019 at 17:54): I mean exists_eigenvalue #### Patrick Massot (Aug 03 2019 at 17:55): I know nothing about dimension in mathlib, but it looks like a good opportunity to learn #### Jesse Michael Han (Aug 03 2019 at 17:56): i'm not working on it i was going to wait to hear back from reid #### Patrick Massot (Aug 03 2019 at 18:05): exists_mem_ne_zero_of_dim_pos looks very promising #### Jesse Michael Han (Aug 03 2019 at 20:50): i started a bit on degree_theorem; feel free to overwrite or build on what's there i'm not sure if finsupp.mem_span_if_total is the correct way to extract a linear combination from a proof of membership in a span, but it reduces to a finset.sum which seems to be what we want. #### Patrick Massot (Aug 03 2019 at 20:55): I finished e_is_basis that @Johan Commelin almost did yesterday #### Patrick Massot (Aug 03 2019 at 20:56): Wait. @Jesse Michael Han did you proof start timeout? #### Patrick Massot (Aug 03 2019 at 20:56): I tried to merge and I see it times out #### Patrick Massot (Aug 03 2019 at 20:58): https://github.com/leanprover-community/lean-sensitivity/commit/0f1468180a143ba8d40276b2bf7198fc2c250093 is my attempt at the dimension argument #### Patrick Massot (Aug 03 2019 at 20:58): But I don't know enough cardinal theory to do finite dimensional linear algebra :sad: i'll take a look #### Patrick Massot (Aug 03 2019 at 20:59): For instance I need to prove: n : ℕ ⊢ 2 ^ n < cardinal.omega #### Patrick Massot (Aug 03 2019 at 20:59): and n : ℕ, ⊢ 2 ^ n = 2 ^ ↑n Where at least some of this stuff is in cardinals #### Patrick Massot (Aug 03 2019 at 21:05): I won't do more tonight. #### Jesse Michael Han (Aug 03 2019 at 21:14): OK, leanpkg test succeeds at my commit, but indeed times out after the merge. i'll fix it i fixed it #### Chris Hughes (Aug 03 2019 at 21:21): If you know everything is finite dimensional, you probably shouldn't be using cardinals. Everything about infinite dimension should be transferred to findim really, or you have to faff with coercions #### Patrick Massot (Aug 03 2019 at 21:32): Chris, the problem is we seem to have a lot more lemmas about dim than findim #### Patrick Massot (Aug 03 2019 at 21:35): For instance, do you have dim_sup_add_dim_inf_eq for findim? #### Patrick Massot (Aug 03 2019 at 21:36): Or do you have a nice way to import such theorems into the finite dimensional world? #### Chris Hughes (Aug 03 2019 at 21:37): I have virtually nothing. I guess when transfer comes along it will be easier. But they'll just be simp [nat.cast_add] or something. #### Patrick Massot (Aug 03 2019 at 21:39): Was this mirroring discussed with @Mario Carneiro? It seems a bit sad to duplicate all this, but we really want to be able to use finite dimensions as natural numbers #### Mario Carneiro (Aug 03 2019 at 21:40): most facts about numbers being equivalent to finite cardinals are proven #### Patrick Massot (Aug 03 2019 at 21:41): what do you mean? #### Chris Hughes (Aug 03 2019 at 21:43): Like cast is monotonic etc. Unfortunately, cardinals are not an ordered semiring, so the generic lemmas don't work. #### Patrick Massot (Aug 03 2019 at 21:43): Are they tagged for use by norm_cast? 
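For context on the question above, this is what tagged cast lemmas buy in the familiar ℕ/ℤ setting (a generic example, nothing to do with cardinals); the point of Patrick's question is whether the analogous cardinal lemmas carry these attributes.

```lean
import data.int.basic tactic.norm_cast

-- with the ℕ → ℤ cast lemmas tagged, norm_cast moves or eliminates the coercions
example (a b : ℕ) (h : a ≤ b) : (a : ℤ) ≤ b :=
by exact_mod_cast h

example (a b : ℕ) : ((a + b : ℕ) : ℤ) = a + b :=
by norm_cast
```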
#### Patrick Massot (Aug 03 2019 at 21:44): For instance attribute [elim_cast] cardinal.nat_cast_in seems to be missing #### Jesse Michael Han (Aug 03 2019 at 22:06): i think the version of 2^n < omega you stated is parsed by Lean as monoid.pow, which is more annoying to deal with but if you ensure it's the cardinal pow of casted nats, it's 2 lemmas away: import set_theory.cardinal universe u example {n : ℕ} : (↑2) ^ (↑n : cardinal.{u}) < (cardinal.omega : cardinal.{u}) := by rw[<-cardinal.nat_cast_pow]; exact cardinal.nat_lt_omega _ #### Patrick Massot (Aug 03 2019 at 22:15): Do we know that the restriction of linear_independent family to a subtype is linear_independent? #### Patrick Massot (Aug 03 2019 at 22:24): Anyway, I said I would stop more than one hour ago, so I should really stop now #### Patrick Massot (Aug 03 2019 at 22:25): Modulo the restriction thing, the proof of the dimension argument is reduced to these 2^n < omega things #### Jesse Michael Han (Aug 03 2019 at 22:26): the monoid.pow is annoying because you have to do an induction, but it's not bad: example {n : ℕ} : (2 ^ n : cardinal) = (2 ^ (↑n : cardinal.{u}) : cardinal.{u}) := begin induction n with n ih, {simp}, {rw[nat.cast_succ, cardinal.power_add, <-ih, cardinal.power_one, mul_comm], refl }, end you can get the first one from this one, i believe #### Patrick Massot (Aug 03 2019 at 22:27): I followed Chris's advice to get rid of dim as soon as I applied the lemmas #### Patrick Massot (Aug 03 2019 at 22:28): Nice, feel free to remove those sorries and push. I'll go to bed #### Patrick Massot (Aug 03 2019 at 22:29): Maybe you can even finish the whole thing while I sleep #### Patrick Massot (Aug 03 2019 at 22:30): Or maybe @Reid Barton will come back and cross the finish line for us #### Patrick Massot (Aug 03 2019 at 22:31): Zulip says his local time is 6:30pm #### Patrick Massot (Aug 03 2019 at 22:31): This leaves him the whole evening and night #### Patrick Massot (Aug 03 2019 at 22:31): and same for you! Have fun! #### Yao Liu (Aug 04 2019 at 04:46): As a spectator, I was looking around and found (via Terry Tao's blog post) this interpretation https://arxiv.org/abs/1907.11175 namely, the $2^n$-dimensional space ought to be the exterior algebra (or Clifford algebra) of an $n$-dimensional vector space. #### Jesse Michael Han (Aug 04 2019 at 21:07): i'm trying to import analysis.normed_space.basic to use the triangle inequality, but this causes Lean on both my machines to start complaining about "equation compiler failed to generate bytecode for e._main", and something not being a rfl lemma, etc. the errors appear iff i have that import. does anyone know why this would happen? #### Kevin Buzzard (Aug 04 2019 at 21:12): equation compiler failed to generate bytecode for 'e._main' nested exception message: code generation failed, VM does not have code for 'real.normed_field' #### Kevin Buzzard (Aug 04 2019 at 21:16): The entire file analysis.normed_space.basic is marked noncomputable so maybe it's no surprise the VM doesn't have code for real.normed_field #### Kevin Buzzard (Aug 04 2019 at 21:17): If you mark e noncomputable then most of the errors go away #### Jesse Michael Han (Aug 04 2019 at 21:19): indeed, but i was alarmed that previous proofs which were by refl were somehow no longer so after adding noncomputable #### Jesse Michael Han (Aug 04 2019 at 21:19): but that error can be fixed by changing rfl to by rw e... 
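A minimal reproduction (assumed example, not the project's `e`) of the error pattern just discussed: a definition whose value depends on a noncomputable instance must itself be marked noncomputable, otherwise the VM reports the missing bytecode, exactly as in the message Kevin quotes.

```lean
import analysis.normed_space.basic

-- without `noncomputable` this fails with
-- "code generation failed, VM does not have code for 'real.normed_field'"
noncomputable def my_e : ℝ := ∥(1 : ℝ)∥
```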
#### Kevin Buzzard (Aug 04 2019 at 21:20): you golfed me, I just found by unfold e :-) #### Johan Commelin (Aug 05 2019 at 06:35): Can someone give a status update? Is the only thing that needs to be done the tying together of some loose ends? #### Chris Hughes (Aug 05 2019 at 06:42): Looks like two sorries right now. #### Chris Hughes (Aug 05 2019 at 06:42): Neither are that hard. #### Chris Hughes (Aug 05 2019 at 06:46): I just proved that. #### Chris Hughes (Aug 05 2019 at 06:46): There are two sorries in the final theorem. I'm doing the first. #### Chris Hughes (Aug 05 2019 at 06:59): One sorry left now. #### Johan Commelin (Aug 05 2019 at 08:17): @Chris Hughes Are you still working on this? #### Johan Commelin (Aug 05 2019 at 08:17): I connected adjacent back to my definition that used dist. #### Chris Hughes (Aug 05 2019 at 08:59): I'm not. The last sorry looked like you needed to understand what was going on. Everything else I proved did not require that. Ok, I see. #### Patrick Massot (Aug 05 2019 at 09:42): Johan, is there any use to the distance you defined, and relating it to adjacent? I'm not sure it makes adjacent so much more related to the usual definition. Maybe we could define a general graph class, and then adjacency in this context, and relate. But the real question is: what is the usual definition of the graph structure on Q n? Maybe the simplest answer goes through defining the (Z/2)^n action on Q n. For instance we could redefine Q n to be (Z/2)^n instead of this weird CS bool thing. Then define the canonical basis b for (Z/2)^n (it should already be in mathlib) and define adjacent x y by exists i, b i bul x = y. #### Johan Commelin (Aug 05 2019 at 09:49): I'm almost done with the final sorry. #### Johan Commelin (Aug 05 2019 at 09:49): I'll push what I have. Lunch time now. #### Patrick Massot (Aug 05 2019 at 09:50): Oh, I was starting on this final sorry #### Patrick Massot (Aug 05 2019 at 09:51): Do you mind if I finish during your lunch? #### Patrick Massot (Aug 05 2019 at 09:56): @Johan Commelin should I push? #### Patrick Massot (Aug 05 2019 at 09:57): Ok, let's say he was really having lunch #### Patrick Massot (Aug 05 2019 at 09:57): https://github.com/leanprover-community/lean-sensitivity/commit/f7be6abb34eafcd01df9417c99df34f8076362b9 #### Patrick Massot (Aug 05 2019 at 09:58): Now we have a lot of cleanup to do before telling people about it Nice, well done! #### Johan Commelin (Aug 05 2019 at 10:22): @Reid Barton @Rob Lewis @Jesse Michael Han @Chris Hughes @Floris van Doorn We have a theorem! #### Kevin Buzzard (Aug 05 2019 at 10:25): Well done! I really like these "oh look here's some relatively simple-looking maths, how hard is it actually to formalise?" questions. A year or so ago my impression was "in Lean, it's probably going to be tough". Now my impression is turning to "it might be not too bad". We have cubing the cube, the IMO questions, working out pi to 7 decimal places, and now this. #### Patrick Massot (Aug 05 2019 at 10:26): It's still a lot harder than it should be #### Kevin Buzzard (Aug 05 2019 at 10:26): But I suspect it's a lot easier than it was. #### Patrick Massot (Aug 05 2019 at 10:26): Unless you already know perfectly all the relevant part of mathlib Oh yes of course #### Kevin Buzzard (Aug 05 2019 at 10:26): It's almost getting to the point where it's as easy as I thought it would be before I knew anything about how this stuff worked. 
#### Johan Commelin (Aug 05 2019 at 10:27): Let's see if we can write transfer lemmas for this little project, to make things shorter #### Kevin Buzzard (Aug 05 2019 at 10:27): What does that mean? #### Johan Commelin (Aug 05 2019 at 10:28): Because we have Q n →₀ ℝ and V n, and they are canonically isomorphic. for sure #### Johan Commelin (Aug 05 2019 at 10:28): Some proofs are easy on one side, others on the other. #### Patrick Massot (Aug 05 2019 at 10:28): What do you think about starting the refactor with: notation Z/2 := zmodp 2 nat.prime_two def Q (n : ℕ) := fin n → Z/2 #### Johan Commelin (Aug 05 2019 at 10:28): This might be a good test case for the existing transfer api. #### Johan Commelin (Aug 05 2019 at 10:28): @Patrick Massot Why would that help? #### Kevin Buzzard (Aug 05 2019 at 10:29): I think your suggestion is a really worthwhile one Johan. #### Patrick Massot (Aug 05 2019 at 10:29): And then try to understand that std_basis thing, and define adjacency as I wrote it should be defined? #### Johan Commelin (Aug 05 2019 at 10:29): What does Z/2 have that bool doesn't have? Looks #### Patrick Massot (Aug 05 2019 at 10:29): A group structure #### Patrick Massot (Aug 05 2019 at 10:29): Johan, is there any use to the distance you defined, and relating it to adjacent? I'm not sure it makes adjacent so much more related to the usual definition. Maybe we could define a general graph class, and then adjacency in this context, and relate. But the real question is: what is the usual definition of the graph structure on Q n? Maybe the simplest answer goes through defining the (Z/2)^n action on Q n. For instance we could redefine Q n to be (Z/2)^n instead of this weird CS bool thing. Then define the canonical basis b for (Z/2)^n (it should already be in mathlib) and define adjacent x y by exists i, b i bul x = y. See above #### Patrick Massot (Aug 05 2019 at 10:30): I think this would give a much more recognizable adjacency relation #### Kevin Buzzard (Aug 05 2019 at 10:30): instead of this weird CS bool thing Mathematicians still failing to come to terms with weird CS stuff #### Patrick Massot (Aug 05 2019 at 10:31): It's hard to express the fact that x and y are two different elements of a two elements sets in a nicer way than x = y +1 #### Johan Commelin (Aug 05 2019 at 10:31): @Patrick Massot Ok, that seems fine. #### Kevin Buzzard (Aug 05 2019 at 10:31): x \ne y? #### Patrick Massot (Aug 05 2019 at 10:32): If I understand correctly mathlib still don't have the canonical basis of K^n (it has a more general std_basis for pi). It's time we fix that #### Johan Commelin (Aug 05 2019 at 10:32): Anyway, I have a meeting scheduled in 10 minutes... so I can't help too much with the refactor now. #### Patrick Massot (Aug 05 2019 at 10:32): I also need to go #### Patrick Massot (Aug 05 2019 at 10:32): Something easy that anyone can do is to collect all lemmas from that file that should be in mathlib and PR them #### Patrick Massot (Aug 05 2019 at 10:33): We should do that before refactoring because some of them will turn out to be unneeded for the sensitivity proof and will disappear if we don't rescue them first #### Rob Lewis (Aug 05 2019 at 11:47): I was away from Lean for the weekend and it looks like I missed all the fun. Nice job, guys! 
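A hedged sketch of what Patrick's (ℤ/2)ⁿ proposal could look like (every name below is invented for illustration and none of this is the repository's code): take `zmodp 2` as coefficients, define the standard vectors, and call two vertices adjacent when they differ by one standard vector, i.e. in exactly one flipped coordinate.

```lean
import data.zmod.basic

notation `Z2` := zmodp 2 nat.prime_two

def Q' (n : ℕ) : Type := fin n → Z2

/-- the i-th standard vector of (ℤ/2)^n -/
def std (n : ℕ) (i : fin n) : Q' n := λ j, if i = j then 1 else 0

/-- adjacency: y is x translated by one standard vector -/
def adjacent' {n : ℕ} (x y : Q' n) : Prop := ∃ i, (λ j, std n i j + x j) = y
```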
#### Reid Barton (Aug 05 2019 at 12:04): We should do that before refactoring because some of them will turn out to be unneeded for the sensitivity proof and will disappear if we don't rescue them first I'm going to start by moving them to their own file #### Reid Barton (Aug 05 2019 at 12:04): Rob I think there's still plenty of clean up that would be nice to do. You had a shorter proof of f_squared right? #### Rob Lewis (Aug 05 2019 at 12:05): I just pushed it 20 seconds ago... #### Rob Lewis (Aug 05 2019 at 12:05): Looking at a few other proofs now that should be easy to clean up. #### Reid Barton (Aug 05 2019 at 12:06): Give me a couple minutes to push what I have simplified already Sure thing. OK, pushed #### Reid Barton (Aug 05 2019 at 12:11): Next I was going to try to replace the definition of adjacent by def adjacent' {n : ℕ} (p q : Q n) : Prop := ∃! i, p i ≠ q i and get rid of the dist stuff #### Reid Barton (Aug 05 2019 at 12:12): Although I guess this means proving something similar to adjacent_iff_dist #### Reid Barton (Aug 05 2019 at 12:13): By the way, what is this run_cmd tactic.skip magic about? #### Rob Lewis (Aug 05 2019 at 12:14): I suspect that will be nicer, but I'm not sure... I'm gonna try some more local cleanup toward the bottom. #### Rob Lewis (Aug 05 2019 at 12:14): I was editing the theorem below, which was causing the one above to recompile constantly, and it was really slow. Putting no-op code in between makes it stop recompiling the one above. #### Rob Lewis (Aug 05 2019 at 12:15): It wasn't meant to be pushed though, I deleted it. #### Kevin Buzzard (Aug 05 2019 at 12:15): Isn't there supposed to be some way of doing this with a .? #### Rob Lewis (Aug 05 2019 at 12:16): . might have the same effect. Not sure. #### Johan Commelin (Aug 05 2019 at 12:19): I always use . #### Reid Barton (Aug 05 2019 at 12:25): I'm constantly getting "excessive memory consumption" errors, is there some way to deal with these? #### Reid Barton (Aug 05 2019 at 12:25): The dependencies are definitely built #### Johan Commelin (Aug 05 2019 at 12:36): I added some foo_apply lemmas and golfed f_image_g. #### Rob Lewis (Aug 05 2019 at 12:40): I'm constantly getting "excessive memory consumption" errors, is there some way to deal with these? Hmm, something's funny with the simp set. It's spending over two seconds in the simp only in the 0 case of f_squared, that seems excessive. #### Rob Lewis (Aug 05 2019 at 12:41): Or, maybe not the simp set. Even the definition of f takes a while. Probably a big type class search. #### Reid Barton (Aug 05 2019 at 12:41): I don't understand how anyone could have written any of this interactively. Did I break everything? #### Rob Lewis (Aug 05 2019 at 12:42): It's a little slow but it works fine for me. restart Lean? #### Rob Lewis (Aug 05 2019 at 12:46): Adding this short-circuit helps in a few places: def Vn_module {n} : module ℝ (V n) := by apply_instance local attribute [instance, priority 10000] Vn_module #### Reid Barton (Aug 05 2019 at 12:50): Now leanpkg build has started printing goals and failing without printing any error message #### Reid Barton (Aug 05 2019 at 12:51): (not related to that module instance) #### Patrick Massot (Aug 05 2019 at 12:59): I don't understand how anyone could have written any of this interactively. Did I break everything? I noticed it was very slow, but wasn't sure whether it was because I'm using my laptop. 
Anyway, I was doing the usual sorry dance (every have that is proved is replaced by a commented out proof and sorry). #### Patrick Massot (Aug 05 2019 at 13:00): I'd love to help refactoring but I'm on beach duty #### Reid Barton (Aug 05 2019 at 13:03): I found out how to increase the default memory limit for lean in emacs, so now I can load the whole file again #### Johan Commelin (Aug 05 2019 at 13:04): Why does Lean not understand what I mean when I write def RQ_equiv_V : Π (n : ℕ), (Q n →₀ ℝ) ≃ₗ[ℝ] V n | 0 := _ | (n+1) := _ #### Johan Commelin (Aug 05 2019 at 13:04): It takes ages to parse/elaborate/tc this #### Johan Commelin (Aug 05 2019 at 13:04): Result: 4 deterministic timeouts #### Rob Lewis (Aug 05 2019 at 13:04): def Vn_module {n} : module ℝ (V n) := by apply_instance def acg {n} : add_comm_semigroup (V n) := by apply_instance def acm {n} : add_comm_monoid (V n) := by apply_instance def hsr {n} : has_scalar ℝ (V n) := by apply_instance def hav {n} : has_add (V n) := by apply_instance local attribute [instance, priority 100000] acg acm hsr hav Vn_module #### Rob Lewis (Aug 05 2019 at 13:05): I'm timing right now, but it feels like a very significant speedup on the whole file. #### Rob Lewis (Aug 05 2019 at 13:05): Yeah, those short circuits reduce the compile time by more than 50%. #### Johan Commelin (Aug 05 2019 at 13:05): Doesn't help for this thing :sad: #### Johan Commelin (Aug 05 2019 at 13:06): Btw, should we even use real numbers? #### Johan Commelin (Aug 05 2019 at 13:06): For this problem nat.sqrt is just as good as real.sqrt, isn't it? #### Johan Commelin (Aug 05 2019 at 13:07): So we could do the whole thing with ℤ-modules #### Johan Commelin (Aug 05 2019 at 13:07): Anyway, I don't mind using real. #### Johan Commelin (Aug 05 2019 at 13:08): Without this RQ_equiv_V I don't think we can get very far with "transfer"-like techniques. #### Johan Commelin (Aug 05 2019 at 13:24): @Reid Barton Have your problems gone away? Some of them #### Rob Lewis (Aug 05 2019 at 13:27): Do you have local changes that are causing this? It compiles with -T20000 for me, so there shouldn't be any memory issues. #### Jesse Michael Han (Aug 05 2019 at 13:29): I don't understand how anyone could have written any of this interactively. in the case of degree_theorem, very painfully... what is this "insert no-op code" magic? do i put a . or run_cmd tactic.skip above the current declaration i'm working on to stop recompilation of previous theorems? #### Rob Lewis (Aug 05 2019 at 13:29): The only suspicious thing I see in the profiling is that simp is spending a long time in tactic.join_user_simp_lemmas. #### Johan Commelin (Aug 05 2019 at 13:30): @Jesse Michael Han Yes. . is gold. #### Reid Barton (Aug 05 2019 at 13:37): I used to have some local changes but I still had problems when I got rid of them #### Reid Barton (Aug 05 2019 at 13:37): Is the default memory limit different in VS code maybe? #### Kevin Buzzard (Aug 05 2019 at 13:38): It's 100000 in VS Code I believe #### Kevin Buzzard (Aug 05 2019 at 13:38): The T50000 challenge was what happened when I halved it #### Reid Barton (Aug 05 2019 at 13:44): That's the "time" limit I thought #### Kevin Buzzard (Aug 05 2019 at 13:44): oh apologies. Memory limit is... 4096 megs #### Reid Barton (Aug 05 2019 at 13:46): Ah okay, that explains some things. 
In emacs it's 1024 megs #### Jesse Michael Han (Aug 05 2019 at 13:52): looks like something we should patch in the community version of lean-mode 4096 megs and constantly swapping #### Johan Commelin (Aug 05 2019 at 14:23): @Rob Lewis Did you manage to clean things up at the bottom of the file? #### Johan Commelin (Aug 05 2019 at 14:24): Do we have some sort of "cleaning up" roadmap? #### Johan Commelin (Aug 05 2019 at 14:26): The reason I wrote my dist function is that I imagined that we might rename Q to hypercube and put it in mathlib, and show that it is a (discrete) metric space, etc.... #### Johan Commelin (Aug 05 2019 at 14:27): Not sure if we want to do things like that. #### Rob Lewis (Aug 05 2019 at 14:28): I got distracted by trying to speed things up. I have no major changes right now. #### Rob Lewis (Aug 05 2019 at 14:29): But performance-wise, I don't see any more obvious places to optimize. It seems to behave pretty reasonably. #### Johan Commelin (Aug 05 2019 at 14:30): Can you get the statement of RQ_equiv_V to work? #### Johan Commelin (Aug 05 2019 at 14:30): @Reid Barton What do you think of @Patrick Massot's suggestion for adjacency? #### Johan Commelin (Aug 05 2019 at 14:31): It's quite similar to your ∃! i, x i ≠ y i suggestion, but it exploits a bit of group structure. #### Rob Lewis (Aug 05 2019 at 14:56): local attribute [instance, priority 10000] finsupp.module classical.prop_decidable makes it work. The decidable instance isn't necessary but it speeds things up a little bit. #### Rob Lewis (Aug 05 2019 at 14:56): Once again, the type classes here are kind of a mess. #### Johan Commelin (Aug 05 2019 at 14:58): It's really sad that this creates a mess. It doesn't feel like we are doing something horrible. #### Johan Commelin (Aug 05 2019 at 14:58): I just pushed a compression of duality. #### Jesse Michael Han (Aug 05 2019 at 15:34): i'm working on turning degree_theorem into a calc proof after the existential instantiation of q also thank you johan for the cleanup :^) #### Reid Barton (Aug 05 2019 at 15:37): i'm working on turning degree_theorem into a calc proof after the existential instantiation of q Oh cool, I wanted to do this as well but currently I'm trying to simplify exists_eigenvalue a bit #### Rob Lewis (Aug 05 2019 at 15:40): Have you made any structural changes to exists_eigenvalue? I have a few cosmetic updates local, but nothing important. #### Rob Lewis (Aug 05 2019 at 15:40): I won't touch degree_theorem if Jesse is on it now. #### Reid Barton (Aug 05 2019 at 15:43): I'm trying to do all the conversion between cardinals and naturals at once #### Reid Barton (Aug 05 2019 at 15:46): Pushed, I think it is somewhat better now... Nice. #### Jesse Michael Han (Aug 05 2019 at 16:07): also pushed my calcification not sure what the correct style is for long calc proofs but i tried to prettify it #### Johan Commelin (Aug 05 2019 at 16:47): Nice work @Reid Barton and @Jesse Michael Han #### Johan Commelin (Aug 05 2019 at 16:47): I think that maybe we should just use begin ... end instead of by { .. } for the longer proofs in the calc block #### Johan Commelin (Aug 05 2019 at 16:49): Of course that's a very minor issue #### Reid Barton (Aug 05 2019 at 17:01): Going to head out for a while, but I feel it should somehow be easier to show that l q = ε q y #### Johan Commelin (Aug 05 2019 at 17:07): Yep... that's the kind of "transfer" thing that I wanted to do #### Johan Commelin (Aug 05 2019 at 17:07): But showing Q n ->0 R =_l V n is already non-trivial... 
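A tiny illustration of the `.` trick discussed earlier in this thread: a bare `.` (or `run_cmd tactic.skip`) between declarations acts as a checkpoint for the editor's incremental compilation, so edits below it no longer force the declarations above it to be re-elaborated. (Generic example; the effect is on editing responsiveness only, not on the compiled result.)

```lean
lemma expensive_lemma : 1 + 1 = 2 := rfl

.

-- while editing here, `expensive_lemma` above the `.` is not recompiled
lemma work_in_progress : 2 + 2 = 4 := rfl
```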
#### Johan Commelin (Aug 05 2019 at 17:08): It seems that Rob found a fix. #### Johan Commelin (Aug 05 2019 at 17:08): @Jesse Michael Han what do you think of the following style? theorem degree_theorem : ∃ q, q ∈ H ∧ real.sqrt (m + 1) ≤ (H ∩ q.adjacent).to_finset.card := begin rcases exists_eigenvalue H ‹_› with ⟨y, ⟨⟨H_mem', H_mem''⟩, H_nonzero⟩⟩, rcases (finsupp.mem_span_iff_total _).mp H_mem' with ⟨l, H_l₁, H_l₂⟩, have hHe : H ≠ ∅ , { contrapose! hH, rw [hH, set.empty_card'], exact nat.zero_lt_succ _ }, obtain ⟨q, H_mem_H, H_max⟩ : ∃ q, q ∈ H ∧ ∀ q', q' ∈ H → abs (l q') ≤ abs (l q), { cases set.exists_mem_of_ne_empty hHe with r hr, cases @finset.max_of_mem _ _ (H.to_finset.image (λ q', abs (l q'))) (abs (l r)) (finset.mem_image_of_mem _ (set.mem_to_finset.2 hr)) with x hx, rcases finset.mem_image.1 (finset.mem_of_max hx) with ⟨q, hq, rfl⟩, refine ⟨q, ⟨set.mem_to_finset.1 hq, λ q' hq', _⟩⟩, exact (finset.le_max_of_mem (finset.mem_image_of_mem _ (set.mem_to_finset.2 hq')) hx : _) }, have H_q_pos : 0 < abs (l q), { rw [abs_pos_iff], assume h, rw [finsupp.mem_supported'] at H_l₁, have H_max' : ∀ q', l q' = 0, { intro q', by_cases hq' : q' ∈ H, { revert q', simpa [h] using H_max }, { exact H_l₁ _ hq' } }, have hl0 : l = 0, { ext, rw [H_max', finsupp.zero_apply] }, simp [hl0] at H_l₂, exact H_nonzero H_l₂.symm }, refine ⟨q, ⟨‹_›, _⟩⟩, suffices : real.sqrt (↑m + 1) * abs (l q) ≤ ↑(_) * abs (l q), by { exact (mul_le_mul_right H_q_pos).mp ‹_› }, calc real.sqrt (↑m + 1) * (abs (l q)) ≤ abs (real.sqrt (↑m + 1) * l q) : by conv_lhs { rw [← abs_sqrt_nat, ← abs_mul] } ... ≤ abs (ε q (real.sqrt (↑m + 1) • y)) : begin rw [linear_map.map_smul, smul_eq_mul, abs_mul, abs_mul], apply mul_le_mul_of_nonneg_left _ _, { apply le_of_eq, congr' 1, rw [← H_l₂, finsupp.total_apply, finsupp.sum, linear_map.map_sum], rw [finset.sum_eq_single q], { rw [linear_map.map_smul, smul_eq_mul, duality, if_pos rfl, mul_one], }, { intros p hp hne, simp [linear_map.map_smul, duality, hne.symm] }, { intro h_q_ne_supp, simp [finsupp.not_mem_support_iff.mp h_q_ne_supp] } }, { exact abs_nonneg _ } end ... ≤ l.support.sum (λ x : Q (m + 1), abs (l x) * abs ((ε q) ((f (m + 1) : _) (e x)))) : begin rw [← f_image_g y (by simpa using H_mem''), ← H_l₂, finsupp.total_apply, finsupp.sum, linear_map.map_sum, linear_map.map_sum], refine le_trans abs_triangle_sum _, conv_lhs { congr, skip, simp [abs_mul] } end ... ≤ finset.sum (l.support ∩ set.to_finset H ∩ set.to_finset (Q.adjacent q)) (λ (x : Q (m + 1)), abs (l x) * abs ((ε q) ((f (m + 1) : _) (e x)))) : begin rw [← finset.sum_subset], { intros x Hx, simp[-finsupp.mem_support_iff] at Hx, exact Hx.left }, { intros x H_mem H_not_mem, by_cases x ∈ H, { simp at H_mem H_not_mem, rw[f_matrix], have := (H_not_mem ‹_› ‹_›), { suffices : (l x) = 0, by {simp [this]}, rw [finsupp.mem_supported'] at H_l₁, exact H_l₁ _ ‹_› } } end ... ≤ ↑(finset.card (l.support ∩ set.to_finset H ∩ set.to_finset (Q.adjacent q))) * abs (l q) : begin refine le_trans (finset.sum_le_sum _) _, { exact λ p, abs (l q) }, { intros x Hx, rw [f_matrix], simp at Hx, have := Hx.right.right, change Q.adjacent _ _ at this, rw [if_pos this.symm, mul_one], exact H_max x Hx.2.1 }, end ... ≤ ↑(finset.card (set.to_finset (H ∩ Q.adjacent q))) * abs (l q) : begin refine (mul_le_mul_right ‹_›).mpr _, norm_cast, refine finset.card_le_of_subset (finset.coe_subset.mp _), simpa only [finset.coe_inter, finset.coe_to_finset', set.inter_assoc] using set.inter_subset_right _ _ end end #### Rob Lewis (Aug 05 2019 at 17:13): A fix to what? 
I'm playing around with changing the definition of adjacent right now. #### Johan Commelin (Aug 05 2019 at 17:24): Oh, to Lean not parsing RQ_equiv_V. #### Johan Commelin (Aug 05 2019 at 17:25): So I'll get back to filling in that definition now. #### Johan Commelin (Aug 05 2019 at 17:25): @Rob Lewis What do you propose as new definition of adjacent? #### Rob Lewis (Aug 05 2019 at 17:27): I was hoping def adjacent {n : ℕ} (p : Q n) : set (Q n) := λ q, ∃! i, p i ≠ q i would simplify things a bit. It basically exchanges the difficulty of adjacent_iff_dist for adjacent_succ_iff though. #### Rob Lewis (Aug 05 2019 at 17:28): My (not quite complete) proof of adjacent_succ_iff is a bit longer but arguably a bit simpler, and I don't think it's optimized. #### Johan Commelin (Aug 05 2019 at 17:45): For some reason def equiv_unique {α : Type*} {β : Type*} [unique α] [discrete_field β] : (α →₀ β) ≃ₗ[β] β := { to_fun := λ f, f (default α), inv_fun := finsupp.single (default α), smul := λ b f, rfl, left_inv := λ f, (finsupp.unique_single _).symm, right_inv := λ b, finsupp.single_eq_same } is also extremely slow. #### Johan Commelin (Aug 05 2019 at 17:47): And if I change discrete_field to comm_ring it doesn't find add_comm_group for the LHS. #### Johan Commelin (Aug 05 2019 at 17:47): Even after importing algebra.pi_instances #### Jesse Michael Han (Aug 05 2019 at 17:48): yeah that looks good my original version actually had begin .. end instead of by {} as well, but i thought that mathlib style forbids nested begin end blocks #### Johan Commelin (Aug 05 2019 at 17:51): What's this?? equation type mismatch, term equiv_unique (Q 0) ℝ has type (Q 0 →₀ ℝ) ≃ₗ[ℝ] ℝ but is expected to have type (Q 0 →₀ ℝ) ≃ₗ[ℝ] V 0 #### Johan Commelin (Aug 05 2019 at 17:51): V 0 is defeq to real. #### Johan Commelin (Aug 05 2019 at 17:51): Why doesn't it see that? #### Chris Hughes (Aug 05 2019 at 17:55): Is the module structure defeq? I think so. #### Johan Commelin (Aug 05 2019 at 17:57): noncomputable instance : Π n, add_comm_group (V n) := begin apply nat.rec, { dunfold V, apply_instance }, { introsI n IH, dunfold V, apply_instance } end noncomputable instance : Π n, vector_space ℝ (V n) := begin apply nat.rec, { dunfold V, apply_instance }, { introsI n IH, dunfold V, apply_instance } end #### Rob Lewis (Aug 05 2019 at 17:57): Not thrilled with the proof of adjacent_succ_iff. I'm not sure this is an improvement. #### Rob Lewis (Aug 05 2019 at 17:57): But I need to head home and eat dinner now. #### Johan Commelin (Aug 05 2019 at 19:31): I pushed another cleanup of degree_theorem #### Johan Commelin (Aug 05 2019 at 19:32): Maybe some parts at the top of the proof should be factored out into separate lemmas. #### Floris van Doorn (Aug 05 2019 at 20:41): (deleted - wrong topic) #### Patrick Massot (Aug 05 2019 at 23:53): Who could try to add the following lines after the definition of epsilon, and see if the commented lines timeout? variables (n : ℕ) (v : V n) local notation Ψ := finsupp.equiv_fun_on_fintype.inv_fun /- This following fails with by apply_instance, but defining it doesn't seem to help. instance tata : vector_space ℝ (Q n →₀ ℝ) := {..finsupp.module (Q n) ℝ, .. 
} -/ def coeffs {n : ℕ} (v : V n) : Q n →₀ ℝ := Ψ (λ p : Q n, ε p v) def somme {n : ℕ} := (finsupp.total (Q n) (V n) ℝ e) #check coeffs v -- coeffs v : Q n →₀ ℝ #check @somme n -- somme : (Q n →₀ ℝ) →ₗ[ℝ] V n #check linear_map.to_fun somme -- somme.to_fun : (Q ?M_1 →₀ ℝ) → V ?M_1 -- The following lines timeout --#check linear_map.to_fun somme (coeffs v) --#check linear_map.to_fun (@somme n) --#check (Q n →₀ ℝ) →ₗ[ℝ] V n #### Patrick Massot (Aug 05 2019 at 23:57): Independently of this, I still think that the way we prove e is a basis is not optimal. What we really care about is the decomposition of vectors v : V n as a sum over p in Q n of (ε p v) • (e p) (the issues above were met while trying to write down this formula using the linear algebra library). In our current proof of the main theorem, this formula is somewhat hidden, because we use the fact e is a basis to get a mysterious sequence of coefficients unrelated to ε. One improvement could be to prove ε is equal to dual_basis e and use stuff in dual.lean. But we could just as well directly prove the decomposition formula The key is: #### Patrick Massot (Aug 05 2019 at 23:57): lemma epsilon_total {n : ℕ} {v : V n} (h : ∀ p : Q n, (ε p) v = 0) : v = 0 := begin induction n with n ih, { dsimp [ε] at h, exact h (λ _, tt) }, { cases v with v₁ v₂, ext ; change _ = (0 : V n) ; simp only [] ; apply ih ; intro p ; [ let q : Q (n+1) := λ i, if h : i = 0 then tt else p (i.pred h), let q : Q (n+1) := λ i, if h : i = 0 then ff else p (i.pred h)], all_goals { specialize h q, rw [ε, show q 0 = tt, from rfl, cond_tt] at h <|> rw [ε, show q 0 = ff, from rfl, cond_ff] at h, rwa show p = q ∘ fin.succ, by { ext, simp [q, fin.succ_ne_zero] } } } end #### Patrick Massot (Aug 05 2019 at 23:58): Which we can apply to a vector minus its intended decomposition (using the duality lemma). #### Patrick Massot (Aug 05 2019 at 23:59): If we insist on keeping that e is a basis then the above lemma (together with the duality lemma) reproves it #### Patrick Massot (Aug 05 2019 at 23:59): (this is all assuming we can use lemmas about sums over finsupp...) #### Patrick Massot (Aug 06 2019 at 00:02): Another thing that is surprisingly painful is the proof of have H_q_pos : 0 < abs (l q), in the main theorem. I think we should use that fintype.normed_group is using the supremum norm. So abs (l q) is actually the norm of y, hence positive using norm_pos_iff #### Patrick Massot (Aug 06 2019 at 00:05): And then I think the big calc at the end should have more steps and use more general lemmas about linear maps and dual bases. #### Rob Lewis (Aug 06 2019 at 13:03): I didn't think of a nice way to optimize the proof of adj_succ_iff with the "exists unique" definition of adjacent. But I did notice that the detour through dist is completely unnecessary. This is all we need about adjacent: #### Patrick Massot (Aug 06 2019 at 13:05): This is my proposal but still phrased in terms of bool rather than Z/2 #### Patrick Massot (Aug 06 2019 at 13:06): I think this is a pretty clean definition, very close to what you would say when explaining the statement #### Patrick Massot (Aug 06 2019 at 13:07): But I'm not claiming it makes the proof of adjacent_succ_iff as short as we'd like it to be (deleted) #### Patrick Massot (Aug 06 2019 at 14:11): I returned to this and finished this proof. 
I add the following trivial preliminaries: lemma ne_iff_eq_bnot {b b' : bool} : b ≠ b' ↔ b = bnot b' := by cases b ; cases b' ; simp @[simp] lemma not_eq_bnot : ∀ b : bool, ¬ b = bnot b | tt := λ h, bool.no_confusion h | ff := λ h, bool.no_confusion h lemma fin.eq_succ_iff_pred_eq {n : ℕ} {i : fin (n + 1)} {l : fin n} (h : i ≠ 0) : i = fin.succ l ↔ (fin.pred i h = l) := begin split ; intro H, { simp only [H, h, fin.pred_succ] }, { simp only [H.symm, fin.succ_pred] } end #### Patrick Massot (Aug 06 2019 at 14:13): And then the adjacency stuff becomes: /-- flip i p flips the i-th bit of p -/ def flip {n : ℕ} (i : fin n) : Q n → Q n := λ p k, if i = k then bnot (p k) else p k /-- The adjacency relation on Q^n: two vertices of the hypercube are adjacent if they differ at one bit. -/ def adjacent {n : ℕ} : Q n → (set $Q n) := λ p, {q | ∃ i : fin n, q = flip i p} @[simp] lemma not_adjacent_zero (p q : Q 0) : p.adjacent q = false := begin rw eq_false, rintro ⟨i, h⟩, fin_cases i end lemma adjacent_succ_iff {p q : Q (n+1)} : p.adjacent q ↔ (p 0 = q 0 ∧ adjacent (p ∘ fin.succ) (q ∘ fin.succ)) ∨ (p 0 ≠ q 0 ∧ p ∘ fin.succ = q ∘ fin.succ) := begin split, { rintros ⟨i, h⟩, rw h, by_cases hi : i = 0, { right, rw [hi, flip], split ; simp, ext l, simp [(fin.succ_ne_zero _).symm] }, { left, split, { simp [hi, flip] }, { use i.pred hi, ext l, dsimp [flip], congr' 1, rw fin.eq_succ_iff_pred_eq } } }, { rintro (⟨h₀, ⟨i, h⟩⟩ | ⟨h₀, h⟩), { use i.succ, ext l, by_cases hl : l = 0, { simp only [hl, flip, fin.succ_ne_zero, h₀, bnot, if_false] }, { have := congr_fun h (l.pred hl), simp at this, rw this, dsimp [flip], simp only [fin.succ_pred], have := fin.eq_succ_iff_pred_eq hl, conv_lhs at this { rw eq_comm }, conv_rhs at this { rw eq_comm }, congr' 1, rw ← this } }, { use 0, ext l, by_cases hl : l = 0, { simp [flip, hl, ne_iff_eq_bnot.1 h₀] }, { rw ← fin.succ_pred l hl, have := congr_fun h (l.pred hl), simp only [function.comp_app] at this, rw ← this, simp [flip, ne.symm hl] } } } end #### Patrick Massot (Aug 06 2019 at 14:13): It seems there is some kind of pain conservation law here #### Patrick Massot (Aug 06 2019 at 14:13): @Rob Lewis #### Patrick Massot (Aug 06 2019 at 14:14): Maybe that solution is a bit orthogonal to the general design of our proof since it doesn't use induction on n at all #### Patrick Massot (Aug 06 2019 at 14:32): I forget symmetry: lemma flip_flip {n : ℕ} (i : fin n) (p : Q n) : flip i (flip i p) = p := begin ext l, dsimp [flip], split_ifs, simp end @[symm] lemma adjacent_comm {p q : Q n} : p.adjacent q ↔ q.adjacent p := by split ; rintro ⟨i, h⟩ ; rw h ; use i ; rw flip_flip #### Patrick Massot (Aug 06 2019 at 14:44): Who could try to add the following lines after the definition of epsilon, and see if the commented lines timeout? variables (n : ℕ) (v : V n) local notation Ψ := finsupp.equiv_fun_on_fintype.inv_fun /- This following fails with by apply_instance, but defining it doesn't seem to help. instance tata : vector_space ℝ (Q n →₀ ℝ) := {..finsupp.module (Q n) ℝ, .. } -/ def coeffs {n : ℕ} (v : V n) : Q n →₀ ℝ := Ψ (λ p : Q n, ε p v) def somme {n : ℕ} := (finsupp.total (Q n) (V n) ℝ e) #check coeffs v -- coeffs v : Q n →₀ ℝ #check @somme n -- somme : (Q n →₀ ℝ) →ₗ[ℝ] V n #check linear_map.to_fun somme -- somme.to_fun : (Q ?M_1 →₀ ℝ) → V ?M_1 -- The following lines timeout --#check linear_map.to_fun somme (coeffs v) --#check linear_map.to_fun (@somme n) --#check (Q n →₀ ℝ) →ₗ[ℝ] V n Does anyone has any explanation for the above mystery? 
#### Rob Lewis (Aug 06 2019 at 14:50): You're right that the induction isn't necessary. Ultimately we need these two facts; everything below goes through fine with just these. These proofs are for the "exists unique" definition of adjacent. lemma adj_succ {p q : Q (n+1)} (h0 : p 0 ≠ q 0) : p ∘ fin.succ = q ∘ fin.succ ↔ p.adjacent q := begin split, { intro heq, use [0, h0], intros y hy, contrapose! hy, rw ←fin.succ_pred _ hy, apply congr_fun heq }, { rintros ⟨i, h_eq, h_uni⟩, ext x, by_contradiction hx, apply fin.succ_ne_zero x, rw [h_uni _ hx, h_uni _ h0] } end lemma adj_succ_of_zeq {p q : Q (n+1)} (h0 : p 0 = q 0) : Q.adjacent (p ∘ fin.succ) (q ∘ fin.succ) ↔ p.adjacent q := begin split, { rintros ⟨i, h_eq, h_uni⟩, use [i.succ, h_eq], intros y hy, rw [←fin.pred_inj, fin.pred_succ], { apply h_uni, change p (fin.pred _ _).succ ≠ q (fin.pred _ _).succ, simp [hy] }, { contrapose! hy, rw [hy, h0] }, { apply fin.succ_ne_zero } }, { rintros ⟨i, h_eq, h_uni⟩, have h_i : i ≠ 0, from λ h_i, absurd h0 (by rwa h_i at h_eq), use [i.pred h_i, show p (fin.succ (fin.pred i _)) ≠ q (fin.succ (fin.pred i _)), by rwa [fin.succ_pred]], intros y hy, simp [eq.symm (h_uni _ hy)] } end #### Patrick Massot (Aug 06 2019 at 14:53): I don't have a strong opinion about this exists_unique vs flip. I think both definition directly relate to the intuitive definition. #### Patrick Massot (Aug 06 2019 at 14:53): Why do you separate those two lemmas? Does it make later things easier? #### Patrick Massot (Aug 06 2019 at 14:54): Could you try my finsupp.total mystery? #### Rob Lewis (Aug 06 2019 at 14:54): I don't have time to debug the type class thing right now, but you can look at the trace here and see if you can make any sense: set_option trace.class_instances true include v example := by try_for 10000 {exact linear_map.to_fun somme (coeffs v)} #### Rob Lewis (Aug 06 2019 at 14:55): I think it ended up being a bit cleaner with them separate. But I generally try to avoid disjunctions, so that's why this feels cleaner to me. #### Patrick Massot (Aug 06 2019 at 15:13): (message too long, truncated at 262144 characters) #### Patrick Massot (Aug 06 2019 at 15:21): It seems adding the shortcut instance toto : module ℝ (Q n →₀ ℝ) := finsupp.module (Q n) ℝ helps a lot #### Patrick Massot (Aug 06 2019 at 15:21): Although the coercion from linear_map to function still doesn't kick in #### Rob Lewis (Aug 06 2019 at 15:23): Yes, the messages get truncated, but 262144 characters is usually plenty to find the problem... #### Rob Lewis (Aug 06 2019 at 15:23): Unsurprisingly there are type class issues in this whole development. You can add that to the short circuit instances for V. #### Johan Commelin (Aug 06 2019 at 15:24): Why "unsurprisingly"? #### Rob Lewis (Aug 06 2019 at 15:24): https://github.com/leanprover-community/lean-sensitivity/tree/new_adj is about as clean as I can get the adjacent stuff for now. #### Johan Commelin (Aug 06 2019 at 15:24): To me this is quite a surprise #### Rob Lewis (Aug 06 2019 at 15:26): I think it's been clear for a while that we're kind of abusing type class search. Whether it's our setup, or the inference algorithm itself, these kinds of issues are showing up a lot. #### Patrick Massot (Aug 06 2019 at 15:26): It's pretty clear that either we are doing it wrong, or Lean 3's type class is doing it wrong #### Patrick Massot (Aug 06 2019 at 15:26): Rob was quicker... 
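For readers unfamiliar with the coercion Patrick mentions: a bundled `linear_map` is normally applied through its coercion to functions rather than via `linear_map.to_fun`, as in this generic example (standard mathlib usage, unrelated to the specific timeout being debugged above).

```lean
import linear_algebra.basic

example {R M N : Type} [ring R] [add_comm_group M] [add_comm_group N]
  [module R M] [module R N] (φ : M →ₗ[R] N) (x y : M) :
  φ (x + y) = φ x + φ y :=   -- `φ (x + y)` uses the coercion to functions
φ.map_add x y
```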
#### Patrick Massot (Aug 06 2019 at 15:28): Rob, I think you can merge into master, I don't think anyone has a much better idea #### Kevin Buzzard (Aug 06 2019 at 15:54): Can someone summarise the problems you're having? #### Patrick Massot (Aug 06 2019 at 16:21): Kevin, there are several problems. The one I was discussing with Rob is to get a definition of the adjacency relation on the hypercube which is both easily recognizable and convenient for the proofs. I think we sort of settled that. #### Patrick Massot (Aug 06 2019 at 16:23): Then I went outside to help my youngest daughter training to ride a bicycle without side wheels. She's making progress but was tired. So i returned, and took up the big problem. #### Patrick Massot (Aug 06 2019 at 16:23): Which is the sum manipulation and basis and dual basis things. #### Patrick Massot (Aug 06 2019 at 16:23): Now that instances work, here is my proposal: #### Patrick Massot (Aug 06 2019 at 16:23): instance coeffs_module (n) : module ℝ (Q n →₀ ℝ) := finsupp.module (Q n) ℝ def coeffs {n : ℕ} (v : V n) : Q n →₀ ℝ := finsupp.equiv_fun_on_fintype.inv_fun (λ p : Q n, ε p v) def somme {n : ℕ} : (Q n →₀ ℝ) → V n := finsupp.total (Q n) (V n) ℝ e /-- For any v : V n, \sum_{p ∈ Q n} (ε p v) • e p = v -/ lemma decomposition {n : ℕ} (v : V n) : somme (coeffs v) = v := begin suffices : ∀ (p : Q n), ε p (somme$ coeffs v) = ε p v, { refine eq_of_sub_eq_zero (epsilon_total _), intros p, rw [linear_map.map_sub, sub_eq_zero_iff_eq, this] }, intro p, erw [somme, finsupp.total_apply, linear_map.map_sum], simp only [duality, map_smul, smul_eq_mul], rw finset.sum_eq_single p, { simp, refl }, { intros q q_in q_ne, simp [q_ne.symm] }, { intro p_not_in, simp [finsupp.not_mem_support_iff.1 p_not_in] } end #### Patrick Massot (Aug 06 2019 at 16:25): Note how this doesn not use e.is_basis or equiv_sum #### Patrick Massot (Aug 06 2019 at 16:25): I think this is going more straightly to the point #### Patrick Massot (Aug 06 2019 at 16:27): Modulo all the finsupp weirdness, it says exactly what the docstring claims #### Patrick Massot (Aug 06 2019 at 16:27): I think this could be the basis for a refactor of the main proof #### Rob Lewis (Aug 06 2019 at 16:28): What's somme? I keep reading that as a misspelling of some... Oh sorry #### Patrick Massot (Aug 06 2019 at 16:28): It's French for sum Aha. #### Patrick Massot (Aug 06 2019 at 16:29): At some point I wanted to avoid name clash #### Patrick Massot (Aug 06 2019 at 16:29): Of course it uses epsilon_total that I posted earlier #### Patrick Massot (Aug 06 2019 at 16:29): lemma epsilon_total {n : ℕ} {v : V n} (h : ∀ p : Q n, (ε p) v = 0) : v = 0 := begin induction n with n ih, { dsimp [ε] at h, exact h (λ _, tt) }, { cases v with v₁ v₂, ext ; change _ = (0 : V n) ; simp only [] ; apply ih ; intro p ; [ let q : Q (n+1) := λ i, if h : i = 0 then tt else p (i.pred h), let q : Q (n+1) := λ i, if h : i = 0 then ff else p (i.pred h)], all_goals { specialize h q, rw [ε, show q 0 = tt, from rfl, cond_tt] at h <|> rw [ε, show q 0 = ff, from rfl, cond_ff] at h, rwa show p = q ∘ fin.succ, by { ext, simp [q, fin.succ_ne_zero] } } } end #### Patrick Massot (Aug 06 2019 at 16:31): and duality. Together those lemmas are equivalent to the fact that e and epsilon are dual bases. #### Rob Lewis (Aug 06 2019 at 16:35): I haven't worked through your earlier posts very carefully, but if there's a chance this could clean up the bottom of the file, I'm in favor of trying... #### Patrick Massot (Aug 07 2019 at 00:58): Ok, I pushed something. 
The main goal is to shorten the final proof, make it easier to read, and transfer as much as possible into for_mathlib.lean. I followed my plan of using the duality as outlined in my previous messages. Maybe I went too far, and I'm actually fighting the linear algebra library. But I still like how the final proof looks. And I guess that Lean night already lasted too long (looks like it's now almost 3am...) #### Rob Lewis (Aug 07 2019 at 09:33): Nice, I think it looks great! #### Patrick Massot (Aug 07 2019 at 10:39): I think the final big thing is to generalize and move to mathlib https://github.com/leanprover-community/lean-sensitivity/blob/19c2800c9330ddf7130dac4ce22b7d2eb51afcfd/src/sensitivity.lean#L193-L247. It should connect with https://github.com/leanprover-community/mathlib/blob/master/src/linear_algebra/dual.lean. We could have a lemma characterizing pairs of dual bases (taking as input our statements duality and epsilon_total and outputting the conjunction that e is a basis and epsilon is its dual basis). And then replace the rest of what I outlined in the first link by general lemmas about dual bases. Or, maybe more flexibly, we could have a duality predicate about two families of vectors saying they form dual bases, and then state all lemmas in dual.lean in terms of this predicate. As usual with predicate vs construction, the gain appears in exactly the situation we are in, when we want an alternative construction for some reason. #### Patrick Massot (Aug 07 2019 at 10:42): Honestly, I spent way too much time working on this sensitivity thing. Maybe someone who understands linear algebra in mathlib (and especially dual.lean) should take over (@Johan Commelin?). Today I'll bring my son to Penhir for climbing. #### Patrick Massot (Aug 09 2019 at 21:23): I'm frustrated that we can't advertise our sensitivity conjecture formalization so I tried again. I wrote https://github.com/leanprover-community/lean-sensitivity/commit/ed3d288f0fe696cc5a8049cad612f503ae2f4da3 about pairs of maps constituting a dual basis. It was a nightmare because elaboration was failing everywhere. And when I try to use it in the main file I get random mismatches of decidable_whatever. Maybe I was wrong when I wrote that function coercions were the main issue. It seems that decidability classes are there each time we really suffer in mathlib (see also polynomials). I give up. #### Patrick Massot (Aug 09 2019 at 21:23): I guess Rob is our last hope. #### David Michael Roberts (Aug 10 2019 at 10:35): working out pi to 7 decimal places, where was that done? I can't find it after a little look here in the chat. #### Mario Carneiro (Aug 10 2019 at 10:46): I think it was our pi day activity #### Kevin Buzzard (Aug 10 2019 at 11:18): They worked it out to a million decimal places in Coq once, but I would imagine the process took more than 24 hours in total. What has been interesting recently is that questions have come up and then a group of people have worked on them and within a day or two the code is up and running. #### David Michael Roberts (Aug 10 2019 at 13:27): Though the seven decimal places include the 3., that's still pretty nice. #### Reid Barton (Aug 10 2019 at 14:13): I also have the impression that decidable_eq requirements are somehow responsible for a lot of pain, though I have no data or understanding of why that would be. #### Rob Lewis (Aug 21 2019 at 09:54): @Patrick Massot What were the issues you were having here?
It seems like things compile, and I don't see any nightmares in the file right now. #### Rob Lewis (Aug 21 2019 at 09:54): I was planning to try to move some of the lemmas to mathlib and put this in the archive. Are there still major changes you want to make? #### Patrick Massot (Aug 21 2019 at 10:59): I think I've explained it in previous messages. This minor nightmare is https://github.com/leanprover-community/lean-sensitivity/blob/lean-3.4.2/src/sensitivity.lean#L154-L247 which was meant to be replaced by calling https://github.com/leanprover-community/lean-sensitivity/blob/lean-3.4.2/src/for_mathlib.lean#L25-L103 but I couldn't make it work because of conflicting decidable instances (mixing actual instances and classical.prop_decidable I guess). Also https://github.com/leanprover-community/lean-sensitivity/blob/lean-3.4.2/src/sensitivity.lean#L117-L140 looks stupid since we then explicitly describe a basis. #### Patrick Massot (Aug 21 2019 at 11:00): I would really love it if you could have a look at this decidability nightmare #### Patrick Massot (Aug 21 2019 at 11:01): When we rushed to prove this theorem I thought that within one week we would be able to post comments to all blogs discussing this theorem to point out it was very quickly formalized. Instead we have one more proof that formalization is not practicable for stupid reasons. #### Rob Lewis (Aug 21 2019 at 11:13): I'll see what I can do. #### Kevin Buzzard (Aug 21 2019 at 11:28): I wasn't involved but I thought that it was indeed quickly formalised #### Rob Lewis (Aug 21 2019 at 11:32): It was quickly formalized but it's not a perfect formalization. To brag about something this short, it kind of has to be done canonically, and the problem (I think) is that Patrick struggled to get the canonical proof to work. #### Patrick Massot (Aug 21 2019 at 11:33): We don't need it to be perfect, but I'd like it to be recognizable maths #### Rob Lewis (Aug 21 2019 at 13:44): @Patrick Massot I just pushed an update. Is this what you had in mind? The decidability issues were nothing major. We were missing a decidable_eq instance for V n, which was confusing things in a few places. #### Rob Lewis (Aug 21 2019 at 13:52): In fact, the specific dec_eq instance I gave it isn't necessary. Defining it using classical.dec_eq is fine too. But without the explicit instance, it seems to be inferring different dec_eqs in different places. #### Rob Lewis (Aug 21 2019 at 13:52): It's all kind of ironic since V definitely isn't decidable. There are non-defeq ways to pretend that it is. #### Patrick Massot (Aug 21 2019 at 14:54): Yes, it looks good. I'm happy you found it easy. I'm so upset by those kinds of problems that my mind refuse to work on them. #### Patrick Massot (Aug 21 2019 at 14:54): What is the point of getting calc_lemma out of the theorem proof? #### Kevin Buzzard (Aug 21 2019 at 14:54): We mathematicians just instinctively deny that such (decidability) issues can exist, because we've been ignoring them for centuries. #### Patrick Massot (Aug 21 2019 at 14:55): I think https://github.com/leanprover-community/lean-sensitivity/blob/a4b69d68217b7b8319a03f3406f4fba6147b2f91/src/sensitivity.lean#L194-L202 are not needed, we can use their proofs where we need them #### Patrick Massot (Aug 21 2019 at 14:57): And we still have that silly dimension computation. 
We should have a mathlib lemma stating that if V has a basis indexed by a finite type then its findim is the cardinal of this finite type, and of course a lemma computing the cardinal of fin n to bool #### Rob Lewis (Aug 21 2019 at 14:59): What is the point of getting calc_lemma out of the theorem proof? That calc block is a memory hog. I moved it out to make it easier to fix, it doesn't have to stay out. Also gave me a chance to try extract_goal, which stumbles a bit with let statements. #### Rob Lewis (Aug 21 2019 at 15:00): I think https://github.com/leanprover-community/lean-sensitivity/blob/a4b69d68217b7b8319a03f3406f4fba6147b2f91/src/sensitivity.lean#L194-L202 are not needed, we can use their proofs where we need them Yeah. Again, just slightly easier to put them there while making the updates. #### Rob Lewis (Aug 21 2019 at 16:02): And we still have that silly dimension computation. We should have a mathlib lemma stating that if V has a basis indexed by a finite type then its findim is the cardinal of this finite type, and of course a lemma computing the cardinal of fin n to bool This is more annoying than it should be because of cardinal universes. #### Patrick Massot (Aug 21 2019 at 16:03): I really think mathlib should hide this from users who manipulate only finite-dimensional vector spaces (as we do in this proof) #### Patrick Massot (Aug 21 2019 at 16:03): If we can't do that then the linear algebra library has a serious issue #### Rob Lewis (Aug 21 2019 at 16:09): I think this is what linear_algebra/finite_dimensional.lean is trying to do, but there's still some glue missing. #### Rob Lewis (Aug 21 2019 at 16:12): fg doesn't seem to be linked up with the existence of a finite basis, as far as I can tell. #### Rob Lewis (Aug 21 2019 at 23:26): Turns out I work better late at night and this was much easier than I thought. https://github.com/leanprover-community/lean-sensitivity/commit/af28ecfab7f412f7d451a4ff4cf328aad322eae3 #### Patrick Massot (Aug 22 2019 at 15:16): Thanks Rob! I finally had some time to look at it. I would still prefer to hide cardinal even more (or at least have only one of dim_V or dim_V') but it's already much better. I feel sufficiently confident that we are close to something we could share (and I feel sufficiently confident I don't want to write that referee report I should be writing) that I made a cosmetic pass on the whole file: https://github.com/leanprover-community/lean-sensitivity/commit/4feaed5a407676a45327854595bd3e4b319c4f7e I hope there aren't too many controversial changes (especially for equation compiler addicts). Everyone is free to tweak it. #### Patrick Massot (Aug 22 2019 at 15:18): I also reintegrated the calc block in the main proof. #### Patrick Massot (Aug 22 2019 at 15:32): And now we can go on removing stuff that just got merged in mathlib, and continue emptying for_mathlib.lean #### Rob Lewis (Aug 22 2019 at 16:00): It looks good! I just pushed an update that gets rid of dim_V'. But updating mathlib breaks the assumption_mod_cast in findim_V, I'm not sure why. #### Rob Lewis (Aug 22 2019 at 16:12): Fixed. https://github.com/leanprover-community/lean-sensitivity/tree/upgraded_mathlib on a branch for now so it doesn't interfere with update-mathlib. After the next nightly comes out, we can upgrade to that and merge to master.
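Roughly the shape of the two missing pieces Patrick asks for above (a sketch only: the `findim` statement is left as a comment because no such lemma existed in mathlib at the time, and the lemma names used in the proof below are assumptions).

```lean
import data.fintype

-- desired mathlib lemma (statement sketch, not code that existed at the time):
--   lemma findim_eq_card_basis {ι : Type*} [fintype ι] {b : ι → V}
--     (hb : is_basis K b) : findim K V = fintype.card ι
-- and the easy cardinality computation for the hypercube's index type:
example (n : ℕ) : fintype.card (fin n → bool) = 2 ^ n :=
by simp [fintype.card_fun]   -- card (α → β) = card β ^ card α; lemma name assumed
```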
#### Rob Lewis (Aug 22 2019 at 16:15): @Paul-Nicolas Madelaine At https://github.com/leanprover-community/mathlib/blob/master/src/set_theory/cardinal.lean#L589, is nat_cast_pow (and nat_cast_le) a good elim_cast lemma? Maybe I was wrong to add that. #### Patrick Massot (Aug 23 2019 at 08:16): Rob, do you understand the crazy https://github.com/leanprover-community/lean-sensitivity/commit/658b12067f73e85fe27e6108bed92b2148c454da or is it the result of random desperate modifications? #### Rob Lewis (Aug 23 2019 at 08:45): That commit in particular, or the short circuits more generally? They were local to the namespace V before, which ended right after they were declared, so they were doing absolutely nothing. Oh ok #### Patrick Massot (Aug 23 2019 at 08:46): I'm sorry I messed that up #### Patrick Massot (Aug 23 2019 at 08:47): I just pushed some more tweaks to make statements look nicer (at least to my eye) #### Patrick Massot (Aug 23 2019 at 08:47): Did you make progress on the mathlib bump issue? #### Rob Lewis (Aug 23 2019 at 08:48): No worries. It was obvious something was wrong when I opened it on my laptop, the extra 25 seconds to compile are noticeable. #### Rob Lewis (Aug 23 2019 at 08:49): I think the next mathlib nightly goes up once the next PR gets merged and built, right? #### Rob Lewis (Aug 23 2019 at 08:49): Then we run leanpkg upgrade on the upgraded_mathlib branch and rebase. #### Patrick Massot (Aug 23 2019 at 08:50): I think the remaining ugliness in statements now all comes from elaboration issues (the obvious ones like (f (m + 1) : _) w = √(m + 1) • w but also the sneaky ones like being forced to write √(m + 1) ≤ Card (H ∩ q.adjacent) instead of switching sides to match the way we would say it). #### Patrick Massot (Aug 23 2019 at 08:51): Oh, I thought you had trouble with the question you asked PN #### Rob Lewis (Aug 23 2019 at 08:52): Oh, that's fixed locally, I was just wondering if it should be a global change. #### Patrick Massot (Aug 23 2019 at 08:52): apply' just got merged so we should get a new nightly soon #### Patrick Massot (Aug 23 2019 at 08:53): Of course we can always trigger a new nightly build by hand if we are in a hurry #### Patrick Massot (Aug 23 2019 at 08:54): Back to mathematics, I wonder why the theorem is stated (in our formalization and in the original paper) for positive n. Isn't the statement obviously true when n = 0? #### Rob Lewis (Aug 23 2019 at 08:54): I'll need to run soon, got a friend in town visiting. But your changes look fine. There's dead code on lines 344-345. #### Patrick Massot (Aug 23 2019 at 08:55): oops, forgot to delete that. I'll do it right now #### Paul-Nicolas Madelaine (Aug 23 2019 at 09:31): @Rob Lewis nat_cast_le is a good elim_cast lemma and nat_cast_pow should be a move_cast lemma. Something to keep in mind, which I should also write explicitly in the documentation, is that move_cast lemmas are going to be used from right to left. So in that case, the nat_cast_pow lemma will turn ↑n ^ m into ↑(pow n m), which can be a bit weird if the ^ notation is defined on cardinals. I'll add these notes to the documentation as soon as I am done with the report.
#### Patrick Massot (Aug 23 2019 at 11:12): Ok, we are up to date with mathlib, and ready for the next round of PR #### Jesse Michael Han (Aug 24 2019 at 19:11): on the reduction branch I started on the reduction of the original sensitivity conjecture to the degree theorem we formalized #### Jesse Michael Han (Aug 24 2019 at 19:12): it looks manageable except that the proof of gotsman_linial_equivalence uses Fourier transforms #### Patrick Massot (Aug 24 2019 at 19:59): That sounds like a big "except" #### Daniel Donnelly (Aug 24 2019 at 20:09): Why can't Lean handle FT? Not that I need it for my app or anything... #### Patrick Massot (Aug 24 2019 at 20:14): Oh, it can. But someone needs to explain it to Lean. #### Patrick Massot (Aug 24 2019 at 20:14): And analysis is not mathlib's strong point #### Scott Morrison (Aug 24 2019 at 22:59): It's not really Fourier transforms in the Gotsman-Linial argument --- it's just the Z/2Z valued version, no analysis involved at all. #### Scott Morrison (Aug 24 2019 at 22:59): In fact the proof is just a few lines, even easier than Huang's recent argument. #### Daniel Donnelly (Aug 24 2019 at 23:05): I can attest to that. Has to do with sums of roots of unity in any field. #### Jesse Michael Han (Aug 24 2019 at 23:06): oh, good then maybe soon we can say that we really formalized the sensitivity conjecture #### Scott Morrison (Aug 24 2019 at 23:32): The Gotsman-Linial argument is in https://www.sciencedirect.com/science/article/pii/0097316592900608, which is free online. There's also a restatement of the proof at https://blog.computationalcomplexity.org/2019/07/degree-and-sensitivity.html. #### Scott Morrison (Aug 24 2019 at 23:34): I'm not sure what EnjoysMath meant, but there's nothing in the argument about sums of roots of unity, it's just counting signs in the GL argument. #### Rob Lewis (Aug 27 2019 at 07:30): I was trying to figure out why this project was so slow to compile on my laptop. Looks like we have another performance issue when we get deep into the mathlib file hierarchy: building the default simp set at the beginning of for_mathlib.lean takes 1.5 sec (on my laptop, 1 sec on my desktop). This happens once in every declaration that uses simp without only. #### Johan Commelin (Aug 27 2019 at 07:34): /me hears Kenny rolling on the floor laughing out loud #### Johan Commelin (Aug 27 2019 at 07:35): @Kenny Lau In how many different languages do you know the word “Schadenfreude”? #### Scott Morrison (Aug 27 2019 at 08:20): Like, maybe this enterprise is actually doomed, bad. :-) #### Chris Hughes (Aug 27 2019 at 08:28): I think this is maybe a case for being careful about minimising imports. It won't improve speed for everyone, but certainly it will improve speed a lot of the time. I think there were a lot of unnecessary imports for that proof. #### Johan Commelin (Aug 27 2019 at 08:41): […] building the default simp set at the beginning of for_mathlib.lean takes 1.5 sec (on my laptop, 1 sec on my desktop). This happens once in every declaration that uses simp without only. Isn't this something that can be cached? #### Rob Lewis (Aug 27 2019 at 08:51): It's not like it's an inherent problem -- I don't think Isabelle has the same behavior. Caching the simp set is probably pretty complicated because of multithreading. There's maybe something to be done there, but definitely not doable from mathlib. #### Rob Lewis (Aug 27 2019 at 08:52): It's the best case I've heard yet for minimizing imports.
But I'm not sure how much it will really buy in the end. Maybe it helps this specific proof, but it will be a recurring issue. #### Rob Lewis (Aug 27 2019 at 08:54): It's fairly slow even just importing analysis.normed_space.basic. #### Scott Morrison (Aug 27 2019 at 09:03): I wonder if we should also be using local attribute [simp] more, or custom simp sets. #### Floris van Doorn (Aug 27 2019 at 15:35): I was always wondering whether the amount of simp declarations would cause performance issues. Limiting the number of imports might help a bit, but that is not a sustainable solution. Other potential solutions: • Can we write a user command at the top of a file that generates and caches the standard simp set, which all declarations in that file can then use? (Potentially modifying them a little bit, since some new simp-lemmas might be added). • As Scott said, make extensive use of simp-sets: instead of marking everything with @[simp] we mark things as topology_simp and linear_algebra_simp and then use simp with topology_simp. Obviously this is a less nice user experience. • Be more restrictive with marking declarations as simp. Maybe not every simplification should be a simp-lemma. This will probably not make a big enough impact, since most simp-lemmas should remain simp-lemmas. #### Rob Lewis (Aug 27 2019 at 16:53): Can we write a user command at the top of a file that generates and caches the standard simp set, which all declarations in that file can then use? (Potentially modifying them a little bit, since some new simp-lemmas might be added). And define a new tactic simp' to use our cache? That would probably be doable. I'm not sure how efficiently we can generate a cache compared to the built-in methods. And I think it's pretty common to progressively prove a bunch of simp lemmas in a file, which may use each other, and then a lot of theorems that use these simp rules right after. We'd have to modify a lot of proofs and/or regenerate the cache a bunch of times per file. #### Rob Lewis (Aug 27 2019 at 16:54): I wonder if we should also be using local attribute [simp] more, or custom simp sets. This seems like the most effective way to speed things up, and also a huge pain. #### Rob Lewis (Aug 27 2019 at 16:54): The alternative, of course, is to not change anything for now and see how things look in :four_leaf_clover: . #### Rob Lewis (Aug 27 2019 at 16:58): There are some simp sets that are pretty self-contained. I'm thinking of all the rules for making sense of filters. That would be a pretty natural thing to factor out of the default simp set. #### Johan Commelin (Aug 27 2019 at 16:59): Why is this not an issue in Isabelle? #### Patrick Massot (Aug 27 2019 at 17:31): Johan, did you ever read https://github.com/leanprover/lean/wiki/Simplifier-Features to see what the simplifier could look like? #### Johan Commelin (Aug 27 2019 at 17:42): Patrick, nope, I didn't. #### Johan Commelin (Aug 27 2019 at 17:45): It's an inspiring wiki page, thanks for the link!
## Integrate ITEM in CiscoWorks

Hi there! I'm a little bit confused about the way I could integrate ITEM into CW. Does anybody have advice or a document on how to install ITEM alongside an existing LMS installation? In the official installation procedure, it is stated that all existing CW installations have to be removed prior to the ITEM installation. Thanks for any replies! Andy

## Re: Integrate ITEM in CiscoWorks

That is correct. ITM requires a dedicated server. It cannot be installed on the same server as any other CiscoWorks application.

## Re: Integrate ITEM in CiscoWorks

Thank you for the reply. I ordered CWITEM-2.0-ADD-K9 and was sure that this would integrate into CiscoWorks. Can't find any document which describes how to "integrate" this in CW. Any suggestions? Thanks and regards Andy

## Re: Integrate ITEM in CiscoWorks

All that can be done is to use the inventory from RME: http://www.cisco.com/univercd/cc/td/doc/product/rtrmgmt/cw2000/itm/itm_20/userguid/usedevmg.htm ITM has to be installed on a dedicated server.
# Ultimate Optimization of GEMM on Maxwell GPU: dissecting the Maxas assembler

When I was working as a deep learning software engineer at Intel on an AI chip project, I became aware of an assembler called Maxas (https://github.com/NervanaSystems/maxas) which can generate GPU machine code that outperforms nVidia's official GEMM library, and I got interested in it. The author of that project, Scott Gray, provided detailed documentation about it (https://github.com/NervanaSystems/maxas/wiki/SGEMM); however, due to the complexity of the algorithm, the document is still hard to follow. This article can be regarded as a commentary on that original document, based on my own understanding, covering as many details as possible and explaining the intention of all the code. The main structure still follows the original document, and so do all the figures. Note that the Maxas algorithm depends heavily on features of the Maxwell architecture, so the project has become out of date as GPU architectures have evolved. However, the ideas behind it are still insightful and worth documenting.

# Background

Single-precision general matrix multiply (SGEMM) is the most familiar example for any programmer who has learned CUDA; it is the one example that has been in the official tutorial since nVidia released the first version of CUDA. This is not just because SGEMM is a critical building block of almost any computationally intensive software, but also because it is a good example for showing the optimization tricks on a GPU, especially those exploiting the memory hierarchy.

For the multiplication of two matrices A and B, the easiest way to parallelize is to launch as many threads as there are elements in the output matrix C. Each thread loads a row of A and a column of B and calculates the inner product of the two vectors. The problem is that the latency of accessing main GPU memory is huge (~100 cycles). For a row of A it may still be possible to take advantage of the huge bandwidth of GPU memory, or of caching, to load many successive elements and amortize the latency. But for a big matrix B (N>1000) there can be thousands of elements between successive elements of a column vector, which means that of each load only the exact column element is useful, and there will be no cache hits. In a word, the efficiency of such memory access is horrible.

The optimization of the naïve method above needs shared memory, which can be regarded as an on-chip cache of the GPU. The latency of shared memory is as low as that of a first-level cache, and it is shared among all the threads in a thread block. The only shortcoming is that its size is limited. To take advantage of this small piece of high-speed memory, the naïve parallelization is changed as follows. The matrix is divided into blocks of size $k$ along each dimension, so the output matrix C can be written as

$$C = \begin{pmatrix} C_{11} & C_{12} & \cdots \\ C_{21} & C_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}$$

The same division is applied to both A and B, so that $A_{ij}$, $B_{ij}$, $C_{ij}$ are no longer elements but blocks. Of course, when the block size is 1 each block reduces to a single element. Obviously the definition of matrix multiplication is still valid here: $C_{ij} = \sum_{k} A_{ik}B_{kj}$

If each block is regarded as an element, the scale of the matrix is reduced by $k$ times. Each block of C is assigned to one thread block for its calculation, with one thread per element. The thread block loads the row blocks of A and the column blocks of B into shared memory one by one, does GEMM for each pair of row block and column block, and accumulates the result.
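To make the blocking scheme above concrete, here is a minimal CUDA sketch of a shared-memory tiled SGEMM. This is not Maxas code; the tile size, kernel name, and row-major indexing are illustrative assumptions of mine, and boundary handling is kept deliberately simple.

```cuda
#define TILE 16

// C = A * B for N x N matrices stored row-major.
// Each thread block computes one TILE x TILE tile of C; each thread owns one element.
__global__ void sgemm_tiled(const float *A, const float *B, float *C, int N)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;   // row of C owned by this thread
    int col = blockIdx.x * TILE + threadIdx.x;   // column of C owned by this thread
    float acc = 0.0f;

    // Walk over the K dimension one tile at a time.
    for (int k0 = 0; k0 < N; k0 += TILE) {
        // Stage one tile of A and one tile of B in shared memory.
        As[threadIdx.y][threadIdx.x] =
            (row < N && k0 + threadIdx.x < N) ? A[row * N + k0 + threadIdx.x] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] =
            (col < N && k0 + threadIdx.y < N) ? B[(k0 + threadIdx.y) * N + col] : 0.0f;
        __syncthreads();

        // Every staged element is reused TILE times, amortizing the global-memory loads.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    if (row < N && col < N)
        C[row * N + col] = acc;
}
```

In this sketch every element staged in shared memory is read TILE times, which is exactly the reuse the blocking method aims for.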
Therefore each element of a block loaded into shared memory can be accessed $k$ times, which is much more efficient than the naïve method. This algorithm is actually pretty fast and, more importantly, further optimization will be based on it. In order to follow the rest of this article, the reader is encouraged to study the official CUDA tutorial to get familiar with the blocking method, and to understand the CUDA programming model as deeply as possible.

# Principle

As described in the section above, once shared memory is used in the blocking method, not only can the speed of the block matrix multiplication (MM) be greatly improved, it is also possible to transfer the next blocks from main memory to shared memory while doing MM on the current blocks. In other words, the time for IO can be fully covered by the MM of blocks. In order to further improve performance, something has to be done about the multiplication of the blocks themselves. Although MM in shared memory is already fast, there is still quite a lot to do to reach the hardware limit. There are 2 main bottlenecks. Firstly, the latency of shared memory is still higher than that of registers: on Maxwell/Pascal GPUs the latency of registers is 6 cycles, while that of shared memory is 23 cycles. In addition, the computation units of the GPU cannot work on data in shared memory directly, so a transfer instruction is needed to move the data to registers. This move instruction can take as much time as a real calculation instruction, so the overhead is huge. To reach the peak performance of the hardware, all the computation units of the GPU must have their pipelines fully occupied by calculation instructions, so that in each clock cycle a numerical result comes out of the pipeline. In order to achieve that, Maxas and some previous research it cites propose the following methods:

1. Take advantage of the newly added vector instructions, which can transfer 4 successive floats between shared memory and registers in one instruction. This greatly reduces the number of transfer instructions and makes it easier for the calculation instructions to hide the transfer time.
2. Interleave the calculation and transfer instructions to implement a pipeline of data prefetching and calculation.
3. The blocking algorithm uses the fast shared memory to cache data that needs to be accessed multiple times. If this idea is developed into another level of blocking, where each block matrix is further divided into smaller sub-blocks and the data in shared memory is cached in registers, some additional speedup can be expected. Note that the lower-level blocking method is different from the higher-level blocking, and there are some additional difficulties to resolve.

The precise control over GPU instructions and registers needed to implement the ideas above is beyond the expressive capacity of CUDA, therefore these ideas have to be implemented in the native assembly language of the GPU (not even in a pseudo assembly language like PTX). However, it is still possible to describe the implementation with a more expressive C-like pseudo-code. There is a straightforward conversion from the pseudo-code to assembly code, which is implemented with a Perl script in Maxas.

# Outline of Maxas algorithms

Here we consider the multiplication of two $64\times64$ matrices entirely in registers.
In the previous straightforward method, each element of the C matrix is calculated according to the definition of MM, $C_{ij} = \sum_{k} A_{ik}B_{kj}$, which is the inner product of a row of A and a column of B. Therefore a row of A and a column of B will each be used 64 times. The three $64\times64$ matrices, each occupying 16 KB, would put huge pressure on the register file if they were all stored in registers. Another problem with doing MM in registers is that registers are not shared among threads. A thread would need to allocate registers not only for the elements of the output C it is responsible for, but also for all the rows of A and columns of B it needs. As a result, many registers of different threads would store duplicate data, which is totally unacceptable.

However, instead of looking at the problem from the perspective of the output matrix C, consider it from the perspective of the input matrices. It can be seen that the k-th column (not row!) of A is only ever multiplied with the k-th row (not column!) of B. In other words, taking the k-th column of A and the k-th row of B, we can multiply all the element pairs and add each product to the output element it contributes to:

$$C_{ij} \leftarrow C_{ij} + A_{ik} B_{kj} \quad \text{for all } i, j$$

The column and row can then be discarded, as they have completed their task in the MM and will no longer be needed. This method greatly reduces the register footprint of the input matrices. Moreover, the loading of $2N$ elements ($N$ from A and $N$ from B) is accompanied by $N^2$ multiply-add operations, which is exactly the benefit we want from using registers to cache data held in shared memory. The method can easily be extended to A and B with different numbers of rows and columns, as long as the column index taken from A and the row index taken from B are the same.

Maxas uses 64 threads to implement the parallelization of the block matrix multiplication, each thread taking care of the multiplication of 4 ($2\times2$) $4\times4$ matrices. The layout of the 64 threads is $8\times8$, which determines the block size $N=2\times4\times8=64$. This is the main difference from the original blocking algorithm, where each thread calculates a single element of the matrix, and it is critical for making full use of the very low latency of registers. (In the figure, the vector on the left is a column of A, and the vector on the top is the corresponding row of B. The green data, 8 floats from each vector, are what thread 0 needs; it is easy to derive the data needed by the other threads.)

The implementation of Maxas just serves the algorithm described in this section, and the problems solved in the following sections are the ones encountered during that implementation. The choice of parameters, i.e. why 64 threads, is based on the hardware resources of the GPU: create as many warps as possible while each thread can still get the registers it requires. The purpose is to enable the scheduler to launch some warps to do calculation while other warps are waiting for data.

# Load input matrix to shared memory

The first thing the 64 threads described in the section above must do is load the data they need of the two input matrices from main memory. It is worth pointing out an implicit assumption not mentioned in the original document: in Maxas, matrices are stored in column-major format, i.e. the columns of a matrix are contiguous in memory. Otherwise it is not possible to explain the algorithms in the following sections.
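Before going into the loading details, the fragment below recaps the outer-product formulation from the previous section as device-side code. It is my own reconstruction in CUDA-style C, not Maxas pseudo-code; the array names and the flat 8×8 accumulator (logically four 4×4 tiles) are assumptions.

```cuda
// One rank-1 update per k-step: regA holds this thread's 8 floats from column k of A
// (two 4-float segments), regB its 8 floats from row k of B, and acc is the 8x8
// accumulator intended to stay in registers for the whole kernel.
__device__ __forceinline__
void rank1_update(const float regA[8], const float regB[8], float acc[8][8])
{
    #pragma unroll
    for (int i = 0; i < 8; ++i)
        #pragma unroll
        for (int j = 0; j < 8; ++j)
            acc[i][j] += regA[i] * regB[j];   // one FFMA per output element
}
```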
Compared with the loading method in the official CUDA tutorial, the loading method here also needs some changes due to the change in the calculation algorithm, together with some optimizations:

1. Since row data is needed from matrix B, and rows are not contiguous when B is stored with contiguous columns, B is transposed to turn columns into rows. A and B can then be loaded with the same method, which simplifies the code.
2. The A and B matrices are loaded as 1D textures, so that not only is the data cached by the texture cache, but the loads also benefit from the texture feature that all out-of-bounds reads return 0, so such data has no effect on the MM result.

Knowing about this preprocessing will help in understanding the pseudo-code in the following sections. The creation of the textures and the transpose are done before the GEMM kernel executes on the GPU and do not affect the performance of the kernel.

The data in texture memory is loaded into shared memory segment by segment. According to the original method, each segment would be a tile of $64\times64$ elements. However, in order to make full use of the register resources, Maxas uses a quite different calculation method. For a thread block responsible for $C_{ij}$, matrix A is first divided into $64 \times N$ stripes, each containing 64 rows, and all the data needed is located in the $i$-th stripe. The amount of data in the stripe is still large, however, and needs to be loaded in smaller segments. In Maxas each load consumes a $64\times8$ segment, and $\frac{N}{8}$ loads are needed to finish the work. The method for matrix B is similar, except that B is divided into $N\times 64$ stripes by columns; after the transpose, the loading method is exactly the same as for A. The memory layout is illustrated in the following figure. In the figure, each cell belongs to the thread responsible for loading it; the green cells are the 4 elements that thread 0 loads one by one, and the yellow cells are the data loaded by the other 31 threads in the first iteration. The whole warp loads 2 rows each time and finishes after repeating 4 times.

The execution unit of the GPU is the warp, consisting of 32 threads, so the 64 threads are executed as 2 warps. One warp (threads 0–31) loads A while the other (threads 32–63) loads B. One confusing point about the figure is that the dimensions of the matrix shown are $8\times16$ instead of $8\times64$. This is because each thread will later use a vector instruction to load 4 floats at a time, so each cell itself represents 4 floats. In the code below it can be seen that, when using vector instructions on texture memory, the offset is the actual number of elements divided by 4.

The loading method above is certainly not unique; my understanding is that the loading methods for A and B are kept identical apart from using different textures, so that, compared with having a single thread load from both A and B, the number of loading instructions, which contribute nothing to the computation, can be reduced. The loaded data is staged in registers, waiting to be stored to shared memory. The data layout in shared memory is shown in the following figure. Since the unit of offset in shared memory is 1 byte, we are back to the straightforward representation. It can be seen that the data storage patterns in the 2 figures above are exactly the same: both are $8\times64$ column-major storage.
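The pseudo-code for this loading stage appears only as an image in the original article, so here is a rough CUDA-flavored reconstruction of one pipelined segment load. The names track0..track6, tid2, tid15, blk, and ldx follow the article; the real Maxas kernel is hand-written assembly, and the float4 texture-object API, the function signature, and the exact shared-memory indexing are my own assumptions.

```cuda
// One 64x8 segment load as performed by the warp assigned to matrix A
// (the B warp does the same on its own texture).
// texA:  1D float4 texture over the (column-contiguous) A stripe
// ldx:   leading dimension of the stripe in float4 units
// smemA: this block's staging buffer in shared memory
// blk:   index of the 64-row stripe handled by this thread block
// tid:   lane index within the loading warp (0..31)
__device__ void load_segment(cudaTextureObject_t texA, float4 *smemA,
                             int blk, int ldx, int tid)
{
    int tid15 = tid & 15;   // position along the 16-float4-wide row
    int tid2  = tid >> 4;   // which of the first two rows this thread starts on

    // Four independent offsets so the four texture fetches can all be in flight
    // at once instead of serializing on a single incremented pointer.
    int track0 = blk * (64 / 4) + tid15 + ldx * tid2;
    int track2 = track0 + ldx * 2;
    int track4 = track0 + ldx * 4;
    int track6 = track0 + ldx * 6;

    float4 loadA0 = tex1Dfetch<float4>(texA, track0);
    float4 loadA2 = tex1Dfetch<float4>(texA, track2);
    float4 loadA4 = tex1Dfetch<float4>(texA, track4);
    float4 loadA6 = tex1Dfetch<float4>(texA, track6);

    // Stage the fetched rows in shared memory (layout simplified here; the figures
    // above show the exact 8x64 arrangement, with B placed right after A).
    smemA[(tid2 + 0) * 16 + tid15] = loadA0;
    smemA[(tid2 + 2) * 16 + tid15] = loadA2;
    smemA[(tid2 + 4) * 16 + tid15] = loadA4;
    smemA[(tid2 + 6) * 16 + tid15] = loadA6;
}
```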
The only difference is that in shared memory the data of B follows right after A. There are still two problems in the code above worth clarifying:

1. track0 = blk*64/4 + tid15 + (ldx * tid2); the first term blk*64/4 selects the 1D offset of the top-left corner of the blk-th stripe with respect to the whole input matrix A (or transposed B) in the texture. Since the top-left corners of all stripes lie in the first column of the input matrix, and with columns stored contiguously the offset of any element in the first column is just its row number, the top-left corner of the blk-th stripe is blk*64; the /4 comes from the factor introduced by the vector instruction. The meaning of tid15 + (ldx * tid2) is more straightforward: it is the relative position of the corresponding yellow cell with respect to the green cell of the current thread in Fig. 3. tid15 can be regarded as the column coordinate, and tid2 the row coordinate, which has to be multiplied by the stride ldx of that dimension in the 1D representation.
2. 4 track variables are used to record the offsets of the 4 loading instructions. The reader may wonder why not use just 1 track variable and add an offset of 2 rows after each load. The problem is that after the tex.1d.v4.f32.s32 instruction is issued (and before it completes) its track operand is not saved anywhere. To ensure it is not changed by the following increment instruction, control code would have to be inserted to wait for the previous loading instruction to complete; in the worst case that instruction may take hundreds of clock cycles. Using 4 offset variables avoids waiting for the completion of the data transfer and lets the GPU issue all 4 loading instructions one after another, so they act like a pipeline. The cost is 3 additional registers per thread, and fortunately there are still enough of them on the GPU.

# Load data in shared memory to registers

After the job above is done, there are 8 rows of data for both A and B in shared memory, and each row contains 64 floats. Taking one row from each matrix, we can do the fused multiply-add operations between the elements of the 2 rows. Once done we take the next pair of rows, until the 8 rows in shared memory are consumed, by which time the other warp should have completed transferring data from texture memory into the other group in shared memory, and computation can switch to that data. As shown in Fig. 2, each thread actually needs just 8 of the 64 floats of each row, and their offsets within the A and B vectors can be worked out from the figure. In the code this is done by a series of bit manipulations, which can be explained here in advance. Threads 2*i and 2*i+1 in the figure use the same segment of A, whose index can be written as (tid / 2) % 8. The series of bit manipulations is just the implementation of this expression, in which <<4 implements the 16-byte interval (the size of the 4 floats loaded by one vector load) between segments of the column vector of A. The selection within the row vector of B is more complicated. First note that, for threads with even index, every 16 threads the distance along the B direction is 2 segments (8 floats); therefore, for the 64 threads, which can be represented with 6 bits, tid & 0x30 masks off the last 4 bits (tid mod 16), and only the top 2 bits are meaningful for the selection in B.
The following >>3 shifts those top 2 bits towards the lowest digits, but only by 3 instead of 4, so the result is already multiplied by 2, representing the distance of 2 segments. The | (tid & 1) is equivalent to + (tid & 1), indicating that thread 2*i+1 always selects the segment right after the one selected by thread 2*i; that segment also fills the gap left by the 2-segment stride.

It may also have been noticed that the layout of the threads in Fig. 2 is really awkward: it is not sorted by thread index in either the row or the column direction. The reason is to avoid bank conflicts when accessing shared memory. The definition of a bank conflict and the conditions under which it happens are described in detail in the official CUDA documentation. Briefly, shared memory is divided into a number of banks according to the address (the simplest scheme being a modulus); if 2 threads access shared-memory addresses that fall in the same bank, the accesses cannot complete simultaneously and have to be serialized, so the access time is multiplied by the number of threads hitting the same bank. That is just the most general case, though; the GPU provides some optimization mechanisms, for example broadcast, to reduce the negative effect of bank conflicts. The other awkward thing is that each thread calculates 4 $4\times4$ blocks instead of a single $8\times8$ block directly. This is actually another trick to avoid bank conflicts; the calculation done with 4 $4\times4$ blocks per thread is equivalent to a single $8\times8$ block.

There is one more trick used in the implementation. Although each thread requires only 16 input numbers, the number of registers allocated is actually twice that. The purpose is the same as before: use two groups of registers to implement a pipeline, in which each thread prefetches the next line of data while calculating with the current line.

# Calculation of C matrix: register allocation and order of calculation

Now all the data needed for the calculation has been moved to registers efficiently, and it looks like it is time to operate on it directly with FFMA instructions, which is the core job of the GEMM kernel. Unfortunately, before that there is one more difficulty, maybe the biggest trouble in the whole project: bank conflicts on register access. In order to fill a streaming multiprocessor with a whole lot of threads, the GPU contains a register file with as many as 32K registers, so access to them cannot be as straightforward as on a CPU; instead it goes through banks (similar to shared memory access). As a result bank conflicts are inevitable, and performance can degrade a lot once they happen. The register file on Maxwell has 4 banks of 32 bits, and each register maps to its bank as <register id> mod 4. During the calculation of the C matrix, a bank conflict can happen whenever the two source operands of an FFMA fall into the same bank (for example, an A operand and a B operand whose register ids are both ≡ 0 mod 4). Note that the register banking varies with each new GPU generation; in the Volta architecture, for example, there are 2 banks of 64 bits, and that is the main reason Maxas cannot perform as well on current mainstream GPUs as it does on Maxwell.
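To recap the index arithmetic from the previous section before moving on to register allocation, here is a compact reconstruction of the two shared-memory read offsets. This is not the literal Maxas pseudo-code; the helper-function form and the byte offset of the B region are assumptions.

```cuda
// Byte offsets into shared memory of the 4-float segments read by one thread (tid = 0..63).
__device__ void compute_read_offsets(int tid, int *readAs, int *readBs)
{
    // A: segment index (tid / 2) % 8, each segment 16 bytes (4 floats) wide.
    *readAs = ((tid >> 1) & 7) << 4;

    // B: the top 2 bits of tid give the 2-segment stride (>>3 keeps a factor of 2),
    // and the lowest bit picks the neighbouring segment for odd threads.
    // The B region is assumed to start right after A's 8x64 floats.
    *readBs = ((((tid & 0x30) >> 3) | (tid & 1)) << 4) + 8 * 64 * (int)sizeof(float);
}
```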
One of the advantages of coding in assembly language is that registers can be allocated manually to minimize bank conflicts:

* Registers 0–63 for the C matrix
* Registers 64–71 and 80–87 for the A matrix, 72–79 and 88–95 for the B matrix (twice the number of registers actually needed is allocated, for the prefetch pipeline)

Since the loading is done with vector instructions, the 4 registers allocated for each vector load of A and B must be consecutive, so all four banks will be accessed and there is no room for optimization there. All Maxas can do is shuffle the numbering of the 64 registers allocated to C, which is illustrated in the figure below.

Obviously that is the best that shuffling the register indices can achieve. The bank conflicts marked with black boxes are inevitable no matter how the indices of the C matrix are shuffled, because those conflicts arise between A and B, which have to use the same banks. The operands of A and B not only occupy all 4 banks (two elements per bank), they also have to be paired with every operand of the other matrix, so each register of A must bank-conflict with 2 registers of B. In fact, if the most naïve numbering were used for C, for example numbering the first row 0~7, then each C register would bank-conflict with its corresponding B operand, which would be a horrible numbering.

In order to further reduce the bank conflicts that cannot be eliminated by register allocation, the operand reuse feature of Maxwell has to be used. When an instruction is issued, some of its operands can be flagged for reuse, and the hardware will put those operands into a reuse cache. If the following instruction uses the same operand in the same slot, it can get it directly from the reuse cache instead of going through the register bank, so the bank conflict is avoided. If the 64 registers of matrix C in Fig. 5 are traversed line by line (or column by column) and the registers of A are flagged for reuse, 14 of the 16 bank conflicts between the registers of A and B can be eliminated. The only remaining bank conflicts involve registers R3 and R35, because they are computed by the first instruction of their rows, whose A operand has not yet been saved in the reuse cache. Once the cause is understood, these 2 bank conflicts can also be easily resolved, simply by traversing the even rows from right to left (from 26 to 3 for row 0) and the odd rows from left to right (from 7 to 30 for row 1).

Maxas is still not satisfied with this result. It proposes an even trickier traversal which applies an additional spiral on top of the back-and-forth traversal. According to the Maxas document, each operand slot has 8 bytes of reuse cache, enough for two 4-byte registers, and line-by-line traversal only uses one of them to cache registers of A, so the utilization of the reuse cache is still low. My guess is that the unused reuse cache of the B operand slot is also counted, so the reuse-cache utilization of line-by-line traversal is 4/8/2 = 25%. The estimate for the back-and-forth traversal is not as straightforward; the Maxas document directly gives 39%, and 49% for the spiral traversal.
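The shape of the back-and-forth traversal is easy to sketch. The listing below only illustrates the visitation order and which operand can come from the reuse cache; the real Maxas generator emits FFMAs over the shuffled register numbering of Fig. 5, and the spiral variant is not shown.

```cuda
#include <cstdio>

// Print the order in which the 8x8 C tile is visited under the serpentine traversal.
// Within a row the A operand repeats and can sit in the reuse cache, and (per the
// article) reversing direction on every other row removes the two remaining conflicts
// that hit the first instruction of a row, whose A operand is not yet cached.
int main() {
    for (int row = 0; row < 8; ++row) {
        for (int i = 0; i < 8; ++i) {
            int col = (row % 2 == 0) ? 7 - i : i;  // even rows right-to-left, odd rows left-to-right
            std::printf("FFMA C[%d][%d] += A[%d].reuse * B[%d]\n", row, col, row, col);
        }
    }
    return 0;
}
```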
From the assembly code generated by Maxas, it is confirmed that there are instructions where the reuse cache is used for both the A and B operands, caching 2 registers for each operand slot. However, the purpose of increasing the utilization of the reuse cache is not to increase the reuse frequency, given that the back-and-forth traversal already resolves all the bank conflicts. It is actually to avoid the delayed bank conflicts that arise when the latency of a loading instruction is filled with computation instructions that depend on loaded data. The first 8 computation instructions, with the loading instructions inserted between them while traversing the C matrix registers, are as below. Since a loading instruction takes >20 clock cycles (the latency of shared memory) to complete, and during that time its first operand may still be accessed through the bank, it is possible that a computation instruction issued in the meantime needs to access the same bank. This is the cause of a delayed bank conflict. However, this is only the principle; I am not clear how exactly the traversal order in Maxas avoids the delayed bank conflicts.

This section shows that even if all the computation is done in registers, 2 tricks are still necessary to achieve optimal performance:

1. Optimal register numbering
2. Optimal traversal order

The computation itself has become too trivial to be mentioned in the Maxas document.

# Transfer C matrix to main memory

After each thread block has completed the computation of the sub-block of the matrix assigned to it, the last job is to transfer it from registers to main memory. Since registers cannot be shared among threads (there is a family of __shfl_sync() instructions for that purpose, but they are not applicable here), each thread first has to transfer the 4 $4\times4$ matrices it calculated to shared memory. The reason not to transfer them directly from registers to main memory is that the current thread layout cannot make use of the huge bandwidth of the GPU. To make full use of it, we want the data transferred by the 32 threads of a warp to be contiguous, so that 128 bytes of data can be transferred in one clock cycle. If the data are scattered, they need up to 32 separate transfers in the worst case. According to the thread layout in Fig. 2, the 64 contiguous floats of a column are distributed over 8 threads; for example, the results of the first 4 rows of the first column are saved in registers controlled by threads 0, 2, 4, 6, 8, 10, 12, 14, and each of these threads has 8 registers in that row. Moreover, to avoid bank conflicts those 8 registers are not consecutive, so vector store instructions cannot be used here. It would then take 8 transactions to store a warp's data, while the other 24 threads of the warp sit idle because their data are not in the same row. To solve this problem we can first stage the data of all the threads in shared memory, and then transfer the data from shared memory to main memory with a different thread layout.

First we still need to write the data from registers to shared memory. The 4 $4\times4$ matrices held in each thread's registers are divided into two pairs aligned on columns. According to the register allocation illustrated in Fig. 5, there are 8 registers in each row; for example, registers 3, 7, 1, 5 are in the first row, forming a column of one $4\times4$ matrix, and registers 35, 39, 33, 37 form a column of another $4\times4$ matrix.
As the result matrix C is also stored column-major, if we copy registers 3, 7, 1, 5 to 4 consecutive registers (named cs<0–3> in Maxas) and registers 35, 39, 33, 37 to cs<4–7>, then only 2 vector store instructions are needed to copy the 8 numbers to the corresponding positions in shared memory. The left part of Fig. 6, which can be regarded as taking one slice out of every 4 columns of the $64\times64$ matrix in Fig. 2 and concatenating them, illustrates this process. After completion there is a $64\times8$ matrix in shared memory, in which each column is contiguous and corresponds to a row of matrix C. Then the thread order is changed, and each thread of the warp transfers one number of such a column, so that 32 contiguous floats are stored simultaneously. The buffer in shared memory can reuse the space allocated for loading A and B, which is no longer useful once C has been calculated. The implementation code is shown below. Although this method transfers data back and forth between registers and shared memory, the latency of shared memory can be hidden by switching between the 2 warps, and in any case it is still much faster than accessing main memory many times.

Note that although the method proceeds step by step, there are no synchronization instructions in the code, because all the data exchange happens directly among the 32 threads of the same warp and there is no data exchange between warps. The execution model of the GPU guarantees that the same instruction completes at the same time for all threads of a warp. Needing no synchronization instructions can be regarded as an advantage of the parallelization method of Fig. 2. There is another figure in the Maxas document illustrating this process; however, I do not fully understand it and find it somewhat confusing, so it is not reproduced here. The code itself should be self-explanatory enough.

The job of the GEMM kernel generated by Maxas is done once the data has been transferred to main memory. Based on the 64-threads-per-block implementation described above, it is possible to scale up by a factor of 4 to 256 threads while the work each thread does remains the same. The block matrix each thread block calculates is then enlarged by 2 times along each dimension, becoming $128\times128$. The loading of the input matrices and the output of the result change correspondingly, which should be straightforward after understanding the 64-thread implementation and is omitted here. For large matrices the 256-thread implementation has some performance advantage; see the detailed performance test results in the Maxas document.

# Conclusion

Although the pseudo-code from the original document has been commented here in as much detail as possible, it is still a relatively high-level description. There is an important topic at the level of GPU machine code, the control code, which has not been covered. Since the purpose here is just to introduce some ideas and implementation methods for GPU optimization, the part of the Maxas document related to control code is not covered either. In summary, the overall optimization ideas used by Maxas are clear, and according to the Maxas document they had already been proposed in earlier literature. The most difficult part is that the author of Maxas had to do a lot of tough reverse engineering to derive the hardware implementation details, which nVidia does not disclose, in order to reach the peak performance of the hardware.
It is quite possible that the author of Maxas built a test platform to figure out the performance impact of subtle differences between instructions. In any case it is a great piece of work, and it is worth a deep dive for any engineer aiming at extreme performance.
# Less than a month ago, Jennifer began working as an RN at a regional hospital in...

###### Question:

1. What are your thoughts about the hospital’s protocol, as described by Brenda? Is Jennifer right, or does Brenda have a legitimate point? Is it common for known (or possible) sexual assault victims to wait long periods to be seen in the ED? If so, what is the rationale for that? Is this rationale acceptable?

2. What is Jennifer talking about when she says that a SANE is needed on staff? How could this help with the situation that Jennifer sees as unacceptable at the hospital? Is this level of training unrealistic for a small-town hospital to provide?
# Nanophotonics

Editor-in-Chief: Sorger, Volker. IMPACT FACTOR 2018: 6.908. 5-year IMPACT FACTOR: 7.147. CiteScore 2018: 6.72. In co-publication with Science Wise Publishing. Open Access. Online ISSN 2192-8614.

# Enhanced terahertz emission from imprinted halide perovskite nanostructures

Viacheslav I. Korolev / Anatoly P. Pushkarev / Petr A. Obraztsov (ITMO University, St. Petersburg, Russia; Prokhorov General Physics Institute, Russian Academy of Sciences, Moscow 119991, Russia) / Anton N. Tsypkin / Anvar A. Zakhidov (ITMO University, St. Petersburg, Russia; University of Texas at Dallas, Richardson, TX 75080, USA) / Sergey V. Makarov

Published Online: 2019-12-27 | DOI: https://doi.org/10.1515/nanoph-2019-0377

## Abstract

Lead halide perovskites are known to be a promising family of materials for terahertz (THz) generation. At the same time, perovskite nanostructures, nanoantennas, and metasurfaces allow tailoring of perovskite optical properties, resulting in more efficient interaction with incident or emitted light. Moreover, perovskites are robust against defect formation caused by mechanical deformation and can be efficiently nanostructured by various high-throughput methods. In this work, we have enhanced THz emission from MAPbI3 perovskite under femtosecond laser irradiation by using nanoimprint lithography. The formed nanostructures not only improve absorption of the incident laser pulses, but also lead to a non-symmetric near-field distribution. As a result, we have enhanced the efficiency of THz emission from the nanostructured perovskite by 3.5 times as compared with a smooth perovskite film. Our results pave the way for a new application of large-scale perovskite nanostructuring, making halide perovskites competitive with more expensive conventional semiconductors for THz generation. This article offers supplementary material, which is provided at the end of the article.

## 1 Introduction

Methylammonium lead iodide perovskite (CH3NH3PbI3 or MAPbI3) is an organic-inorganic material which combines vital advantages for modern photonic sources [1], optoelectronics [2], and photovoltaics [3], such as strong direct interband transitions resulting in high absorption in the visible range (α > 10^5 cm^-1) and efficient luminescence, whereas long carrier lifetimes (up to the microsecond scale) yield long carrier diffusion lengths [4]. Moreover, it is a solution-processed semiconductor whose thin films form at low temperatures, which considerably simplifies the fabrication of functional devices. Recently, this material was proposed as a novel and relatively efficient source of terahertz (THz) emission after above-bandgap pumping with femtosecond laser pulses [5], [6], [7]. The photo-Dember [5], bulk photovoltaic [6], and surface depletion field [7] effects were proposed as the main mechanisms responsible for THz generation in MAPbI3. The generation of THz radiation from a lead bromide perovskite via the shift-current mechanism upon two-photon nonlinear excitation was also demonstrated very recently [8]. However, the exact physical origin of THz emission in perovskites and the relative contributions of the different effects are still a matter of debate.
Relatively efficient emission of THz radiation was recently demonstrated, with an electric field amplitude only one order of magnitude lower than that of InAs [7], paving the way for the creation of cheap, compact, and efficient THz sources. Further optimization of the THz emission efficiency can be achieved by employing advanced nanophotonic structures, as has been done with various semiconductors [9], [10], [11], [12], [13], [14], [15], [16]. In turn, halide perovskite nanophotonics [17], [18] and metaoptics [19] are rapidly developing platforms for boosting the efficiency of various optical and optoelectronic devices, from lasers to solar cells. In particular, periodic arrays of perovskite nanostructures have been employed for luminescence enhancement [20], [21], [22], [23], improvement of solar cell efficiency [24], and optimization of light outcoupling from perovskite light-emitting devices [25]. Halide perovskites integrated with various plasmonic structures have also been employed for efficient modulation of THz pulses [26], [27], [28], [29]. Moreover, halide perovskites are soft enough to enable nanoimprint lithography (NIL), one of the highest-throughput methods for functional patterning of perovskites [30], which is hard to exploit for conventional semiconductors (e.g. III–V compounds) that are the most efficient sources of THz radiation.

In this work, we propose an approach to enhance THz emission from a halide perovskite thin film by means of its nanopatterning. We show experimentally that the THz field amplitude can be enhanced up to 1.75 times (or up to 3.5 times for the THz intensity), with a strong dependence on the polarization of the incident light. Our numerical simulations show that the nanostructure supports lateral near-field gradients in the sub-surface layer, strengthening the contribution to THz generation via transient spatial separation of free carriers. The applied NIL approach to perovskite nanopatterning paves the way for the fabrication of cost-efficient THz emitters.

## 2.1 Perovskite synthesis

Lead(II) iodide (PbI2, 99.99%, TCI), methylammonium iodide (MAI, 99.8%, Dyesol), dimethylsulfoxide (DMSO, 99.8%, anhydrous, Alfa Aesar), N,N-dimethylformamide (DMF, 99.8%, anhydrous, Sigma-Aldrich), and diethyl ether (95%, Vecton) were used as received. PbI2 (1 mmol, 461 mg) and MAI (1 mmol, 159 mg) were mixed and dissolved in DMF (700 mg) and DMSO (70 mg) by shaking for 5 min to give a 1.25 M perovskite precursor solution. The solution was filtered using a 0.45 μm PTFE syringe filter. All manipulations were carried out in a N2-filled glove box with H2O and O2 levels not exceeding 1 ppm.

## 2.2 Thin film deposition

Glass substrates of 1.5×1.5 cm size were cleaned mechanically with sodium bicarbonate and then sonicated sequentially in acetone and 2-propanol (IPA) for 5 min. Before spin-coating, the substrates were treated with ozone for 3 min to eliminate residual surface contaminants. The substrates were then evenly covered with 35 μl of the perovskite ink and spun for 55 s. The spin-coating procedure consisted of two steps: (i) 1000 rpm for 10 s; (ii) 2000 rpm for 45 s. One milliliter of diethyl ether was dripped on top of the precursor layer at the 15 s mark to precipitate the perovskite in the form of a polycrystalline thin film. After spin-coating, the substrates were annealed on a hot plate for 30 min at 50°C.

## 2.3 Nanoimprint lithography

After annealing, the samples were subjected to the NIL process. A standard commercial optical DVD disc was used as the master mold.
The mold was prepared in the following way: the plastic part of the disc was separated from the part covered with metal foil, and the latter was cut into 1×1 cm² pieces suitable for the NIL procedure. At room temperature, a pressure of 1 ton per cm² was applied to the mold lying on the perovskite film. To obtain a high-quality grating, the NIL procedure should be carried out within a certain time window, when the concentration of DMF/DMSO remaining in the MAPbI3 film is small enough to avoid removal of the perovskite material from the substrate together with the mold, yet the remaining solvent fraction is still sufficient for surface modification at pressures at which the substrate does not crack.

## 2.4 Samples characterization

The morphology of the perovskite films before and after the NIL process was studied with a scanning electron microscope (SEM, Zeiss Auriga FIB-SEM) in back-scattered electron mode. The accelerating voltage was kept at 5 kV to avoid damage or degradation of the perovskite under the electron beam.

## 2.5 THz experiments

For detection of THz radiation, we used a reflection geometry (angle of incidence of 45° to the surface) with phase-sensitive electro-optic sampling. As a pump source we used a laser with a 35 fs pulse duration and a wavelength of 400 nm (second harmonic generated in a BBO crystal) at a 1 kHz repetition rate; the beam area was 0.5 cm². A 1-mm-thick ZnTe crystal was employed for electro-optic sampling. It should be noted that the experimental setup was not purged with nitrogen. The experimental setup is shown in more detail in Section S1 of the Supporting Information.

## 2.6 Transient photocurrent measurements

To induce photocurrents in the fabricated samples, we employed the second harmonic of the output of an amplified Ti:Sapphire system delivering 40 fs, 2.5 mJ pulses at a 1 kHz repetition rate and an 800 nm (1.55 eV) fundamental wavelength. The pump fluences used to excite the samples were well below the damage threshold of the initial perovskite layer and did not result in any visible damage during the measurements. In the experiments, the unfocused pump beam with a diameter of 0.5 cm was directed onto the sample surface at an incidence angle of 45°, providing a fluence of around 100 μJ/cm². The induced photocurrents were measured at room temperature and zero bias across conductive electrodes applied to the surface of the perovskite samples, via the voltage drop on the 50 Ω input impedance of a 600 MHz digital oscilloscope connected to the electrodes in a short-circuit scheme.

## 2.7 Numerical simulations

Numerical simulations were performed using the frequency-domain solver of the commercial software CST Microwave Studio. The incidence of a plane wave with λ=400 nm on a MAPbI3 perovskite film [31] with a given profile was considered.

## 3.1 Samples fabrication

For the fabrication of the samples we used NIL, which is one of the highest-throughput methods for nanopatterning perovskite thin films, because a polycrystalline thin layer of MAPbI3 behaves as a soft material, which allowed us to work in a pressure range where the substrate does not crack [30]. Usually, lithographically made silicon molds are employed for surface nanostructuring. However, manufacturing such molds is a complex and expensive process; therefore, the risk of damage during NIL restrains their utilization for large-scale and mass-production applications. In order to make a nanostructure on a perovskite layer, we used the metallic part of an unrecorded commercial DVD disc, representing a 1D grating with a period of 0.75 μm. A schematic illustration of the fabrication process is shown in Figure 1A.
Initially, the 600-nm-thick perovskite film covers the glass substrate evenly (Figure 1B; for details on film synthesis and deposition see Section 2, Methods), and after applying NIL, a grating with a period of ≈0.75 μm, a line width of ≈0.5 μm, and a height of ≈0.1 μm is formed on the film surface (Figure 1C). The samples also show rainbow colors under white light illumination owing to light diffraction on the formed grating (Figure 1A). Such a surface grating is useful not only for standard optical applications, but can also be employed for enhanced THz emission, as schematically shown in Figure 1D.

Figure 1: Samples and concept. (A) Technological steps for the fabrication of nanostructured MAPbI3 films by means of nanoimprint lithography (NIL). SEM images of a perovskite film before (B) and after (C) NIL. (D) Schematic illustration of the principle of THz generation enhancement in the nanostructured perovskite film (upper) as compared with a smooth film (lower).

## 3.2 THz measurements

In order to generate THz emission from the obtained samples, we irradiated them with the second harmonic of a 35 fs amplified Ti:Sapphire laser centered at 400 nm (for details, see Methods). In agreement with previous studies [5], [7], Figure 2A shows that the peak amplitude of the emitted THz radiation is a linear function of the incident laser fluence. Deviation from the linear dependence is observed only at higher pump fluences (>100 μJ/cm²), which are close to the threshold of visible modification of the perovskite. Namely, at higher fluences the THz signal is not stable and the film irreversibly changes color to yellow within approximately 10 minutes, which might be related to decomposition of the MAPbI3 perovskite.

Figure 2: THz measurement results. (A) Experimentally measured dependence of the THz amplitude from smooth (yellow line) and nanoimprinted (NIL, brown and orange lines) MAPbI3 films on the fluence of the femtosecond laser pump. (B) Spectra of the THz pulse generated from the MAPbI3 films at a fluence of 100 μJ/cm², with the same curve colors as in panel (A). (C) Comparison of the THz field amplitude from smooth and nanostructured perovskite films with a commercial InAs slab.

The inset of Figure 2A shows the mutual orientation of the polarization vector of the incident laser field and the groove direction. We considered two incident polarizations for the laser pulse falling on the MAPbI3 film at an angle of incidence of 45°, i.e. with the polarization vector parallel (TE) or perpendicular (TM) to the grooves. The insets of Figure 2B show the generated THz field amplitude in the time domain measured at a fluence of 100 μJ/cm². At ≈1.8 ps relative to an arbitrarily chosen starting point common to all measurements, we observed the build-up of THz emission. The maximum THz field amplitude was observed after ≈3.1 ps. The field oscillations of the THz pulses can be detected until ≈6 ps; after that, we did not detect any measurable signal on the picosecond scale. To analyze the spectrum of the emitted THz radiation, we applied a fast Fourier transform. The resulting spectra are shown in Figure 2B, where the central peaks of the THz emission are located at ≈0.6 THz, consistent with previously reported experiments [5], [7]. We also compared the THz emission from the perovskite with that from an InAs semiconductor, which predominantly generates THz radiation via the photo-Dember effect [32]. Figure 2C compares the THz field amplitudes from the different samples measured in the reflection configuration under identical excitation conditions (fluence of 40 μJ/cm²).
As compared to the InAs slab, the generated THz radiation is ≈3.5 times lower for the smooth perovskite thin film and ≈2 times lower for the nanopatterned sample pumped with TM-polarized light.

## 3.3 Ultrafast photocurrent measurements

To compare the performance of the smooth and NIL perovskite layers, we performed additional photocurrent measurements. We employed the experimental procedure developed in Ref. [6] and described in detail in Section 2, Methods. The experimental geometry is schematically depicted in Figure 3A. Upon excitation with femtosecond pulses, the duration of the transient photocurrent response obtained from the perovskite sample is determined by the bandwidth of the registration system rather than by the internal characteristics of the sample, while its magnitude represents the time-integrated current. Since the photocurrent is triggered by an ultrashort (40 fs) laser pulse and the current dynamics in the perovskite are expected to be on a sub-picosecond time scale [26], [29], the time-varying current J should result in the emission of a THz wave proportional to dJ/dt [33]. Since these photoexcitations occur in the unbiased sample, the transient photocurrents Jx and Jy, or the resultant free-space emission of TE- or TM-polarized THz radiation, respectively, provide information regarding the internal bias near the sample surface.

Figure 3: Photocurrent response. (A) A scheme of the photocurrent experiment. (B) Typical temporal profiles of photocurrent pulses induced in smooth and nanoimprinted perovskite films under excitation with different optical polarizations. Changes in the photocurrent with light polarization: (C) smooth perovskite sample; (D) NIL perovskite sample.

The typical temporal profiles of the photocurrent pulses induced in the smooth and NIL perovskite samples under excitation with different optical polarizations are shown in Figure 3B. As one can see from Figure 3B, both TE and TM polarizations induce a pronounced photocurrent response in the smooth and NIL samples. While the amplitude of the photocurrent signal in the smooth sample oscillates around zero, in the case of the structured sample there is a pronounced enhancement of the response when the polarization vector of the light is perpendicular to the grooves of the imprinted grating. By measuring the voltage drop across two orthogonal pairs of electrodes, we were able to simultaneously measure the components of the induced photocurrent that are longitudinal (Jx) and transverse (Jy) with respect to the plane of light incidence, and likewise with respect to the patterned grooves in the NIL sample. The polarization dependences of the induced Jx and Jy photocurrent components were identified by measuring the peak-to-peak amplitudes of the corresponding photocurrent waveforms while rotating a half-wave (λ/2) plate positioned before the sample, which varied the state of the laser polarization with a period of 90°. The peak amplitude values of the photocurrent signals induced in the smooth and NIL perovskite films as a function of light polarization (rotation of the λ/2 plate) are shown in Figure 3C and D, respectively. Generally, one can see an increased amplitude of the sinusoidal curves for the NIL sample as compared with the smooth one (see the additional description of the measurements in the Supporting Information), which also indicates the enhancement of the photocurrent in the nanostructured perovskite film (a minimal sketch of fitting such a sinusoidal dependence is given below).
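To illustrate how such a 90°-periodic polarization dependence can be quantified, here is a minimal sketch in Python (assuming NumPy and SciPy are available). The numbers below are synthetic and purely illustrative, not the measured data; the cos(4θ) form simply encodes a 90° period in the half-wave plate angle θ.

```python
import numpy as np
from scipy.optimize import curve_fit

def j_model(theta_deg, amp, phase_deg, offset):
    # Peak-to-peak photocurrent vs. half-wave plate angle:
    # a 90-degree period in the waveplate angle corresponds to a cos(4*theta) dependence.
    return amp * np.cos(np.deg2rad(4.0 * theta_deg - phase_deg)) + offset

# Half-wave plate angles (degrees) and synthetic "measured" peak-to-peak amplitudes (arb. units).
theta = np.arange(0.0, 180.0, 10.0)
rng = np.random.default_rng(1)
j_pp = j_model(theta, 0.8, 30.0, 2.0) + 0.05 * rng.standard_normal(theta.size)

# Fit the sinusoidal model and report the modulation depth (amplitude / offset),
# a single number that could be compared between the smooth and NIL samples.
(amp, phase_deg, offset), _ = curve_fit(j_model, theta, j_pp, p0=(0.5, 0.0, float(np.mean(j_pp))))
print(f"amplitude = {amp:.2f}, phase = {phase_deg:.1f} deg, offset = {offset:.2f}")
print(f"modulation depth = {amp / offset:.2f}")
```

For the real measurements, j_pp would be replaced by the peak-to-peak values extracted from the waveforms in Figure 3B.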
## 3.4 Discussion

According to our experimental findings (Figure 2A), the peak amplitude of the emitted THz field (ETHz) from both the smooth and nanostructured samples demonstrates a linear dependence on the excitation laser fluence. This is fully consistent with most of the possible mechanisms of THz emission from semiconductors involving one-photon absorption, which include the photo-Dember and lateral photo-Dember [5], bulk photovoltaic [6], and surface depletion field [7] effects. Indeed, independently of the exact mechanism, the amplitude of the emitted THz field is proportional to the number of photogenerated carriers (ETHz ~ Neh, where Neh is the density of photogenerated carriers) and, therefore, in the case of one-photon absorption, depends linearly on the incident laser fluence. Thus, the linear dependence of the THz field amplitude on the pump fluence is a common feature of THz emission via one-photon absorption and cannot be used by itself to distinguish between different mechanisms of THz emission [33].

In the previously reported work [5], the photo-Dember effect was proposed as the main contribution to the generation of terahertz radiation in MAPbI3. In the photo-Dember mechanism, a transient current (an oscillating dipole) is induced by the spatial difference in the concentrations of photogenerated electrons and holes, and it is directed predominantly normal to the film surface. However, the radiation of such a dipole is directed mostly along the surface, which obstructs the outcoupling of the THz field from the sample; in the absence of factors that tilt the dipole orientation away from the surface normal, the photo-Dember effect would not provide any THz signal in our experimental geometry, where detection is in the direction of the reflected pump laser pulses. Moreover, the Dember effect by itself cannot explain the pronounced polarization dependence clearly observed in our photocurrent experiments (Figure 3C–D). On the other hand, the bulk photovoltaic effect (BPVE) was recently proposed as a possible origin of the ultrafast photocurrent response and THz emission from MAPbI3 [6]. In general, the BPVE can be considered a second-order nonlinear optical effect and would therefore also demonstrate a linear dependence on the excitation laser fluence. A distinct feature of the BPVE is the pronounced dependence of the induced photocurrent direction and amplitude on the polarization and helicity of the pump photons. Moreover, this dependence has a complex form which encodes the symmetry and the electronic band structure features of the emitting material. However, in the current study we resolved a prominent polarization dependence only in the photocurrent experiments rather than in the THz emission. Therefore, separating the contributions of the various effects to the generation process is not a trivial task.

We considered the possible factors that tilt the emitting dipole orientation, leading to a THz radiation pattern with its maximum in the direction of the reflected laser pulse. We note that we do not exclude a joint contribution of several effects, for example via constructive or destructive contributions of the bulk and surface photocurrents associated with the photo-Dember, lateral photo-Dember, and BPVE mechanisms. In order to explain the efficient THz emission in the direction coinciding with the reflection of the pump light, it is necessary to consider factors that increase the electron-hole pair generation efficiency (i.e. increased absorption) or change the dipole orientation (the so-called lateral photo-Dember effect [34]).
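To make explicit where the pump fluence enters this picture, the proportionality chain used in the linearity argument above can be written schematically as follows. This is an illustrative relation rather than a derivation from the cited works; it assumes one-photon absorption and fluence-independent carrier dynamics, and the symbols R, F, ħω, e, and v̄ (surface reflectivity, pump fluence, photon energy, elementary charge, and an effective carrier velocity) are introduced here only for this sketch:

$$N_{eh}\;\propto\;\frac{(1-R)\,F}{\hbar\omega},\qquad J\;\propto\;e\,N_{eh}\,\bar{v},\qquad E_{THz}\;\propto\;\frac{\partial J}{\partial t}\;\;\Rightarrow\;\;E_{THz}\;\propto\;F.$$

Within this schematic picture, the nanostructuring considered below can act both through the absorbed fraction of the pump (effectively increasing Neh) and through the direction of the transient current (tilting the emitting dipole).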
The contribution from the increased electron-hole pair generation in the nanostructured perovskite film can be estimated from numerical calculations of the absorption of a plane wave incident at 45° with a wavelength of 400 nm by means of CST Microwave Studio. In Figure 4A, we show the resulting absorption for the smooth and structured films for different orientations of the polarization of the plane wave with respect to the groove direction. For the modified samples, the absorption at the excitation wavelength increases by about 8% and 9% for perpendicular and parallel polarization, respectively. The calculated increase of absorption correlates with our additional measurements of photoluminescence and photocurrent enhancement from the smooth and nanoimprinted films (for details, see Section S2 in the Supporting Information). Apparently, the increased absorption by itself is not sufficient to explain the 2–3.5-fold enhancement of the THz intensity ($E_{THz}^{2}$), and the near-field structure of the penetrating incident light has to be considered in more detail.

Figure 4: Calculations. (A) Theoretically calculated absorption spectra of a flat MAPbI3 perovskite film with a thickness of 600 nm (yellow) and of the nanostructured film for two orthogonal polarizations of the incident light. (B) Calculated near-field distributions in the sub-surface layer of smooth (1) and nanostructured (2 and 3) MAPbI3 at different polarizations (2–TM and 3–TE) of the incident light.

In order to understand the effect of nanostructuring of the MAPbI3 film surface, we calculated the electric-field distributions with respect to the polarization vector and morphology. The resulting images are shown in Figure 4B, where three cases are presented: the smooth film and the nanostructured film with TE and TM polarizations of the incident light. For the smooth film, the field distribution is homogeneous, with no concentration gradient along the surface; this corresponds to the movement of charges perpendicular to the film surface. In contrast, for the imprinted films the field distribution is inhomogeneous for both polarizations. This means that the areas with local field enhancement have a higher concentration of charges than the surrounding regions, which induces a diffusion current directed towards the regions of lower concentration. For the nanostructured samples, the near-field distribution is non-symmetric in the case of TM polarization for a plane wave incident at 45°. This results in a nonzero angle of the transient current relative to the surface normal in the xy-plane, which leads to a tilting of the induced dipole and an improvement of the THz emission outcoupling. Therefore, such surface nanostructuring can improve the overall efficiency of THz generation owing to both the lateral photo-Dember effect and the enhanced absorption of the incident light.

## 4 Conclusion

To conclude, we have demonstrated a considerable enhancement of the THz signal from a nanostructured MAPbI3 perovskite film upon femtosecond laser irradiation. Our numerical simulations have revealed that the grating on the perovskite not only improves the absorption of the incident laser light, but also changes the direction of the diffusion current of photogenerated carriers owing to the non-symmetric near-field distribution. As a result, nanoimprint lithography can be considered one of the best approaches for large-scale patterning of perovskites aimed at improving their THz emission characteristics.
Our results pave the way for a new application of large-scale perovskite nanostructuring, making halide perovskites competitive with conventional semiconductors for THz emission. Further integration with advanced photonic designs [35] and up-scaling would have great potential for a wide range of applications, from medical imaging to security screening [36].

## Acknowledgement

The authors acknowledge Dr. Komissarenko for SEM measurements and Prof. Yuri Kivshar for fruitful discussions. This work was supported by the Ministry of Education and Science of the Russian Federation (Project 16.8939.2017/8.9, Funder Id: http://dx.doi.org/10.13039/501100003443) for simulations and optical measurements, and by the Russian Science Foundation (Project 19-73-30023, Funder Id: http://dx.doi.org/10.13039/501100006769) for synthesis and characterization. P.O. is thankful to the ITMO Fellowship Program.

## References

• [1] Quan LN, Rand BP, Friend RH, Mhaisalkar SG, Lee T-W, Sargent EH. Perovskites for next-generation optical sources. Chem Rev 2019;119:7444–77.
• [2] Zhao Y, Zhu K. Organic–inorganic hybrid lead halide perovskites for optoelectronic and electronic applications. Chem Soc Rev 2016;45:655–89.
• [3] Nayak PK, Mahesh S, Snaith HJ, Cahen D. Photovoltaic solar cell technologies: analysing the state of the art. Nat Rev Mater 2019;4:269.
• [4] Stranks SD, Eperon GE, Grancini G, et al. Electron-hole diffusion lengths exceeding 1 micrometer in an organometal trihalide perovskite absorber. Science 2013;342:341–4.
• [5] Guzelturk B, Belisle RA, Smith MD, et al. Terahertz emission from hybrid perovskites driven by ultrafast charge separation and strong electron–phonon coupling. Adv Mater 2018;30:1704737.
• [6] Obraztsov PA, Lyashenko D, Chizhov PA, et al. Ultrafast zero-bias photocurrent and terahertz emission in hybrid perovskites. Commun Phys 2018;1:14.
• [7] Ponseca Jr CS, Arlauskas A, Yu H, et al. Pulsed terahertz emission from solution-processed lead iodide perovskite films. ACS Photon 2019;6:1175–81.
• [8] He Y, Su R, Huang Y, et al. High-order shift current induced terahertz emission from inorganic cesium bromine lead perovskite engendered by two-photon absorption. Adv Funct Mater 2019;4:1904694.
• [9] Seo M, Park H, Koo S, et al. Terahertz field enhancement by a metallic nano slit operating beyond the skin-depth limit. Nat Photon 2009;3:152.
• [10] Park S-G, Jin KH, Yi M, Ye JC, Ahn J, Jeong K-H. Enhancement of terahertz pulse emission by optical nanoantenna. ACS Nano 2012;6:2026–31.
• [11] Park S-G, Choi Y, Oh Y-J, Jeong K-H. Terahertz photoconductive antenna with metal nanoislands. Opt Exp 2012;20:25530–5.
• [12] Berry CW, Wang N, Hashemi MR, Unlu M, Jarrahi M. Significant performance enhancement in photoconductive terahertz optoelectronics by incorporating plasmonic contact electrodes. Nat Commun 2013;4:1622.
• [13] Jooshesh A, Smith L, Masnadi-Shirazi M, et al. Nanoplasmonics enhanced terahertz sources. Opt Exp 2014;22:27992–8001.
• [14] Khiabani N, Huang Y, Garcia-Muñoz LE, Shen Y-C, Rivera-Lavado A. A novel sub-THz photomixer with nano-trapezoidal electrodes. IEEE Trans THz Sci Tech 2014;4:501–8.
• [15] Yang S-H, Hashemi MR, Berry CW, Jarrahi M. 7.5% optical-to-terahertz conversion efficiency offered by photoconductive emitters with three-dimensional plasmonic contact electrodes. IEEE Trans THz Sci Tech 2014;4:575–81.
• [16] Yang S-H, Jarrahi M. Frequency-tunable continuous-wave terahertz sources based on GaAs plasmonic photomixers. Appl Phys Lett 2015;107:131111.
• [17] Makarov S, Furasova A, Tiguntseva E, Hemmetter A, Berestennikov A, Pushkarev A, Zakhidov A, Kivshar Y. Halide-perovskite resonant nanophotonics. Adv Opt Mater 2019;7:1800784.
• [18] Zhang Y, Lim C-K, Dai Z, et al. Photonics and optoelectronics using nano-structured hybrid perovskite media and their optical cavities. Phys Rep 2019;795:1–51.
• [19] Berestennikov AS, Voroshilov PM, Makarov SV, Kivshar YS. Active meta-optics and nanophotonics with halide perovskites. Appl Phys Rev 2019;6:031307.
• [20] Gholipour B, Adamo G, Cortecchia D, et al. Organometallic perovskite metasurfaces. Adv Mater 2017;29:1604268.
• [21] Makarov SV, Milichko V, Ushakova EV, et al. Multifold emission enhancement in nanoimprinted hybrid perovskite metasurfaces. ACS Photon 2017;4:728–735.
• [22] Gao Y, Huang C, Hao C, et al. Lead halide perovskite nanostructures for dynamic color display. ACS Nano 2018;12:8847–54.
• [23] Zhang C, Xiao S, Wang Y, et al. Lead halide perovskite-based dynamic metasurfaces. Laser Photon Rev 2019;13:1900079.
• [24] Deng K, Liu Z, Wang M, Li L. Nanoimprinted grating-embedded perovskite solar cells with improved light management. Adv Funct Mater 2019;29:1900830.
• [25] Shen Y, Cheng L-P, Li Y-Q, et al. High-efficiency perovskite light-emitting diodes with synergetic outcoupling enhancement. Adv Mater 2019;31:1901517.
• [26] Manjappa M, Srivastava YK, Solanki A, Kumar A, Sum TC, Singh R. Hybrid lead halide perovskites for ultrasensitive photoactive switching in terahertz metamaterial devices. Adv Mater 2017;29:1605881.
• [27] Chanana A, Zhai Y, Baniya S, Zhang C, Vardeny ZV, Nahata A. Colour selective control of terahertz radiation using two-dimensional hybrid organic inorganic lead-trihalide perovskites. Nat Commun 2017;8:1328.
• [28] Cong L, Srivastava YK, Solanki A, Sum TC, Singh R. Perovskite as a platform for active flexible metaphotonic devices. ACS Photon 2017;4:1595–601.
• [29] Chanana A, Liu X, Zhang C, Vardeny ZV, Nahata A. Ultrafast frequency-agile terahertz devices using methylammonium lead halide perovskites. Sci Adv 2018;4:eaar7353.
• [30] Wang H, Haroldson R, Balachandran B, et al. Nanoimprinted perovskite nanograting photodetector with improved efficiency. ACS Nano 2016;10:10921–8.
• [31] Phillips LJ, Rashed AM, Treharne RE, et al. Dispersion relation data for methylammonium lead triiodide perovskite deposited on a (100) silicon wafer using a two-step vapour-phase reaction process. Data in Brief 2015;5:926–8.
• [32] Lewis RA. A review of terahertz sources. J Phys D: Appl Phys 2014;47:374001.
• [33] Huang Y, Yao Z, He C, et al. Terahertz surface and interface emission spectroscopy for advanced materials. J Phys: Condens Matter 2019;31:153001.
• [34] Klatt G, Hilser F, Qiao W, et al. Terahertz emission from lateral photo-Dember currents. Opt Exp 2010;18:4939–47.
• [35] Chen H-T, O'Hara JF, Azad AK, Taylor AJ. Manipulation of terahertz radiation using metamaterials. Laser Photon Rev 2011;5:513–33.
• [36] Jepsen PU, Cooke DG, Koch M. Terahertz spectroscopy and imaging – modern techniques and applications. Laser Photon Rev 2011;5:124–66.

## Supplementary Material

Revised: 2019-11-18 Accepted: 2019-11-27 Published Online: 2019-12-27

Citation Information: Nanophotonics, Volume 9, Issue 1, Pages 187–194, ISSN (Online) 2192-8614
My next successful quadratic surface was an ellipsoid. I simply imported this surface from Mathematica and then added equations to it, using the same process as described in my post on the hyperboloid of one sheet. The first ellipsoid I made was $$\frac{x^2}{16}+\frac{y^2}{25}+\frac{z^2}{4}=1$$. When I put my .STL file into the MakerBot Desktop program, I noticed that the program created supports that went up to the equation, because it was on a curved surface and cut into the object. I decided to print the object with these supports, with the equation on the side of the ellipsoid, and no raft. I cancelled the first print of the ellipsoid early on so that I could inspect the sides. The surface looked a bit melty where the filament had shrunk. We decided this was fine and tried to print it again. The second time I printed this object, at about halfway through the print it fell over, and we found it covered in a stringy mess of filament. Given this failure and the meltiness of the first ellipsoid, we decided to create a new ellipsoid that was a little rounder. I followed the same process as for the first ellipsoid, using the equation $$\frac{x^2}{4}+\frac{y^2}{6}+\frac{z^2}{3}=1.$$ This time when I printed it, I decided to put the equations on the top of the ellipsoid and use a raft. It printed perfectly. After my success with my second ellipsoid, I decided to try to print my first ellipsoid again, this time with a raft. The object never fell over and printed perfectly. These ellipsoids can be found on Thingiverse here and here.
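The workflow above goes through Mathematica and an STL export; as a rough alternative, the same ellipsoid can be meshed directly from its implicit equation with a short script. This is only a minimal sketch, assuming Python with NumPy, scikit-image (for marching cubes), and numpy-stl installed; the grid resolution and output file name are arbitrary choices for this sketch, not part of the original workflow.

```python
import numpy as np
from skimage import measure   # marching cubes; assumes a recent scikit-image
from stl import mesh          # numpy-stl

# Implicit ellipsoid x^2/16 + y^2/25 + z^2/4 = 1 (semi-axes 4, 5, 2), sampled on a grid
# slightly larger than the ellipsoid itself.
n = 80
x = np.linspace(-4.5, 4.5, n)
y = np.linspace(-5.5, 5.5, n)
z = np.linspace(-2.5, 2.5, n)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
F = X**2 / 16 + Y**2 / 25 + Z**2 / 4 - 1

# Extract the F = 0 isosurface as a triangle mesh.
spacing = (x[1] - x[0], y[1] - y[0], z[1] - z[0])
verts, faces, _, _ = measure.marching_cubes(F, level=0.0, spacing=spacing)
verts += np.array([x[0], y[0], z[0]])  # shift vertices back to model coordinates

# Pack the triangles into an STL file readable by MakerBot Desktop or any slicer.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surface.vectors[:] = verts[faces]
surface.save("ellipsoid.stl")
```

The resulting ellipsoid.stl can then be opened in a slicer just like the Mathematica export; adding the equation text to the surface would still need to be done in a separate modeling step.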
G3: Genes|Genomes|Genetics publishes high-quality, valuable findings, regardless of perceived impact. G3 publishes foundational research that generates useful genetic and genomic information, such as genome maps, single gene studies, QTL studies, mutant screens and advances in methods and technology, novel mutant collections, genome-wide association studies (GWAS) including gene expression, SNP, and CNV studies, exome sequences related to a specific disease but lacking functional follow-up, personal exome and genome sequencing case, disease, and population reports, and more.

Conceived by the Genetics Society of America, with its first issue published June 2011, G3 is fully open access. G3 uses a Creative Commons license that allows the most free use of the data, which anyone can download, analyze, mine, and reuse, provided that the authors of the article receive credit. GSA believes that rapid dissemination of useful data is the necessary foundation for analysis that leads to mechanistic insights. It is our hope that this strategy will spawn new discovery.

Like GENETICS, G3 is fast, with a 31-day turnaround time from submission to first decision and rapid time to publication. And like GENETICS, G3 manuscripts are thoroughly peer-reviewed, with careful decisions made by practicing scientists. Before publication, G3 articles receive a thorough copy-edit, ensuring that articles enjoy maximum clarity and impact.

Thomson Reuters JCR Impact Factor (2014): 3.198 EigenFactor (2014): 0.00978 Cited Half-life (2014): 2.1 years

What's Inside the Current Issue of G3    Thursday, October 13 2016 09:06:55 AM Multiparental Populations: A Call for Papers Thursday, October 13 2016 09:06:55 AM A Drosophila LexA Enhancer-Trap Resource for Developmental Biology and Neuroendocrine Research Kockel, L., Huq, L. M., Ayyar, A., Herold, E., MacAlpine, E., Logan, M., Savvides, C., Kim, G. E. S., Chen, J., Clark, T., Duong, T., Fazel-Rezai, V., Havey, D., Han, S., Jagadeesan, R., Kim, E. S. J., Lee, D., Lombardo, K., Piyale, I., Shi, H., Stahr, L., Tung, D., Tayvah, U., Wang, F., Wang, J.-H., Xiao, S., Topper, S. M., Park, S., Rotondo, C., Rankin, A. E., Chisholm, T. W., Kim, S. K. Novel binary gene expression tools like the LexA-LexAop system could powerfully enhance studies of metabolism, development, and neurobiology in Drosophila. However, specific LexA drivers for neuroendocrine cells and many other developmentally relevant systems remain limited. In a unique high school biology course, we generated a LexA-based enhancer trap collection by transposon mobilization. The initial collection provides a source of novel LexA-based elements that permit targeted gene expression in the corpora cardiaca, cells central for metabolic homeostasis, and other neuroendocrine cell types. The collection further contains specific LexA drivers for stem cells and other enteric cells in the gut, and other developmentally relevant tissue types. We provide detailed analysis of nearly 100 new LexA lines, including molecular mapping of insertions, description of enhancer-driven reporter expression in larval tissues and adult neuroendocrine cells, and comparison with established enhancer trap collections and tissue-specific RNAseq.
Generation of this open-resource LexA collection facilitates neuroendocrine and developmental biology investigations, and shows how empowering secondary school science can achieve research and educational goals. Thursday, October 13 2016 09:06:55 AM Tandem Duplication Events in the Expansion of the Small Heat Shock Protein Gene Family in Solanum lycopersicum (cv. Heinz 1706) Krsticevic, F. J., Arce, D. P., Ezpeleta, J., Tapia, E. In plants, fruit maturation and oxidative stress can induce small heat shock protein (sHSP) synthesis to maintain cellular homeostasis. Although the tomato reference genome was published in 2012, the actual number and functionality of sHSP genes remain unknown. Using a transcriptomic (RNA-seq) and evolutionary genomic approach, putative sHSP genes in the Solanum lycopersicum (cv. Heinz 1706) genome were investigated. An sHSP gene family of 33 members was established. Remarkably, roughly half of the members of this family can be explained by nine independent tandem duplication events that determined, evolutionarily, their functional fates. Within a mitochondrial class subfamily, only one duplicated member, Solyc08g078700, retained its ancestral chaperone function, while the others, Solyc08g078710 and Solyc08g078720, likely degenerated under neutrality and lack ancestral chaperone function. Functional conservation occurred within a cytosolic class I subfamily, whose four members, Solyc06g076570, Solyc06g076560, Solyc06g076540, and Solyc06g076520, support ~57% of the total sHSP mRNA in the red ripe fruit. Subfunctionalization occurred within a new subfamily, whose two members, Solyc04g082720 and Solyc04g082740, show heterogeneous differential expression profiles during fruit ripening. These findings, involving the birth/death of some genes or the preferential/plastic expression of some others during fruit ripening, highlight the importance of tandem duplication events in the expansion of the sHSP gene family in the tomato genome. Despite its evolutionary diversity, the sHSP gene family in the tomato genome seems to be endowed with a core set of four homeostasis genes: Solyc05g014280, Solyc03g082420, Solyc11g020330, and Solyc06g076560, which appear to provide baseline protection during both fruit ripening and heat shock stress in different tomato tissues. Thursday, October 13 2016 09:06:55 AM Genome-Wide Analysis of Polyadenylation Events in Schmidtea mediterranea Lakshmanan, V., Bansal, D., Kulkarni, J., Poduval, D., Krishna, S., Sasidharan, V., Anand, P., Seshasayee, A., Palakodeti, D. In eukaryotes, 3' untranslated regions (UTRs) play important roles in regulating posttranscriptional gene expression. The 3'UTR is defined by regulated cleavage/polyadenylation of the pre-mRNA. The advent of next-generation sequencing technology has now enabled us to identify these events on a genome-wide scale. In this study, we used poly(A)-position profiling by sequencing (3P-Seq) to capture all poly(A) sites across the genome of the freshwater planarian, Schmidtea mediterranea, an ideal model system for exploring the process of regeneration and stem cell function. We identified the 3'UTRs for ~14,000 transcripts and thus improved the existing gene annotations. We found 97 transcripts that are polyadenylated within an internal exon, resulting in the shrinking of the ORF and loss of a predicted protein domain. Around 40% of the transcripts in planaria were alternatively polyadenylated (ApA), resulting either in an altered 3'UTR or a change in coding sequence.
We identified specific ApA transcript isoforms that were subjected to miRNA mediated gene regulation using degradome sequencing. In this study, we also confirmed a tissue-specific expression pattern for alternate polyadenylated transcripts. The insights from this study highlight the potential role of ApA in regulating the gene expression essential for planarian regeneration. Thursday, October 13 2016 09:06:55 AM Characterization of a Novel MMS-Sensitive Allele of Schizosaccharomyces pombe mcm4+ Ranatunga, N. S., Forsburg, S. L. The minichromosome maintenance (MCM) complex is the conserved helicase motor of the eukaryotic replication fork. Mutations in the Mcm4 subunit are associated with replication stress and double strand breaks in multiple systems. In this work, we characterize a new temperature-sensitive allele of Schizosaccharomyces pombe mcm4+. Uniquely among known mcm4 alleles, this mutation causes sensitivity to the alkylation damaging agent methyl methanesulfonate (MMS). Even in the absence of treatment or temperature shift, mcm4-c106 cells show increased repair foci of RPA and Rad52, and require the damage checkpoint for viability, indicating genome stress. The mcm4-c106 mutant is synthetically lethal with mutations disrupting fork protection complex (FPC) proteins Swi1 and Swi3. Surprisingly, we found that the deletion of rif1+ suppressed the MMS-sensitive phenotype without affecting temperature sensitivity. Together, these data suggest that mcm4-c106 destabilizes replisome structure. Thursday, October 13 2016 09:06:55 AM The Evolution of the FT/TFL1 Genes in Amaranthaceae and Their Expression Patterns in the Course of Vegetative Growth and Flowering in Chenopodium rubrum Drabešova, J., Černa, L., Mašterova, H., Kolouškova, P., Potocky, M., Štorchova, H. The FT/TFL1 gene family controls important aspects of plant development: MFT-like genes affect germination, TFL1-like genes act as floral inhibitors, and FT-like genes are floral activators. Gene duplications produced paralogs with modified functions required by the specific lifestyles of various angiosperm species. We constructed the transcriptome of the weedy annual plant Chenopodium rubrum and used it for the comprehensive search for the FT/TFL1 genes. We analyzed their phylogenetic relationships across Amaranthaceae and all angiosperms. We discovered a very ancient phylogenetic clade of FT genes represented by the CrFTL3 gene of C. rubrum. Another paralog CrFTL2 showed an unusual structural rearrangement which might have contributed to the functional shift. We examined the transcription patterns of the FT/TFL1 genes during the vegetative growth and floral transition in C. rubrum to get clues about their possible functions. All the genes except for the constitutively expressed CrFTL2 gene, and the CrFTL3 gene, which was transcribed only in seeds, exhibited organ-specific expression influenced by the specific light regime. The CrFTL1 gene was confirmed as a single floral activator from the FT/TFL1 family in C. rubrum. Its floral promoting activity may be counteracted by CrTFL1. C. rubrum emerges as an easily manipulated model for the study of floral induction in weedy fast-cycling plants lacking a juvenile phase. Thursday, October 13 2016 09:06:55 AM Php4 Is a Key Player for Iron Economy in Meiotic and Sporulating Cells Brault, A., Rallis, C., Normant, V., Garant, J.-M., Bahler, J., Labbe, S. Meiosis is essential for sexually reproducing organisms, including the fission yeast Schizosaccharomyces pombe. 
In meiosis, chromosomes replicate once in a diploid precursor cell (zygote), and then segregate twice to generate four haploid meiotic products, named spores in yeast. In S. pombe, Php4 is responsible for the transcriptional repression capability of the heteromeric CCAAT-binding factor to negatively regulate genes encoding iron-using proteins under low-iron conditions. Here, we show that the CCAAT-regulatory subunit Php4 is required for normal progression of meiosis under iron-limiting conditions. Cells lacking Php4 exhibit a meiotic arrest at metaphase I. Microscopic analyses of cells expressing functional GFP-Php4 show that it colocalizes with chromosomal material at every stage of meiosis under low concentrations of iron. In contrast, GFP-Php4 fluorescence signal is lost when cells undergo meiosis under iron-replete conditions. Global gene expression analysis of meiotic cells using DNA microarrays identified 137 genes that are regulated in an iron- and Php4-dependent manner. Among them, 18 genes are expressed exclusively during meiosis and constitute new putative Php4 target genes, which include hry1+ and mug14+. Further analysis validates that Php4 is required for maximal and timely repression of hry1+ and mug14+ genes. Using a chromatin immunoprecipitation approach, we show that Php4 specifically associates with hry1+ and mug14+ promoters in vivo. Taken together, the results reveal that in iron-starved meiotic cells, Php4 is essential for completion of the meiotic program since it participates in global gene expression reprogramming to optimize the use of limited available iron. Thursday, October 13 2016 09:06:55 AM Genome Evolution in Three Species of Cactophilic Drosophila Sanchez-Flores, A., Penaloza, F., Carpinteyro-Ponce, J., Nazario-Yepiz, N., Abreu-Goodger, C., Machado, C. A., Markow, T. A. We report genomes of two species of cactophilic Drosophila: Drosophila arizonae and D. navojoa. These two are the closest relatives of D. mojavensis, forming the D. mojavensis cluster. D. mojavensis and D. arizonae diverged from D. navojoa ~5.8 Mya, while the split between D. arizonae and D. mojavensis is more recent, at 1.5 Mya. Together the three genomes provide opportunities to examine genomic changes associated with speciation and host shifts in this ecologically defined group of flies. The three species are also separated by fixed inversion differences in three of their six chromosomes. While the levels of nucleotide divergence in the colinear chromosomes are significantly lower than in the inverted chromosomes, consistent with a past role of the inversions in preventing gene flow, the patterns differ among the inverted chromosomes when the locations of nucleotides inside or outside of the inversions are considered. For Muller element E, there is greater divergence external to the inversion breakpoints. For Muller A, the divergence is slightly higher inside the inversions, while for Muller B, the breakpoints and hence the difference in substitutions in relation to the inversions could not be determined. The differences among the inverted chromosomes, especially once the breakpoints are clearly established, could aid in dating the origins of the inversions. Thursday, October 13 2016 09:06:55 AM Cross-Validation Without Doing Cross-Validation in Genome-Enabled Prediction Gianola, D., Schon, C.-C. Cross-validation of methods is an essential component of genome-enabled prediction of complex traits. 
We develop formulae for computing the predictions that would be obtained when one or several cases are removed in the training process, to become members of testing sets, but by running the model using all observations only once. Prediction methods to which the developments apply include least squares, best linear unbiased prediction (BLUP) of markers, or genomic BLUP, reproducing kernel Hilbert spaces regression with single or multiple kernel matrices, and any member of a suite of linear regression methods known as the "Bayesian alphabet." The approach used for Bayesian models is based on importance sampling of posterior draws. Proof of concept is provided by applying the formulae to a wheat data set representing 599 inbred lines genotyped for 1279 markers, and the target trait was grain yield. The data set was used to evaluate predictive mean-squared error, impact of alternative layouts on maximum likelihood estimates of regularization parameters, model complexity, and residual degrees of freedom stemming from various strengths of regularization, as well as two forms of importance sampling. Our results will facilitate carrying out extensive cross-validation without model retraining for most machines employed in genome-assisted prediction of quantitative traits. Thursday, October 13 2016 09:06:55 AM Cryptic Genetic Variation for Arabidopsis thaliana Seed Germination Speed in a Novel Salt Stress Environment Yuan, W., Flowers, J. M., Sahraie, D. J., Purugganan, M. D. The expansion of species ranges frequently necessitates responses to novel environments. In plants, the ability of seeds to disperse to marginal areas relies in part on their ability to germinate under stressful conditions. Here we examine the genetic architecture of Arabidopsis thaliana germination speed under a novel, saline environment, using an Extreme QTL (X-QTL) mapping platform we previously developed. We find that early germination in both normal and salt conditions relies on a QTL on the distal arm of chromosome 4, but we also find unique QTL on chromosomes 1, 2, 4, and 5 that are specific to salt stress environments. Moreover, different QTLs are responsible for early vs. late germination, suggesting a temporal component to the expression of life history under these stress conditions. Our results indicate that cryptic genetic variation exists for responses to a novel abiotic stress, which may suggest a role of such variation in adaptation to new climatic conditions or growth environments. Thursday, October 13 2016 09:06:55 AM Interallelic Transcriptional Enhancement as an in Vivo Measure of Transvection in Drosophila melanogaster Noble, G. P., Dolph, P. J., Supattapone, S. Transvection—pairing-dependent interallelic regulation resulting from enhancer action in trans—occurs throughout the Drosophila melanogaster genome, likely as a result of the extensive somatic homolog pairing seen in Dipteran species. Recent studies of transvection in Drosophila have demonstrated important qualitative differences between enhancer action in cis vs. in trans, as well as a modest synergistic effect of cis- and trans-acting enhancers on total tissue transcript levels at a given locus. In the present study, we identify a system in which cis- and trans-acting GAL4-UAS enhancer synergism has an unexpectedly large quantitative influence on gene expression, boosting total tissue transcript levels at least fourfold relative to those seen in the absence of transvection.
We exploit this strong quantitative effect by using publicly available UAS-shRNA constructs from the TRiP library to assay candidate genes for transvection activity in vivo. The results of the present study, which demonstrate that in trans activation by simple UAS enhancers can have large quantitative effects on gene expression in Drosophila, have important new implications for experimental design utilizing the GAL4-UAS system. Thursday, October 13 2016 09:06:55 AM Glucose or Altered Ceramide Biosynthesis Mediate Oxygen Deprivation Sensitivity Through Novel Pathways Revealed by Transcriptome Analysis in Caenorhabditis elegans Ladage, M. L., King, S. D., Burks, D. J., Quan, D. L., Garcia, A. M., Azad, R. K., Padilla, P. A. Individuals with type 2 diabetes display metabolic abnormalities, such as hyperglycemia, increased free fatty acids, insulin resistance, and altered ceramide levels, that contribute to vascular dysfunctions and compromised oxygen delivery. Caenorhabditis elegans fed a glucose-supplemented diet or with altered ceramide metabolism, due to a hyl-2 mutation, are sensitive to oxygen deprivation (anoxia). Our experiments showed that the combination of these factors further decreased the anoxia survival. RNA-sequencing analysis was performed to assess how a glucose-supplemented diet and/or a hyl-2 mutation altered the transcriptome. Comparison analysis of transcripts associated with anoxia-sensitive animals [hyl-2(tm2031) mutation or a glucose diet] revealed 199 common transcripts encoded by genes with known or predicted functions involving innate immunity, cuticle function (collagens), or xenobiotic and endobiotic phase I and II detoxification system. Use of RNA interference (RNAi) to target gene products of the xenobiotic and endobiotic phase I and II detoxification system (UDP-glycosyltransferase and Cytochrome p450 genes; ugt-15, ugt-18, ugt-19, ugt-41, ugt-63, cyp-13A12, cyp-25A1, and cyp-33C8) increased anoxia survival in wild-type animals fed a standard diet. Anoxia sensitivity of the hyl-2(tm2031) animals was suppressed by RNAi of cyp-25A1 or cyp-33C8 genes. A glucose diet fed to the P0 hermaphrodite decreased the anoxia survival of its F1 embryos; however, the RNAi of ugt-63 and cyp-33C8 suppressed anoxia sensitivity. These studies provide evidence that the detoxification system impacts oxygen deprivation responses and that C. elegans can be used to model the conserved detoxification system. Thursday, October 13 2016 09:06:55 AM Efficient CRISPR-Mediated Post-Transcriptional Gene Silencing in a Hyperthermophilic Archaeon Using Multiplexed crRNA Expression Zebec, Z., Zink, I. A., Kerou, M., Schleper, C. CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-mediated RNA degradation is catalyzed by a type III system in the hyperthermophilic archaeon Sulfolobus solfataricus. Earlier work demonstrated that the system can be engineered to target specifically mRNA of an endogenous host reporter gene, namely the β-galactosidase in S. solfataricus. Here, we investigated the effect of single and multiple spacers targeting the mRNA of a second reporter gene, α-amylase, at the same, and at different, locations respectively, using a minimal CRISPR (miniCR) locus supplied on a viral shuttle vector. The use of increasing numbers of spacers reduced mRNA levels at progressively higher levels, with three crRNAs (CRISPR RNAs) leading to ~ 70–80% reduction, and five spacers resulting in an α-amylase gene knockdown of > 90% measured on both mRNA and protein activity levels. 
Our results indicate that this technology can be used to increase or modulate gene knockdown for efficient post-transcriptional gene silencing in hyperthermophilic archaea, and potentially also in other organisms. Thursday, October 13 2016 09:06:55 AM A New Advanced Backcross Tomato Population Enables High Resolution Leaf QTL Mapping and Gene Identification Fulop, D., Ranjan, A., Ofner, I., Covington, M. F., Chitwood, D. H., West, D., Ichihashi, Y., Headland, L., Zamir, D., Maloof, J. N., Sinha, N. R. Quantitative Trait Loci (QTL) mapping is a powerful technique for dissecting the genetic basis of traits and species differences. Established tomato mapping populations between domesticated tomato (Solanum lycopersicum) and its more distant interfertile relatives typically follow a near isogenic line (NIL) design, such as the S. pennellii Introgression Line (IL) population, with a single wild introgression per line in an otherwise domesticated genetic background. Here, we report on a new advanced backcross QTL mapping resource for tomato, derived from a cross between the M82 tomato cultivar and S. pennellii. This so-called Backcrossed Inbred Line (BIL) population comprises a mix of BC2 and BC3 lines, with domesticated tomato as the recurrent parent. The BIL population is complementary to the existing S. pennellii IL population, with which it shares parents. Using the BILs, we mapped traits for leaf complexity, leaflet shape, and flowering time. We demonstrate the utility of the BILs for fine-mapping QTL, particularly QTL initially mapped in the ILs, by fine-mapping several QTL to single or few candidate genes. Moreover, we confirm the value of a backcrossed population with multiple introgressions per line, such as the BILs, for epistatic QTL mapping. Our work was further enabled by the development of our own statistical inference and visualization tools, namely a heterogeneous hidden Markov model for genotyping the lines, and by using state-of-the-art sparse regression techniques for QTL mapping. Thursday, October 13 2016 09:06:55 AM An Eye on Trafficking Genes: Identification of Four Eye Color Mutations in Drosophila Grant, P., Maga, T., Loshakov, A., Singhal, R., Wali, A., Nwankwo, J., Baron, K., Johnson, D. Genes that code for proteins involved in organelle biogenesis and intracellular trafficking produce products that are critical in normal cell function. Conserved orthologs of these are present in most or all eukaryotes, including Drosophila melanogaster. Some of these genes were originally identified as eye color mutants with decreases in both types of pigments found in the fly eye. These criteria were used for the identification of such genes: four eye color mutations that are not annotated in the genome sequence (chocolate, maroon, mahogany, and red Malpighian tubules) were molecularly mapped and their genome sequences evaluated. Mapping was performed using deletion analysis and complementation tests. chocolate is an allele of the VhaAC39-1 gene, which is an ortholog of the Vacuolar H+ ATPase AC39 subunit 1. maroon corresponds to the Vps16A gene and its product is part of the HOPS complex, which participates in transport and organelle fusion. red Malpighian tubule is the CG12207 gene, which encodes a protein of unknown function that includes a LysM domain. mahogany is the CG13646 gene, which is predicted to be an amino acid transporter.
The strategy of identifying eye color genes based on perturbations in quantities of both types of eye color pigments has proven useful in identifying proteins involved in trafficking and biogenesis of lysosome-related organelles. Mutants of these genes can form the basis of valuable in vivo models to understand these processes. Thursday, October 13 2016 09:06:55 AM Rapid Screening for CRISPR-Directed Editing of the Drosophila Genome Using white Coconversion Ge, D. T., Tipping, C., Brodsky, M. H., Zamore, P. D. Adoption of a streamlined version of the bacterial clustered regular interspersed short palindromic repeat (CRISPR)/Cas9 defense system has accelerated targeted genome engineering. The Streptococcus pyogenes Cas9 protein, directed by a simplified, CRISPR-like single-guide RNA, catalyzes a double-stranded DNA break at a specific genomic site; subsequent repair by end joining can introduce mutagenic insertions or deletions, while repair by homologous recombination using an exogenous DNA template can incorporate new sequences at the target locus. However, the efficiency of Cas9-directed mutagenesis is low in Drosophila melanogaster. Here, we describe a strategy that reduces the time and effort required to identify flies with targeted genomic changes. The strategy uses editing of the white gene, evidenced by altered eye color, to predict successful editing of an unrelated gene-of-interest. The red eyes of wild-type flies are readily distinguished from white-eyed (end-joining-mediated loss of White function) or brown-eyed (recombination-mediated conversion to the whitecoffee allele) mutant flies. When single injected G0 flies produce individual G1 broods, flies carrying edits at a gene-of-interest were readily found in broods in which all G1 offspring carried white mutations. Thus, visual assessment of eye color substitutes for wholesale PCR screening of large numbers of G1 offspring. We find that end-joining-mediated mutations often show signatures of microhomology-mediated repair and that recombination-based mutations frequently involve donor plasmid integration at the target locus. Finally, we show that gap repair induced by two guide RNAs more reliably converts the intervening target sequence, whereas the use of Lig4169 mutants to suppress end joining does not improve recombination efficacy. Thursday, October 13 2016 09:06:55 AM Whole-Genome Sequencing and iPLEX MassARRAY Genotyping Map an EMS-Induced Mutation Affecting Cell Competition in Drosophila melanogaster Lee, C.-H., Rimesso, G., Reynolds, D. M., Cai, J., Baker, N. E. Cell competition, the conditional loss of viable genotypes only when surrounded by other cells, is a phenomenon observed in certain genetic mosaic conditions. We conducted a chemical mutagenesis and screen to recover new mutations that affect cell competition between wild-type and RpS3 heterozygous cells. Mutations were identified by whole-genome sequencing, making use of software tools that greatly facilitate the distinction between newly induced mutations and other sources of apparent sequence polymorphism, thereby reducing false-positive and false-negative identification rates. In addition, we utilized iPLEX MassARRAY for genotyping recombinant chromosomes. These approaches permitted the mapping of a new mutation affecting cell competition when only a single allele existed, with a phenotype assessed only in genetic mosaics, without the benefit of complementation with existing mutations, deletions, or duplications. 
These techniques expand the utility of chemical mutagenesis and whole-genome sequencing for mutant identification. We discuss mutations in the Atm and Xrp1 genes identified in this screen. Thursday, October 13 2016 09:06:55 AM The Genetic Architecture of Noise-Induced Hearing Loss: Evidence for a Gene-by-Environment Interaction Lavinsky, J., Ge, M., Crow, A. L., Pan, C., Wang, J., Salehi, P., Myint, A., Eskin, E., Allayee, H., Lusis, A. J., Friedman, R. A. The discovery of environmentally specific genetic effects is crucial to the understanding of complex traits, such as susceptibility to noise-induced hearing loss (NIHL). We describe the first genome-wide association study (GWAS) for NIHL in a large and well-characterized population of inbred mouse strains, known as the Hybrid Mouse Diversity Panel (HMDP). We recorded auditory brainstem response (ABR) thresholds both pre and post 2-hr exposure to 10-kHz octave band noise at 108 dB sound pressure level in 5–6-wk-old female mice from the HMDP (4–5 mice/strain). From the observation that NIHL susceptibility varied among the strains, we performed a GWAS with correction for population structure and mapped a locus on chromosome 6 that was statistically significantly associated with two adjacent frequencies. We then used a "genetical genomics" approach that included the analysis of cochlear eQTLs to identify candidate genes within the GWAS QTL. In order to validate the gene-by-environment interaction, we compared the effects of the postnoise exposure locus with that from the same unexposed strains. The most significant SNP at chromosome 6 (rs37517079) was associated with noise susceptibility, but was not significant at the same frequencies in our unexposed study. These findings demonstrate that the genetic architecture of NIHL is distinct from that of unexposed hearing levels and provide strong evidence for gene-by-environment interactions in NIHL. Thursday, October 13 2016 09:06:55 AM Ctr9, a Key Component of the Paf1 Complex, Affects Proliferation and Terminal Differentiation in the Developing Drosophila Nervous System Bahrampour, S., Thor, S. The Paf1 protein complex (Paf1C) is increasingly recognized as a highly conserved and broadly utilized regulator of a variety of transcriptional processes. These include the promotion of H3K4 and H3K36 trimethylation, H2BK123 ubiquitination, RNA Pol II transcriptional termination, and also RNA-mediated gene silencing. Paf1C contains five canonical protein components, including Paf1 and Ctr9, which are critical for overall complex integrity, as well as Rtf1, Leo1, and Cdc73/Parafibromin(Hrpt2)/Hyrax. In spite of a growing appreciation for the importance of Paf1C from yeast and mammalian studies, there has only been limited work in Drosophila. Here, we provide the first detailed phenotypic study of Ctr9 function in Drosophila. We found that Ctr9 mutants die at late embryogenesis or early larval life, but can be partly rescued by nervous system reexpression of Ctr9. We observed a number of phenotypes in Ctr9 mutants, including increased neuroblast numbers, increased nervous system proliferation, as well as downregulation of many neuropeptide genes. Analysis of cell cycle and regulatory gene expression revealed upregulation of the E2f1 cell cycle factor, as well as changes in Antennapedia and Grainy head expression. We also found reduction of H3K4me3 modification in the embryonic nervous system. 
Genome-wide transcriptome analysis points to additional downstream genes that may underlie these Ctr9 phenotypes, revealing gene expression changes in Notch pathway target genes, cell cycle genes, and neuropeptide genes. In addition, we find significant effects on the gene expression of metabolic genes. These findings reveal that Ctr9 is an essential gene that is necessary at multiple stages of nervous system development, and provides a starting point for future studies of the Paf1C in Drosophila. Thursday, October 13 2016 09:06:55 AM Genome-Wide Association Studies with a Genomic Relationship Matrix: A Case Study with Wheat and Arabidopsis Gianola, D., Fariello, M. I., Naya, H., Schon, C.-C. Standard genome-wide association studies (GWAS) scan for relationships between each of p molecular markers and a continuously distributed target trait. Typically, a marker-based matrix of genomic similarities among individuals (G) is constructed, to account more properly for the covariance structure in the linear regression model used. We show that the generalized least-squares estimator of the regression of phenotype on one or on m markers is invariant with respect to whether or not the marker(s) tested is(are) used for building G, provided variance components are unaffected by exclusion of such marker(s) from G. The result is arrived at by using a matrix expression such that one can find many inverses of genomic relationship, or of phenotypic covariance matrices, stemming from removing markers tested as fixed, but carrying out a single inversion. When eigenvectors of the genomic relationship matrix are used as regressors with fixed regression coefficients, e.g., to account for population stratification, their removal from G does matter. Removal of eigenvectors from G can have a noticeable effect on estimates of genomic and residual variances, so caution is needed. Concepts were illustrated using genomic data on 599 wheat inbred lines, with grain yield as target trait, and on close to 200 Arabidopsis thaliana accessions. Thursday, October 13 2016 09:06:55 AM Regulation of the MEI-1/MEI-2 Microtubule-Severing Katanin Complex in Early Caenorhabditis elegans Development Beard, S. M., Smit, R. B., Chan, B. G., Mains, P. E. Thursday, October 13 2016 09:06:55 AM Characterization of the Far Transcription Factor Family in Aspergillus flavus Luo, X., Affeldt, K. J., Keller, N. P. Metabolism of fatty acids is a critical requirement for the pathogenesis of oil seed pathogens including the fungus Aspergillus flavus. Previous studies have correlated decreased ability to grow on fatty acids with reduced virulence of this fungus on host seed. Two fatty acid metabolism regulatory transcription factors, FarA and FarB, have been described in other filamentous fungi. Unexpectedly, we find A. flavus possesses three Far homologs, FarA, FarB, and FarC, with FarA and FarC showing a greater protein similarity to each other than FarB. farA and farB are located in regions of colinearity in all Aspergillus spp. sequenced to date, whereas farC is limited to a subset of species where it is inserted in an otherwise colinear region in Aspergillus genomes. Deletion and overexpression (OE) of farA and farB, but not farC, yielded mutants with aberrant growth patterns on specific fatty acids as well as altered expression of genes involved in fatty acid metabolism. Marked differences included significant growth defects of both farA and farB on medium-chain fatty acids and decreased growth of OE::farA on unsaturated fatty acids. 
Loss of farA diminished expression of mitochondrial β-oxidation genes whereas OE::farA inhibited expression of genes involved in unsaturated fatty acid catabolism. FarA also positively regulated the desaturase genes required to generate polyunsaturated fatty acids. Aflatoxin production on toxin-inducing media was significantly decreased in the farB mutant and increased in the OE::farB mutant, with gene expression data supporting a role for FarB in tying β-oxidation processes with aflatoxin accumulation. Thursday, October 13 2016 09:06:55 AM Head Transcriptomes of Two Closely Related Species of Fruit Flies of the Anastrepha fraterculus Group Reveals Divergent Genes in Species with Extensive Gene Flow Rezende, V. B., Congrains, C., Lima, A. L. A., Campanini, E. B., Nakamura, A. M., Oliveira, J. L. d., Chahad-Ehlers, S., Junior, I. S., Alves de Brito, R. Several fruit flies species of the Anastrepha fraterculus group are of great economic importance for the damage they cause to a variety of fleshy fruits. Some species in this group have diverged recently, with evidence of introgression, showing similar morphological attributes that render their identification difficult, reinforcing the relevance of identifying new molecular markers that may differentiate species. We investigated genes expressed in head tissues from two closely related species: A. obliqua and A. fraterculus, aiming to identify fixed single nucleotide polymorphisms (SNPs) and highly differentiated transcripts, which, considering that these species still experience some level of gene flow, could indicate potential candidate genes involved in their differentiation process. We generated multiple libraries from head tissues of these two species, at different reproductive stages, for both sexes. Our analyses indicate that the de novo transcriptome assemblies are fairly complete. We also produced a hybrid assembly to map each species’ reads, and identified 67,470 SNPs in A. fraterculus, 39,252 in A. obliqua, and 6386 that were common to both species. We identified 164 highly differentiated unigenes that had a mean interspecific index ($$\overline{D}$$) of at least 0.94. We selected unigenes that had Ka/Ks higher than 0.5, or had at least three or more highly differentiated SNPs as potential candidate genes for species differentiation. Among these candidates, we identified proteases, regulators of redox homeostasis, and an odorant-binding protein (Obp99c), among other genes. The head transcriptomes described here enabled the identification of thousands of genes hitherto unavailable for these species, and generated a set of candidate genes that are potentially important to genetically identify species and understand the speciation process in the presence of gene flow of A. obliqua and A. fraterculus. Thursday, October 13 2016 09:06:55 AM Sporadic Gene Loss After Duplication Is Associated with Functional Divergence of Sirtuin Deacetylases Among Candida Yeast Species Rupert, C. B., Heltzel, J. M. H., Taylor, D. J., Rusche, L. N. Gene duplication promotes the diversification of protein functions in several ways. Ancestral functions can be partitioned between the paralogs, or a new function can arise in one paralog. These processes are generally viewed as unidirectional. However, paralogous proteins often retain related functions and can substitute for one another. Moreover, in the event of gene loss, the remaining paralog might regain ancestral functions that had been shed. 
To explore this possibility, we focused on the sirtuin deacetylase SIR2 and its homolog HST1 in the CTG clade of yeasts. HST1 has been consistently retained throughout the clade, whereas SIR2 is only present in a subset of species. These NAD+-dependent deacetylases generate condensed chromatin that represses transcription and stabilizes tandemly repeated sequences. By analyzing phylogenetic trees and gene order, we found that a single duplication of the SIR2/HST1 gene occurred, likely prior to the emergence of the CTG clade. This ancient duplication was followed by at least two independent losses of SIR2. Functional characterization of Sir2 and Hst1 in three species revealed that these proteins have not maintained consistent functions since the duplication. In particular, the rDNA locus is deacetylated by Sir2 in Candida albicans, by Hst1 in C. lusitaniae, and by neither paralog in C. parapsilosis. In addition, the subtelomeres in C. albicans are deacetylated by Sir2 rather than by Hst1, which is orthologous to the sirtuin associated with Saccharomyces cerevisiae subtelomeres. These differences in function support the model that sirtuin deacetylases can regain ancestral functions to compensate for gene loss. Thursday, October 13 2016 09:06:55 AM Mutations in the Motile Cilia Gene DNAAF1 Are Associated with Neural Tube Defects in Humans Miao, C., Jiang, Q., Li, H., Zhang, Q., Bai, B., Bao, Y., Zhang, T. Neural tube defects (NTDs) are severe malformations of the central nervous system caused by complex genetic and environmental factors. Among genes involved in NTD, cilia-related genes have been well defined and found to be essential for the completion of neural tube closure (NTC). We have carried out next-generation sequencing on target genes in 373 NTDs and 222 healthy controls, and discovered eight disease-specific rare mutations in cilia-related gene DNAAF1. DNAAF1 plays a central role in cytoplasmic preassembly of distinct dynein-arm complexes, and is expressed in some key tissues involved in neural system development, such as neural tube, floor plate, embryonic node, and brain ependyma epithelial cells in zebrafish and mouse. Therefore, we evaluated the expression and functions of mutations in DNAAF1 in transfected cells to analyze the potential correlation of these mutants to NTDs in humans. One rare frameshift mutation (p.Gln341Argfs*10) resulted in significantly diminished DNAAF1 protein expression, compared to the wild type. Another mutation, p.Lys231Gln, disrupted cytoplasmic preassembly of the dynein-arm complexes in cellular assay. Furthermore, results from NanoString assay on mRNA from NTD samples indicated that DNAAF1 mutants altered the expression level of NTC-related genes. Altogether, these findings suggest that the rare mutations in DNAAF1 may contribute to the susceptibility for NTDs in humans. Thursday, October 13 2016 09:06:55 AM Global Fitness Profiling Identifies Arsenic and Cadmium Tolerance Mechanisms in Fission Yeast Guo, L., Ganguly, A., Sun, L., Suo, F., Du, L.-L., Russell, P. Heavy metals and metalloids such as cadmium [Cd(II)] and arsenic [As(III)] are widespread environmental toxicants responsible for multiple adverse health effects in humans. However, the molecular mechanisms underlying metal-induced cytotoxicity and carcinogenesis, as well as the detoxification and tolerance pathways, are incompletely understood. 
Here, we use global fitness profiling by barcode sequencing to quantitatively survey the Schizosaccharomyces pombe haploid deletome for genes that confer tolerance of cadmium or arsenic. We identified 106 genes required for cadmium resistance and 110 genes required for arsenic resistance, with a highly significant overlap of 36 genes. A subset of these 36 genes account for almost all proteins required for incorporating sulfur into the cysteine-rich glutathione and phytochelatin peptides that chelate cadmium and arsenic. A requirement for Mms19 is explained by its role in directing iron–sulfur cluster assembly into sulfite reductase as opposed to promoting DNA repair, as DNA damage response genes were not enriched among those required for cadmium or arsenic tolerance. Ubiquinone, siroheme, and pyridoxal 5'-phosphate biosynthesis were also identified as critical for Cd/As tolerance. Arsenic-specific pathways included prefoldin-mediated assembly of unfolded proteins and protein targeting to the peroxisome, whereas cadmium-specific pathways included plasma membrane and vacuolar transporters, as well as Spt–Ada–Gcn5-acetyltransferase (SAGA) transcriptional coactivator that controls expression of key genes required for cadmium tolerance. Notable differences are apparent with corresponding screens in the budding yeast Saccharomyces cerevisiae, underscoring the utility of analyzing toxic metal defense mechanisms in both organisms. Thursday, October 13 2016 09:06:55 AM Obp56h Modulates Mating Behavior in Drosophila melanogaster Shorter, J. R., Dembeck, L. M., Everett, L. J., Morozova, T. V., Arya, G. H., Turlapati, L., St. Armour, G. E., Schal, C., Mackay, T. F. C., Anholt, R. R. H. Social interactions in insects are driven by conspecific chemical signals that are detected via olfactory and gustatory neurons. Odorant binding proteins (Obps) transport volatile odorants to chemosensory receptors, but their effects on behaviors remain poorly characterized. Here, we report that RNAi knockdown of Obp56h gene expression in Drosophila melanogaster enhances mating behavior by reducing courtship latency. The change in mating behavior that results from inhibition of Obp56h expression is accompanied by significant alterations in cuticular hydrocarbon (CHC) composition, including reduction in 5-tricosene (5-T), an inhibitory sex pheromone produced by males that increases copulation latency during courtship. Whole genome RNA sequencing confirms that expression of Obp56h is virtually abolished in Drosophila heads. Inhibition of Obp56h expression also affects expression of other chemoreception genes, including upregulation of lush in both sexes and Obp83ef in females, and reduction in expression of Obp19b and Or19b in males. In addition, several genes associated with lipid metabolism, which underlies the production of cuticular hydrocarbons, show altered transcript abundances. Our data show that modulation of mating behavior through reduction of Obp56h is accompanied by altered cuticular hydrocarbon profiles and implicate 5-T as a possible ligand for Obp56h. Thursday, October 13 2016 09:06:55 AM Genomes of Candidatus Wolbachia bourtzisii wDacA and Candidatus Wolbachia pipientis wDacB from the Cochineal Insect Dactylopius coccus (Hemiptera: Dactylopiidae) Ramirez-Puebla, S. T., Ormeno-Orrillo, E., Vera-Ponce de Leon, A., Lozano, L., Sanchez-Flores, A., Rosenblueth, M., Martinez-Romero, E. Dactylopius species, known as cochineal insects, are the source of the carminic acid dye used worldwide. 
The presence of two Wolbachia strains in Dactylopius coccus from Mexico was revealed by PCR amplification of wsp and sequencing of 16S rRNA genes. A metagenome analysis recovered the genome sequences of Candidatus Wolbachia bourtzisii wDacA (supergroup A) and Candidatus Wolbachia pipientis wDacB (supergroup B). Genome read coverage, as well as 16S rRNA clone sequencing, revealed that wDacB was more abundant than wDacA. The strains shared similar predicted metabolic capabilities that are common to Wolbachia, including riboflavin, ubiquinone, and heme biosynthesis, but lacked other vitamin and cofactor biosynthesis as well as glycolysis, the oxidative pentose phosphate pathway, and sugar uptake systems. A complete tricarboxylic acid cycle and gluconeogenesis were predicted as well as limited amino acid biosynthesis. Uptake and catabolism of proline were evidenced in Dactylopius Wolbachia strains. Both strains possessed WO-like phage regions and type I and type IV secretion systems. Several efflux systems found suggested the existence of metal toxicity within their host. Besides already described putative virulence factors like ankyrin domain proteins, VlrC homologs, and patatin-like proteins, putative novel virulence factors related to those found in intracellular pathogens like Legionella and Mycobacterium are highlighted for the first time in Wolbachia. Candidate genes identified in other Wolbachia that are likely involved in cytoplasmic incompatibility were found in wDacB but not in wDacA. Thursday, October 13 2016 09:06:55 AM Evaluation of Ligand-Inducible Expression Systems for Conditional Neuronal Manipulations of Sleep in Drosophila Li, Q., Stavropoulos, N. Drosophila melanogaster is a powerful model organism for dissecting the molecular mechanisms that regulate sleep, and numerous studies in the fly have identified genes that impact sleep–wake cycles. Conditional genetic analysis is essential to distinguish the mechanisms by which these genes impact sleep: some genes might exert their effects developmentally, for instance by directing the assembly of neuronal circuits that regulate sleep; other genes may regulate sleep in adulthood; and yet other genes might influence sleep by both developmental and adult mechanisms. Here we have assessed two ligand-inducible expression systems, Geneswitch and the Q-system, for conditional and neuronally restricted manipulations of sleep in Drosophila. While adult-specific induction of a neuronally expressed Geneswitch transgene (elav-GS) is compatible with studies of sleep as shown previously, developmental induction of elav-GS strongly and nonspecifically perturbs sleep in adults. The alterations of sleep in elav-GS animals occur at low doses of Geneswitch agonist and in the presence of transgenes unrelated to sleep, such as UAS-CD8-GFP. Furthermore, developmental elav-GS induction is toxic and reduces brood size, indicating multiple adverse effects of neuronal Geneswitch activation. In contrast, the transgenes and ligand of the Q-system do not significantly impact sleep–wake cycles when used for constitutive, developmental, or adult-specific neuronal induction. The nonspecific effects of developmental elav-GS activation on sleep indicate that such manipulations require cautious interpretation, and suggest that the Q-system or other strategies may be more suitable for conditional genetic analysis of sleep and other behaviors in Drosophila. 
Thursday, October 13 2016 09:06:55 AM Preservation Analysis of Macrophage Gene Coexpression Between Human and Mouse Identifies PARK2 as a Genetically Controlled Master Regulator of Oxidative Phosphorylation in Humans Codoni, V., Blum, Y., Civelek, M., Proust, C., Franzen, O., Cardiogenics Consortium, IDEM Leducq Consortium CADGenomics, Bjorkegren, J. L. M., Le Goff, W., Cambien, F., Lusis, A. J., Tregouet, D.-A. Macrophages are key players involved in numerous pathophysiological pathways and an in-depth characterization of their gene regulatory networks can help in better understanding how their dysfunction may impact on human diseases. We here conducted a cross-species network analysis of macrophage gene expression data between human and mouse to identify conserved networks across both species, and assessed whether such networks could reveal new disease-associated regulatory mechanisms. From a sample of 684 individuals processed for genome-wide macrophage gene expression profiling, we identified 27 groups of coexpressed genes (modules). Six modules were found preserved (P < 10^-4) in macrophages from 86 mice of the Hybrid Mouse Diversity Panel. One of these modules was significantly [false discovery rate (FDR) = 8.9 × 10^-11] enriched for genes belonging to the oxidative phosphorylation (OXPHOS) pathway. This pathway was also found significantly (FDR < 10^-4) enriched in susceptibility genes for Alzheimer, Parkinson, and Huntington diseases. We further conducted an expression quantitative trait loci analysis to identify SNPs that could regulate macrophage OXPHOS gene expression in humans. This analysis identified the PARK2 rs192804963 as a trans-acting variant influencing (minimal P-value = 4.3 × 10^-8) the expression of most OXPHOS genes in humans. Further experimental work demonstrated that knockdown of PARK2 expression was associated with increased OXPHOS gene expression in THP1 human macrophages. This work provided strong new evidence that PARK2 participates in the regulatory networks associated with oxidative phosphorylation and suggested that PARK2 genetic variations could act as a trans regulator of OXPHOS gene macrophage expression in humans. Thursday, October 13 2016 09:06:55 AM Main Effect QTL with Dominance Determines Heterosis for Dynamic Plant Height in Upland Cotton Shang, L., Ma, L., Wang, Y., Su, Y., Wang, X., Li, Y., Abduweli, A., Cai, S., Liu, F., Wang, K., Hua, J. Plant height, which shows dynamic development and heterosis, is a major trait affecting plant architecture and has an indirect influence on economic yield related to biological yield in cotton. In the present study, we carried out dynamic analysis for plant height and its heterosis by quantitative trait loci (QTL) mapping at multiple developmental stages using two recombinant inbred lines (RILs) and their backcross progeny. At the single-locus level, 47 QTL were identified at five developmental stages in two hybrids. In backcross populations, QTL identified at an early stage mainly showed partial effects and QTL detected at a later stage mostly displayed overdominance effects. At the two-locus level, we found that main effect QTL played a more important role than epistatic QTL in the expression of heterosis in backcross populations. Therefore, this study implies that the genetic basis of plant height heterosis shows dynamic character and main effect QTL with dominance determines heterosis for plant height in Upland cotton.
Thursday, October 13 2016 09:06:55 AM Seizure Suppression by High Temperature via cAMP Modulation in Drosophila Saras, A., Tanouye, M. A. Bang-sensitive (BS) Drosophila mutants display characteristic seizure-like activity (SLA) and paralysis after mechanical shock. After high-frequency electrical stimulation (HFS) of the brain, they generate robust seizures at very low threshold voltage. Here we report an important phenomenon, which effectively suppresses SLA in BS mutants. High temperature causes seizure suppression in all BS mutants (parabss1, eas, sda) examined in this study. This effect is fully reversible and flies show complete recovery from BS paralysis once the temperature effect is nullified. High temperature induces an increase in seizure threshold after a brief pulse of heat shock (HS). By genetic screening, we identified the involvement of cAMP in the suppression of seizures by high temperature. We propose that HS induces adenylyl cyclase, which in turn increases cAMP concentration, eventually suppressing seizures in mutant flies. In summary, we describe an unusual phenomenon, where high temperature can suppress SLA in flies by modulating cAMP concentration. Thursday, October 13 2016 09:06:55 AM A Genome-Wide Association Study Identifies Multiple Regions Associated with Head Size in Catfish Geng, X., Liu, S., Yao, J., Bao, L., Zhang, J., Li, C., Wang, R., Sha, J., Zeng, P., Zhi, D., Liu, Z. Skull morphology is fundamental to evolution and the biological adaptation of species to their environments. With aquaculture fish species, head size is also important for economic reasons because it has a direct impact on fillet yield. However, little is known about the underlying genetic basis of head size. Catfish is the primary aquaculture species in the United States. In this study, we performed a genome-wide association study using the catfish 250K SNP array with backcross hybrid catfish to map the QTL for head size (head length, head width, and head depth). One significantly associated region on linkage group (LG) 7 was identified for head length. In addition, LGs 7, 9, and 16 contain suggestively associated regions for head length. For head width, significantly associated regions were found on LG9, and additional suggestively associated regions were identified on LGs 5 and 7. No region was found associated with head depth. Head size genetic loci were mapped in catfish to genomic regions with candidate genes involved in bone development. Comparative analysis indicated that homologs of several candidate genes are also involved in skull morphology in various other species ranging from amphibian to mammalian species, suggesting possible evolutionary conservation of those genes in the control of skull morphologies.
To uncover other genes with a role in this response, or simply genes with roles in adapting to LatA-induced stress, we carried out a genome-wide screen and identified a group of 38 gene deletion mutants that are hyper-sensitive to the drug. As expected, we found genes affecting cytokinesis and/or the actin cytoskeleton within this set (ain1, acp2, imp2). We also identified genes with roles in histone modification (tra1, ngg1), intracellular transport (apl5, aps3), and glucose-mediated signaling (git3, git5, git11, pka1, cgs2). Importantly, while the identified gene deletion mutants are prone to cytokinesis failure in the presence of LatA, they are nevertheless fully capable of cell division in the absence of the drug. These results indicate that fission yeast cells make use of a diverse set of regulatory modules to counter abnormal cytoskeletal perturbations, and furthermore, that these modules act redundantly to ensure cell survival and proliferation. Thursday, October 13 2016 09:06:55 AM A Forward Genetic Screen and Whole Genome Sequencing Identify Deflagellation Defective Mutants in Chlamydomonas, Including Assignment of ADF1 as a TRP Channel Hilton, L. K., Meili, F., Buckoll, P. D., Rodriguez-Pike, J. C., Choutka, C. P., Kirschner, J. A., Warner, F., Lethan, M., Garces, F. A., Qi, J., Quarmby, L. M. With rare exception, ciliated cells entering mitosis lose their cilia, thereby freeing basal bodies to serve as centrosomes in the formation of high-fidelity mitotic spindles. Cilia can be lost by shedding or disassembly, but either way, it appears that the final release may be via a coordinated severing of the nine axonemal outer doublet microtubules linking the basal body to the ciliary transition zone. Little is known about the mechanism or regulation of this important process. The stress-induced deflagellation response of Chlamydomonas provides a basis to identifying key players in axonemal severing. In an earlier screen we uncovered multiple alleles for each of three deflagellation genes, ADF1, FA1, and FA2. Products of the two FA genes localize to the site of axonemal severing and encode a scaffolding protein and a member of the NIMA-related family of ciliary-cell cycle kinases. The identity of the ADF1 gene remained elusive. Here, we report a new screen using a mutagenesis that yields point mutations in Chlamydomonas, an enhanced screening methodology, and whole genome sequencing. We isolated numerous new alleles of the three known genes, and one or two alleles each of at least four new genes. We identify ADF1 as a TRP ion channel, which we suggest may reside at the flagellar transition zone. Thursday, October 13 2016 09:06:55 AM Potential Direct Regulators of the Drosophila yellow Gene Identified by Yeast One-Hybrid and RNAi Screens Kalay, G., Lusk, R., Dome, M., Hens, K., Deplancke, B., Wittkopp, P. J. The regulation of gene expression controls development, and changes in this regulation often contribute to phenotypic evolution. Drosophila pigmentation is a model system for studying evolutionary changes in gene regulation, with differences in expression of pigmentation genes such as yellow that correlate with divergent pigment patterns among species shown to be caused by changes in cis- and trans-regulation. Currently, much more is known about the cis-regulatory component of divergent yellow expression than the trans-regulatory component, in part because very few trans-acting regulators of yellow expression have been identified. 
This study aims to improve our understanding of the trans-acting control of yellow expression by combining yeast-one-hybrid and RNAi screens for transcription factors binding to yellow cis-regulatory sequences and affecting abdominal pigmentation in adults, respectively. Of the 670 transcription factors included in the yeast-one-hybrid screen, 45 showed evidence of binding to one or more sequence fragments tested from the 5' intergenic and intronic yellow sequences from D. melanogaster, D. pseudoobscura, and D. willistoni, suggesting that they might be direct regulators of yellow expression. Of the 670 transcription factors included in the yeast-one-hybrid screen, plus another TF previously shown to be genetically upstream of yellow, 125 were also tested using RNAi, and 32 showed altered abdominal pigmentation. Nine transcription factors were identified in both screens, including four nuclear receptors related to ecdysone signaling (Hr78, Hr38, Hr46, and Eip78C). This finding suggests that yellow expression might be directly controlled by nuclear receptors influenced by ecdysone during early pupal development when adult pigmentation is forming. Thursday, October 13 2016 09:06:55 AM RNAi-Based Suppressor Screens Reveal Genetic Interactions Between the CRL2LRR-1 E3-Ligase and the DNA Replication Machinery in Caenorhabditis elegans Ossareh-Nazari, B., Katsiarimpa, A., Merlet, J., Pintard, L. Cullin-RING E3-Ligases (CRLs), the largest family of E3 ubiquitin-Ligases, regulate diverse cellular processes by promoting ubiquitination of target proteins. The evolutionarily conserved Leucine Rich Repeat protein 1 (LRR-1) is a substrate-recognition subunit of a CRL2LRR-1 E3-ligase. Here we provide genetic evidence supporting a role of this E3-enzyme in the maintenance of DNA replication integrity in Caenorhabditis elegans. Through RNAi-based suppressor screens of lrr-1(0) and cul-2(or209ts) mutants, we identified two genes encoding components of the GINS complex, which is part of the Cdc45-MCM-GINS (CMG) replicative helicase, as well as CDC-7 and MUS-101, which drives the assembly of the CMG helicase during DNA replication. In addition, we identified the core components of the ATR/ATL-1 DNA replication checkpoint pathway (MUS-101, ATL-1, CLSP-1, CHK-1). These results suggest that the CRL2LRR-1 E3-ligase acts to modify or degrade factor(s) that would otherwise misregulate the replisome, eventually leading to the activation of the DNA replication checkpoint.
# region command

## Syntax

region ID style args keyword arg ...

• ID = user-assigned name for the region
• style = delete or block or cone or cylinder or plane or prism or sphere or union or intersect

  delete = no args
  block args = xlo xhi ylo yhi zlo zhi
    xlo,xhi,ylo,yhi,zlo,zhi = bounds of block in all dimensions (distance units)
  cone args = dim c1 c2 radlo radhi lo hi
    dim = x or y or z = axis of cone
    c1,c2 = coords of cone axis in other 2 dimensions (distance units)
    radlo,radhi = cone radii at lo and hi end (distance units)
    lo,hi = bounds of cone in dim (distance units)
  cylinder args = dim c1 c2 radius lo hi
    dim = x or y or z = axis of cylinder
    c1,c2 = coords of cylinder axis in other 2 dimensions (distance units)
    c1,c2, and radius can be a variable (see below)
    lo,hi = bounds of cylinder in dim (distance units)
  plane args = px py pz nx ny nz
    px,py,pz = point on the plane (distance units)
    nx,ny,nz = direction normal to plane (distance units)
  prism args = xlo xhi ylo yhi zlo zhi xy xz yz
    xlo,xhi,ylo,yhi,zlo,zhi = bounds of untilted prism (distance units)
    xy = distance to tilt y in x direction (distance units)
    xz = distance to tilt z in x direction (distance units)
    yz = distance to tilt z in y direction (distance units)
  sphere args = x y z radius
    x,y,z = center of sphere (distance units)
    radius = radius of sphere (distance units)
    x,y,z, and radius can be a variable (see below)
  union args = N reg-ID1 reg-ID2 ...
    N = # of regions to follow, must be 2 or greater
    reg-ID1,reg-ID2, ... = IDs of regions to join together
  intersect args = N reg-ID1 reg-ID2 ...
    N = # of regions to follow, must be 2 or greater
    reg-ID1,reg-ID2, ... = IDs of regions to intersect

• zero or more keyword/arg pairs may be appended
• keyword = side or units or move or rotate or open

  side value = in or out
    in = the region is inside the specified geometry
    out = the region is outside the specified geometry
  units value = lattice or box
    lattice = the geometry is defined in lattice units
    box = the geometry is defined in simulation box units
  move args = v_x v_y v_z
    v_x,v_y,v_z = equal-style variables for x,y,z displacement of region over time
  rotate args = v_theta Px Py Pz Rx Ry Rz
    v_theta = equal-style variable for rotation of region over time (in radians)
    Px,Py,Pz = origin for axis of rotation (distance units)
    Rx,Ry,Rz = axis of rotation vector
  open value = integer from 1-6 corresponding to face index (see below)

• accelerated styles (with same args) = block/kk

## Examples

region 1 block -3.0 5.0 INF 10.0 INF INF
region 2 sphere 0.0 0.0 0.0 5 side out
region void cylinder y 2 3 5 -5.0 EDGE units box
region 1 prism 0 10 0 10 0 10 2 0 0
region outside union 4 side1 side2 side3 side4
region 2 sphere 0.0 0.0 0.0 5 side out move v_left v_up NULL
region openbox block 0 10 0 10 0 10 open 5 open 6 units box
region funnel cone z 10 10 2 5 0 10 open 1 units box

## Description

This command defines a geometric region of space. Various other commands use regions. For example, the region can be filled with atoms via the create_atoms command. Or a bounding box around the region can be used to define the simulation box via the create_box command. Or the atoms in the region can be identified as a group via the group command, or deleted via the delete_atoms command. Or the surface of the region can be used as a boundary wall via the fix wall/region command. Commands which use regions typically test whether an atom's position is contained in the region or not.
For this purpose, coordinates exactly on the region boundary are considered to be interior to the region. This means, for example, for a spherical region, an atom on the sphere surface would be part of the region if the sphere were defined with the side in keyword, but would not be part of the region if it were defined using the side out keyword. See more details on the side keyword below.

Normally, regions in LAMMPS are "static", meaning their geometric extent does not change with time. If the move or rotate keyword is used, as described below, the region becomes "dynamic", meaning its location or orientation changes with time. This may be useful, for example, when thermostatting a region via the compute temp/region command, or when the fix wall/region command uses a region surface as a bounding wall on particle motion, i.e. a rotating container.

The delete style removes the named region. Since there is little overhead to defining extra regions, there is normally no need to do this, unless you are defining and discarding large numbers of regions in your input script.

The lo/hi values for block or cone or cylinder or prism styles can be specified as EDGE or INF. EDGE means they extend all the way to the global simulation box boundary. Note that this is the current box boundary; if the box changes size during a simulation, the region does not. INF means a large negative or positive number (1.0e20), so it should encompass the simulation box even if it changes size. If a region is defined before the simulation box has been created (via create_box or read_data or read_restart commands), then an EDGE or INF parameter cannot be used. For a prism region, a non-zero tilt factor in any pair of dimensions cannot be used if both the lo/hi values in either of those dimensions are INF. E.g. if the xy tilt is non-zero, then xlo and xhi cannot both be INF, nor can ylo and yhi.

Note: Regions in LAMMPS do not get wrapped across periodic boundaries, as specified by the boundary command. For example, a spherical region that is defined so that it overlaps a periodic boundary is not treated as 2 half-spheres, one on either side of the simulation box.

Note: Regions in LAMMPS are always 3d geometric objects, regardless of whether the dimension of a simulation is 2d or 3d. Thus when using regions in a 2d simulation, you should be careful to define the region so that its intersection with the 2d x-y plane of the simulation has the 2d geometric extent you want.

For style cone, an axis-aligned cone is defined which is like a cylinder except that two different radii (one at each end) can be defined. Either of the radii (but not both) can be 0.0.

For style cone and cylinder, the c1,c2 params are coordinates in the 2 other dimensions besides the cylinder axis dimension. For dim = x, c1/c2 = y/z; for dim = y, c1/c2 = x/z; for dim = z, c1/c2 = x/y. Thus the third example above specifies a cylinder with its axis in the y-direction located at x = 2.0 and z = 3.0, with a radius of 5.0, and extending in the y-direction from -5.0 to the upper box boundary.

For style plane, a plane is defined which contains the point (px,py,pz) and has a normal vector (nx,ny,nz). The normal vector does not have to be of unit length. The "inside" of the plane is the half-space in the direction of the normal vector; see the discussion of the side option below.

For style prism, a parallelepiped is defined (it's too hard to spell parallelepiped in an input script!).
The parallelepiped has its "origin" at (xlo,ylo,zlo) and is defined by 3 edge vectors starting from the origin given by A = (xhi-xlo,0,0); B = (xy,yhi-ylo,0); C = (xz,yz,zhi-zlo). The values xy, xz, and yz can be 0.0 or positive or negative and are called "tilt factors" because they are the amount of displacement applied to faces of an originally orthogonal box to transform it into the parallelepiped.

A prism region that will be used with the create_box command to define a triclinic simulation box must have tilt factors (xy,xz,yz) that do not skew the box more than half the distance of the corresponding parallel box length. For example, if xlo = 2 and xhi = 12, then the x box length is 10 and the xy tilt factor must be between -5 and 5. Similarly, xz must be between -(xhi-xlo)/2 and +(xhi-xlo)/2, and yz must be between -(yhi-ylo)/2 and +(yhi-ylo)/2. Note that this is not a limitation, since if the maximum tilt factor is 5 (as in this example), then configurations with tilt = ..., -15, -5, 5, 15, 25, ... are all geometrically equivalent.

The radius value for style sphere and cylinder can be specified as an equal-style variable. If the value is a variable, it should be specified as v_name, where name is the variable name. In this case, the variable will be evaluated each timestep, and its value used to determine the radius of the region. For style sphere, the x-, y-, and z-coordinates of the center of the sphere can likewise be variables, as can the two center positions c1 and c2 that locate the cylinder axis for style cylinder, with the same effect and requirements as for the radius. Equal-style variables can specify formulas with various mathematical functions, and include thermo_style command keywords for the simulation box parameters and timestep and elapsed time. Thus it is easy to specify a time-dependent radius or a time-dependent position of the sphere or cylinder region.

See the Howto triclinic doc page for a geometric description of triclinic boxes, as defined by LAMMPS, and how to transform these parameters to and from other commonly used triclinic representations.

The union style creates a region consisting of the volume of all the listed regions combined. The intersect style creates a region consisting of the volume that is common to all the listed regions.

Note: The union and intersect regions operate by invoking methods from their list of sub-regions. Thus you cannot delete the sub-regions after defining a union or intersection region.

The side keyword determines whether the region is considered to be inside or outside of the specified geometry. Using this keyword in conjunction with union and intersect regions, complex geometries can be built up. For example, if the interior of two spheres were each defined as regions, and a union style with side = out was constructed listing the region-IDs of the 2 spheres, the resulting region would be all the volume in the simulation box that was outside both of the spheres.

The units keyword determines the meaning of the distance units used to define the region for any argument above listed as having distance units. It also affects the scaling of the velocity vector specified with the vel keyword, the amplitude vector specified with the wiggle keyword, and the rotation point specified with the rotate keyword, since they each involve a distance metric. A box value selects standard distance units as defined by the units command, e.g. Angstroms for units = real or metal. A lattice value means the distance units are in lattice spacings.
The lattice command must have been previously used to define the lattice spacings, which are used as follows:

• For style block, the lattice spacing in dimension x is applied to xlo and xhi, similarly the spacings in dimensions y,z are applied to ylo/yhi and zlo/zhi.
• For style cone, the lattice spacing in argument dim is applied to lo and hi. The spacings in the two radial dimensions are applied to c1 and c2. The two cone radii are scaled by the lattice spacing in the dimension corresponding to c1.
• For style cylinder, the lattice spacing in argument dim is applied to lo and hi. The spacings in the two radial dimensions are applied to c1 and c2. The cylinder radius is scaled by the lattice spacing in the dimension corresponding to c1.
• For style plane, the lattice spacing in dimension x is applied to px and nx, similarly the spacings in dimensions y,z are applied to py/ny and pz/nz.
• For style prism, the lattice spacing in dimension x is applied to xlo and xhi, similarly for ylo/yhi and zlo/zhi. The lattice spacing in dimension x is applied to xy and xz, and the spacing in dimension y to yz.
• For style sphere, the lattice spacings in dimensions x,y,z are applied to the sphere center x,y,z. The spacing in dimension x is applied to the sphere radius.

If the move or rotate keywords are used, the region is "dynamic", meaning its location or orientation changes with time. These keywords cannot be used with a union or intersect style region. Instead, the keywords should be used to make the individual sub-regions of the union or intersect region dynamic. Normally, each sub-region should be "dynamic" in the same manner (e.g. rotate around the same point), though this is not a requirement.

The move keyword allows one or more equal-style variables to be used to specify the x,y,z displacement of the region, typically as a function of time. A variable is specified as v_name, where name is the variable name. Any of the three variables can be specified as NULL, in which case no displacement is calculated in that dimension. Note that equal-style variables can specify formulas with various mathematical functions, and include thermo_style command keywords for the simulation box parameters and timestep and elapsed time. Thus it is easy to specify a region displacement that changes as a function of time or spans consecutive runs in a continuous fashion. For the latter, see the start and stop keywords of the run command and the elaplong keyword of thermo_style custom for details.

For example, these commands would displace a region from its initial position, in the positive x direction, effectively at a constant velocity:

variable dx equal ramp(0,10)
region 2 sphere 10.0 10.0 0.0 5 move v_dx NULL NULL

Note that the initial displacement is 0.0, though that is not required. Either of these variables would "wiggle" the region back and forth in the y direction:

variable dy equal swiggle(0,5,100)
variable dysame equal 5*sin(2*PI*elaplong*dt/100)
region 2 sphere 10.0 10.0 0.0 5 move NULL v_dy NULL

The rotate keyword rotates the region around a rotation axis R = (Rx,Ry,Rz) that goes through a point P = (Px,Py,Pz). The rotation angle is calculated, presumably as a function of time, by a variable specified as v_theta, where theta is the variable name. The variable should generate its result in radians. The direction of rotation for the region around the rotation axis is consistent with the right-hand rule: if your right-hand thumb points along R, then your fingers wrap around the axis in the direction of rotation.
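As an illustrative companion to the move examples above (a hypothetical snippet, not taken from the original documentation; the region ID "spinner" and the numeric values are assumptions), these commands would spin a cylindrical region about the z-axis at a constant angular velocity, completing one revolution every 1000 time units:

variable theta equal 2*PI*elaplong*dt/1000
region spinner cylinder z 10.0 10.0 5.0 0.0 20.0 rotate v_theta 10.0 10.0 0.0 0.0 0.0 1.0 units box

Here the rotation axis points along +z and passes through the point (10.0, 10.0, 0.0), and the angle variable grows linearly with elapsed simulation time, expressed in radians as required.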
The move and rotate keywords can be used together. In this case, the displacement specified by the move keyword is applied to the P point of the rotate keyword. The open keyword can be used (multiple times) to indicate that one or more faces of the region are ignored for purposes of particle/wall interactions. This keyword is only relevant for regions used by the fix wall/region and fix wall/gran/region commands. It can be used to create “open” containers where only some of the region faces are walls. For example, a funnel can be created with a cone style region that has an open face at the smaller radius for particles to flow out, or at the larger radius for pouring particles into the cone, or both. Note that using the open keyword partly overrides the side keyword, since both exterior and interior surfaces of an open region are tested for particle contacts. The exception to this is a union or intersect region which includes an open sub-region. In that case the side keyword is still used to define the union/intersect region volume, and the open settings are only applied to the individual sub-regions that use them. The indices specified as part of the open keyword have the following meanings: For style block, indices 1-6 correspond to the xlo, xhi, ylo, yhi, zlo, zhi surfaces of the block. I.e. 1 is the yz plane at x = xlo, 2 is the yz-plane at x = xhi, 3 is the xz plane at y = ylo, 4 is the xz plane at y = yhi, 5 is the xy plane at z = zlo, 6 is the xy plane at z = zhi). In the second-to-last example above, the region is a box open at both xy planes. For style prism, values 1-6 have the same mapping as for style block. I.e. in an untilted prism, open indices correspond to the xlo, xhi, ylo, yhi, zlo, zhi surfaces. For style cylinder, index 1 corresponds to the flat end cap at the low coordinate along the cylinder axis, index 2 corresponds to the high-coordinate flat end cap along the cylinder axis, and index 3 is the curved cylinder surface. For example, a cylinder region with open 1 open 2 keywords will be open at both ends (e.g. a section of pipe), regardless of the cylinder orientation. For style cone, the mapping is the same as for style cylinder. Index 1 is the low-coordinate flat end cap, index 2 is the high-coordinate flat end cap, and index 3 is the curved cone surface. In the last example above, a cone region is defined along the z-axis that is open at the zlo value (e.g. for use as a funnel). For all other styles, the open keyword is ignored. As indicated above, this includes the intersect and union regions, though their sub-regions can be defined with the open keyword. Styles with a gpu, intel, kk, omp, or opt suffix are functionally the same as the corresponding style without the suffix. They have been optimized to run faster, depending on your available hardware, as discussed on the Speed packages doc page. The accelerated styles take the same arguments and should produce the same results, except for round-off and precision issues. The code using the region (such as a fix or compute) must also be supported by Kokkos or no acceleration will occur. Currently, only block style regions are supported by Kokkos. These accelerated styles are part of the Kokkos package. They are only enabled if LAMMPS was built with that package. See the Build package doc page for more info. 
You can specify the accelerated styles explicitly in your input script by including their suffix, or you can use the -suffix command-line switch when you invoke LAMMPS, or you can use the suffix command in your input script. See the Speed packages doc page for more instructions on how to use the accelerated styles effectively. ## Restrictions A prism cannot be of 0.0 thickness in any dimension; use a small z thickness for 2d simulations. For 2d simulations, the xz and yz parameters must be 0.0. ## Default The option defaults are side = in, units = lattice, and no move or rotation.
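As a brief illustration of these defaults (a hypothetical snippet, not from the original documentation; the region ID "slab" is an assumption), the command

region slab block 0 10 0 10 -0.25 0.25

defines the interior (side = in) of a thin block whose bounds are interpreted in lattice spacings, assuming a lattice has already been defined; appending units box would instead interpret the bounds in simulation box distance units.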
# Metrology for Laser Optics

This is Sections 15.1, 15.2, 15.3, 15.4, 15.5, and 15.6 of the Laser Optics Resource Guide.

Metrology is crucial for ensuring optical components consistently meet their desired specifications and function safely. This reliability is especially important for systems utilizing high-power lasers or where changes in throughput may cause inadequate system performance. A wide range of metrology techniques is used to measure laser optics, including cavity ring down spectroscopy, atomic force microscopy, differential interference contrast microscopy, interferometry, Shack Hartmann wavefront sensors, and spectrophotometers.

## Cavity Ring Down Spectroscopy

Cavity ring down spectroscopy (CRDS) is a technique used to determine the composition of gas samples, but for laser optics it is used to make high sensitivity loss measurements of optical coatings. In a CRDS system, a laser pulse is sent into a resonant cavity bounded by two highly-reflective mirrors. With each reflection, a small amount of light is lost to absorption, scattering, and transmission while the reflected light continues to oscillate in the resonant cavity. A detector behind the second mirror measures the decrease in intensity of the reflected light (or "ring down"), which is used to calculate the loss of the mirrors (Figure 1). Characterizing the loss of a laser mirror is essential for ensuring a laser system will achieve its desired throughput.

##### Figure 1: Cavity ring down spectrometers measure the intensity decay rate in the resonant cavity, allowing for higher accuracy measurements than techniques that just measure absolute intensity values

The intensity of the laser pulse inside the cavity (I) is described by:

(1)$$I = I_{0} e^{ \frac{-T \, t \, c}{2L} }$$

I0 is the initial intensity of the laser pulse, T is the total cavity mirror loss from transmission, absorption, and scattering, t is time, c is the speed of light, and L is the length of the cavity. The value determined in CRDS is the loss of the entire cavity. Therefore, multiple tests are required in order to determine the loss of one mirror. Two reference mirrors are used to make an initial measurement (A), and then two more measurements are taken: one with the first reference mirror replaced by the mirror being tested (B) and one with the other reference mirror replaced by the test mirror (C). These three measurements are used to determine the loss of the test mirror.

(2)$$A = M_1 + M_2$$

(3)$$B = M_3 + M_2$$

(4)$$C = M_1 + M_3$$

(5)$$C + B - A = M_1 + M_3 + M_3 + M_2 - M_1 - M_2 = 2 M_3$$

(6)$$M_3 = \frac{C + B - A}{2}$$

M1 and M2 are the losses of the two reference mirrors and M3 is the loss of the test mirror. The loss from air in the cavity is assumed to be negligible. CRDS is an ideal technique for characterizing the performance of reflective laser optics because it is much easier to accurately measure a small amount of loss than a large reflectance (Table 1). Transmissive components with anti-reflection coatings can also be tested by inserting them into a resonant cavity and measuring the corresponding increase in loss. CRDS must be performed in a clean environment with meticulous care, as any contamination on the mirrors or to the inside of the cavity will affect the loss measurements.

##### Table 1: The uncertainty from measuring the reflectance of a mirror directly with an uncertainty of ±0.1% is two orders of magnitude greater than that from measuring the mirror's loss with an uncertainty of ±10%. This demonstrates that loss measurements for highly reflective mirrors are much more accurate than direct reflectance measurements.
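As a worked illustration of Equations (2)–(6), using hypothetical numbers chosen only to show the arithmetic: if the three cavity measurements yield total losses of A = 40 ppm, B = 50 ppm, and C = 60 ppm, then

$$M_3 = \frac{C + B - A}{2} = \frac{60 \text{ ppm} + 50 \text{ ppm} - 40 \text{ ppm}}{2} = 35 \text{ ppm}$$

and the reference mirror losses follow as M2 = B - M3 = 15 ppm and M1 = A - M2 = 25 ppm.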
To learn more about CRDS and its benefits for measuring high reflectivity laser mirrors, watch the webinar recording below.

## Interferometry

Interferometers utilize interference to measure small displacements, surface irregularities, and changes in refractive index. They can measure surface irregularities <λ/20 and are used to qualify flats, spherical lenses, aspheric lenses, and other optical components. Interference occurs when multiple waves of light are superimposed and added together to form a new pattern. In order for interference to occur, the multiple waves of light must be coherent in phase and have non-orthogonal polarization states.1 If the troughs, or low points, of the waves align, they cause constructive interference and their intensities add, while if the troughs of one wave align with the peaks of the other they cause destructive interference and cancel each other out (Figure 2).

##### Figure 2: Interferometers use constructive interference (left) and destructive interference (right) to determine surface figure, as differences in surface figure between the test optic and reference optic cause a phase difference that results in visible interference fringes

Interferometers typically use a beamsplitter to split light from a single source into a test beam and a reference beam. The beams are recombined before reaching a photodetector, and any optical path difference between the two paths will create interference. This allows for comparing an optical component in the path of the test beam to a reference in the reference beam (Figure 3). Constructive and destructive interference between the two paths will create a pattern of visible interference fringes. Both reflective and transmissive optical components can be measured by comparing the transmitted or reflected wavefront to a reference.

##### Figure 3: Sample image from an interferometer showing bright areas where the test and reference beams constructively interfered and dark rings where they destructively interfered (left), as well as the resulting 3D reconstruction of the test optic (right)

There are several common interferometer configurations (Figure 4). Mach–Zehnder interferometers utilize one beamsplitter to separate an input beam into two separate paths. A second beamsplitter recombines the two paths into two outputs, which are sent to photodetectors. Michelson interferometers use a single beamsplitter for splitting and recombining the beams. One variant of the Michelson interferometer is the Twyman–Green interferometer, which measures optical components using a monochromatic point source as the light source. Fizeau interferometers utilize a single beamsplitter oriented perpendicularly to the beamsplitter in Michelson interferometers, which allows the system to require only one mirror. Fabry–Pérot interferometers allow for multiple trips of light by using two parallel, partially transparent mirrors instead of two separated beam paths.

##### Figure 4: Various common interferometer configurations

Dust particles or imperfections on the optical components that make up an interferometer, other than the optic being tested, can lead to optical path differences that may be misconstrued as surface defects on the optic. Interferometry requires precise control of the beam paths, and measurements may also be subject to laser noise and quantum noise.
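As a rough illustration of the interference described above (not from the original article), the following sketch evaluates the standard two-beam interference equation and shows how the detected intensity cycles between bright and dark fringes as the optical path difference between the test and reference arms changes. The HeNe wavelength and beam intensities are assumed values.

```python
import numpy as np

# Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(2*pi*OPD/wavelength)
wavelength = 632.8e-9                    # assumed HeNe source, in meters
I1, I2 = 1.0, 1.0                        # relative intensities of the two beams
opd = np.linspace(0, 2 * wavelength, 9)  # optical path differences to sample

phase = 2 * np.pi * opd / wavelength
intensity = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(phase)

for d, i in zip(opd, intensity):
    # OPD = 0, lambda, 2*lambda -> bright fringes (4.0); lambda/2, 3*lambda/2 -> dark fringes (0.0)
    print(f"OPD = {d / wavelength:4.2f} lambda  ->  intensity = {i:4.2f}")
```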
## Short Coherence Length Interferometry

Some unique interferometer configurations, such as short coherence length and photothermal common-path interferometers, serve a different purpose than conventional interferometers. Short coherence length interferometers use specialized LEDs as their illumination source instead of a laser.2 These LEDs have a coherence length longer than that of a typical LED but shorter than that of a laser. This allows the system to measure parallel, flat surfaces while minimizing reflections from back surfaces (Figure 5).

##### Figure 5: Short coherence length interferometers using specialized LEDs as their light source are able to measure parallel, flat surfaces without noise from light reflecting off of the back surface (right), while conventional laser-based interferometers will be affected by this noise (left). Image from InterOptics LLC2

In order to measure parallel flat surfaces on windows, laser crystals, and other optics using a conventional interferometer, Vaseline or another substance must be applied to the back surface to prevent light from reflecting off of it and interfering with measurements of the front surface. Applying this additional substance reduces noise but increases the time it takes to record measurements, as the substance must be applied, cleaned off, reapplied to measure the other surface, and then cleaned again before coating. The most effective way to clean away Vaseline is with vinegar, which poses the risk of staining certain materials that are sensitive to acids. This technique is also not effective for optics with a much lower or higher refractive index than glass and is completely ineffective if the back surface has been coated.

Using a specialized LED with a short coherence length allows the front surface of an optic to be isolated from the rear surface, eliminating the need for special treatment of the rear surface. This minimizes measurement time, the risk of damage to the part, and the risk of inaccurate measurements.2 However, because the coherence length is short, there is only a limited range of positions in which the surface being measured can be placed in order to resolve interference fringes. One benefit of this limited measurement range is that it prevents dust, scratches, and other defects in the optical train outside of the limited measurement window from affecting measurements (Figure 6). This technique is also less sensitive to vibration by design and does not need to be placed on an expensive vibration isolation table.

## Photothermal Common-Path Interferometry

Photothermal common-path interferometers (PCIs) use a focused pump beam to heat a target area while a single probe beam experiences phase distortion because of thermal expansion and the resulting change in refractive index.3 PCIs provide accurate measurements of absorption, allowing for better characterization of the spectral properties of optical coatings. The phase of the probe beam is distorted in the heated area while the pump beam is active. This distortion generates a second, weak wave with its phase shifted by half of a period relative to the stronger, undistorted probe beam wave.3 Interference between these two waves does not occur right away, but at some distance farther from the sample (Figure 7). The heating applies a similar phase shift to the reflection of the probe beam from the front surface, allowing for either transmission or reflection measurement of the surface absorption.
##### Figure 7: Thermal distortion creates a weak, second wave from the probe beam in a photothermal common-path interferometer, and this secondary wave generates an interference pattern with the undistorted probe beam later in the system3

An optical chopper is typically used to break up a continuous wave (CW) laser source and periodically heat the sample. After transmitting through the sample, the probe beam passes through an aperture before its periodic distortion is measured (Figure 8). The final signal is proportional to absorption in the sample, allowing for accurate absorption measurement.

##### Figure 8: Typical schematic of a photothermal common-path interferometer3

These absorption measurements provide valuable information for better understanding the spectral properties of optical coatings and bulk substrates. For example, two different anti-reflection coatings could have nearly identical measured reflectivity values but very different absorption values, so the true performance of the coatings is not understood without also accounting for the effects of absorption and scatter. PCIs measure absorption more accurately than spectrophotometers, which determine absorption by directly measuring transmission.3 Spectrophotometers can struggle to measure low absorption levels. PCIs can also separate measurements of absorption in the coating from absorption in the bulk substrate.

## Atomic Force Microscopy

Atomic force microscopy (AFM) is a technique that provides surface topography with atomic resolution (Figure 9). An extremely small and sharp tip scans across a sample's surface, resulting in a 3D reconstruction of the surface. The tip is attached to a rectangular or triangular cantilever that connects to the rest of the microscope head. The cantilever's motion is controlled by piezoelectric ceramics, which ensure 3D positioning of the cantilever with subnanometer resolution.4 In laser optics, AFM is primarily used to calculate an optical component's surface roughness, which may significantly affect the performance of a laser optical system as it is often the main source of scattering. AFM can provide a 3D map of a surface with a precision of a few Angstroms.5

##### Figure 9: Atomic force microscopy produces nanometer-level topography maps, which can be useful for characterizing gratings

The tip is either scanned across the sample while in constant contact with the surface, known as contact mode, or in intermittent contact with the surface, known as tapping mode. In tapping mode, the cantilever oscillates at its resonant frequency, with the tip only contacting the surface for a short time during the oscillation cycle. Contact mode is less complicated than tapping mode and provides a more accurate reconstruction of the surface. However, the possibility of damaging the surface during scanning is higher and the tip wears out faster, leading to a shorter tip lifetime. In both modes, a laser is reflected off the top of the cantilever onto a detector. Changes in the height of the sample surface deflect the cantilever and change the position of the laser on the detector, generating an accurate height map of the surface (Figure 10).

##### Figure 10: Changes in surface topography move the AFM tip, changing the position of the reflected laser on the detector and allowing for surface topography measurement

The shape and composition of the tip play a key role in the spatial resolution of AFM and should be chosen according to the specimen being scanned.
The smaller and sharper the tip, the higher the lateral resolution. However, small tips have longer scanning times and a higher cost than larger tips. Control of the distance between the tip and the surface determines the vertical resolution of an AFM system. Mechanical and electrical noise limit the vertical resolution, as surface features smaller than the noise level cannot be resolved.6 The relative position between the tip and the sample is also sensitive to the expansion or contraction of AFM components as a result of thermal variations. AFM is a time-consuming metrology technique and is mainly used for process validation and monitoring, where a small fraction of a sample surface on the order of 100μm x 100μm is measured to provide a statistically significant representation of the manufacturing process as a whole.

## White Light Interferometry for Superpolished Surface Roughness Measurement

White light interferometry (WLI) can also be used to measure surface roughness. The combination of AFM and WLI allows optical fabricators to measure surface topography over a wide range of spatial frequencies, even measuring the sub-angstrom RMS surface roughness of superpolished surfaces. Most interferometers utilize a monochromatic laser as the illumination source because the laser's long coherence length makes it easy to observe interference fringes, but white light interferometers utilize a broadband illumination source to analyze surface height. Surface height can be measured because the interference at a given location is highest when the reference and measured optical path lengths are equal, so modulating the distance between the WLI and the test surface generates surface topography data. White light interferometers are typically Michelson interferometer setups with the test optic placed in one arm and a reference optic in the other (Figure 11). The length of the reference arm is varied by translating the reference optic through some range.

##### Figure 11: Schematic of a typical white light Michelson interferometer used to determine surface roughness. The instrument is kept stationary as the height of the test surface is varied.

WLI and AFM have overlapping spatial frequency ranges and can both be utilized for measuring sub-angstrom surface roughness of superpolished surfaces (Table 2); which instrument is better depends on the spatial frequency range being measured.7 It is widely accepted that optics intended to be used in the visible spectrum do not need to be measured beyond ~2000 cycles/mm, which is ideal for WLI. However, for optics intended to be used in the UV spectrum the higher spatial frequency range of the AFM may be required. The AFM can also measure lower spatial frequencies (as seen in Table 2), but other factors make AFM less production friendly. Its longer measurement times also make AFM extremely sensitive to temperature fluctuations and external vibrations. Therefore, AFM is better suited for the controlled environment of a test lab, while WLI is better suited for a factory setting.
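Not part of the original guide: a small sketch of how an RMS roughness number of the kind reported by AFM and WLI software can be computed from a measured height map. The synthetic height data here are purely illustrative.

```python
import numpy as np

# Synthetic 256 x 256 height map in nanometers; a real map would be exported
# from an AFM or white light interferometer. Values are illustrative only.
rng = np.random.default_rng(0)
heights_nm = 0.08 * rng.standard_normal((256, 256))   # roughly 0.8-angstrom-RMS surface

# Remove the mean level (real software also removes tilt/form) so that sample
# orientation does not inflate the roughness value.
residual = heights_nm - heights_nm.mean()

rq_nm = np.sqrt(np.mean(residual ** 2))   # Rq: RMS roughness
ra_nm = np.mean(np.abs(residual))         # Ra: arithmetic average roughness

print(f"Rq = {rq_nm * 10:.2f} angstroms, Ra = {ra_nm * 10:.2f} angstroms")
```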
##### Table 2: Approximate spatial frequency ranges covered by white light interferometry and atomic force microscopy

| Instrument | Objective / Setup | Lower Spatial Frequency Limit [cycles/mm] | Upper Spatial Frequency Limit [cycles/mm] |
| --- | --- | --- | --- |
| White Light Interferometer (Zygo NewView) | 2.75× objective | 1 | 50 |
| | 5× objective | 3 | 90 |
| | 10× objective | 5 | 180 |
| | 20× objective | 9 | 360 |
| | 50× objective | 25 | 900 |
| | 100× objective | 40 | 1,800 |
| Atomic Force Microscope | Depending on tip radius and instrument setup | 30 – 185 | 8,000 – 50,000 |

## Shack-Hartmann Wavefront Sensors

A Shack-Hartmann wavefront sensor (SHWFS) measures the transmitted and reflected wavefront error of an optical component or system with high dynamic range and accuracy. The SHWFS has become very popular due to its ease of use, fast response, relatively low cost, and ability to work with incoherent light sources. The wavefront of an optical wave is a surface over which the wave has a constant phase. Wavefronts are perpendicular to the direction of propagation; therefore, collimated light has a planar wavefront and converging or diverging light has a curved wavefront (Figure 12). Aberrations in optical components lead to wavefront errors, or distortions in transmitted or reflected wavefronts. By analyzing transmitted and reflected wavefront error, the aberrations and performance of an optical component can be determined.

##### Figure 12: Perfectly collimated light has a planar wavefront. Light diverging or converging after a perfect, aberration-free lens will have a spherical wavefront

A SHWFS utilizes an array of microlenses, or lenslets, with the same focal length to focus portions of incident light onto a detector. The detector is divided into small sectors, with one sector for each microlens. A perfect planar incident wavefront results in a grid of focused spots with the same separation as the center-to-center spacing of the microlens array. If a distorted wavefront with some amount of wavefront error is incident on a SHWFS, the position of the spots on the detector will change (Figure 13). The deviation, deformation, or loss in intensity of the focal spots determines the local tilt of the wavefront at each of the microlenses. The discrete tilts can be used to recreate the full wavefront.

##### Figure 13: Any wavefront error present in light entering a SHWFS will lead to a displacement of the focused spot positions on the detector array

One advantage of a SHWFS compared to interferometry is that the dynamic range is essentially independent of wavelength, offering more flexibility. However, the dynamic range of a SHWFS is limited by the detector sector allocated to each microlens. The focal spot of each microlens should cover at least 10 pixels on its respective sector to achieve an accurate reconstruction of the wavefront. The larger the detector area covered by the focal spot, the greater the SHWFS' sensitivity, though this comes with a tradeoff of reduced dynamic range. In general, the focal spot of the microlens should not cover more than half of the designated detector sector; this guarantees a reasonable compromise between sensitivity and dynamic range.8 Increasing the number of microlenses in an array results in an increase in spatial resolution and less averaging of the wavefront slope over the microlens aperture, but there are fewer pixels allocated to each microlens.
Larger microlenses produce a more sensitive and precise measurement for slowly varying wavefronts, but they may not sufficiently sample complex wavefronts and can result in an artificial smoothing of the reconstructed wavefront.9

## Spectrophotometers

Spectrophotometers measure the transmission and reflectivity of optical components and are essential for characterizing the performance of optical coatings (Figure 14). A typical spectrophotometer consists of a broadband light source, a monochromator, and a detector (Figure 15). Light from the light source is sent into the monochromator's entrance slit, where it is split into its component wavelengths by a dispersive element such as a diffraction grating or prism. The monochromator's exit slit blocks all wavelengths except for a narrow band that passes through the slit, and that narrow wavelength band illuminates the test optic. Changing the angle of the diffraction grating or prism changes the wavelengths that pass through the exit slit, allowing the test wavelength band to be finely tuned. Light reflected from or transmitted through the test optic is then directed onto a detector, determining the optic's reflectivity or transmission at a given wavelength.

##### Figure 15: The test wavelength of a spectrophotometer can be finely tuned by adjusting the angle of the diffraction grating or prism in the monochromator

The light source must be extremely stable and have adequate intensity across a broad range of wavelengths to prevent false readings. Tungsten halogen lamps are one of the most commonly used light sources for spectrophotometers because of their long lifespan and ability to maintain a constant brightness.10 Multiple light sources covering different wavelength ranges are often used if a very broad total range is required. The smaller the width of the monochromator's slits, the higher the spectral resolution of the spectrophotometer. However, reducing the width of the slits also reduces the transmitted power and may increase the reading acquisition time and amount of noise.12

A wide variety of detectors are used in spectrophotometers, as different detectors are better suited for different wavelength ranges. Photomultiplier tubes (PMTs) and semiconductor photodiodes are common detectors used for ultraviolet, visible, and infrared detection.8 PMTs utilize a photoelectric surface to achieve unmatched sensitivity compared to other detector types. When light is incident on the photoelectric surface, photoelectrons are released and in turn release secondary electrons, which produces a high gain. The high sensitivity of PMTs is beneficial for low intensity light sources or when high levels of precision are required. Semiconductor photodiodes such as avalanche photodiodes are less expensive alternatives to PMTs; however, they have more noise and a lower sensitivity than PMTs. While most spectrophotometers are designed for use in the ultraviolet, visible, or infrared spectra, some spectrophotometers operate in more demanding spectral regions such as the extreme ultraviolet (EUV) spectrum, with wavelengths from 10-100nm. EUV spectrophotometers typically use diffraction gratings with extremely small grating spacings to effectively disperse the incident EUV radiation.

## Group Delay Dispersion Measurement

White light interferometers are used to measure the group delay dispersion (GDD) of both reflective and transmissive optical components.
GDD is critical to the performance of ultrafast laser optics, as the broad bandwidth associated with the short pulse durations of ultrafast lasers makes them highly susceptible to chromatic dispersion in optical media. More information on GDD and ultrafast optics can be found in our Ultrafast Dispersion application note. Most interferometers utilize a monochromatic laser as the illumination source because the laser's long coherence length makes it easy to observe interference fringes, but white light interferometers utilize a broadband illumination source to analyze dispersion. White light interferometers are typically Michelson interferometer setups with the test optic placed in one arm and a reference optic in the other (Figure 16). The length of the reference arm is varied by translating the reference optic through some range. Interferograms reveal signals whenever the optical path lengths of the two arms become identical, and the exact position at which this occurs is wavelength dependent. This allows the optical path length difference between different wavelengths to be precisely determined, revealing the test optic's GDD (Figure 16).

##### Figure 16: Plot of GDD vs. wavelength for a highly-dispersive ultrafast mirror obtained using white light interferometry

The signal is detected by either a photodetector or a spectrometer. Photodetectors integrate the signals of different wavelengths over time, and applying a Fourier transform algorithm to the captured interferograms reveals the wavelength-dependent GDD and chromatic dispersion.7 Using a spectrometer instead of a photodetector eliminates the need for a Fourier transform of the captured data. The sensitivity of photodetector-based white light interferometers is dependent on the step sizes of the stage used to translate the reference optic, but this is not an issue with spectrometer-based systems.

## Differential Interference Contrast Microscopy

Differential interference contrast (DIC) microscopy is used for highly-sensitive defect detection in transmissive materials, particularly for identifying laser damage in optical coatings and surfaces (Figure 17). It is difficult to observe these features using traditional brightfield microscopy because the sample is transmissive, but DIC microscopy improves contrast by converting gradients in optical path length, arising from variations in refractive index, surface slope, or thickness, into intensity differences at the image plane. Slopes, valleys, and surface discontinuities are imaged with improved contrast to reveal the profile of the surface. DIC images give the appearance of a 3D relief corresponding to the variation of optical path length of the sample. However, this appearance of 3D relief should not be interpreted as the actual 3D topography of the sample.

##### Figure 17: DIC microscopy converts gradients in optical path length into intensity differences at the image plane, allowing for visualization of laser-induced damage that would be otherwise hard to detect

DIC microscopy uses polarizers and a birefringent Wollaston or Nomarski prism to separate a light source into two orthogonally polarized rays (Figure 18). An objective lens focuses the two components onto the sample surface, displaced from each other by a distance equal to the resolution limit of the microscope. After being collimated by a condenser lens, the two components are then recombined using another Wollaston prism. The combined components then pass through a second polarizer, known as an analyzer, which is oriented perpendicular to the first polarizer.
The interference from the difference in the two components' optical path lengths leads to visible brightness variations.

##### Figure 18: Typical DIC microscopy setup where a Wollaston prism splits the input beam into two separately polarized states

One limitation of DIC microscopy is increased cost compared to other microscopy techniques. The Wollaston prisms used to separate and recombine the different polarization states are more expensive than the components needed for microscopy techniques such as phase contrast or Hoffman modulation contrast microscopy.11

## References

1. Hinterdorfer, Peter, and Yves F Dufrêne. "Detection and Localization of Single Molecular Recognition Events Using Atomic Force Microscopy." Nature Methods, vol. 3, no. 5, 2006, pp. 347–355. doi:10.1038/nmeth871.
2. InterOptics LLC. "Engineered coherence interferometry." InterOptics, 2018, http://www.inter-optics.com/tech.html
3. Stanford Photo-Thermal Solutions. "Photothermal technology: common-path (single beam) interferometry." Stanford Photo-Thermal Solutions, Nov. 2021, https://www.stan-pts.com/howitworks.html
4. InterOptics LLC. "Engineered coherence interferometry." InterOptics, 2018, http://www.inter-optics.com/tech.html
5. Stanford Photo-Thermal Solutions. "Photothermal technology: common-path (single beam) interferometry." Stanford Photo-Thermal Solutions, Nov. 2021, https://www.stan-pts.com/howitworks.html
6. Binnig, G., et al. "Atomic Resolution with Atomic Force Microscope." Surface Science, vol. 189-190, 1987, pp. 1–6. doi:10.1016/s0039-6028(87)80407-7.
7. Kindt, Johannes H. "AFM enhancing traditional Electron Microscopy Applications." Atomic Force Microscopy Webinars, Bruker, Feb. 2013, www.bruker.com/service/education-training/webinars/afm.html.
8. Murphey, Douglas B, et al. "DIC Microscope Configuration and Alignment." Olympus, www.olympus-lifescience.com/en/microscope-resource/primer/techniques/dic/dicconfiguration/
9. Paschotta, Rüdiger. Encyclopedia of Laser Physics and Technology, RP Photonics, October 2017, www.rp-photonics.com/encyclopedia.html.
10. Forest, Craig R., Claude R. Canizares, Daniel R. Neal, Michael McGuirk, and Mark Lee Schattenburg. "Metrology of thin transparent optics using Shack-Hartmann wavefront sensing." Optical Engineering 43, no. 3 (2004): 742-754.
11. Greivenkamp, John E., Daniel G. Smith, Robert O. Gappinger, and Gregory A. Williby. "Optical testing using Shack-Hartmann wavefront sensors." Proc. SPIE 4416, Optical Engineering for Sensing and Nanotechnology (ICOSN 2001), 8 May 2001. doi:10.1117/12.427063
12. Wassmer, William. "An Introduction to Optical Spectrometry (Spectrophotometry)." Azooptics.com, https://www.azooptics.com/Article.aspx?ArticleID=753.
# Smooth Divisors

The positive integer m is called a smooth divisor of n if the quotient and remainder of dividing n by m are equal. The positive integer n is given. Find the number of its smooth divisors.

#### Input

The positive integer n (1 ≤ n ≤ 10^6).

#### Output

Print the required number of smooth divisors for number n.

Time limit: 1 second
Memory limit: 122.17 MiB

Input example #1
20

Output example #1
2

Author: Pavel Kuznecov, Fedor Menschikov
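Not part of the original statement: one possible solution sketch. Since the remainder n mod m is always less than m, a direct scan of all m from 1 to n is sufficient for n up to 10^6.

```python
def count_smooth_divisors(n: int) -> int:
    # m is a smooth divisor of n when the quotient equals the remainder,
    # i.e. n // m == n % m. For m > n the quotient is 0 but the remainder is n,
    # so only m in [1, n] needs to be checked.
    return sum(1 for m in range(1, n + 1) if n // m == n % m)

if __name__ == "__main__":
    n = int(input())
    print(count_smooth_divisors(n))   # for n = 20 this prints 2 (m = 9 and m = 19)
```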
# Back to the Butterfly

Backstory: five years ago I signed myself up for a SCUBA diving course. At the first class there is a swim test: 200m, and treading water for 10min. I splashed and gurgled my way through the swim, then sank and drowned. Twice, and barely survived on the third trial. "That's awful, dude", I said to myself, "you've got to learn how to swim." So I gave myself two years to learn to swim the butterfly. "But why the butterfly when you can't even swim?" Why not? "How do you know you could 'swim the butterfly'?" I'll swim 1000m of it.

# TextMate-WordPress Integration Test

Just began a conversion to using TextMate for writing $\LaTeX$, python, and wordpress posts. This wordpress post is written with Markdown syntax. How is this working out? Continue reading

# The Best Hangman Word

Monte and I were sessioning over a deck of cards when Kate came into the living room. She had the thinking expression on. "What's the best hangman word?", she asked. Without missing a beat, Monte replied, "Denim." It took both of us by surprise; denim?

# Love, Marriage, and Monte Carlo (1/3)

A three-part series that explores the Marriage Problem using Monte Carlo. This first part lays out the premise of the "problem".
In his Set Theory. An Introduction to Independence Proofs, Kunen develops $ZFC$ from a platonistic point of view because he believes that this is pedagogically easier. When he talks about the intended interpretation of set theory he says such things as, for example, that the domain of discourse $V$ is the collection of all (well-founded, when foundation is introduced) hereditary sets.

This point of view has always made me feel a bit uncomfortable. How can a variable in a first-order language run over the elements of a collection that is not a set? Only recently I realized that one thing is to be a platonist, and another thing is to believe such an odd thing.

A first-order theory of sets with a countable language can only prove the existence of countably many sets. Let me call them provable sets for short. Platonistically, we wish our intended interpretation of that theory to be one in which every provable set is actually the set the theory says it is. So we don't need our interpretation to contain every set, we just need that it contains at least the true provable sets. This collection is, really, a set, although it doesn't know it.

To be a bit more concrete, if one is a platonist and the cumulative hierarchy is what one has in mind as the real universe of sets, one can think that the $V$ of one's theory actually refers to an initial segment of that hierarchy, hence variables no longer run over the real $V$ but only over the elements of some $V_\alpha$.

There's a parallel to these ideas. For example, when we want to prove consistency with $ZFC$ of a given sentence, we do not directly look for a model of $ZFC$ where that sentence is true, but instead we take advantage of knowing that every finite fragment of $ZFC$ is consistent and that every proof involves only finitely many axioms.

My question is, then: is this position tenable or am I going awfully wrong? I apologize that this seems a philosophical issue rather than a mathematical one. I also apologize for stating things so simply (out of laziness).
# Category talk:Guild commands

Is the external link really adding something? We can't update it. It has a few more commands like the ritual bury commands and a column for "Taught By", but we can add those to the Commands page on the wiki if wanted. --Frazyl 03:27, 17 November 2010 (UTC)

Well... In the event Wallsy's ever goes down, let's update the wiki and just leave his link under a 'See Also' subcategory. That way we're covered and crediting him (if we're picking up anything from his page). --Helaena 04:12, 17 November 2010 (UTC)

I put it there in case there's any information there that's not easily accessible on the wiki already. If you want to get rid of it, I ask that you copy any missing info from it into the tables on the commands page, and then I have no objection to removing the link. I only made the page so I'd have a quick reference for that info, so if there's one here, I'm happy. Tiggum 06:41, 17 November 2010 (UTC)
# Let $A$ be the set of all functions $f:\mathbb{R}\to\mathbb{R}$ that satisfy the following two properties.

Let $$A$$ be the set of all functions $$f:\mathbb{R}\to\mathbb{R}$$ that satisfy the following two properties: (1) $$f$$ has derivatives of all orders, and (2) for all $$x,y \in \mathbb{R}$$, $$f(x+y)-f(y-x)=2xf'(y)$$.

Which of the following sentences is true?

(a) Any $$f\in A$$ is a polynomial of degree less than or equal to 1

(b) Any $$f \in A$$ is a polynomial of degree less than or equal to 2

(c) $$\exists f \in A$$ which is not a polynomial

(d) $$\exists f \in A$$ which is a polynomial of degree 4

It is a problem from TIFR GS-2018. I have found that polynomials up to degree 2 satisfy these properties, but I have to exclude the possibilities (c) and (d). I also found that $$f'(x)$$ has at most one real root and that the graph of $$f$$ is clearly symmetric about that root. How can I proceed further?

We have $$f(x+y)-f(y-x)=2xf'(y)$$

Apply $$\frac{\mathrm d}{\mathrm dx}$$: $$f'(x+y)+f'(y-x)=2f'(y)$$

and once more: $$f''(x+y)-f''(y-x)=0.$$

With $$x\leftarrow \frac t2$$, $$t\leftarrow \frac t2$$, this becomes $$f''(t)=f''(0).$$

Thus $$f$$ is a polynomial of degree $$\le 2$$.

• Thanks. But, there may be a typing error, it should $y \leftarrow \frac{t}{2}$ – Offlaw Nov 26 '18 at 5:21
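Not part of the original thread: a quick symbolic check (assuming SymPy is available) that every polynomial of degree at most 2 satisfies the functional equation, and that a degree-4 candidate does not, which is consistent with answer (b).

```python
import sympy as sp

x, y, a, b, c = sp.symbols("x y a b c")
f = lambda t: a * t**2 + b * t + c        # general polynomial of degree <= 2

# f(x+y) - f(y-x) - 2*x*f'(y) should be identically zero for every a, b, c
residual = sp.simplify(f(x + y) - f(y - x) - 2 * x * sp.diff(f(y), y))
print(residual)    # 0

# A degree-4 candidate fails, ruling out f(t) = t**4 as a member of A
g = lambda t: t**4
print(sp.expand(g(x + y) - g(y - x) - 2 * x * sp.diff(g(y), y)))   # 8*x**3*y, not 0
```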
# Mach Number and Velocity 1. Jun 23, 2014 ### LaReina 1. The problem statement, all variables and given/known data An object is flying through the air at M=0.5. The free stream temperature is equal to 180 K. At what speed should the object fly when the temperature is 100 K in order to maintain the same Mach number? (therefore ensuring compressibility effects are the same). What was the speed of the first object. 2. Relevant equations $M=\frac{V}{a}$ $a=\sqrt{γRT}$ 3. The attempt at a solution I've worked out the speed for the first object which is as follows $a=\sqrt{1.4\times287\times180}=268.931m/s$ $V=0.5\times268.931=134.465m/s$ However when I work out the speed for the second temperature using the exact procedure, I get 100.225 as an answer. The answer that has been given is 88.52m/s. 2. Jun 23, 2014 ### Staff: Mentor Please show us your work for the second temperature. Chet 3. Jun 25, 2014 ### LaReina $a=\sqrt{1.4\times287\times100}=200.448$ $V=200.448\times0.5=100.224$ 4. Jun 25, 2014 ### Staff: Mentor This calculation looks OK to me. Chet 5. Jun 25, 2014 ### dauto May be the question has a typo and it meant to ask what happens if the temperature drops 100K (which means it drops to 80K). That brings the answer closer to the answer provided.
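Not part of the thread: a quick numeric check of the arithmetic above, including the possibility raised in the last post that the temperature was meant to drop by 100 K (i.e., to 80 K).

```python
import math

GAMMA = 1.4    # ratio of specific heats for air
R = 287.0      # specific gas constant for air, J/(kg*K)
MACH = 0.5

def speed_for_mach(temperature_k: float, mach: float = MACH) -> float:
    """Flight speed (m/s) required to hold a given Mach number at temperature T."""
    speed_of_sound = math.sqrt(GAMMA * R * temperature_k)
    return mach * speed_of_sound

for T in (180.0, 100.0, 80.0):
    print(f"T = {T:5.1f} K  ->  V = {speed_for_mach(T):7.3f} m/s")
# 180 K -> 134.465 m/s, 100 K -> 100.224 m/s, 80 K -> about 89.64 m/s,
# the last being close to (though not exactly) the quoted answer of 88.52 m/s.
```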
CAT1995-72

1 vote, 416 views

Four sisters Suvarna, Tara, Uma, and Vibha are playing a game such that the loser doubles the money of each of the other players from her share. They played four games and each sister lost one game in the alphabetical order. At the end of the fourth game each sister had Rs. 32. What was the amount with Suvarna at the beginning of the game?

1. 60
2. 34
3. 66
4. 28

If we follow a backward approach, it is easy to find the answer. Each sister lost one game in alphabetical order, so Suvarna lost the 1st round, Tara the 2nd, Uma the 3rd, and Vibha the 4th (last) round.

| Round (amounts at the end of the round) | Suvarna | Tara | Uma | Vibha |
| --- | --- | --- | --- | --- |
| 4th (Vibha lost) | 32 | 32 | 32 | 32 |
| 3rd (Uma lost) | 32 / 2 = 16 | 32 / 2 = 16 | 32 / 2 = 16 | 32 + 16 + 16 + 16 = 80 |
| 2nd (Tara lost) | 16 / 2 = 8 | 16 / 2 = 8 | 16 + 40 + 8 + 8 = 72 | 80 / 2 = 40 |
| 1st (Suvarna lost) | 8 / 2 = 4 | 8 + 4 + 36 + 20 = 68 | 72 / 2 = 36 | 40 / 2 = 20 |
| Initial | 4 + 34 + 18 + 10 = 66 | 68 / 2 = 34 | 36 / 2 = 18 | 20 / 2 = 10 |

Therefore, Suvarna started with 66 rupees, option (3).

Related questions

1 vote, 515 views: Four sisters Suvarna, Tara, Uma, and Vibha are playing a game such that the loser doubles the money of each of the other players from her share. They played four games and each sister lost one game in the alphabetical order. At the end of fourth game each sister had Rs. 32. What was the amount with Uma at the end of the second round? 36 72 16 None of these

1 vote, 613 views: Four sisters Suvarna, Tara, Uma, and Vibha are playing a game such that the loser doubles the money of each of the other players from her share. They played four games and each sister lost one game in the alphabetical order. At the end of fourth game each sister had Rs. 32. Who started with the highest amount? Suvarna Tara Uma Vibha

1 vote: Use the following data: A and B are running along a circular course of radius 7 km in opposite directions such that when they meet they reverse their directions and when they meet, A will run at the speed of B and vice-versa, Initially, the speed of A is thrice the speed of B. Assume that ... at $M_{4}$. What is the distance travelled by A when they meet at $M_{3}$? $77$km $66$km $99$km $88$km

Use the following data: A and B are running along a circular course of radius 7 km in opposite directions such that when they meet they reverse their directions and when they meet, A will run at the speed of B and vice-versa, Initially, the speed of A is thrice the speed of B. Assume that they ... , and finally at $M_{4}$. Which is the point that coincides with M0? $M_{1}$ $M_{2}$ $M_{3}$ $M_{4}$
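Returning to the sisters problem solved above, here is a short forward simulation (not part of the original answer) that checks the backward calculation: starting from the derived initial amounts of 66, 34, 18, and 10, every sister does indeed end with Rs. 32.

```python
# Forward check of the backward calculation: each loser doubles everyone
# else's money, paying the total out of her own share.
money = {"Suvarna": 66, "Tara": 34, "Uma": 18, "Vibha": 10}

for loser in ["Suvarna", "Tara", "Uma", "Vibha"]:     # losses happen in alphabetical order
    payout = sum(amount for name, amount in money.items() if name != loser)
    for name in money:
        if name != loser:
            money[name] *= 2                          # everyone else's money is doubled
    money[loser] -= payout                            # the doubling is paid by the loser

print(money)   # {'Suvarna': 32, 'Tara': 32, 'Uma': 32, 'Vibha': 32}
```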
# Is pseudorandom number generator testing atypical?

Particular pseudo-random number generators are tested for their ability to produce sequences of numbers that behave like values of independent variables that are each uniformly distributed on [0,1) or (0,1). Most tests apply a statistical test such as a $$\chi^2$$ or Kolmogorov-Smirnov test to a function of data generated by the PRNG. If the p-value from a particular test is too close to 0 or to 1, that counts against the PRNG, since it is producing patterns that would be very improbable if the generator were truly uniformly distributed. If the p-value is not close to 0 or 1, we say that the PRNG has "passed" that particular test.

There are things about this procedure that seem odd to me, and I want to see whether I understand. Does the following reflect misunderstandings about PRNG testing or statistical testing in general?

1. The procedure is based on a null hypothesis that the PRNG produces numbers that are uniformly distributed. However, there's one sense in which it's known from the start that the output is not i.i.d. uniformly distributed: each number generated has a probability of 1 conditional on the internal state of the generator (and that state is a deterministic function of earlier state, etc.).

2. A related point: If you read Knuth or L'Ecuyer or other authors on this topic, even if a PRNG has passed all previous tests it's always assumed that there may be further tests that a given PRNG would not pass. The sense you get is not that this is null hypothesis testing in the usual sense. If a PRNG passes all tests so far, the conclusion is not that we cannot reject the assumption that the output is uniformly distributed. The conclusion is that the output looks enough like what a truly uniformly distributed r.v. would produce, that it's OK, as far as we know, to use this PRNG for simulations. (EDIT: I am assuming that the usual goal of frequentist testing is to make inferences about the nature of an underlying process. [I don't think this assumption is free of controversy, but I can cite statistical authorities that make it, if requested.] The point here is that the PRNG authorities are never so rash as to think that any PRNG actually is a process that produces uniformly distributed output. All they want is a good simulation of uniformly distributed output.)

3. The alternative hypothesis is not even a composite probabilistic hypothesis. It's not that the output has some probability distribution or other, though we have no clue as to what that would be. The alternative also includes the possibility that the output is not even probabilistically distributed at all (except in the sense that given a particular initial seed, the subsequent sequence of numbers has probability 1). Maybe it's OK that the only alternative is completely vague, but it means, for example, that it's impossible to calculate power.

4. The null in this case is unusual in that it doesn't represent a default state of affairs, or a low-information assumption, or lack of structure, or a safe assumption. If anything, the claim that the null is false is the default, low-information, lack-of-structure assumption, and it's dangerous to assume the null is true: If you incorrectly assume that your generator produces uniform-like output, your simulations might be misleading. (PRNG designers have to work very hard to design an algorithm that will not lead to rejecting the null.)

I understand that Cross Validated isn't a forum for debate.
I just want to know whether I am simply confused about something above. • Chapter 10 might contain answers to some of your questions. – Dimitriy V. Masterov May 29 at 20:50 • Your question seems overly broad. Perhaps with a little clarification it might be less so. 1. In what way are the aspects mentioned in point 2 unlike usual hypothesis testing for goodness of fit? 2. For that matter, how is point 3 unlike any typical omnibus goodness of fit testing? Note than you can calculate power against any specific alternative, and the power will in general be different for each such specific alternative. Neither of these points seem to be raising anything specifically different when testing RNGs compared to goodness of fit testing in other applications. – Glen_b May 30 at 0:37 • Thanks @DimitriyVMasterov. I had not read that piece by Cook. It really is a nice introduction to certain aspects of the subject. It's not too relevant, as it turns out, because much of the focus is on avoiding a buggy implementation of a good PRNG algorithm. My question concerns testing different algorithms when they are implemented as intended. The question is whether a particular algorithm does what it's supposed to do. This is a nontrivial challenge for PRNGs. – Mars May 30 at 0:55 • Thanks @Glen_b. I've added clarification to #2. About your comment on #3, and maybe the comment on #2 as well: I think these are actually answers, or partial answers, at least. You are telling me why my assumptions are wrong. That's what I wanted to know. – Mars May 30 at 1:04
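Not part of the original question or comments: a toy version of the kind of chi-squared uniformity test being discussed, applied to Python's built-in Mersenne Twister (SciPy is assumed to be available for the test statistic).

```python
import random
from scipy.stats import chisquare

# Bin a sample from the generator and compare the bin counts against the
# counts expected under an exactly uniform distribution on [0, 1).
random.seed(12345)
n_samples, n_bins = 100_000, 20

counts = [0] * n_bins
for _ in range(n_samples):
    counts[int(random.random() * n_bins)] += 1

stat, p_value = chisquare(counts)   # expected frequencies default to equal bins
print(f"chi-squared = {stat:.2f}, p-value = {p_value:.3f}")
# A p-value far from both 0 and 1 gives no evidence against uniformity on this
# particular test; as the question stresses, that is not proof of "randomness".
```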
The following number line contains the points A, B, C, D, and E. Complete each of the following prompts with a numerical answer only. Enter answers in their most simplified form.

The probability that a point chosen at random on AE is on AB is...

1/2
4/5
1/5
7/10

The probability that a point chosen at random on AE is on AB is 1/5.

What is the probability that a point chosen at random on AE is on AB? The total distance over AE is given as |−5 − 5| = 10, and the total distance covered over AB is |−5 − (−3)| = 2. On this note, the required probability is 2/10 = 1/5.
# 8.8 Vectors  (Page 7/22) Page 7 / 22 ## Verbal What are the characteristics of the letters that are commonly used to represent vectors? lowercase, bold letter, usually $\text{\hspace{0.17em}}u,v,w$ How is a vector more specific than a line segment? What are $\text{\hspace{0.17em}}i\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}j,$ and what do they represent? They are unit vectors. They are used to represent the horizontal and vertical components of a vector. They each have a magnitude of 1. What is component form? When a unit vector is expressed as $⟨a,b⟩,$ which letter is the coefficient of the $\text{\hspace{0.17em}}i\text{\hspace{0.17em}}$ and which the $\text{\hspace{0.17em}}j?$ The first number always represents the coefficient of the $\text{\hspace{0.17em}}i,\text{\hspace{0.17em}}$ and the second represents the $\text{\hspace{0.17em}}j.$ ## Algebraic Given a vector with initial point $\text{\hspace{0.17em}}\left(5,2\right)\text{\hspace{0.17em}}$ and terminal point $\text{\hspace{0.17em}}\left(-1,-3\right),\text{\hspace{0.17em}}$ find an equivalent vector whose initial point is $\text{\hspace{0.17em}}\left(0,0\right).\text{\hspace{0.17em}}$ Write the vector in component form $⟨a,b⟩.$ Given a vector with initial point $\text{\hspace{0.17em}}\left(-4,2\right)\text{\hspace{0.17em}}$ and terminal point $\text{\hspace{0.17em}}\left(3,-3\right),\text{\hspace{0.17em}}$ find an equivalent vector whose initial point is $\text{\hspace{0.17em}}\left(0,0\right).\text{\hspace{0.17em}}$ Write the vector in component form $⟨a,b⟩.$ $〈7,-5〉$ Given a vector with initial point $\text{\hspace{0.17em}}\left(7,-1\right)\text{\hspace{0.17em}}$ and terminal point $\text{\hspace{0.17em}}\left(-1,-7\right),\text{\hspace{0.17em}}$ find an equivalent vector whose initial point is $\text{\hspace{0.17em}}\left(0,0\right).\text{\hspace{0.17em}}$ Write the vector in component form $⟨a,b⟩.$ For the following exercises, determine whether the two vectors $\text{\hspace{0.17em}}u\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}v\text{\hspace{0.17em}}$ are equal, where $\text{\hspace{0.17em}}u\text{\hspace{0.17em}}$ has an initial point $\text{\hspace{0.17em}}{P}_{1}\text{\hspace{0.17em}}$ and a terminal point $\text{\hspace{0.17em}}{P}_{2}\text{\hspace{0.17em}}$ and $v$ has an initial point $\text{\hspace{0.17em}}{P}_{3}\text{\hspace{0.17em}}$ and a terminal point $\text{\hspace{0.17em}}{P}_{4}$ . 
${P}_{1}=\left(5,1\right),{P}_{2}=\left(3,-2\right),{P}_{3}=\left(-1,3\right),\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}{P}_{4}=\left(9,-4\right)$ not equal ${P}_{1}=\left(2,-3\right),{P}_{2}=\left(5,1\right),{P}_{3}=\left(6,-1\right),\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}{P}_{4}=\left(9,3\right)$ ${P}_{1}=\left(-1,-1\right),{P}_{2}=\left(-4,5\right),{P}_{3}=\left(-10,6\right),\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}{P}_{4}=\left(-13,12\right)$ equal ${P}_{1}=\left(3,7\right),{P}_{2}=\left(2,1\right),{P}_{3}=\left(1,2\right),\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}{P}_{4}=\left(-1,-4\right)$ ${P}_{1}=\left(8,3\right),{P}_{2}=\left(6,5\right),{P}_{3}=\left(11,8\right),\text{\hspace{0.17em}}$ and ${P}_{4}=\left(9,10\right)$ equal Given initial point $\text{\hspace{0.17em}}{P}_{1}=\left(-3,1\right)\text{\hspace{0.17em}}$ and terminal point $\text{\hspace{0.17em}}{P}_{2}=\left(5,2\right),\text{\hspace{0.17em}}$ write the vector $\text{\hspace{0.17em}}v\text{\hspace{0.17em}}$ in terms of $\text{\hspace{0.17em}}i\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}j.\text{\hspace{0.17em}}$ Given initial point $\text{\hspace{0.17em}}{P}_{1}=\left(6,0\right)\text{\hspace{0.17em}}$ and terminal point $\text{\hspace{0.17em}}{P}_{2}=\left(-1,-3\right),\text{\hspace{0.17em}}$ write the vector $\text{\hspace{0.17em}}v\text{\hspace{0.17em}}$ in terms of $\text{\hspace{0.17em}}i\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}j.\text{\hspace{0.17em}}$ $7i-3j$ For the following exercises, use the vectors u = i + 5 j , v = −2 i − 3 j ,  and w = 4 i j . Find u + ( v w ) Find 4 v + 2 u $-6i-2j$ For the following exercises, use the given vectors to compute u + v , u v , and 2 u − 3 v . $u=⟨2,-3⟩,v=⟨1,5⟩$ $u=⟨-3,4⟩,v=⟨-2,1⟩$ $u+v=〈-5,5〉,u-v=〈-1,3〉,2u-3v=〈0,5〉$ Let v = −4 i + 3 j . Find a vector that is half the length and points in the same direction as $\text{\hspace{0.17em}}v.$ Let v = 5 i + 2 j . Find a vector that is twice the length and points in the opposite direction as $\text{\hspace{0.17em}}v.$ $-10i–4j$ For the following exercises, find a unit vector in the same direction as the given vector. a = 3 i + 4 j b = −2 i + 5 j $-\frac{2\sqrt{29}}{29}i+\frac{5\sqrt{29}}{29}j$ c = 10 i j $d=-\frac{1}{3}i+\frac{5}{2}j$ $-\frac{2\sqrt{229}}{229}i+\frac{15\sqrt{229}}{229}j$ u = 100 i + 200 j u = −14 i + 2 j $-\frac{7\sqrt{2}}{10}i+\frac{\sqrt{2}}{10}j$ For the following exercises, find the magnitude and direction of the vector, $\text{\hspace{0.17em}}0\le \theta <2\pi .$ $⟨0,4⟩$ $⟨6,5⟩$ $|v|=7.810,\theta =39.806°$ $⟨2,-5⟩$ $⟨-4,-6⟩$ $|v|=7.211,\theta =236.310°$ Given u = 3 i − 4 j and v = −2 i + 3 j , calculate $\text{\hspace{0.17em}}u\cdot v.$ Given u = − i j and v = i + 5 j , calculate $\text{\hspace{0.17em}}u\cdot v.$ $-6$ Given $\text{\hspace{0.17em}}u=⟨-2,4⟩\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}v=⟨-3,1⟩,\text{\hspace{0.17em}}$ calculate $\text{\hspace{0.17em}}u\cdot v.$ Given u $=⟨-1,6⟩$ and v $=⟨6,-1⟩,$ calculate $\text{\hspace{0.17em}}u\cdot v.$ $-12$ ## Graphical For the following exercises, given $\text{\hspace{0.17em}}v,\text{\hspace{0.17em}}$ draw $v,$ 3 v and $\text{\hspace{0.17em}}\frac{1}{2}v.$ $⟨2,-1⟩$ $⟨-1,4⟩$ $⟨-3,-2⟩$ For the following exercises, use the vectors shown to sketch u + v , u v , and 2 u . For the following exercises, use the vectors shown to sketch 2 u + v . For the following exercises, use the vectors shown to sketch u − 3 v . For the following exercises, write the vector shown in component form. 
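Not part of the OpenStax text: a small sketch of the vector operations these exercises practice (component form from initial and terminal points, magnitude and direction, and the dot product), with a few of the answers above checked numerically.

```python
import math

def component_form(initial, terminal):
    """Vector <a, b> from an initial point to a terminal point."""
    return (terminal[0] - initial[0], terminal[1] - initial[1])

def magnitude(v):
    return math.hypot(v[0], v[1])

def direction_deg(v):
    """Angle measured counterclockwise from the positive x-axis, in [0, 360)."""
    return math.degrees(math.atan2(v[1], v[0])) % 360

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

print(component_form((-4, 2), (3, -3)))            # (7, -5), matching <7, -5> above
print(magnitude((6, 5)), direction_deg((6, 5)))    # 7.810..., 39.805... degrees
print(dot((-1, 6), (6, -1)))                       # -12
```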
Source: OpenStax, Precalculus. OpenStax CNX. Jan 19, 2016. Download for free at https://legacy.cnx.org/content/col11667/1.6
# Donkey Voting in Maltese General Elections

"have a look at how many MPs are in parliament because of their surname"

It was July 2017, and despite being finally elected as an MP, Hermann Schiavone was still willing to talk about electoral system reform to anyone who would listen. And in this case, that someone was MaltaToday's Yannick Pace. Schiavone wanted (and presumably still does) to reform Malta's STV system for multiple reasons, but the one we'll discuss here today is donkey voting.

## What is the Donkey Vote?

The donkey vote hypothesis is that in elections, candidates who feature at the top of voting ballots tend to do better than candidates who feature towards the bottom. Here's what a typical Maltese general election ballot looks like:

And sure enough, this example from the 2017 general election seems to have the donkey vote trademarks: a deliberate selection of the 1st and 2nd order candidates, before numbering sequentially from the top to the bottom. Donkey votes are usually a form of protest in countries where voting is mandatory, as in Australia, so its presence in Maltese elections is more peculiar. Instead of apathy, the most commonly flaunted reason is party loyalty: party diehards intentionally keep the vote within the party to increase the probability of transferred votes, maximising that party's gains.

## The Data

The extent to which donkey voting is influencing who gets elected, however, is unclear. Some like Schiavone (whose surname coincidentally places him close to the end in ballots) are categorical: "have a look at how many MPs are in parliament because of their surname" he had said in 2017. But a deep dive into the matter is hard to come by. The sole analytically inclined article I did find was more oriented towards the local elections.

But a treasure trove of Maltese political data does exist and is hosted here by the University of Malta. Originally started by Professor of Political Science John C. Lane, the project is now in the hands of local scholars and support staff. Could we use this to look at the phenomenon? We can certainly try.

To study this we'll load the general elections dataset, spanning all elections between 1921 and 2013. Firstly we'll look at two variables. BALL2 is the order that a candidate appeared in his party's group on the ballot. CT1 is a candidate's number of first count votes. We'll use ballot position as a factor to distinguish different groups, and 1st count votes as a dependent variable to see if there is variability in its value between the groups.

## Visualising the Hypothesis: Boxplot

A boxplot would be a great way to visually set up this hypothesis. To do this, I've used a subset of the data which only features elections from 1986 onwards. I decided on this split because the current political landscape was cemented at around that time.

RecentElections <- MalteseElections %>% filter(YEAR >= 1986, BALL2 != 0)

ggplot(RecentElections, aes(factor(BALL2), CT1, fill = factor(BALL2)))+ geom_boxplot()+ theme(legend.position = "none")+ labs(title = "First Count Votes by Ballot Order")+ xlab("Ballot Order")

The first thing that's apparent is the presence of several extreme values. This is logical: a handful of candidates are much more successful than many others.
To help us visualise things better, we’ll plot the log of First Count Votes for now: ggplot(RecentElections, aes(factor(BALL2), log(CT1), fill = factor(BALL2)))+ geom_boxplot()+ theme(legend.position = "none")+ labs(title = "Log of First Count Votes by Ballot Order")+ ylab("Log of First Count Votes")+ xlab("Ballot Order") What the Donkey vote hypothesis suggests is that the median (bold line in the bars) should be higher in the first few ballot positions compared to the others. And that does not seem to be the case. We can visualise our experiment in a slightly different way by doing away with the boxplots, and drawing a transparent dot for each candidate, with the count of votes on the y-axis and the ballot position on the x-axis. RecentElections %>% ggplot(aes(factor(BALL2), log(CT1), col = factor(BALL2)))+ geom_point(position = "jitter", alpha = 0.2)+ theme(legend.position = "none")+ labs(title = "Log of First Count Votes by Ballot Order")+ ylab("Log of First Count Votes")+ xlab("Ballot Order") What we’ll try to test is if the distribution of those dots in each group is different than the distribution of the whole data. Instead of looking at it visually, we can make it more rigorous by using a statistical test. Since we’re interested in seeing whether the variation between groups (ballot order) is greater than the variation we see within groups (the spread of CT1 in the same group), the logical test would be the one taught in every undergraduate statistics course: ANOVA. But before we get to that, let’s fix two things in our data. Firstly, we’ll only analyse data for the two main political parties, since independent candidates and small parties routinely contest districts with only a single candidate (in other words, ballot position is always 1). Secondly, we’ll group any position after 10 into a single group called “10+”. The rationale for this is simple. Every election year and district combination will have a ballot position 1, but very few will have a ballot position of 19 or 18. This step will ensure that we’ll be comparing groups with around the same number of data points. # First let's filter only for PN/PL and recode positions after 10 BigPartiesOnly <- RecentElections %>% filter(PARTY %in% c(13, 15)) %>% mutate(BallotPos = factor(ifelse(BALL2 >=10, "10+", BALL2), levels = c("1", "2", "3", "4", "5", "6", "7", "8", "9", "10+"))) Our data now looks like this: BigPartiesOnly %>% ggplot(aes(BallotPos, log(CT1), col = BallotPos))+ geom_point(position = "jitter", alpha = 0.2)+ theme(legend.position = "none")+ labs(title = "Log of First Count Votes by Ballot Order")+ ylab("Log of First Count Votes")+ xlab("Ballot Position") Now, time to run the test! # Conduct the analysis of variance test ANOVA <- aov(log(CT1) ~ BallotPos, data = BigPartiesOnly) # Summary of the analysis summary(ANOVA) ## Df Sum Sq Mean Sq F value Pr(>F) ## BallotPos 9 41 4.558 2.247 0.0171 * ## Residuals 1569 3183 2.028 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 And what do you know, a modest F-value! If there was no difference in first count votes between ballot position, the F-value, which is the ratio of the differences between the groups divided by the difference within the groups, would be close to 1. And our p-value is low, indicating that the probability of obtaining this result due to chance is also low. But all this tells us is that at least one of our groups is different from all the others. To see where the difference lies, we’ll have to dig deeper with a post-hoc test. 
What post-hoc tests do is compare all the different permutations of pairs together. However the issue with carrying out so many statistical tests is that eventually, one of them might end up being significant purely by chance when it is not, so many different frameworks of how to carry out post-hoc tests safely have been devised. In this case, we’ll use Tukey’s Honestly Significant Difference, invented by Princeton mathematician John Tukey (who also invented the boxplot and coined the term “bit”, among many other things. # Run Tukey's HSD on the ANOVA TukeyHSD(ANOVA) %>% tidy() %>% filter(adj.p.value < 0.05) ## # A tibble: 1 x 7 ## term contrast null.value estimate conf.low conf.high adj.p.value ## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 BallotPos 10+-1 0 -0.523 -1.01 -0.0399 0.0218 Since the output is literally 45 different statistical tests, I only filtered out for rows that are significant… which turned out to be only one: the comparison between the first ballot position and the ballot positions after the 10th. What this tells us is that there might be some difference in how first count votes are distributed between candidates who appear first and those who appear in the bottom of the ballot, but these differences don’t extend to candidates who appear first and those who appeared in any other position combination. All in all, this is weak evidence for donkey voting. It might be flimsier still. If the 1st and 2nd choices are (likely) deliberate, doesn’t the effect of the donkey vote come into effect after a few counts? While all we’ve shown is that ballot position is slightly relevant in first counts. So, let’s see if, say, the 5th count, when some of those transfers will have come into effect, shows a difference. ### Count 5… BigPartiesOnly %>% mutate(Count5 = as.numeric(Ct5)) %>% ggplot(aes(BallotPos, Count5, fill = BallotPos))+ geom_boxplot()+ theme(legend.position = "none")+ labs(title = "5th Count Votes (as percentage of party) by Ballot Order", subtitle = "Only PN and PL Candidates")+ xlab("Ballot Order") If anything, this graph shows less going on. Because those high 1st count vote values have ben transferred, we can do away with transforming our data. And another ANOVA shows no effect of ballot position on 5th count votes. summary(aov(as.numeric(Ct5)~BallotPos, data = BigPartiesOnly)) ## Df Sum Sq Mean Sq F value Pr(>F) ## BallotPos 9 1.122e+07 1246171 1.061 0.389 ## Residuals 1569 1.843e+09 1174696 ## Let’s see mean seated as a function of ballot order… Let’s approach this from another direction, and calculate the proportion of candidates seated for each ballot order. To do this, we’ll first filter where the Seated variable is either 0 (not seated) or 1 (seated) in our dataset. We do it this way because 2 codes for a candidate seated in the middle of a parliamentary session to replace a member who stepped down for instance. Next we’ll use a handy quirk of R. When you calculate the mean of a 0, 1 categorical vector, the result is the percent of the 1 occurring. BigPartiesOnly %>% filter(SEATED <= 1) %>% group_by(BallotPos) %>% summarise(PercSeated = mean(SEATED)) %>% ggplot(aes(BallotPos, PercSeated))+ geom_col()+ scale_y_continuous(labels=scales::percent)+ labs(title = "Proportion of Candidates Seated for Each Ballot Position", subtitle = "Only PN and PL Candidates")+ ylab("Percentage Seated")+ xlab("Ballot Order") And it seems all ballot positions have a roughly 20-30% chance of being elected, with positions 1, 2, 4 and 7 being roughly equal. 
It is position 8 that in fact seems the lowest. And while it’s not drastically lower, let’s try a Chi-squared test to be sure. After all, this is an entirely different hypothesis now: we’re saying that ballot position might have an influence on being seated in parliament (1) or not (0). Seated <- BigPartiesOnly %>% filter(SEATED <= 1) chisq.test(BigPartiesOnly$SEATED, BigPartiesOnly$BallotPos) ## Warning in chisq.test(BigPartiesOnly$SEATED, BigPartiesOnly$BallotPos): Chi- ## squared approximation may be incorrect ## ## Pearson's Chi-squared test ## ## data: BigPartiesOnly$SEATED and BigPartiesOnly$BallotPos ## X-squared = 23.871, df = 18, p-value = 0.1593 And since the p-value is large, we can’t say that the proportion of candidates seated is different according to the order with which they appeared in the ballot. ## The Nuanced Conclusion So, what have we learned? Well, if you contested Maltese elections for either of the two big parties since 1986, the order with which you appeared on the ballot largely didn’t influence your first count votes. The sole exception to this seems to be in the bottom few positions, and even then, the difference is only between these and the top first position. The difference is practically non-existent in your votes at the fifth count, and, perhaps most importantly, whether you are seated in parliament or not seems to be independent of your ballot position. This second conclusion suggests that even if donkey voting exists, it does not appear to shape which candidates get elected. Intuitively, the reason could be simple: each party only gets 2-3 seats per district, and voters usually start to donkey vote after a few deliberate choices. Anyway, since we do have a dataset of all Maltese elections spanning 1921 to 2013 loaded, let’s have some more fun… ### Top performing candidates Which candidates get the most votes? MalteseElections %>% arrange(desc(TOPS)) %>% select(NAME, YEAR, Dist, TOPS) %>% reactable() ### When has contesting Multiple Elections been a thing? Some candidates contest more than one district, either to improve the odds for themselves, or to increase first count votes for their party. Has the proportion of candidates who contested more than one district been changing through the years? DistrictsContested <- MalteseElections %>% filter(NAME != "* Non-Trans. *") %>% #Filter out non transferable votes group_by(NAME, YEAR) %>% summarise(count = n()-1) %>% group_by(YEAR) %>% summarise(PercContested2 = mean(count)) ## summarise() has grouped output by 'NAME'. You can override using the .groups argument. ggplot(DistrictsContested, aes(y = PercContested2, x = YEAR))+ geom_point()+ geom_line()+ scale_y_continuous(labels=scales::percent)+ labs(title = "Proportion of Candidates Contesting 2 Districts")+ ylab("Percentage of Candidates")+ xlab("Year") Looks like it was rarely a thing before the 1960’s. ### Which districts are the most oversubscribed? The two main parties make no attempts to try and limit their candidates in a district, and they often end up fielding many more candidates than available seats. Is this phenomenon steady across districts, and how has it evolved over time? ## summarise() has grouped output by 'YEAR', 'Dist'. You can override using the .groups argument. So around 4-5 candidates per available seat seems to be the norm in recent elections. 
There was a slight uptick in the 1960’s, and this probably has to do with the fact that those times were some of the only elections where we had a true multi-party system, with splinter parties led by Toni Pellegrini and Herbert Ganado. The 1962 election saw no less than 5 parties securing seats. Interestingly, my district, Gozo, seems to have the lowest number of candidates per seat. ### Who contested the most elections? MalteseElections %>% group_by(NAME) %>% summarise(TimesContested = max(AGAIN)) %>% arrange(desc(TimesContested)) %>% reactable() The record holder seems to be Mintoff, with a remarkable 14 elections contested. Many of the names here are interesting for one reason or another, but I think Amabile Cauchi is the one most deserving of a mention. The Gozitan MP kept a pet monkey, which one day escaped and climbed atop the steeple of Ghajnsielem’s old parish church. ### Gender Balance How has the proportion of women that contest the general elections evolved? Well, prior to 1947, it was 0, since women couldn’t even vote before this. It remained relatively meagre up until the mid 1990’s, and now is trending upwards. ### Candidates contesting through the years Which leads to another question. Has the number of candidates contesting changed through the years? MalteseElections %>% group_by(YEAR) %>% distinct(NAME) %>% tally() %>% ggplot(aes(factor(YEAR), n))+ geom_bar(stat = "identity")+ theme(axis.text.x = element_text(angle = 60))+ labs(title = "Number of Candidates Contesting the General Election")+ ylab("Number of Candidates")+ xlab("Election Year") Since this is a distinct count, candidates who contest 2 districts will only be counted once. And perhaps unsurprisingly, the record belongs to the 1962 general election, which saw 231 names contest. It’s been in the 175 candidate region since then. The post war 1945 election had the lowest number of candidates (16). ## Incumbency Effect A rich body of political science tells us that incumbency is a big boost in electability in democracies across the world. How big of a boost is it here? MalteseElections %>% filter(INCUMB != 99) %>% ggplot(aes(factor(INCUMB), CT1, fill = factor(INCUMB)))+ geom_boxplot()+ theme(legend.position = "none")+ labs(title = "Effect of Incumbency on 1st Count Votes")+ xlab("Incumbent") Incumbency <- MalteseElections %>% filter(INCUMB != 99) lm(CT1 ~ INCUMB, data = Incumbency) %>% summary() ## ## Call: ## lm(formula = CT1 ~ INCUMB, data = Incumbency) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1674.9 -412.7 -239.3 248.2 12279.1 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 453.73 19.91 22.79 <2e-16 *** ## INCUMB 1235.20 35.03 35.26 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 1054 on 4136 degrees of freedom ## Multiple R-squared: 0.2311, Adjusted R-squared: 0.231 ## F-statistic: 1243 on 1 and 4136 DF, p-value: < 2.2e-16 What the boxplot and linear regression tell us is that the average candidate in a Maltese General Election gets 453 votes. If that candidate is an incumbent, he or she gets another additional 1235 votes - quite a decent boost that’s equivalent to two thirds the quota usually.
# What allows a pull-back toy car to drive further than it was pushed? Imagine you have a pull back toy car. Its back part is on $$x_0$$. You push it down and move it in the back direction to the point $$y$$ (not marked): Then you leave the car to move away: Then you mark the final position by $$x_1$$: Let's say, that the distance you have pushed the car is $$d_1$$ and the distance the car has travelled is $$d_2$$. As you can see, the car has travelled more than before (same as $$d_1 < d_2$$). If you don't believe it, try it yourself. Why has that happened? The law of conservation of the energy tells us, that energy can't be created out of anything. What has happened? • upsbatterycenter.com/blog/pull-back-toy-motor-work And interesting question btw...! – joshuaronis Mar 22 '20 at 19:52 • Great drawing skills! I approve. And nice question too. – John Alexiou Mar 22 '20 at 20:41 • I think some context is missing for this question. E.g. what's the flywheel you refer to in the first sentence? And what is making the car move? – David Z Mar 23 '20 at 5:24 • @DavidZ: it looks like this is the kind of car toy you push back, pressing firmly to the ground. It stores energy as you push it and then you let i got and wheeeeee it rushes forward. At least this was the toy we got with my brother for Christmas in 1978. – WoJ Mar 23 '20 at 8:24 • Since there's been no clarification about that, I removed the reference to flywheels. The question seems to be perfectly understandable without it. – David Z Mar 24 '20 at 21:19 Are you are expecting that if you roll the car backwards $$30\ \mathrm{cm}$$ then release it, it should move forward $$30\ \mathrm{cm}$$? Why? Most toy cars wouldn't move at all. If you put a stone in a catapult, pull it back $$30\ \mathrm{cm}$$ then release it, it goes forward much further than $$30\ \mathrm{cm}$$. If you did this in empty space the stone would keep going indefinitely. Energy hasn't been created out of nothing. You have done work against the catapult, storing elastic energy. When you release the catapult the stored elastic energy is transformed into the kinetic energy of the stone, which is dissipated as heat and sound as the stone flies through the air and hits a target. If there is no air resistance or friction, and nothing impedes the stone, its kinetic energy remains constant forever – its speed doesn't change, it goes infinitely more than $$30\ \mathrm{cm}$$. The toy car is the same. Instead of an elastic band, it contains a spring. Pushing down engages a gear wheel. As you push the toy car backwards you wind up the spring quickly using a relatively large force. You do work, elastic energy is stored in the spring. When the car is released, it springs back up and a different gear wheel is engaged. Now the spring unwinds itself slowly, supplying a much smaller force to the toy car. (See Note.) Elastic energy is transformed into the kinetic energy of the car, which is dissipated by friction. The car loses its kinetic energy gradually; it slows down and stops. If there was no friction the car would keep going indefinitely on a flat surface. It is not the distances which you need to compare but the work done, which is force times distance. You give the car elastic potential energy by pushing with a large force over a short distance. The much smaller force of friction takes that energy away over a much longer distance after it has been transformed into kinetic energy. 
Suppose the friction force is $$0.1\ \mathrm N$$ and you push the car backwards with a force of $$5.1\ \mathrm N$$ through a distance of $$30\ \mathrm{cm}$$. Then you have done $$5.1\ \mathrm N \times 0.3\ \mathrm m = 1.53\ \mathrm{Nm}$$ of work. Friction works in both directions so $$0.1\ \mathrm N \times 0.3\ \mathrm m = 0.03\ \mathrm{Nm}$$ of the work you do is wasted pushing against friction. The remaining $$1.50\ \mathrm{Nm}$$ of energy gets stored in the spring. When the car is released it is transformed into the kinetic energy of the car. The friction force of $$0.1\ \mathrm N$$ slows the car. You can expect the car to go a distance of $$15\ \mathrm m$$ before stopping because $$0.1\ \mathrm N \times 15\ \mathrm m = 1.5\ \mathrm{Nm}$$. The car goes $$50$$ times further forwards than you moved it backwards. But you haven't created any energy. In fact, some energy was lost pushing against friction. Only $$1.50\ \mathrm{Nm}$$ of the $$1.53\ \mathrm{Nm}$$ of energy which you supplied was used to move the car forward. Note: When the spring is fully unwound it is disengaged from the wheels so that the car rolls forward freely instead of winding the spring back up. That's like the catapult which releases the stone; otherwise the stone would stretch the elastic again and keep oscillating until its kinetic energy was used up. • A better example might be a bicycle wheel. Turn the bike upside down, as if you were changing a tire, and give the front wheel a little shove, moving the rim maybe 1/20 of a meter. It will keep rotating for quite a while (at least if your bearings are in good shape and your brakes aren't rubbing), with the rim travelling a distance of many meters. – jamesqf Mar 23 '20 at 16:56 • I think you need to bold the point that the gearing to the spring is different when releasing. I'd never concidered this before, and didn't realise it was using different gearing in reverse, to forward (otherwise you coud push it forward, and it'd shoot backwards too!) – djsmiley2kStaysInside Mar 23 '20 at 17:51 • @jamesqf Thanks. Yes that is a simpler example than the catapult but it is more difficult to compare that with the motion of the car. – sammy gerbil Mar 23 '20 at 20:27 • I've played with cars that have incorrectly made flywheels that releases the energy in a quick burst; they barely go anywhere because the force is too high; it instantly overcomes the wheel's static friction against the surface and can even flip the car over. – Nelson Mar 24 '20 at 3:12 • @djsmiley2kStaysInside: only powering the wheels in one direction could be achieve with a simpler ratchet (which it already has so you can't over-unwind the spring). Separate gearing matters because of practical considerations like limited traction when only its own weight is pressing it down. And because accelerating with the full pull-back force for only ~30cm would make it go out of control from small variations in traction / balance leading to turning. Plus turning all that energy into kinetic right away, then coasting, wouldn't work well. – Peter Cordes Mar 24 '20 at 14:12 It works kind of like the sketch below. When you push down and backwards on the car, a high ratio gear meshes that winds the spring, so a few inches backward push turns the car wheels back a few turns and winds the spring several turns. When you release the car, the body lifts. This un-meshes the first gear and meshes a low ratio gear instead, so several unwinding turns of the spring results in many forwards turns of the wheels. 
Ignore the belt drive I have inserted between the gear usually both gears mesh with just one drive wheel - but that would have made my sketch messy and harder to understand. Maybe a bit like this in reality. • Good description of what's happening inside the car, but the gearing ratio is completely irrelevant to the fact that the car can move forward farther than you pull it back. You can achieve the same thing just by wrapping a rubber band around an axle with no gears at all. – Nuclear Hoagie Mar 23 '20 at 14:04 • some cars don't have the push down action and there's some kind of a ratchet like setup that does the gear engagement instead – htmlcoderexe Mar 24 '20 at 9:59 • @NuclearWang The different gearing ratio is not completely irrelevant. Without it, the car would stop powering itself at the point where we started pushing it backwards and then just roll a bit further until it stops. However, it is easy to observe that this is not what happens, so the simpler answer would just seem wrong and require an immediate follow-up question. – JiK Mar 24 '20 at 15:16 • @JiK I agree the gearing ratio helps to overcome the real mechanical inefficiencies of a cheap toy car to give it more "zip". But in principle, if the car could coast with very low friction, you could even reverse the gearing ratio and have it work - the drive would disengage before where it started, but the car would continue to roll an arbitrarily far distance. – Nuclear Hoagie Mar 24 '20 at 15:29 • Regarding gear ratios. In my experience (a few years out of date as my son is past the age where toy cars were a thing), there are two types of pull-back car. In one, the pull-back stores energy in the spring and then releases it fairly quickly when the car is released. In these cars, most of the forwards motion is just coasting, after the energy has been transferred back through the wheels. In the second, the gear ratio is significant as the car is driven for a significant distance.The second version is more fun as the car can climb obstacles incommensurate with its apparent forward momentum. – Penguino Mar 25 '20 at 0:55 tl;dr The toy car can go forward longer because it's not being resisted by an equal resistive force. By contrast, pendulums are resisted by an equal resistive force, so they'll go no further than you pull them back. # It's not a pendulum. Imagine a pendulum hanging still at the center. That's like the toy car. If you pull back the pendulum, that's like winding up the toy car. And if you release it, it'll launch forward, like the toy car. The pendulum will go no further past the center than you pulled it back, because it's storing up gravitational potential energy as it goes along. The toy car, by contrast, isn't storing energy back up in a second spring to launch backwards in the opposite direction; it's just letting its kinetic energy carry it. • It's more like a pendulum where the bob is released at the bottom of the back-swing and thus continues to roll a long way (no longer trying to go back up the gravity well) – Carl Witthoft Mar 25 '20 at 12:09 I think a few details are hidden in unclarity of the question. Very nice drawing skills indeed though! ### Detail 1 1. A flywheel is a wheel with mass that starts rotating, and stores its energy as kinetic energy (rotating energy). 2. A torsion spring stores its energy as mechanical energy by winding up the spring. This potential energy. I think the car you talk about, which goes forward when pulled back, does not use a flywheel, but actually uses a torsion spring. 
as shown in the link provided by Joshua Ronis in the comments. ### Detail 2 The car you sketched moves along a horizontal surface. That means it does not gain any gravitational potential energy when it moves forward (translates). (It does lose a bit/all of its energy on: air-friction(mainly), rolling friction(mainly), creating noise(small), and I think temperature radiation (small). So for example, if there was no friction anymore once the car has accelerated in forward direction, the car would just keep on travelling forward forever, (a bit like satellites in high orbits that appear to go on forever, since they experience very little friction (when all they needed was an initial push by a rocket when they get up there {in reality "getting up there and giving the initial push" is usually mingled for energy efficiency though}). Therefore, the car can indeed travel beyond its initial point of push back. What that implies is that the friction forces it experiences are lower than the forces that are generated by the flywheel/torsion springs potential energy(over the distance up to the starting point). The forces that can be created by the flywheel/torsion springs (over the distance up to the starting point) must be lower than the forces you put on it with your hand when rolling it backwards (due to the conservation of energy and real-life {mechanical} energy translation losses). ### Mathematical description of answer This could be mathematically described with: $$s=\frac{1}{2}\cdot a\cdot t^2$$ where: • $$s$$ = distance travelled by car in $$m$$ • $$a$$ = acceleration in $$\frac{m}{s^2}$$ (coming from $$f-d=m\cdot a$$) • $$f-d$$ = the accelerating force in newtons created from the flywheel/torsional spring - the drag the car experiences from friction. • $$m$$ = mass of the car in $$kg$$ • $$t$$ = the time in seconds Which can be rewritten to: $$s=\frac{1}{2}\cdot \frac{f-d}{m}\cdot t^2$$ hence if the force $$f$$ is large enough, and the drag $$d$$ is small enough, $$s$$ will become arbitrarily large if time becomes large enough. (In reality, $$f$$ is a function of time that goes to 0 in a nonlinear way). • If there were friction, then one couldn't wind it up by rolling it on the floor. – Acccumulation Mar 23 '20 at 21:28 • Thank you, I included a nuance that ensures the no-friction scenario is discussed only after the car has accelerated in forward direction. Since the acceleration forward happens after winding it up, the description allows friction while winding the car up. – a.t. Mar 25 '20 at 13:06 The spring is what stores the energy that you transform from your mechanical energy by pushing back the car. Now the flywheel needs to be very heavy, actually heavier then the car itself, thus, when you release the car, the spring transforms back the stored potential energy onto the flywheel, starting it to roll. Now why does the flywheel roll more forward then backwards? It is because it does have inertia. When you push the car back, and load the energy into the spring, you do not use the flywheel (and its inertia) to move the car back at all, you just use your mechanical force. When you release the car, the flywheel is actually rolling and its inertia is what drags the car forward until the flywheel loses this inertia caused by the slowdown because of friction on the axle (caused really by gravity).
# Use the upper and lower Riemann sums to approximate the area of the region using `m=5` equal subintervals

Estimate the area under the curve `y=v(1-x^2)`, `0<=x<=1`, `v` constant

http://www.webassign.net/larson/4_02-30.gif

With `m=5` equal subintervals, each subinterval has width `0.2`. Since `f(x)=v(1-x^2)` is decreasing on `[0,1]`, the function attains its highest value on each subinterval at the left endpoint and its lowest value at the right endpoint.

The upper sum `U` is the sum of the highest point of the function in each of the `m=5` intervals multiplied by the width of the intervals:

`U = 0.2(f(0)+f(0.2)+f(0.4)+f(0.6)+f(0.8)) = 0.2v(1 + 0.96 + 0.84 + 0.64 + 0.36) = 0.760v`

The lower sum `L` is the sum of the lowest point of the function in each of the `m` intervals multiplied by the width of the intervals:

`L = 0.2(f(0.2)+f(0.4)+f(0.6)+f(0.8)+f(1)) = 0.2v(0.96+0.84+0.64+0.36+0) = 0.560v`

The integral is approximated by the midpoint of these two bounds, `L+(U-L)/2 = 0.660v`:

`int_0^1 v(1-x^2)dx approx 0.660v`  answer
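For readers who want to verify the arithmetic, here is a short R sketch (not part of the original solution) with the constant `v` factored out, since it only scales every term:

```r
# Upper and lower Riemann sums for f(x) = 1 - x^2 on [0, 1] with m = 5 subintervals
f <- function(x) 1 - x^2
m <- 5
width <- 1 / m
x <- seq(0, 1, by = width)           # subinterval endpoints 0, 0.2, ..., 1
U <- width * sum(f(x[1:m]))          # left endpoints: 0.76 (f is decreasing, so these are maxima)
L <- width * sum(f(x[2:(m + 1)]))    # right endpoints: 0.56 (minima)
c(upper = U, lower = L, estimate = (U + L) / 2)   # estimate = 0.66; the exact integral is 2/3
```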
In existographies, Amedeo Avogadro (1776-1856) (IQ:175|#285) (GCE:21) (CR:11) was an Italian chemist noted for his 1811 hypothesis that equal volumes of all gases, at the same temperature and pressure, contain the same number of molecules.

Overview
In 1811, Avogadro, in his "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", advanced the hypothesis that all gases under the same conditions of temperature and pressure, in unit volume, have the same number of molecules. In formulating what became Avogadro's law, he was supposedly one of the first to use the term "molecule" as distinct from an atom. The following is the Avogadro constant NA:

$N_{\rm A}=6.022 \times 10^{23} \left ( \frac{entities}{mol} \right ) \,$

Thermodynamics
In thermodynamics, the ideal gas constant R divided by the Avogadro constant is the Boltzmann constant kB:

$k_{B} = \frac{R}{N_{\rm A}}\,$

which is used a good deal in statistical thermodynamics. In 1895, German chemist Walther Nernst wrote an early book on chemical thermodynamics based on Avogadro's law. [1]

References
1. Nernst, Walther. (1895). Theoretical Chemistry: from the Standpoint of Avogadro's Rule & Thermodynamics (section: The Measure of Affinity, pgs. 586-88). MacMillan and Co.
# How do you simplify 10 sqrt6-3 sqrt6?

Aug 5, 2016

$10 \sqrt{6} - 3 \sqrt{6} = 7 \sqrt{6}$

#### Explanation:

It is exactly the same as simplifying $10x - 3x$: treat $\sqrt{6}$ as the common factor.

Since $10 - 3 = 7$,

$10 \sqrt{6} - 3 \sqrt{6} = (10 - 3)\sqrt{6} = 7 \sqrt{6}$
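A quick numerical check (not part of the original answer), here in R:

```r
10 * sqrt(6) - 3 * sqrt(6)   # 17.14643
7 * sqrt(6)                  # 17.14643 -- the two expressions agree
```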
# 8-5(2a+3)=

## Simple and best practice solution for 8-5(2a+3)= equation. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it as a solution for your homework. If it's not what you are looking for, type your own equation into the equation solver and let us solve it.

## Solution for 8-5(2a+3)= equation:

8-5(2a+3)=0

We simplify the equation to a form which is simple to understand.

Reorder the terms in parentheses: 8+(-10a-15)=0

Remove unnecessary parentheses: 8-10a-15=0

We move all terms containing a to the left and all other terms to the right: -10a=0-8+15

We simplify the left and right side of the equation: -10a=+7

We divide both sides of the equation by -10 to get a: a=-0.7
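As a sanity check, the root can also be confirmed numerically; a small R sketch (not part of the original page):

```r
f <- function(a) 8 - 5 * (2 * a + 3)   # left-hand side of the equation
uniroot(f, c(-10, 10))$root            # ~ -0.7
f(-0.7)                                # 0, confirming that a = -0.7 solves the equation
```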
## Stream: general ### Topic: tidy lost my metavars #### Johan Commelin (Nov 29 2018 at 09:29): Somehow tidy claims it closed all goals, but the kernel says there are still metavariables left. Is there a good approach to debugging this? Somewhere a metavariable got removed from the goal-list without being fully instantiated. I guess it should be possible to track this, right? #### Mario Carneiro (Nov 29 2018 at 10:07): it is possible to write a tactic that will tell you if the current tactic state is broken, but you will have to sprinkle it around and it will often give false positives because of focus and such #### Mario Carneiro (Nov 29 2018 at 10:08): the recover tactic does this, essentially #### Johan Commelin (Nov 29 2018 at 10:12): Thanks. Didn't know about recover. I'll try it out. #### Johan Commelin (Nov 29 2018 at 12:06): Oh by the way, recover worked. It figured out that there was some naturality condition that wasn't proven. I don't know how it got lost. #### Keeley Hoek (Nov 29 2018 at 12:11): It'd be really great to see a reproducible case of that Johan, probably there is a bug in a tactic somewhere #### Johan Commelin (Nov 29 2018 at 12:26): @Keeley Hoek https://github.com/leanprover-community/mathlib/blob/sheaf/category_theory/presheaf.lean#L113 Voila. I retried this with a freshly restarted Lean. Problem still occurs. I have no idea how I could build a MWE out of this. It's pretty deep down in ugly maths. #### Keeley Hoek (Nov 29 2018 at 13:40): Seems like a bug in constructor to me #### Keeley Hoek (Nov 29 2018 at 13:41): For anyone who is interested: def oopsie (F : C ⥤ D) : functor.id (presheaf C) ⟹ yoneda_extension F ⋙ restricted_yoneda F := begin constructor, -- One goal recover, -- Two goals :O sorry end #### Johan Commelin (Nov 29 2018 at 13:42): But I guess this ties in to the auto_params, doesn't it? #### Keeley Hoek (Nov 29 2018 at 13:43): I'm not sure I understand #### Keeley Hoek (Nov 29 2018 at 13:43): I mean, surely it shouldn't erase a metavar it creates from history #### Johan Commelin (Nov 29 2018 at 13:43): Maybe constructor is throwing away goals that have an auto_param attached to them? #### Keeley Hoek (Nov 29 2018 at 13:45): I wonder if the extract_opt_auto_param in get_constructors_for has anything to do with it Actually, I bet it is the mk_const on line 23 of constructor_tactic.lean in lean core #### Keeley Hoek (Nov 29 2018 at 13:45): That could create metavariables which don't get fully bound by the apply maybe #### Johan Commelin (Nov 29 2018 at 13:48): Thanks for debugging this! Last updated: May 08 2021 at 18:17 UTC
# Separate output screen; clearing (cleaning) output screen 1. Is there a way to split the screen so that the code is visible on top, and the output generated from its execution appears at the bottom? 2. Is there any way to clear the screen from previous runs, so that the result of execution is presented in a clean screen, something like the cls command in BASIC? • I don't think the functionality exists in the standard notebook interface, although it could probably be created in a custom notebook with the help of dynamic interactivity. I could imagine this might be useful for some things, but notebooks are not organized like that by default. – Jens Apr 7 '15 at 2:41 • In the book "An Introduction to Programming with Mathematica" by Wellin, Gaylord & Kamin all of chapter 10 is on programming the "front end" of Mathematica. Using functions like NotebookCreate and NotebookWrite you might be able to do some or much of what you want. – Bill Apr 7 '15 at 3:15 Cls := (SelectionMove[InputNotebook[], All, Notebook]; $Post = (If[Head@$outputNB == Symbol, $outputNB = CreateNotebook[]]; If[# === Null, 1;, Paste[$outputNB, #]]) &; Cls is the screen-cleaning command, it deletes all the cells in current notebook. Then the \$Post variable is modified to redirect all the outputs. • Nice solution. I would use cls[] for the clear command as that is safer and more conventional. May 6 '15 at 6:30
• A worker makes a toy in every 2 h. If he works for 80 h, then how many toys will he make?

A) 40 B) 54 C) 45 D) 39

Answer: A) 40

Let the number of toys be x. More hours, more toys (direct proportion): 2 : 80 :: 1 : x

Therefore, x = $$\Large \frac{80 \times 1}{2}$$ = 40 toys

##### Similar Questions

1). 12 men can do a piece of work in 24 days. How many days are needed to complete the work, if 8 men are engaged in the same work? A). 28 B). 36 C). 48 D). 52

2). If 45 m of a uniform rod weighs 171 kg, then what will be the weight of 12 m of the same rod? A). 49 kg B). 42.5 kg C). 55 kg D). 45.6 kg

3). 22 men can complete a job in 16 days. In how many days, will 32 men complete that job? A). 14 B). 12 C). 16 D). 11

4). If 10 spiders can catch 10 flies in 10 min, then how many flies can 200 spiders catch in 200 min? A). 2000 B). 5000 C). 4000 D). 3000

5). 2000 soldiers in a fort had enough food for 20 days. But some soldiers were transferred to another fort and the food lasted for 25 days. How many soldiers were transferred? A). 400 B). 450 C). 525 D). 500

6). If in a hostel, food is available for 45 days for 50 students. For how many days will this food be sufficient for 75 students? A). 25 days B). 28 days C). 30 days D). 40 days

A). 12 days B). 10 days C). $$\Large 8\frac{2}{3}$$ days D). $$\Large 9\frac{3}{4}$$ days
# Particle in a box with the finite depth Tags: 1. Jul 23, 2015 ### fricke For particle in a box with the finite depth, is it traveling wave? or standing wave? I am confused with its ability to pass through the potential walls that is classically forbidden area which makes me think it is traveling wave. But for particle in a box with infinite potential, I understand that it is standing wave since the presence of infinite potential walls makes a restriction towards the wave function. So, I kind of have no idea if it is traveling wave or standing wave for particle in a box with the finite depth. Help me please, thank you. 2. Jul 23, 2015 ### ShayanJ At first lets see what is a standing wave. Maybe calling such a thing a wave is misleading, because a wave is, by definition, accompanied by propagation of energy but a standing wave doesn't propagate any energy. The equation of a standing wave is of the form $\psi(x,t)=\chi(t) \phi(x)$. The point in such a definition is that the spatial parts gives an amplitude for the oscillation at a particular point and the temporal part is responsible for that oscillation. So in a standing wave, you only have an infinite number of oscillators lined up that have nothing to do with each other. Now by the criterion $\psi(x,t)=\chi(t) \phi(x)$, any energy eigenstate of a system with a time-independent potential, is a standing wave because the time dependence of the wave-function is always given by multiplying the spatial part by a $e^{-i\frac E \hbar t}$, so the wave-function of the energy eigenstate is always of the form $\psi(x,t)= e^{-i\frac E \hbar t} \phi(x)$. But if you consider a state that is the superposition of several energy eigenstates, then you may have a travelling wave. The point here is that when your problem is indicating that the world is divided into several regions each with a different potential, then you should solve the Schrodinger equation in each region separately and so the above considerations are different for each region. Another point is that the penetration of the wave-function in the classically forbidden region is done via a exponentially decaying function which is not a wave. But even if the potential was something else that implied that the penetration was done via a wave, then we could have a standing wave in one region that connects to a travelling wave in another region. It would be no problem if you have the right interpretation in mind. 3. Jul 23, 2015
Matlab plotting Homework Statement I'm a little lost on how to plot this data and function. I included the homework question and my attempt at plotting in the attached picture. I'm pretty sure what I have is completely wrong and I honestly don't have much of an understanding of matlab, so the more you dumb it down the better. Any help is greatly appreciated! :) The Attempt at a Solution Attachments • 71.1 KB Views: 387 • 8.5 KB Views: 379 Related Engineering and Comp Sci Homework Help News on Phys.org jedishrfu Mentor Your t array has more elements than your Q array and so MATLAB can't pair them together for an (x,y) point to plot. As an example: Matlab: x=[0:1:10] y=x.*x plot(x,y) produces two vectors ##x## and ##y## where ##y=x^2## and so the plot(x,y) draws a simple parabola. In your case, MATLAB can't match up each value in t with one in Q and hence issues the error message you got. Ask yourself why does t have a different length from Q? Isn't Q dependent on t somehow? Here's the definition of linspace which may be the source of your problem: http://www.mathworks.com/help/matlab/ref/linspace.html?s_tid=gn_loc_drop Notice you used a 3 argument version where you want to create a vector t from 0 to 10 and you want 100 points and thats what you got but the Q is only eight points. So one solution is to modify the linspace arguments. jdawg donpacino Gold Member You can use the length function to evaluate vector lengths. Then make sure they are the same size jdawg Ok, I tried changing a few things and managed to get a plot to come up, but I'm not super confident that its right. I was thinking that I was supposed to use linspace somehow to plug t values into my q function and then plot the actual Q and t vectors as data points on the same graph.. I hope that makes sense. Or could I just say something like plot q and then somehow add the Q and T vectors onto the same plot as data points? Also, did I type the q function correctly? Matlab keeps giving me errors when I try to run it. Attachments • 54.7 KB Views: 357 donpacino Gold Member Ok, I tried changing a few things and managed to get a plot to come up, but I'm not super confident that its right. I was thinking that I was supposed to use linspace somehow to plug t values into my q function and then plot the actual Q and t vectors as data points on the same graph.. I hope that makes sense. Or could I just say something like plot q and then somehow add the Q and T vectors onto the same plot as data points? Also, did I type the q function correctly? Matlab keeps giving me errors when I try to run it. A few things.... 1. that plot looks correct. 2. you can use the following commands to make your plot look better xlabel('string'); ylabel('string'); title('string') legend('plot1','plot2',...) 3. exp is a function, get rid of the ^ exp(stuff) will execute the math function e^stuff 4. just do this plot( t , vector1 , t , vector2 ); jdawg donpacino Gold Member Ok, I tried changing a few things and managed to get a plot to come up, but I'm not super confident that its right. I was thinking that I was supposed to use linspace somehow to plug t values into my q function and then plot the actual Q and t vectors as data points on the same graph.. I hope that makes sense. Or could I just say something like plot q and then somehow add the Q and T vectors onto the same plot as data points? Also, did I type the q function correctly? Matlab keeps giving me errors when I try to run it. in general the matlab forums are VERY good. 
when you get an error, just toss it in google. when you want to say plot two vectors, type "matlab plot two vectors" into google. You'll be surprised by what has already been answered and the guides mathworks have put together. jdawg You were so much help!! Thanks a bunch!
# Can cycles Wireframe Input be coerced into displaying Tris, Quads and Ngons

edit 2 years later: Here is an alternative answer that meets the desired end-result using FreeStyle.

Is there a way to make the wireframe input show the geometry as BMesh polygons instead of the raw triangulated mesh? The example below displays the wireframe of a mesh in edit mode (as you can see it doesn't have any triangles). When I render the mesh it appears triangulated, but being able to show quads and ngons (instead of only tris) would be more useful to me.

• Which wireframe material is this? Can you link to it please? – CharlesL Jun 3 '13 at 21:35
• it's a mix of wireframe input and ambient occlusion shader. nothing special. dl.dropboxusercontent.com/u/3397495/blender_related/… – zeffii Jun 3 '13 at 21:42
• @zeffii had a look around, this might be what you are looking for, I didn't go through it yet tho. blendswap.com/blends/view/41864 – iKlsR Jun 3 '13 at 21:46
• @iKlsR that does look promising! – zeffii Jun 3 '13 at 21:50
• One of the techniques (UV based) of the .Blend file on Blendswap is explained here: youtube.com/watch?v=zjhdGY21WqQ But it didn't work out for me, as the lines get drawn in different thicknesses according to their face-side ratio. With fairly evenly distributed, equally sized polygons it works quite well. Hope that helps – Damir Jun 3 '16 at 10:15
183_notes:gravitation Section 3.1, 3.2, 3.3 and 3.4 in Matter and Interactions (4th edition) Earlier, you read about the gravitational force near the surface of the Earth. This force was constant and was always directed “downward” (or rather toward the center of the Earth). In these notes, you will read about Newton's formulation of the gravitational force that (in his day) helped explain the motion of the solar system including why the Sun was at the center of the solar system. Using a number of empirical observations (by Tycho Brahe and Johannes Kepler) of the motion of various astronomical objects, Isaac Newton was able to develop an empirical formula for the interactions of the those objects that could predict the future (and explain the past) motion of those objects. This formula became known as Newton's Universal Law of Gravitation. We will refer to it as the Model of the Gravitational Force 1). Newton found that the interaction between two objects with mass is attractive, directly proportional to the product of their masses, inversely proportional to the square of their separation, and directed along the line between their centers. The figure to the right illustrates the force that planet 2 exerts on planet 1. To be explicit, consider the vector ($\vec{r}$) that points from planet 2 to planet 1. If the location of planet 1 relative to the origin is $\vec{r}_1$ and the location of planet 2 relative to the same origin is $\vec{r}_2$, then this relative position or separation vector can be mathematically represented like this: $$\vec{r} = \vec{r}_1 - \vec{r}_2$$ The separation vector is represented by the black arrow in the figure to the right. The length of this separation vector ($|\vec{r}|$) is the how far apart the two planets are. The unit vector that points from planet 2 to planet 1 is given by, $$\hat{r} = \dfrac{\vec{r}}{|\vec{r}|}$$ With these vectors written, you can now write down Newton's model of the gravitational force from the description above, $$\vec{F}_{grav} = -G\dfrac{m_1 m_2}{|\vec{r}|^2}\hat{r}$$ where $G$ is a constant of proportionality that characterizes the strength of the gravitational force. This force is represented by the red arrow in the figure to the right. In SI units, $G = 6.67384 \times 10^{-11} \dfrac{m^3}{kg\:s^2}$. #### Why the minus sign? The gravitational force is an attractive force. That is, two objects that interact gravitationally are attracted to each other. The gravitational force formula uses the separation vector ($\vec{r}$) that points from the object that exerts the force to the object that experiences the force. For example, in the figure above, $m_2$ exerts the force on $m_1$, so the separation vector points from $m_2$ to $m_1$ (black arrow in the figure above). But, the force that $m_1$ experiences is directed towards $m_2$; it is attracted towards $m_2$. The minus sign ensures that the force (red arrow in the figure above) points in this direction. The gravitational force the Earth exerts on the Moon is the same magnitude as the gravitational force the Moon exerts on the Earth. The gravitational force provides the first example of Newton's 3rd Law, which you might have heard colloquially as “For every action, there is an equal and opposite reaction.” Unfortunately, this colloquialism is a terribly inaccurate definition that gets applied incorrectly quite often, even by the Mythbusters! Newton's 3rd Law results from the idea that a force quantifies the interaction between two objects. 
You can also think of it as an empirical fact, which stems from our definition of force. That is, we observe when one object exerts a force on another object, the second object exerts a force on the first object of the same size but opposite in direction. To be more concrete, you can think about the gravitational interaction between the Earth and the moon (shown in the figure below). The magnitude of these gravitational forces are the same (see the equation above), but the vector direction for each always points directly towards the other object. We will find other examples of Newton's 3rd Law pairs when you learn about contact interactions. When we discuss contact interactions, it turns out, these are the result of the electrostatic force. #### If the forces are the same size, why isn't the motion the same? The motion of systems is governed by the Momentum Principle. In this case, you might find it useful to think about the acceleration of the system, which tells you how the velocity of the system changes. While the Earth and Moon experience the same size gravitational force, the small mass of the Moon (compared to the Earth) results in a much larger acceleration for the Moon, and this change in the Moon's velocity is large (compared to the Earth's). #### Acceleration due to the gravitational force Consider a person ($m_{person}$) who is standing on the surface of the Earth ($R_{Earth}$ from the center of the Earth). The magnitude of the force acting on either the person due to the Earth or on the Earth due to the person is the same size, namely, $$|F_{grav}| = G\dfrac{m_{person}M_{Earth}}{R_{Earth}^2}$$ where $|F_{grav}|$ is simply the magnitude of the gravitational force. If you want to find the magnitude of the acceleration that the person experiences as a result of the gravitational force, simply divide the above equation by the mass of the person (i.e., $a = F/m$ for the net force), $$|a_{person}| = \dfrac{|F_{grav}|}{m_{person}} = G\dfrac{M_{Earth}}{R_{Earth}^2}$$ This acceleration is fully defined by known quantities (i.e., $G$, $M_{Earth}$, and $R_{Earth}$) and turns out to give the Near-Earth Gravitational acceleration ($g=9.81 \dfrac{m}{s^2}$). If instead, you are interested in the acceleration the Earth experiences due to the person, you divide by the mass of the Earth (a mass that is $10^{22}$ times larger than the person's mass), $$|a_{Earth}| = \dfrac{|F_{grav}|}{M_{Earth}} = G\dfrac{m_{person}}{R_{Earth}^2}$$ Thus, the acceleration that the Earth would experience due a single person is about 0.0000000000000000000001*$g$! This value is incredibly small; we often neglect changes in the motion of the Earth due to objects that are not astronomically large. In these notes, the vector acceleration due to gravitational interactions is calculated explicitly. Newton's model of the gravitational force was considered one of the simplest and most explanatory models for many years. We have since made observations that no longer fit with Newton's model (e.g., Gravitational lensing). Our best model for gravitation, which observations continue to fit, is called "general relativity" (GR) and was developed by Albert Einstein. 
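Plugging commonly quoted values for the Earth into the first of these results reproduces the familiar value (the particular numbers below, $M_{Earth} \approx 5.97\times10^{24}\ kg$ and $R_{Earth} \approx 6.37\times10^{6}\ m$, are an added illustration rather than values given in these notes):

$$|a_{person}| = G\dfrac{M_{Earth}}{R_{Earth}^2} = \left(6.674\times10^{-11}\ \dfrac{m^3}{kg\:s^2}\right)\dfrac{5.97\times10^{24}\ kg}{\left(6.37\times10^{6}\ m\right)^2} \approx 9.8\ \dfrac{m}{s^2}$$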
While this model provides us with far better predictions and explanations of a variety of observations, we still use Newton's model of the gravitational force for two reasons: (1) it can provide reasonable predictions for many cases, and (2) the mathematics that is used in GR is sufficiently sophisticated that you will need more physics and mathematics experience to gain deep insight into its use.

1) We call this "law" a model because, as with all physical formulae, there are limitations to its predictive power. The observed precession of Mercury's orbit, for example, could not be fully explained by this "law"; in fact, a new model had to be developed.
# Analysis of Trends

## Introduction

We will study trend models for the analysis of time series data. Trend models that we will cover include linear, quadratic and harmonic trend models and those that account for seasonality. We will observe that while the trend models are very good at capturing the trend in time series data, their performance is poor at capturing serial correlation in time series data. We will have hands-on tasks to deepen your understanding of trend models and improve your skill set for the implementation of time series analysis methods.

The trend in time series is closely related to the mean function of the series. Changes in mean over time create a trend in the series. In general, the mean function is an arbitrary function of time. We will consider relatively simple functions of time to model the trend in time series. In this module, we will study

• the deterministic and stochastic trend,
• modeling deterministic trends,
• estimation of constant mean,
• regression approach to model the trend,
• analysis of residuals after modelling the trend.

## Learning objectives

This week will contribute to Course Learning Objectives:

1. Present time series in an informative way, both graphically and with summary statistics
2. Develop stationary and non-stationary, and seasonal and non-seasonal time series models

One of the challenges in time series analysis is that the same time series may be viewed quite differently by different analysts. For example, one can foresee a trend in a simulated random walk with a constant mean for all time. The perceived trend is a result of the strong positive correlation between the series values at nearby time points and the increasing variance in the process as time goes by. Therefore, one can see different trends in repeated simulations. This type of trend is called a stochastic trend.

In the average monthly temperatures example of the first module, we got the following time series plot in Figure 1: Here we have a cyclical or seasonal trend, but here the reason for the trend is clear: the Northern Hemisphere's changing inclination toward the sun. We can model this trend by Yt = μt + Xt, where μt is a deterministic function that is periodic with period 12; it should satisfy μt = μt−12 for all t. We can assume that Xt represents an unobserved variation around μt and has zero mean for all t. So, this model assumes that μt is the mean function for the observed series Yt. Because the mean function is determined beforehand and we can set the functional form of the trend, the trend considered here is a deterministic trend. It is possible to set a linear mean function such that μt = β0 + β1t or a quadratic time trend such as μt = β0 + β1t + β2t².

## Estimation of a Constant Mean

When we consider a constant mean over time, we set μt = μ for all t. So, our model is written as

$$Y_t = \mu + X_t$$

Our aim is to estimate the value of μ using the observed series Y1, Y2, …, Yn. The straightforward estimate of μ is the sample mean calculated as

$$\bar{Y} = \frac{1}{n}\sum_{t=1}^{n} Y_t$$

Here the sample mean is an unbiased estimator of the constant mean. To investigate its efficiency, we need to find the variance of the sample mean. Suppose that {Yt} is a stationary time series with autocorrelation function ρk. Then, the variance of the sample mean is obtained as

$$Var(\bar{Y}) = \frac{\gamma_0}{n}\left[1 + 2\sum_{k=1}^{n-1}\left(1 - \frac{k}{n}\right)\rho_k\right]$$

Note that if the series {Yt} is just white noise then ρk = 0 for k > 0; and hence, Var(Ȳ) reduces to simply γ0/n, which is the population variance divided by the sample size. Instead of a constant mean, we can set a moving average model such that Yt = et − ½et−1, which is also stationary.
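(As an aside, the lag-1 autocorrelation of this moving average model, derived in the next paragraph, can also be checked directly in R. This small sketch is an added illustration, not one of the notes' own code chunks:)

```r
# Theoretical ACF of the moving average model Y_t = e_t - 0.5 e_{t-1}
# (in R's parameterisation the MA coefficient is -0.5)
ARMAacf(ma = -0.5, lag.max = 2)   # lag-1 value is -0.5 / (1 + 0.5^2) = -0.4
```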
Then, we find that ρ1=−0.4, which means that we have a negative correlation at lag 1, and ρk=0 for k>1. In this case, we have For a large n, the correction factor (n−1)/n will approach to 1. Thus, we get So, the variance of the estimator of μ for the moving average model is less than that of for the constant mean model: 0.2(γ0/n)<γ0/n. The reason for getting a more efficient estimator with a moving average model is that in the moving average model, it is possible for the series to oscillate back and forth across the mean. On the other hand, if ρk≥0 for all k≥1, Var(Y¯) will be larger than γ0/n. For many stationary processes, the autocorrelation function decays quickly enough with increasing lags. under this assumption and given a large sample, we obtain the following approximation: Here, negative correlations and large sample size both increase the efficiency of the estimator. We should note that the precision of the sample mean as an estimator of μ can be strikingly different for a nonstationary process with a constant mean. For example, for the random walk process defined in Module 1, we find the following: Notice that in this special case the variance of our estimate of the mean actually increases as the sample size n increases. Because this is unacceptable, we need to consider other estimation techniques for nonstationary series. ## Regression Approach Classical regression analysis can be used to model nonconstant mean trend. We will consider linear, quadratic, seasonal means, and cosine trends. The deterministic linear trend model is expressed as follows: μt=β0+β1t where β0 represents intercept and β1 corresponds to the slope of the linear trend. Suppose β^0 and β^1 are the classical least squares estimates of β0 and β1, respectively. Then, β^0 and β^1 are obtained as follows: where t=(n+1)/2 is the average of integers 1,2,…,n. Consider the simulated random walk process in Figure 2: 1 2 3 data(rwalk) model1 = lm(rwalk~time(rwalk)) # label the model as model1 summary(model1) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 ## ## Call: ## lm(formula = rwalk ~ time(rwalk)) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.70045 -0.79782 0.06391 0.63064 2.22128 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -1.007888 0.297245 -3.391 0.00126 ** ## time(rwalk) 0.134087 0.008475 15.822 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 1.137 on 58 degrees of freedom ## Multiple R-squared: 0.8119, Adjusted R-squared: 0.8086 ## F-statistic: 250.3 on 1 and 58 DF, p-value: < 2.2e-16 Estimates of slope and intercept are β^1=0.1341 and β^0=−1.008, respectively. Here slope is statistically significant at 5% significance level. The trend line is plotted over the time series in Figure 3: 1 2 plot(rwalk,type='o',ylab='y', main = "Figure 3. Fitted linear model to the simulated random walk series.") abline(model1) # add the fitted least squares line from model1 Appropriateness of this linear trend model will be considered later. The deterministic quadratic trend model is expressed as follows μt=β0+β1t+β2t2 where β0 represents intercept, β1 corresponds to the linear trend, and β2 corresponds to quadratic trend in time. 
The following code chunk fits a quadratic trend model to the random walk data: 1 2 3 4 t = time(rwalk) t2 = t^2 model1.1 = lm(rwalk~t+t2) # label the model as model1 summary(model1.1) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ## ## Call: ## lm(formula = rwalk ~ t + t2) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.69623 -0.76802 0.00826 0.85337 2.34468 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -1.4272911 0.4534893 -3.147 0.00262 ** ## t 0.1746746 0.0343028 5.092 4.16e-06 *** ## t2 -0.0006654 0.0005451 -1.221 0.22721 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 1.132 on 57 degrees of freedom ## Multiple R-squared: 0.8167, Adjusted R-squared: 0.8102 ## F-statistic: 127 on 2 and 57 DF, p-value: < 2.2e-16 Fitted quadratic trend is shown in Figure 4: 1 2 3 4 plot(ts(fitted(model1.1)), ylim = c(min(c(fitted(model1.1), as.vector(rwalk))), max(c(fitted(model1.1),as.vector(rwalk)))),ylab='y' , main = "Figure 4. Fitted quadratic curve to the random walk series.") lines(as.vector(rwalk),type="o") Consider now modeling and estimating seasonal trends, such as for the average monthly temperature data in Figure 5. Here we assume that the observed series can be represented as Yt=μt+Xt where E(Xt)=0 for all t. The most general assumption for μt with monthly seasonal data is that there are 12 parameters, β1,β2,…,β12, giving the expected average temperature for each of the 12 months. To represent seasonality, we may write a seasonal model such that We need to set up indicator variables (sometimes called dummy variables) that indicate the month to which each of the data points pertains before going on with estimation of parameters. We can also include an intercept term β0 in the model. 1 2 3 4 data(tempdub) month.=season(tempdub) # period added to improve table display and this line sets up indicators model2=lm(tempdub~month.-1) # -1 removes the intercept term summary(model2) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 ## ## Call: ## lm(formula = tempdub ~ month. - 1) ## ## Residuals: ## Min 1Q Median 3Q Max ## -8.2750 -2.2479 0.1125 1.8896 9.8250 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## month.January 16.608 0.987 16.83 <2e-16 *** ## month.February 20.650 0.987 20.92 <2e-16 *** ## month.March 32.475 0.987 32.90 <2e-16 *** ## month.April 46.525 0.987 47.14 <2e-16 *** ## month.May 58.092 0.987 58.86 <2e-16 *** ## month.June 67.500 0.987 68.39 <2e-16 *** ## month.July 71.717 0.987 72.66 <2e-16 *** ## month.August 69.333 0.987 70.25 <2e-16 *** ## month.September 61.025 0.987 61.83 <2e-16 *** ## month.October 50.975 0.987 51.65 <2e-16 *** ## month.November 36.650 0.987 37.13 <2e-16 *** ## month.December 23.642 0.987 23.95 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.419 on 132 degrees of freedom ## Multiple R-squared: 0.9957, Adjusted R-squared: 0.9953 ## F-statistic: 2569 on 12 and 132 DF, p-value: < 2.2e-16 All of the parameters corresponding to months are statistically significant at 5% level. We can include the intercept parameter as follows: 1 2 model3=lm(tempdub~month.) # remove -1 to include the intercept term in the model summary(model3) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 ## ## Call: ## lm(formula = tempdub ~ month.) ## ## Residuals: ## Min 1Q Median 3Q Max ## -8.2750 -2.2479 0.1125 1.8896 9.8250 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) 16.608 0.987 16.828 < 2e-16 *** ## month.February 4.042 1.396 2.896 0.00443 ** ## month.March 15.867 1.396 11.368 < 2e-16 *** ## month.April 29.917 1.396 21.434 < 2e-16 *** ## month.May 41.483 1.396 29.721 < 2e-16 *** ## month.June 50.892 1.396 36.461 < 2e-16 *** ## month.July 55.108 1.396 39.482 < 2e-16 *** ## month.August 52.725 1.396 37.775 < 2e-16 *** ## month.September 44.417 1.396 31.822 < 2e-16 *** ## month.October 34.367 1.396 24.622 < 2e-16 *** ## month.November 20.042 1.396 14.359 < 2e-16 *** ## month.December 7.033 1.396 5.039 1.51e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.419 on 132 degrees of freedom ## Multiple R-squared: 0.9712, Adjusted R-squared: 0.9688 ## F-statistic: 405.1 on 11 and 132 DF, p-value: < 2.2e-16 R omits the January coefficient in this case. Notice that when we have the intercept in the model, we interpret resulting parameters as the difference between the first month and the related one. Now the February coefficient is interpreted as the difference between February and January average temperatures, the March coefficient is the difference between March and January average temperatures, and so forth. In this model, all of the differences between January and the other months are statistically significant at 5% level in both models. Notice that the Intercept coefficient plus the February coefficient here equals the February coefficient the model with no intercept parameter. In the seasonal means model, we separate the effect of each month. However, there is nothing about the shape of the seasonal trend in the seasonal means model. We can include the information on the shape of the seasonal trend in the model by assigning a cosine curve as the mean function μt: μt=βcos(2πft+Φ) Here, β(>0), f, and Φ are called the amplitude, frequency, and phase of the curve. As t varies, the curve oscillates within [−β,β] interval. Since the curve repeats itself exactly every 1/f time units, 1/f is called the period of the cosine wave. When we set f=1/12, a cosine wave will repeat itself every 12 months. So we say that the period is 12. For the estimation purposes, we need to make the above cosine trend model linear in terms of its parameters. With the following misinterpretation, we get βcos(2πft+Φ)=β1cos(2πft)+β2sin(2πft) where β=β21+β22−−−−−−√ and Φ=atan(−β2/β1) and, conversely, β1=βcos(Φ) and β2=βsin(Φ). Consequently, we will use cos(2πft) and sin(2πft) to estimate β1 and β2, respectively. The simplest such model for the trend would be expressed as μt=β0+β1cos(2πft)+β2sin(2πft) Here the constant term β0 represents a cosine with frequency zero. In any practical example, we must be careful how we measure time, as our choice of time measurement will affect the values of the frequencies of interest. For example, if we have monthly data but use 1,2,3,… as our time scale, then 1/12 would be the most interesting frequency, with a corresponding period of 12 months. However, if we measure time by year and fractional year, say 1980 for January, 1980.08333 for February of 1980, and so forth, then a frequency of 1 corresponds to an annual or 12-month periodicity. The following code chunk fits a cosine curve at the fundamental frequency to the average monthly temperature series. 1 2 3 har.=harmonic(tempdub,1) # calculate cos(2*pi*t) and sin(2*pi*t) model4=lm(tempdub~har.) summary(model4) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ## ## Call: ## lm(formula = tempdub ~ har.) 
## ## Residuals: ## Min 1Q Median 3Q Max ## -11.1580 -2.2756 -0.1457 2.3754 11.2671 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 46.2660 0.3088 149.816 < 2e-16 *** ## har.cos(2*pi*t) -26.7079 0.4367 -61.154 < 2e-16 *** ## har.sin(2*pi*t) -2.1697 0.4367 -4.968 1.93e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.706 on 141 degrees of freedom ## Multiple R-squared: 0.9639, Adjusted R-squared: 0.9634 ## F-statistic: 1882 on 2 and 141 DF, p-value: < 2.2e-16 The following code chunk plots the fitted curve along with the observed average monthly temperature series in Figure 6. 1 2 3 4 plot(ts(fitted(model4),freq=12,start=c(1964,1)),ylab='Temperature',type='l', ylim=range(c(fitted(model4),tempdub)),main="Figure 6. Fitted model to average monthly temperature series.") # ylim ensures that the y axis range fits the raw data and the fitted values points(tempdub) The cosine trend model fits the data quite well with the exception of most of the January values, where the observations are lower than the model would predict. Interpreting Regression Output Estimates of regression parameters are obtained under some assumptions on the stochastic component {Xt} of linear trend model. So, some properties of regression output heavily depend on the assumption that Xt is white noise and some other parts depend on approximate normality of Xt. When we have μt=β0+β1t as the mean function, the unobserved stochastic component Xt can be estimated (predicted) by Yt−μ^t. If Xt has a constant variance, we estimate the standard deviation of Xt, namely γ0−−√, by the residual standard deviation s=1n−p∑t=1n(Yt−μ^t)2−−−−−−−−−−−−−−−−√ where p is the number of parameters estimated in μt and n−p is the so-called degrees of freedom for s. The smaller the value of s, the better the fit. Another measure of goodness of fit of the trend is the coefficient of determination, namely R2. One interpretation of R2 is that it is the square of the sample correlation coefficient between the observed series and the estimated trend. It is also the fraction of the variation in the series that is explained by the estimated trend. High but not close to 1 values of R2 implies a satisfactory fit. When we fit the straight line to the random walk data, we get the following output: 1 2 model1=lm(rwalk~time(rwalk)) summary(model1) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 ## ## Call: ## lm(formula = rwalk ~ time(rwalk)) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.70045 -0.79782 0.06391 0.63064 2.22128 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -1.007888 0.297245 -3.391 0.00126 ** ## time(rwalk) 0.134087 0.008475 15.822 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 1.137 on 58 degrees of freedom ## Multiple R-squared: 0.8119, Adjusted R-squared: 0.8086 ## F-statistic: 250.3 on 1 and 58 DF, p-value: < 2.2e-16 According to multiple R2, about 81% of the variation in the random walk series is explained by the linear time trend. The adjusted version of multiple R2 provides an approximately unbiased estimate of true R2. The standard deviations of the coefficients labeled Std. Error on the output need to be interpreted carefully. They are appropriate only when the usual regression assumption that the stochastic component is white noise. This assumption rarely true for time series data! 
If the stochastic component is normally distributed white noise, then the p-values are given under “Pr(>|t|)” can be used to test the null hypothesis that the corresponding unknown regression coefficient is zero. ## Residual Analysis The estimator or predictor of unobserved stochastic component {Xt}, X^t=Yt−μ^t is called residual corresponding to the tth observation. An estimate is the guess of an unknown parameter and a prediction is an estimate of an unobserved random variable. If the trend model is reasonably correct, then the residuals should behave roughly like the true stochastic component, and various assumptions about the stochastic component can be assessed by looking at the residuals. If the stochastic component is white noise, then the residuals should behave roughly like independent (normal) random variables with zero mean and standard deviation of s. We can standardise residuals to make their mean zero. After computation of residuals or standardised residual, we examine various residual plots. The first plot to examine is the plot of the residuals over time. If the series is seasonal, we can use labels while plotting to identify the seasonality better. In the first example, We will use the monthly average temperature series which we fitted with seasonal means as our first example to illustrate some of the ideas of residual analysis. The following chunk generates a time series plot for the standardized residuals of the monthly temperature data fitted by seasonal means: 1 plot(y=rstudent(model3),x=as.vector(time(tempdub)), xlab='Time',ylab='Standardized Residuals',type='o', main = "Figure 7. Time series plot of residuals.") If the stochastic component is white noise and the trend is adequately modeled, we would expect such a plot to suggest a rectangular scatter with no discernible trends whatsoever. There are striking departures from randomness seen in the plot in Figure 7. The labels of months are added in Figure 8. 1 2 plot(y=rstudent(model3),x=as.vector(time(tempdub)),xlab='Time', ylab='Standardized Residuals',type='l', main = "Figure 8. Time series plot of residuals with labels.") points(y=rstudent(model3),x=as.vector(time(tempdub)), pch=as.vector(season(tempdub))) There is no apparent pattern relating to different months of the year in Figure 8. Next, we look at the standardized residuals versus the corresponding trend estimate, or fitted value in Figure 9. The function rstudent() computes standardised residuals. 1 2 3 plot(y=rstudent(model3),x=as.vector(fitted(model3)), xlab='Fitted Trend Values', ylab='Standardized Residuals',type='n', main = "Figure 9. Time series plot of standardised residuals versus fitted trend values.") points(y=rstudent(model3),x=as.vector(fitted(model3)),pch=as.vector(season(tempdub))) As anomaly with this plot small residuals would be associated with small fitted trend values and large residuals with large fitted trend values, or there would be less variation for residuals associated with certain sized fitted trend values or more variation with other fitted trend values. Although there is somewhat more variation for the March residuals and less for November, the plot does not indicate any dramatic patterns that would cause us to doubt the seasonal means model. Normality of residuals can be checked with a histogram. Figure 10 displays a frequency histogram of the standardized residuals from the seasonal means model for the temperature series. 1 2 hist(rstudent(model3),xlab='Standardized Residuals', main = "Figure 10. 
Histogram of the standardized residuals from the seasonal means model.") The plot is somewhat symmetric and tails off at both the high and low ends as a normal distribution does. Another plot to check normality is the quantile-quantile (QQ) plot. Such a plot displays the quantiles of the data versus the theoretical quantiles of a normal distribution. With normally distributed data, the QQ plot looks approximately like a straight line. Figure 11 shows the Q-Q scores (calculated under normal distribution) plot for the standardized residuals from the seasonal means model for the temperature series. 1 2 3 4 y = rstudent(model3) qqnorm(y, main = "Figure 11. Normal Q-Q plot of the standardized residuals from the seasonal means model.") qqline(y, col = 2, lwd = 1, lty = 2) The straight-line pattern here supports the assumption of a normally distributed stochastic component in this model. In addition to visualisations, there are various hypothesis tests that can be used to check the normality assumption of the stochastic component. One of these tests is the Shapiro-Wilk test that calculates the correlation between the residuals and the corresponding normal quantiles. We apply the Shapiro-Wilk test to the residuals of temperature series using the following code chunk 1 2 y = rstudent(model3) shapiro.test(y) 1 2 3 4 5 ## ## Shapiro-Wilk normality test ## ## data: y ## W = 0.9929, p-value = 0.6954 We get the p-value of 0.6954. So we conclude not to reject the null hypothesis that the stochastic component of this model is normally distributed. Independence in the stochastic component is another assumption to check. The runs test can be applied over the residuals. The runs test applied over the residuals of temperature series leads to a p-value of 0.216. Thus, we conclude not to reject the null hypothesis stating the independence of the stochastic component in this seasonal means model. ## Sample Autocorrelation Function Sample autocorrelation function (ACF) is a very useful and important tool in the analysis of time series data. We compute the sample correlation between the pairs k units apart in time. However, we modify this slightly, taking into account that we are assuming stationarity, which implies a common mean and variance for the series. With this in mind, we define the sample autocorrelation function, rk, at lag k as for k=1,2,…. A plot of rk versus lag k is often called a correlogram. Because we are interested in discovering possible dependence in the stochastic component, the sample autocorrelation function for the standardized residuals is of interest. Figure 12 displays the sample autocorrelation for the standardized residuals from the seasonal means model of the temperature series. 1 acf(rstudent(model3), main = "Figure 12. ACF of standardized residuals") All values are within the horizontal dashed lines, which are placed at ±2/n−−√. According to the ACF plot none of the hypotheses ρk=0 can be rejected at the usual significance levels for k=1,2,…,21. Thus, we infer that the stochastic component of the series is white noise. As a second example, a time series plot of the standardized residuals arising from fitting a straight line to the random walk time series is shown in Figure 13: 1 2 plot(y=rstudent(model1),x=as.vector(time(rwalk)), ylab='Standardized Residuals',xlab='Time',type='o', main = "Figure 13. 
Time series plot of the standardized residuals from fitting a straight line to the random walk series.") In Figure 13, the residuals “hang together” too much for the white noise-the plot is too smooth. Furthermore, there seems to be more variation in the last third of the series than in the first two-thirds. When we plot standardised residuals versus fitted trend line values, we observe a similar effect with larger residuals associated with larger fitted values from Figure 14. 1 plot(y=rstudent(model1),x=fitted(model1), ylab='Standardized Residuals',xlab='Fitted Trend Line Values', type='p', main = "Figure 14. Scatter plot of standardised residuals versus fitted trend line values.") The sample ACF of the standardized residuals is given in Figure 15: 1 acf(rstudent(model1), main = "Figure 15. ACF of the standardized residuals.") This ACF plot confirms the smoothness of the time series plot as we have correlation values higher than the confidence bound at several lags. This is not what we expect from a white noise process. As another example, we return to the annual rainfall in Los Angeles for which we found no evidence of dependence in that series and check the normality assumption using the QQ plot in Figure 16. 1 2 3 4 data(larain) y = larain qqnorm(y, main = "Figure 16. Normal Q-Q plot of LA rain series.") qqline(y, col = 2, lwd = 1, lty = 2) Because we see a considerable amount of departure from the reference line, we conclude that the normality assumption does not hold for the annual rainfall series in Los Angeles. The Shapiro-Wilk test also confirms this inference with a p-value less than 0.05. 1 2 3 y = larain shapiro.test(y) 1 2 3 4 5 6 ## ## Shapiro-Wilk normality test ## ## data: y ## W = 0.94617, p-value = 0.0001614 ## Forecasting with regression models After ensuring that the fitted model is suitable for prediction purposes, we use the model to find forecasts. For time series regression models, this task is simply based on the straightforward use of the fitted regression model. We apply the following steps to find h steps ahead forecasts: Generate a sequence of time points of lengths h starting from the last observation point. For example, suppose we have a time series of length 10 and h=4. Then the new sequence becomes t=11,12,13,14. Write each value of the new sequence generated in the previous step in place in the fitted model and calculate forecasts. We can implement these steps using the predict() function with the fitted model object and the sequence created at step 1 as inputs. To illustrate, let’s use the fitted linear model for the random walk data to find 5 steps ahead forecasts. The following code chunk does this task: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 data(rwalk) # Read the data t = time(rwalk) # Create time points for model fitting model1 = lm(rwalk~t) # label the model as model1 h = 5 # 5 steps ahed forecasts # Now we will implement the two-step algorithm new = data.frame(t = seq((length(t)+1), (length(t)+h), 1)) # Step 1 # Notice here that I'm using the same variable name "t" as in the # fitted model above, where the name of the variable showing time # is also "t". To run the predict() function properly, # the names of variables in fitted model and "new" data frame # must be the same!!! 
forecasts = predict(model1, new, interval = "prediction") # Here interval argument shows the prediction interval print(forecasts) 1 2 3 4 5 6 ## fit lwr upr ## 1 7.171430 4.819249 9.523611 ## 2 7.305517 4.949546 9.661487 ## 3 7.439604 5.079727 9.799480 ## 4 7.573691 5.209794 9.937588 ## 5 7.707778 5.339745 10.075811 We can plot these forecasts next to the time series of interest by the following code chunk as in Figure 17: 1 2 3 4 5 6 7 8 9 plot(rwalk, xlim = c(1,66), ylim = c(-3, 11), ylab = "Random walk data", main = "Figure 17. Random walk series with forecasts.") # We need to convert forecasts to time series object starting from the first # time steps-ahead to be able to use plot function. # We do this for all columns of forecasts lines(ts(as.vector(forecasts[,1]), start = 61), col="red", type="l") lines(ts(as.vector(forecasts[,2]), start = 61), col="blue", type="l") lines(ts(as.vector(forecasts[,3]), start = 61), col="blue", type="l") legend("topleft", lty=1, pch=1, col=c("black","blue","red"), text.width = 18, c("Data","5% forecast limits", "Forecasts")) As another example, the harmonic model fitted to the average monthly temperature series and find forecasts for 7 months ahead. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 har.=harmonic(tempdub,1) # calculate cos(2*pi*t) and sin(2*pi*t) t1 = har.[,1] # To make it easier assign harmonic variables to separate variables t2 = har.[,2] model4=lm(tempdub~t1+t2) # Fit the model with separate variables # We need to create continuous time for 7 months starting from the first month of 1976 t = c(1976.000, 1976.083, 1976.167 ,1976.250, 1976.333, 1976.417 ,1976.500, 1976.583 ) t1 = cos(2*pi*t) t2 = sin(2*pi*t) new = data.frame(t1 , t2) # Step 1 # Notice here that I'm using the same variable names "t1" and "t2" as in the # fitted model above, where the name of the variables showing sine and cosine # components are also "t1" and "t2". To run the predict() function properly, # the names of variables in fitted model and "new" data frame # must be the same!!! forecasts = predict(model4, new, interval = "prediction") print(forecasts) 1 2 3 4 5 6 7 8 9 ## fit lwr upr ## 1 19.55804 12.15595 26.96012 ## 2 22.02737 14.62528 29.42945 ## 3 31.07915 23.67707 38.48124 ## 4 44.09622 36.69414 51.49831 ## 5 57.69014 50.28806 65.09223 ## 6 68.34270 60.94062 75.74479 ## 7 72.97391 65.57182 80.37599 ## 8 70.50458 63.10249 77.90666 We plot the forecasts along with the original series with the following code chunk in Figure 18. The meaning of the colors is the same as Figure 17. 1 2 3 4 5 plot(tempdub, xlim = c(1964,1977), ylim = c(9, 80), ylab = "Average monthly temperature", main = "Figure 18. Average monthly temperature series with forecasts.") # Here we convert the forecasts and prediction limits to monthly time series! lines(ts(as.vector(forecasts[,1]), start = c(1976,1), frequency = 12), col="red", type="l") lines(ts(as.vector(forecasts[,2]), start = c(1976,1), frequency = 12), col="blue", type="l") lines(ts(as.vector(forecasts[,3]), start = c(1976,1), frequency = 12), col="blue", type="l") Forecasts from the harmonic model successfully follow the repeating pattern in the original series. ## Summary In this module, we focused on describing, modeling, and estimating deterministic trends in time series. The simplest deterministic “trend” is a constant-mean function. Regression methods were then pursued to estimate trends that are linear or quadratic in time. 
Methods for modeling cyclical or seasonal trends came next, and the reliability and efficiency of all of these regression methods were investigated. Finally, we studied residual analysis to assess the quality of the fitted model, and we introduced the sample autocorrelation function, a very useful tool in the analysis of time series.
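As a compact recap of the workflow covered in this module, the sketch below fits a linear trend and runs the basic residual diagnostics in one place. It is only a summary sketch, assuming the TSA package and the rwalk series used above are loaded, and it uses no functions beyond those introduced in this module.

library(TSA)
data(rwalk)
model1 <- lm(rwalk ~ time(rwalk))   # deterministic linear trend
res <- rstudent(model1)             # standardized residuals
plot(res, type = "o")               # time series plot of the residuals
hist(res)                           # rough check of normality
qqnorm(res); qqline(res)            # QQ plot against normal quantiles
shapiro.test(res)                   # Shapiro-Wilk normality test
acf(res)                            # sample ACF of the residuals
runs(res)                           # runs test for independence (runs() from TSA)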
# A balanced coin is tossed 8 times. (a) How many outcomes of this experiment are possible?

A balanced coin is tossed 8 times. (a) How many outcomes of this experiment are possible? Assume we observe the order in which each flip results. (b) How many outcomes have exactly k heads, where k = 0, 1, 2, 3, 4, 5, 6, 7, 8? Clearly label each. (c) Let Ak be the event that we observe exactly k heads (k = 0, 1, ..., 8). Compute P(Ak). (d) Let F be the event that the first three of the 8 tosses result in exactly one head. Compute P(F). (e) Let G be the event that the last five of the 8 tosses result in exactly three heads. Compute P(G). (f) Compute P(F ∩ G) and P(F ∪ G). Clearly label each. (g) Compute P(F | A4).
# Using Properties of Proportion Solve for X: (3x + Sqrt(9x^2 - 5))/(3x - Sqrt(9x^2 - 5)) = 5 - Mathematics Course

#### Question

Using properties of proportion solve for x: (3x + sqrt(9x^2 - 5))/(3x - sqrt(9x^2 - 5)) = 5

#### Solution

(3x + sqrt(9x^2 - 5))/(3x - sqrt(9x^2 - 5)) = 5/1

Applying componendo and dividendo (if a/b = c/d, then (a + b)/(a - b) = (c + d)/(c - d)):

(3x + sqrt(9x^2 - 5) + 3x - sqrt(9x^2 - 5))/(3x + sqrt(9x^2 - 5) - 3x + sqrt(9x^2 - 5)) = (5 + 1)/(5 - 1)

(6x)/(2sqrt(9x^2 - 5)) = 6/4

x/sqrt(9x^2 - 5) = 1/2

Squaring both sides:

x^2/(9x^2 - 5) = 1/4

4x^2 = 9x^2 - 5

5x^2 = 5

x^2 = 1

x = ±1

Squaring can introduce extraneous roots, so check both values in the original equation: x = 1 gives (3 + 2)/(3 - 2) = 5, which works, while x = -1 gives (-3 + 2)/(-3 - 2) = 1/5, which does not. Hence x = 1.
Equivariant algebraic topology Annales de l'Institut Fourier, Volume 23 (1973) no. 2, p. 87-91 Let $G$ be a topological group. We give the existence of an equivariant homology and cohomology theory, defined on the category of all $G$-pairs and $G$-maps, which both satisfy all seven equivariant Eilenberg-Steenrod axioms and have a given covariant and contravariant, respectively, coefficient system as coefficients. In the case that $G$ is a compact Lie group we also define equivariant $CW$-complexes and mention some of their basic properties. The paper is a short abstract and contains no proofs. Soit $G$ un groupe topologique ; nous montrons l’existence des théories homologiques et cohomologiques équivariantes, définies sur la catégorie des $G$-paires et $G$-applications qui satisfont tous les sept axiomes équivariants d’Eilenberg-Steenrod et qui ont le système des coefficients covariants (resp. contrevariants) donné. Dans le cas d’un groupe de Lie Compact $G$ nous définissons aussi les $CW$-complexes équivariants et nous donnons quelques-unes de leurs propriétés fondamentales. Cet article est un bref résumé et ne contient aucune démonstration. @article{AIF_1973__23_2_87_0, author = {Illman, S\"oren}, title = {Equivariant algebraic topology}, journal = {Annales de l'Institut Fourier}, publisher = {Imprimerie Durand}, address = {28 - Luisant}, volume = {23}, number = {2}, year = {1973}, pages = {87-91}, doi = {10.5802/aif.458}, zbl = {0261.55007}, mrnumber = {50 \#11220}, language = {en}, url = {http://www.numdam.org/item/AIF_1973__23_2_87_0} } Illman, Sören. Equivariant algebraic topology. Annales de l'Institut Fourier, Volume 23 (1973) no. 2, pp. 87-91. doi : 10.5802/aif.458. http://www.numdam.org/item/AIF_1973__23_2_87_0/ [1] G. Bredon, Equivariant cohomology theories, Bull. Amer. Math. Soc., 73 (1967), 269-273. | Zbl 0162.27301 [2] G. Bredon, Equivariant cohomology theories, Lecture Notes in Mathematics, Vol. 34, Springer-Verlag (1967). | MR 35 #4914 | Zbl 0162.27202 [3] T. Bröcker, Singuläre Definition der Äquivarianten Bredon Homologie, Manuscripta Matematica 5 (1971), 91-102. | Zbl 0213.49902 [4] S. Illman, Equivariant singular homology and cohomology for actions of compact Lie groups. To appear in : Proceedings of the Conference on Transformation Groups at the University of Massachusetts, Amherst, June 7-18 (1971) Springer-Verlag, Lecture Notes in Mathematics. | Zbl 0251.55004 [5] S. Illman, Equivariant Algebraic Topology, Thesis, Princeton University (1972). [6] S. Illman, Equivariant singular homology and cohomology. To appear in Bull. Amer. Math. Soc. | Zbl 0297.55003 [7] T. Matsumoto, Equivariant K-theory and Fredholm operators, Journal of the Faculty of Science, The University of Tokyo, Vol. 18 (1971), 109-125. | Zbl 0213.25402 [8] R. Palais, The classification of G-spaces, Memoirs of Amer. Math. Soc., 36 (1960). | MR 31 #1664 | Zbl 0119.38403 [9] C. T. Yang, The triangulability of the orbit space of a differentiable transformation group, Bull. Amer. Math. Soc., 69 (1963), 405-408. | MR 26 #3813 | Zbl 0114.14502
## Prealgebra (7th Edition)

$-6$

Evaluate $-|x|$ if $x=-6$. Plug $-6$ into the expression: $-|-6|$. The absolute value of $-6$ is $6$, and the leading negative sign then makes the result the opposite of that absolute value, so $-|-6| = -6$.
Plots a principal component analysis based on peptide or precursor intensities. qc_pca( data, sample, grouping, intensity, condition, components = c("PC1", "PC2"), digestion = NULL, plot_style = "pca" ) Arguments data a data frame that contains sample names, peptide or precursor identifiers, corresponding intensities and a condition column indicating e.g. the treatment. a character column in the data data frame that contains the sample name. a character column in the data data frame that contains either precursor or peptide identifiers. a numeric column in the data data frame that contains the corresponding intensity values for each peptide or precursor. a column in the data data frame that contains condition information (e.g. "treated" and "control"). a character vector indicating the two components that should be displayed in the plot. By default these are PC1 and PC2. You can provide these using a character vector of the form c("PC1", "PC2"). optional, a character column in the data data frame that indicates the mode of digestion (limited proteolysis or tryptic digest). Alternatively, any other variable by which the data should be split can be provided. a character value that specifies what plot should be returned. If plot_style = "pca" is selected the two PCA components supplied with the components argument are plottet against each other. This is the default. plot_style = "scree" returns a scree plot that displays the variance explained by each principal component in percent. The scree is useful for checking if any other than the default first two components should be plotted. Value A principal component analysis plot showing PC1 and PC2. If plot_style = "scree", a scree plot for all dimensions is returned. Examples set.seed(123) # Makes example reproducible # Create example data data <- create_synthetic_data( n_proteins = 100, frac_change = 0.05, n_replicates = 3, n_conditions = 2, ) # Plot scree plot qc_pca( data = data, sample = sample, grouping = peptide, intensity = peptide_intensity_missing, condition = condition, plot_style = "scree" ) # Plot principal components qc_pca( data = data, sample = sample, grouping = peptide, intensity = peptide_intensity_missing, condition = condition )
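If the scree plot suggests that components beyond the first two carry meaningful variance, they can be plotted directly by passing their names to the components argument. A minimal sketch reusing the synthetic data created above (the availability of a third component, labelled "PC3", is an assumption about this particular data set):

# Plot the second against the third principal component
qc_pca(
  data = data,
  sample = sample,
  grouping = peptide,
  intensity = peptide_intensity_missing,
  condition = condition,
  components = c("PC2", "PC3"),
  plot_style = "pca"
)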
# The rms velocity can be calculated by $(a)\;u=\sqrt{\large\frac{3P}{d}}\qquad(b)\;u = 1.58\sqrt{\large\frac{T}{M}}\times 10^4cm\;sec^{-1}\qquad(c)\;\large\frac{u_2}{u_1}=\sqrt{\large\frac{T_2}{T_1}}\qquad(d)\;Any\;of\;the\;above$

$u = \sqrt{\large\frac{3RT}{M}}$

$\;\;\;=\sqrt{\large\frac{T}{M}}\times \sqrt{3R}$

$\;\;\;=\sqrt{3\times8.314\times10^7}\times \sqrt{\large\frac{T}{M}}$

$\;\;\;=1.58\times10^4\times \sqrt{\large\frac{T}{M}}\;cm\;sec^{-1}$

Also, since $PV = RT$ for one mole of gas and the density is $d = M/V$,

$u = \sqrt{\large\frac{3PV}{M}} = \sqrt{\large\frac{3P}{d}}$

Finally, at constant $M$, $u \propto \sqrt{T}$, so $\large\frac{u_2}{u_1}=\sqrt{\large\frac{T_2}{T_1}}$. Hence the answer is (d).
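As a quick numerical sanity check of the $1.58\times10^4$ prefactor, the following R sketch assumes CGS units with $R = 8.314\times10^{7}\;erg\;mol^{-1}K^{-1}$; the $O_2$ example is purely illustrative.

R.cgs <- 8.314e7            # gas constant in erg mol^-1 K^-1
sqrt(3 * R.cgs)             # ~1.58e4, the prefactor in option (b)
sqrt(3 * R.cgs * 300 / 32)  # e.g. O2 (M = 32 g/mol) at 300 K: ~4.8e4 cm/s (~483 m/s)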
property to be exact a 1-form on $\mathbb R^2 -\{(0,0)\}$ (a) Let $\omega$ a $1-$form defined on the open set $U \subset \mathbb R ^n$ and $c:[a,b] \to U$ a $C^1 -$differentiable curve such that $|\omega (c(t))| \leq M \quad \forall t \in [a,b]$ Prove that $$\displaystyle{\Bigg| \int_c \omega \Bigg| \leq ML}$$ where $L$ is the length of the curve $c$. (b) Let $\omega=a_1dx +a_2dy$ a closed $1-$form defined on $\mathbb R^2 -\{(0,0)\}$. If $\omega$ is bounded ( which means that $a_1 ,a_2$ are bounded) on a disk with center the origin $O(0,0)$ prove that: $$\omega \text{ is exact on } \mathbb R^2 -\{(0,0)\}$$ (c) If $\omega$ is a closed $1-$form defined on $\mathbb R^2 -\{(0,0)\}$ such that $\displaystyle{ \lim_{x^2 + y^2 \to 0} \left( \sqrt{x^2 + y^2} \omega \right) =0 }$ prove that : $$\omega \text{ is exact on } \mathbb R^2 -\{(0,0)\}$$ I have done (a) but I am stuck in (b) and (c). I think I have to use (a) and Poincare's lemma for $1-$ forms but I don't know how. - I don't know what your proof of ii) looks like, but I'd expect that you estimate $$| \int_{\gamma} \omega | \le C M r$$ for closed curves $\gamma$ surrounding the origin, e.g. $\gamma(t)= r(\cos t , \sin t)$ and then let $r\rightarrow 0$. That approach should do the trick, for iii), as well. (in ii) you know the integrals approach $0$ as fast as $r$, but you don't need that rate of decay, convergence to $0$ is sufficient). Edit (added explanation): if you write down the integral with the curves I defined earlier, you get $$\left| \int_\gamma \omega\,\right| =\left| \int_0^{2\pi}r(\omega_1\circ\gamma\sin(t) + \omega_2\circ\gamma \cos(t)) dt \right|\le \int_0^{2\pi}|\cdots |\le C r|\sup_{x=r} \omega |$$ for some constant $C$ not depending on $r$ or $\omega$ which swallows the integral over $\sin, \cos$. Note $r=\sqrt{x^2+y^2}$, so the expression on the rhs tends to $0$ when $r$ does. Since $\omega$ is closed this implies that $\int_\gamma \omega$ vanishes for all these $\gamma$, since they are homotopic to each other in the domain of definition of $\omega$, so the integral is the same for all these curves. You should be in the posession of a theorem which allows you to conclude from this that $\omega$ is exact. If this is not true you should explain how you solved ii) Another edit: In simply connected domains every closed form is exact, this you seem to know. Recall the proof of this. In general, this is not true. What remains true is that a closed differential form $\omega$ is exact if the integral along any closed curve vanishes. This applies here, as I outlined earlier. (You should have seen a theorem claiming something like that if you are given this kind of exercise. If not, it is not too hard to see, though a bit technical if you want to make it rigorous. You pick and fix any point $p$ in the domain you are looking at, $U$ say, (assume it's open and connected, then it's path connected), and for $q$ in $U$ choose any smooth curve $\gamma_{pq}$ joining $p$ and $q$. Then define $$F(p) = 0; \,\, F(q) :=\int_{\gamma_{pq}} \omega$$ You need to verify that this is well defined, i.e. does not depend on the particular choice of $\gamma_{pq}$, but this is what follows from the fact that the integral over any closed curve vanishes. The reasoning above does show this only for curves winding once around the origin, but is easily generalizd accordingly. I don't go into the details of this particular claim since I don't know which tools you have available. 
Once you know $F$ is well defined you show $dF=\omega$, this is similar to the case of simply connected domains). - This is, btw, a nice little exercise which shows that one should become wary if $|\omega|$ grows faster than $1/r$ when $r\rightarrow 0$. –  user20266 Jun 18 '12 at 11:40 I can't understand how from this I will prove that $\omega$ is exact. Can you explain it with a little more details? Thank you! –  passenger Jun 18 '12 at 12:53 @passenger I added an explanation to the answer. –  user20266 Jun 18 '12 at 13:43 I didn't said that I solved (b) ! Anyway if we were in a simple connected set then this is true but since $\mathbb R^2$ is not I can't see how I conclude this. –  passenger Jun 18 '12 at 15:00 @passenger: you can use brute force here, since $\sin$ and $\cos$ are both bounded by $1$ in absolute value, so, for example, $|r\omega_1 \circ \gamma(t) \sin(t)| \le |r\omega_1\circ \gamma(t) | \le r \sup_{x=r}| \omega_1 |$. This last rhs is constant if $r$ is fixed, so integrating everything gives you this constant times $2\pi$. The same can be done for the $\omega_2$ term, so you arrive at something like $\cdots \le 2 \pi r (\sup_r |\omega_1 | + \sup_r |\omega_2 |) \le 4\pi r \sup_r |\omega|$ –  user20266 Jun 18 '12 at 17:46
15th European Turbulence Conference 2015 August 25-28th, 2015, Delft, The Netherlands ## Invited speakers: Prof. Marc Brachet. Ecole Normale Superieure, Paris, France Prof. Peter G. Frick, Institute of Continuous Media Mechanics, Perm, Russia Prof. Bettina Frohnapfel,  Karlsruher Institut fur Technology, Germany Prof. Andrea Mazzino, Dipartimento di Fisica, University of Genova, Italy Prof. Bernhard Mehlig. Department of Physics, University of Gothenburg, Sweden Prof. Lex Smits, Mechanical and Aerospace Engineering, Princeton University, USA Prof. Chao Sun Physics of Fluids, University of Twente, The Netherlands Prof. Steve Tobias, Applied Mathematics, University of Leeds, United Kingdom 10:30 15 mins Direct numerical simulation of weakly spanwise-rotating turbulent plane Couette flow Jie Gai, Zhenhua Xia, Qingdong Cai Abstract: In this report, we conduct direct numerical simulations (DNS) of weakly spanwise-rotating plane Couette flows at Reynolds number $Re_w = U_wh/\nu= 1300$ (here, $U_w$ is the half the wall velocity difference, and $h$ is half-channel height). A series of simulations with different rotation numbers $Ro = 2\Omega h/U_w$ ($\Omega$ is constant angular velocity component in the spanwise direction) is carried out to investigate the effect of $Ro$ on the flow statistics. Our results show that the flow statistics are affected by the $Ro$, and a "critical" rotation number $Ro^*$ (between $Ro=0.01$ and $Ro=0.05$) is observed, where the kinetic energy of secondary flow contributes about a half of the turbulent kinetic energy, and the mean shear rate at the center line reaches a minimum value. We conjecture that different mechanisms should exist around $Ro^*$, and will be investigated further. 10:45 15 mins Wavelet analysis of broadband signals to extract amplitude and frequency modulation: an application to wall turbulence Woutijn Baars, Krishna Talluru, Nick Hutchins, Ivan Marusic Abstract: Large-scale structures in wall-bounded turbulent flows are known to exhibit a coupling with the small-scale energy in the flow. Besides a superposition of large-scale energy onto the near-wall dynamics, this coupling comprises an amplitude and frequency modulation of the small-scale fluctuations by the large-scale motions. In this work we use wavelet analysis to examine amplitude and frequency modulation. Albeit the wavelet-based approach for amplitude modulation condenses to analyses presented in earlier studies, the strength of the approach becomes evident from a convenient extension of the technique to extract local instantaneous frequency modulation. While discrete techniques were employed previously, an application of a continuous approach results in inherent advantages when phase lags between the large- and small-scale fluctuations in terms of amplitude and frequency modulation are investigated. 11:00 15 mins Spectra of turbulent energy transport in channel flows Yoshinori Mizuno Abstract: To reveal the scale-dependences of the transport of turbulent energy in a channel flow, the constituents of the budget equation of turbulent energy for the Fourier modes of velocity fluctuations are computed by using direct numerical simulations. At each height in the buffer and overlap regions, the transport in the wall-normal direction by the turbulent convection provides energy to the fluctuations at small scales, but takes it away from those at large-scales. 
Furthermore, energy taken from the large-scales in the overlap region is carried upward to the center of channel and also downward to the vicinity of the wall. This downward transport is expected to cause the anomaly of the turbulent intensity and the constituents of the budget equation near the wall. The transport between scales and their scaling will also be discussed in the talk. 11:15 15 mins Numerical investigation of localized exact solutions of the Navier-Stokes equations in pipe flow. Vladimir Pimanov, Nikolay Nikitin Abstract: The edge state solution in pipe flow at Re=2200 is calculated numerically. The solution has the form of spatially localized puff-like structure drifting downstream. In the moving frame it is represented by a steady average flow and time-periodic pulsation flow. It is shown, that the Kelvin-Helmholtz instability mechanism is not valid for pulsation generation in the edge state flow. 11:30 15 mins TURBULNENT STRUCTURES IN AN OPTIMAL TAYLOR-COUETTE FLOW BETWEEN TWO COUNTER-ROTATING CYLINDERS Razieh Jalalabadi, Muhammad Nadeem, Hyung Jin Sung Abstract: Taylor-Couette flow with two independently counter-rotating cylinders is investigated. Direct numerical simulation is applied to study flow structure and angular velocity transport for η = 0.714 at optimum and fully turbulent regime. The main purpose is to study the coherent structure in both axial and radial directions and its contribution to angular velocity transport in optimum condition. Visualizing the vortical structure (and other structural parameters) with the distribution of azimuthal velocity and ω-Nusselt number leads to a better understanding of turbulent flow structure at optimum condition comparing to non-optimum condition. 11:45 15 mins PREDICTING THE RESPONSE OF SMALL-SCALE NEAR-WALL TURBULENCE TO LARGE-SCALE OUTER MOTIONS Lionel Dr. Agostini, Michael Prof. Leschziner Abstract: Abstract: The paper deals with the question of how to determine – or “predict” - the near-wall-turbulence statistics from a Reynolds-number-independent, “universal”, small-scale signal, and the Reynolds-number-dependent large-scale outer motions in the log layer. An empirical model is proposed, which is intended to take into account the effect of “splatting”, not previously considered, thus offering an improved representation of the near-wall-turbulence field. 12:00 15 mins Turbulent plane Couette flow with wall-transpiration Sergio Hoyas, Stefanie V. Kraheberger, Martin Oberlack Abstract: In the present abstract, DNS results obtained for turbulent plane Couette flow with wall-normal transpiration velocity are presented. Important equations valid in such a flow are derived, describing the total shear stress and the relation between the friction velocities at the lower and upper wall. These expressions are of importance, as there are neither experimental nor DNS data to compare with. Equally important, we derive a center region and a viscous sublayer velocity scaling for the suction wall, which were both validated using the DNS data.
# How do you calculate the formal charge of Cl in ClO^- and ClO_3^-?

Jul 30, 2016

In both examples, the chlorine atom is neutral, and the charge is presumed to reside on oxygen.

#### Explanation:

For $Cl$ and $O$, there are $7$ and $6$ valence electrons, respectively, associated with the neutral atoms. For hypochlorite ion, $Cl - {O}^{-}$, we have to distribute $7 + 6 + 1$ electrons in the Lewis structure. There are thus $7$ electron pairs. One of these electron pairs is conceived to form the $Cl - O$ bond, and so around each chlorine and each oxygen atom there are 3 lone pairs of electrons. Because the bonding pair of electrons is shared, i.e. one electron is claimed by $Cl$ and one by $O$, the chlorine atom owns 7 valence electrons and is thus formally neutral, while the oxygen atom also owns 7 valence electrons and thus has a FORMAL negative charge. That is, oxygen, $Z = 8$, has 7 valence electrons and 2 inner core electrons, and thus 9 electrons in total. Given this electronic formalism, the oxygen centre is formally negative, and our Lewis structure certainly represents this.

And now for chlorate, $Cl {O}_{3}^{-}$. We have $7 + 6 + 6 + 6 + 1$ valence electrons, i.e. 26 electrons, and 13 electron pairs to distribute. A Lewis structure of ${\left(O =\right)}_{2} \ddot{C} l \left(- {O}^{-}\right)$ is reasonable and, I think, correctly accounts for the charge. Chlorine is neutral, and the singly bound oxygen has a negative charge. Of course, this charge is distributed to the other oxygen centres by resonance.

For completeness, we should consider perchlorate, $Cl {O}_{4}^{-}$, where chlorine has its maximum Group 7 oxidation number of $+VII$. Here, we have $7 + 4 \times 6 + 1 = 32$ valence electrons, and a Lewis structure of $\left({}^{-}O - \right) Cl {\left(= O\right)}_{3}$, again with the charge FORMALLY residing on an oxygen atom, which we CONCEIVE to be singly bound to chlorine.
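If you prefer a single bookkeeping formula for these assignments, the standard formal charge expression (not stated explicitly above, but consistent with the electron counting used there) is

$FC = V - N - \frac{1}{2}B$

where $V$ is the number of valence electrons of the free atom, $N$ the number of non-bonding electrons, and $B$ the number of electrons in bonds to that atom. For hypochlorite this gives $FC(Cl) = 7 - 6 - \frac{2}{2} = 0$ and $FC(O) = 6 - 6 - \frac{2}{2} = -1$, matching the neutral chlorine and formally negative oxygen described above.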
• Create Account # Do overused monsters disappoint or annoy you? Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. 23 replies to this topic ### Poll: Do overused monsters disappoint or annoy you? (32 member(s) have cast votes) #### How do you feel about commonly-used monsters vs. more unusual ones? 1. I like commonly-used monsters, they are familiar and bring back fond memories (10 votes [31.25%]) Percentage of vote: 31.25% 2. I am sick of commonly-used monsters and don't want them as pets at all (9 votes [28.12%]) Percentage of vote: 28.12% 3. This issue is unimportant or irrelevant (13 votes [40.62%]) Percentage of vote: 40.62% Vote Guests cannot vote ### #1sunandshadow  Moderators   -  Reputation: 3531 Like 0Likes Like Posted 16 August 2011 - 04:04 PM I'm working on quest and level design for the first post-tutorial area of an RPG. The game has an overall requirement that the monsters must all be small at the beginning and most will get larger as the game progresses. Also the first area monsters should seem like appropriately mild dangers to send a student or teenager to deal with as a training exercise. So, of course many RPGs have done the same sort of thing. Slimes, rats, bats, beetles, spiders, carnivorous plants, and mushrooms all seem like very common choices for this type of area and purpose. They would also all do fine for the area's secondary purpose of introducing players to the monster capturing and breeding system, especially the beetles, rats, and slimes. The question is are they too common? Would you be disappointed or annoyed to enter a fantasy world and find yourself asked to fight one of these kinds of monsters and capture them as starter pets? Would something more exotic like, oh, knee-high mini tyrannosaurs which the game states to be a mildly annoying local pest, be a better choice? For a very reasonable fee I am available as a freelance design consultant, editor, or ghostwriter. PM me if interested. I have a general interest in 1. games involving pet breeding or farming, and 2. interactive story romance. If you'd like to discuss one of these you may PM me. ### #2Khaiy  Crossbones+   -  Reputation: 1263 Like 0Likes Like Posted 16 August 2011 - 05:34 PM For my part, I don't mind more common monsters provided that there are plentiful alternatives as well. I suppose it would also depend on how long I'd be stuck with that particular monster, which would also affect how much I see them around with other people. I think that I would probably prefer a more original set, but to do that you might be forced to make stylistic choices you would not need to with more common types. These could be a bit of a turn off for some players. ### #3Caldenfor  Members   -  Reputation: 323 Like 0Likes Like Posted 16 August 2011 - 07:31 PM I'm working on quest and level design for the first post-tutorial area of an RPG. The game has an overall requirement that the monsters must all be small at the beginning and most will get larger as the game progresses. Also the first area monsters should seem like appropriately mild dangers to send a student or teenager to deal with as a training exercise. So, of course many RPGs have done the same sort of thing. Slimes, rats, bats, beetles, spiders, carnivorous plants, and mushrooms all seem like very common choices for this type of area and purpose. 
They would also all do fine for the area's secondary purpose of introducing players to the monster capturing and breeding system, especially the beetles, rats, and slimes. The question is are they too common? Would you be disappointed or annoyed to enter a fantasy world and find yourself asked to fight one of these kinds of monsters and capture them as starter pets? Would something more exotic like, oh, knee-high mini tyrannosaurs which the game states to be a mildly annoying local pest, be a better choice? Do you know how many times I played Dragon Warrior and walked around the world praying NOT to find a certain type of slime? It all depends on the context. I personally couldn't vote for any of the selected responses,. but I wanted to leave a reason as to why. If done well, it doesn't really matter. Whatever fits to the world around it works for me. ### #4lithos  Members   -  Reputation: 413 Like 0Likes Like Posted 16 August 2011 - 07:43 PM I think that if I ever made an RPG. I would purposefully put the player in a school like experience, and their first "victims" will be upperclassmen manning the dungeon as a training exercise... The original ones starting out as purposefully losing(as they're supposed to for the exercise) and the final boss one taking it far too seriously(with real risk, due to something that happened earlier). ____________ anything but rats. Why out of anything does it need to be rats, goblins or kobolds. In your game you have players eventually being taught to raise monsters, why not put them against "mistakes" and horrible chimera's that can't really defend themselves because of their mix(large stupid beasts prone to rage and with very readable moves). ### #5sunandshadow  Moderators   -  Reputation: 3531 Like 0Likes Like Posted 16 August 2011 - 08:52 PM anything but rats. Why out of anything does it need to be rats, goblins or kobolds. I dunno if this was a rhetorical question or an actual question. But rats/mice are a common choice along with spiders and such because they are something people may have actually killed or wished they could kill in their homes or other local environment. Also rats have an association of carrying sickness, while spiders have an association of being poisonous and a bit vampiric. For a very reasonable fee I am available as a freelance design consultant, editor, or ghostwriter. PM me if interested. I have a general interest in 1. games involving pet breeding or farming, and 2. interactive story romance. If you'd like to discuss one of these you may PM me. ### #6HelloSkitty  Members   -  Reputation: 152 Like 1Likes Like Posted 16 August 2011 - 08:54 PM I sort of like having to start out fighting little rats that are easy to kill. If you start by teaching someone how to fight something like a one eyed purple flying unicorn, it gives a slight impression of unrealism, and almost specialization. Something unique will have unique strengths and weaknesses the player will have to pick up on later, but the first few missions whose purpose is to teach someone how to use the attack button should not also expect someone to remember that "to kill a purple flying cyclops unicorn, play the note on your flute corresponding to the blue button, then etc." Also, as a game progresses, enemies typically get more outrageous and flashy, forcing the player to continue just to see "what kind of monstrosity could the designer come up with that's better than this monster?" 
Starting off against exotic monsters implies a lesser degree of gradient between exoticness of a monster, which could lead to boredom. One thing to watch for, however, is excessive "teaching", or slateness. One game made me kill hundreds of armadillos using a bow and arrow (as a tutorial on attacking) before progressing to the next point in the story. Needless to say, 5 or 10 would suffice for teaching how to right click something. Don't force the player to focus on only one "stale enemy". Give them a choice between goblins, rats, tumble-weeds, orange slimes, etc as possible training subjects. If they can make a pet out of a defeated monster, rather than only having "get a pet rat" as a tutorial, different people will prefer a slime as a first pet, or a goblin servant. Khaiy says it well For my part, I don't mind more common monsters provided that there are plentiful alternatives as well. A penny for my thoughts? Do you think I have only half a brain? Not-so-proud owner of blog: http://agathokakologicalartolater.wordpress.com/ ### #7thePyro_13  Members   -  Reputation: 629 Like 0Likes Like Posted 16 August 2011 - 09:36 PM In the context of a player combat based RPG, I don't mind trope-y monsters. Killing rats is fine if you know you can just leave their corpse and never think of them again. However in a game where i'm expected to capture and devote time to one of these creatures, I tend to find myself put off by the more common monster types. I don't personally mind the mushrooms/slime type monsters(I haven't had much access to JRPGS, outside of dragon quest, so these don't feel as overused to me). But I find a monster capturing game which only includes what is basically normal animals to be a let down. Even if the game provides more interesting monsters further on, their's no point if I gave up at the start. Hope this helps. ### #8JoeCooper  Members   -  Reputation: 338 Like 0Likes Like Posted 17 August 2011 - 12:49 AM If this is a key part of your game, over-deliver. Legend of Mana had a minor monster raising thing. They had some critters found in all the mana games, and a vampire and some other things. It worked fine. But then there's Pokemon which is explicitly based around monster capture, raising, so they have to pull a lot of critters out of their rear. Even the ones that are basically super sized real animals at least have their names weirded out. If this is a core mechanic, a mix of fresh and traditional would be great. ### #9jbadams  Senior Staff   -  Reputation: 12106 Like 0Likes Like Posted 17 August 2011 - 01:14 AM The good thing about using the same common choices that pop up in most other games is that they will feel familiar with players and you can therefore rely on existing knowledge rather than having to explain everything. This should leave you free to explain what players need to know about your game rather than wasting time explaining that in this setting a knee-high-tyrannosaurus is a simple pest; this will probably be particularly applicable to you if you're introducing the player to one or more mechanics they might not find familiar from other RPGs. ### #10sunandshadow  Moderators   -  Reputation: 3531 Like 0Likes Like Posted 17 August 2011 - 02:15 AM One thing to watch for, however, is excessive "teaching", or slateness. One game made me kill hundreds of armadillos using a bow and arrow (as a tutorial on attacking) before progressing to the next point in the story. Needless to say, 5 or 10 would suffice for teaching how to right click something. 
Don't force the player to focus on only one "stale enemy". Give them a choice between goblins, rats, tumble-weeds, orange slimes, etc as possible training subjects. If they can make a pet out of a defeated monster, rather than only having "get a pet rat" as a tutorial, different people will prefer a slime as a first pet, or a goblin servant. To avoid boredom, I don't see changing the monster model used as being essential, changing the AI it uses and if possible the combat animations are much more effective at making it feel like something different. In this particular game the player is intended to capture every monster if he or she decides to participate in monster-capturing gameplay at all. The initial area will have something like two or three colors of monster species A and two or three colors of monster species B. I don't think it's really a problem to tell people to capture a "brown A" first since as soon as they accomplish that they will be expected to also capture the other options (if they are following the optional monster capturing track through the game). They will further be expected to try breeding all possible combinations of these 4-6 types of monster before completing the initial area, which will reveal two or three breedable-only versions of these monsters, including one completely new hybrid type. For a very reasonable fee I am available as a freelance design consultant, editor, or ghostwriter. PM me if interested. I have a general interest in 1. games involving pet breeding or farming, and 2. interactive story romance. If you'd like to discuss one of these you may PM me. ### #11sunandshadow  Moderators   -  Reputation: 3531 Like 0Likes Like Posted 17 August 2011 - 02:57 AM @JoeCooper and jbadams - Good comments, let me clarify a bit what the game's design is like. This is an "octopus" structured game if anyone has heard me talk about that before. It has about 6 types of gameplay which can be considered "core", even though all are optional. Each of these types of gameplay has its own path of quests or achievements (an octopus arm) along which the player travels from the common starting point for them all/social center location of the game (the octopus head). Monster capturing and breeding is one of these arms, and the bred monsters provide the basis for a second arm which is the secondary combat system: tactical turn-based as opposed to the primary combat system which is a standard realtime spellbar/cooldowns system. This secondary combat system exists in both pve and pvp forms and can be played by purchasing monsters other players have bred in the marketplace, if the player does not feel like capturing the monsters themselves. I'm not sure there's any completely new gameplay in the design to teach players, assuming that players have played some kind of monster capturing game before, and some type of tactical combat game before, and some type of MMO with crafting recipes and an auction house before, and some game with a reputation/relationship system before, and some game with a pvp ranking system before, etc. My style as a designer is mainly to combine existing gameplay elements in new ways. If you want a specific comparison for the monster system it's a bit more like that of the Monster Rancher series than Pokemon. 
Every monster type exists in a standard range of colors, and each color is associated with a combat type - for example, all red monsters might have extra high attack and extra low defense, while all green monsters have the opposite, and all white monsters can heal themselves and all pink monsters inflict status ailments, etc. Monster color could be seen as corresponding to the array of possible classes in a traditional RPG. (This all only applies to monsters in the wild though, bred monsters can pair any appearance with any set of tactical combat skills and tactical stats like action points and movement points.) So the point being there's nothing really arcane or confusing, although there's a minor danger of the player feeling overwhelmed at the beginning by being introduced to all the octopus arms and their respective gameplay at the beginning. I don't see explaining that mini t-rexes are the local equivalent of rats to be a waste of time - it's characterizing the world, and conveying the unique game world to a player helps the player become immersed in the game's atmosphere and story. On the other hand I never really liked the more extreme made-up monsters in Pokemon or Monster Rancher. Some of them are transparent - both have a cat monster, both have at least one dragon, etc. I don't care what they are called if I can recognize it as either a real animal or a mythological animal, or a real animal with a minor added element like horns or wings. But I personally don't like the ones that are like nothing I've ever seen before, because they have no associated meaning to me. It's often not clear how they might go about their daily lives or fit into any sort of an ecology, and not being able to picture how the game world works breaks my immersion. I also don't find the unrecognizable monsters to be memorable when I think about Pokemon or Monster Rancher in retrospect. Thus this poll, to see whether it would be regarded as boring if I don't have any monsters more original than griffins, winged versions of normal land animals, dinosaurs, and fish that swim through air instead of water. For a very reasonable fee I am available as a freelance design consultant, editor, or ghostwriter. PM me if interested. I have a general interest in 1. games involving pet breeding or farming, and 2. interactive story romance. If you'd like to discuss one of these you may PM me. ### #12DvDmanDT  Members   -  Reputation: 322 Like 0Likes Like Posted 17 August 2011 - 03:47 AM I think the mechanic is way more important than the setting for me. Letdowns for me are rather when I encounter enemies with lots of hitpoints and fairly low damage. I've recently played a fair amount of Sacred 2 which has pretty much exclusively cliché monsters. I hate the rats in that game simply because they are so hard to hit. They are not particulary dangerous, just hard to hit (I often resorted to some area spell with over a minute cooldown just to kill a single rat). For the record, my problems hitting those damn things may have partially been affected by my stat distribution and skill choices. ### #13Telgin  Members   -  Reputation: 198 Like 0Likes Like Posted 17 August 2011 - 08:33 AM I find myself being increasingly critical of games lately for not being original. I usually take more fault with the available player species and setting than anything else (i.e. elves that live in the trees and dwarves that live in the mountains / underground), but monsters can bug me too. 
I'll be honest in that I literally can't think of the last fantasy RPG that I played that didn't have giant spiders somewhere in it, and 90% have orcs and / or goblins. I don't mind fantasy games having interesting or even fantastic creatures like dragons, but I would like a little originality please. In particular, slimes / gelatinous cubes are one of the types of monsters that bug me most because of their implausibility and ubiquity. Giant arthropods are a close second, followed by green skins. If I was in charge of designing the critters that the player would come across, I'd do my best to make them original and reasonable. I don't know how much emphasis you're putting on the world itself, but as a player I would be impressed if the designers took the effort to draw up new and plausible creatures for their biomes. In a rain forest, for example, I'd expect to see lots of small reptiles (snakes and lizards of various sorts), lots of colorful birds and other things that like to hang out in trees or hide under fallen leaves. In a desert I'd expect to see little of anything, and what I do find would be small and hide a lot during the day. In the plains I'd expect to see larger herbivores and the predators that hunt them. I suppose if you reuse old "tried and true" monsters in an original way, I might be satisfied. Instead of carnivorous plants that are large enough to eat people, why not make the slimes a giant evolution of slime molds that eat smaller critters that get stuck in them? Instead of basilisks, why not just lizards that are large enough, intelligent enough and social enough to be trained to use as mounts or sentries (like a dog might)? As an aside, I do like the idea of mini t-rexes. It's plausible that such a creature could exist and adds a little flavor to the world. It's probably cute too. That's all very general though. For a starting area, I see nothing wrong with having players capture, train and / or breed things like rats, bats and snakes. It would probably take a bit of explaining for me to accept that you could train a spider, or that a mushroom can move around and attack people though (both of which I've seen in real games). In the end, it probably wouldn't really bother me all that much if you did just reuse the stock fantasy monsters. Literally everyone else does. I'd give definite bonus points if you had all original creatures though! After all, Pokemon did it. Success requires no explanation. Failure allows none. ### #14Luckless  Crossbones+   -  Reputation: 1270 Like 0Likes Like Posted 17 August 2011 - 08:40 AM Rats, spiders, and little goblins don't bother me. What bothers me is boring game play, level/character design, story telling, and general feel of a game. The fact that it happens to use rats, spiders, and little goblins is of little importance. Awhile ago I was involved in a pen and paper game, and for several sessions we battled nothing but giant rats, swarms of rats, or the big boss,... a swarm of giant rats. It was still fun because the GM made it fun and interesting. I have also played games where all the enemies were big red Es and everything else was represented by other colored ASCII characters, but I still had a great time playing it. If your signature on a web forum takes up more space than your average post, then you are doing things wrong. ### #15freddyscoming4you  Members   -  Reputation: 112 Like 0Likes Like Posted 17 August 2011 - 08:57 AM Halo, the flood. I hated the flood. Still gives me nightmares. Now is that good or bad? You decide. 
It's your game. ### #16LorenzoGatti  Crossbones+   -  Reputation: 1986 Like 0Likes Like Posted 17 August 2011 - 09:11 AM There are at least two rather orthogonal issues in the original post: which "traditional" monsters should appear in various stages of the game and which monsters are appropriate PC pets. Traditional monsters have an impact on the game's setting that depends on their type. • Natural dangerous animals (rats, snakes, birds of prey, etc.) are expected to be common. If they are rare or absent, it's clearly a very peculiar world and/or an extreme environment. • Straightforward exotic variants of natural animals (giant or intelligent varieties, flying snakes, 8-legs horses, etc.) are likely to be important. Only a few such species would exist (e.g. sentient penguins and 9' hamsters but no giant rats) and they are likely to be an important setting-defining feature: for example, how do fishermen coexist with sentient penguins? • Fantasy races of people and monsters have an heavier baggage of stereotypes: traditions of fantasy literature and games replace zoological common sense. Being original or adding details on top of the stereotypes are the two main ways to do a good job, and a lot of "screen time" is implied in both cases: few good races are usually better than many bad ones. • Some stuff is so cliché that the appearance of unoriginality is unavoidable. D&D-originated monsters (e.g. gelatinous cubes and illithids) are the worst offenders; the usual approach of roguelike games (using many of them as an affectionate semi-parody, and being original in other areas) might not be suitable for other genres. Appropriate pets should be cool, useful and interesting. Picture yourself with a domesticated ameboid slime in your lap, caressing it gently and hoping it doesn't squirt acid on you: wouldn't a plain old kitten be better? Produci, consuma, crepa ### #17laztrezort  Members   -  Reputation: 884 Like 0Likes Like Posted 17 August 2011 - 12:02 PM You could always consider playing off of a trope at some point. For example, in a Bioware RPG (I think it was DA:O), giant rats were one the first creatures you fight, and afterword a comment was made by one of the NPCs highlighting the absurdity of the cliche situation. Also, a common technique used way back during tabletop gaming was to purposely set up a situation using low-level common monsters, then have these monsters behave or possess powers that drastically increase their challenge (thus dashing the players' expectations). Example: your standard sword fodder kobolds - the twist being that they possess uncommon ingenuity and have riddled their lair with devious traps and ambushes (these traps also, of course, taking advantage of other expectations the players would have). Of course, I have no idea if any of this would fit into your particular game play or setting, but I thought I'd throw it out there. Turning tropes upside down for humor or challenges stand out in my memory, but I've never personally been annoyed by overused monsters themselves. ### #18third_ronin  Members   -  Reputation: 102 Like 0Likes Like Posted 17 August 2011 - 04:41 PM Depends. It depends on the type of game and the target platform. If it is a current PC, 360, PS3 game then yeah, there isn't much point for too much overuse when you have a budget of 8 million and a development staff of over 50 artists (including outsourcing). There isn't much of an excuse there, because you can have a lot of resources and storage space. 
Not to mention that there are a LOT of techniques that can be used to mitigate this (texture swaps, generic models that can be fitted with different accessories, etc.). With an indie-type game or games for a portable console (PSP, phones) I can forgive it. After all, it may be one or two people on the project working their tails off... often working a regular job as well. Portable target platforms also have limited horsepower and storage space. Careful character design can help control the overuse. Plan for different textures, accessories, weapons, armor, etc., and really mix it up. I knew someone once who was even working on an in-engine morpher that would automatically vary the height and body type of generic characters while preserving texture coordinates. Pretty cool stuff. It is pitch black. You are likely to be eaten by a grue. ### #19Khaiy  Crossbones+   -  Reputation: 1263 Like 0Likes Like Posted 17 August 2011 - 07:34 PM One other thought I just had: if you have a training area with 3-4 different monsters available, and every player will have one of those four monsters as a pet for a period upon leaving the training area, those monsters will be extremely common, particularly in areas where newer players will be spending a lot of their time. With that in mind, any special flair these starting creatures have would quickly be overwhelmed by the fact that they are constantly in view and encountered by the players. The "everyone has one" mindset would quickly erode any excitement I had in getting an original creature right away. Given this, I would probably prefer to see more common creatures in the mandatory training area and then quickly get the option to have at least one of a much wider variety of creatures. ### #20TechnoGoth  Crossbones+   -  Reputation: 1501 Like 0Likes Like Posted 18 August 2011 - 06:53 AM My main complaint about commonly used monsters is games where they recycle early enemies as higher-level ones of a different color. So you fight blue slimes at level 1 but then red slimes at level 5. Other than that, while my preference would be for more interesting and original creatures, I have no problem facing the same old set of predictable enemies. "Fate and Destiny only give you the opportunity, the rest you have to do on your own." "The people who don't enjoy life are the ones who don't get the joke." The Aspiring Writer Current Projects: Day 0 - prototype post-apocalyptic survival game - Design V2 Upcoming Projects: Sanctuary Zero - post-apocalyptic survival game - Design V2
# DEF CON Quals 2018: official (194 pts) TL;DR: a 1 byte overflow allows you to induce a small bias in the nonce used in the DSA signing algorithm. Use LLL to exploit this bias to find the private key. I also explore a more natural variant of the problem in which the bias is in the most significant byte of the nonce rather than the least significant, and recover the private key in this case as well. You can find the challenge files and my exploit code here. ## DSA This problem is about the Digital Signature Algorithm (DSA). This blog post assumes knowledge of how the algorithm works, but if you’re unfamiliar with it, you can understand it pretty easily by reading the Wikipedia page. The most important thing to know about DSA is that its security is highly dependent on the “entropy, secrecy, and uniqueness” of $$k$$, the nonce used in the signing step of the protocol. If the same $$k$$ is ever used twice to sign different messages, it’s a simple matter of arithmetic to derive the private key. fail0verflow famously exploited exactly this vulnerability to root the PlayStation 3. More recently, researchers discovered a flaw in the Android implementation of SecureRandom which resulted in colliding nonces in Android Bitcoin wallets. (Both of these attacks were actually on ECDSA, a variant of DSA that uses elliptic curves groups instead of multiplicative groups modulo a prime, but the vulnerability is exactly the same). It turns out that even if there’s a small bias in the values you choose for your nonce, you can still recover the private key given enough signatures on known messages. This attack is much more difficult to carry out, and is the subject of this challenge. ## Reverse Engineering The binary is pretty simple; it lets you sign messages starting with ls, du or stat (but not cat or anything else) and execute signed messages starting with ls, stat, du, or cat. It uses the GMP library to handle the signing and verifying stages. In order to generate $$k$$, the nonce, it reads 20 bytes from /dev/urandom and then (curiously), reverses these bytes right before signing the command. It uses fread to read the command to sign into a buffer of size 256. It does this one byte at a time and stops reading when it encounters a newline or when it’s read 256 bytes. It replaces the newline with a null terminator, but also appends a null terminator right after the last character read if it never encountered a newline. __int64 __fastcall fread_stuff(__int64 a1, unsigned int a2) { signed int i; // [rsp+18h] [rbp-8h]@1 for ( i = 0; i < a2; ++i ) { if ( (unsigned int)fread((void *)(i + a1), 1uLL, 1uLL, stdin) != 1 ) { exit(1); } if ( *(_BYTE *)(i + a1) == 10 ) { *(_BYTE *)(i + a1) = 0; return (unsigned int)i; } } *(_BYTE *)(i + a1) = 0; return (unsigned int)i; } This gives us our 1 byte overflow: if we send 256 bytes, none of which contain a newline, it will set the byte immediately after our buffer to null. As it turns out, this byte is the most-significant byte of $$k$$, our nonce. Of course, after the reversal, this will become the least-significant byte. ## The Exploit We have a DSA signing oracle which we can induce to sign messages with a biased nonce. The bias is small (only 8 bits), but it’s enough to cause a full break. The attack is described pretty well in this stackexchange answer. The attack uses the LLL algorithm which is quite possibly the biggest cryptographic hammer out there. 
It can be used to break a dizzying array of cryptographic algorithms, and it shows up in CTFs all the time these days. The LLL algorithm solves the problem of lattice reduction. Simply put, given a set of linearly independent vectors $$m_1$$, $$m_2$$, … $$m_n$$, the LLL algorithm will try to find a different set of vectors $$v_1$$, $$v_2$$, … $$v_n$$ that span the same space, with the goal of making the resulting vectors short (small in magnitude) and orthogonal. To use LLL, we have to set up our input vectors such that an (integer) linear combination of them can result in a very short vector which will indirectly yield the private key, $$x$$. The insight is that we can express our nonce $$k$$ as $$256 \cdot b$$ where $$b$$ is small. Specifically, $$b$$ is about 1/256th the size of $$q$$. With a little arithmetic we can express $$b$$ as $b \equiv u + xt \pmod{q}$ where $$t$$ and $$u$$ are simple functions of $$r$$, $$s$$ and our message hash $$h$$. We can rewrite this as: $b = u + xt + jq$ for some integer $$j$$. The intuition here is that the right-hand side is composed of big numbers (around the same size as $$q$$), but the left-hand side is small. We need to translate this notion into a vectorized form, so let’s say that we collect a bunch of such equations for different signature pairs $$(r_i, s_i)$$: $b_1 = u_1 + xt_1 + j_1q$ $b_2 = u_2 + xt_2 + j_2q$ $\vdots$ $b_n = u_n + xt_n + j_nq$ Now, consider the vector $\mathbf{b} = [b_1, b_2, \cdots, b_n].$ This vector is pretty small. Furthermore we can almost express it as a linear combination of $\mathbf{t} = [t_1, t_2, \cdots, t_n]$ and $\mathbf{u} = [u_1, u_2, \cdots, u_n]$ if we ignore the $$j_iq$$ terms. If we mix in the following $$n$$ vectors: $\mathbf{q_1} = [q, 0, \cdots 0]$ $\mathbf{q_2} = [0, q, \cdots 0]$ $\vdots$ $\mathbf{q_n} = [0, 0, \cdots q]$ we can express $$\mathbf{b}$$ as: $\mathbf{b} = \mathbf{u} + x\mathbf{t} + \sum_{i = 1}^{n}j_i \mathbf{q_i}$ So, we have $$n + 2$$ known vectors which can linearly combine to produce a short vector. Crucially, we don’t know what linear combination will produce $$\mathbf{b}$$, since we don’t have knowledge of the private key $$x$$, but we can use LLL to find the right weights. When expressed as a matrix, our vectors look like: $\begin{pmatrix} q & 0 & 0 & \cdots & 0 \\ 0 & q & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & q \\ t_1 & t_2 & t_3 & \cdots & t_n \\ u_1 & u_2 & u_3 & \cdots & u_n \end{pmatrix}$ The last trick we’ll employ allows us to more easily find $$x$$ once we’ve LLL-reduced our basis to produce $$\mathbf{b}$$. Specifically, we’ll augment each of our vectors with two extra dimensions: $\begin{pmatrix} q & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0 & q & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & q & 0 & 0 \\ t_1 & t_2 & t_3 & \cdots & t_n & s_T & 0 \\ u_1 & u_2 & u_3 & \cdots & u_n & 0 & s_U \end{pmatrix}$ where $$s_T = s_U = 1$$ act as sentinel values. When we run LLL on this basis, our short vector of interest will have $$s_U$$ as its last element, and $$xs_T$$ as its penultimate element. This will allow us to easily identify our short vector (as the one with 1 in its last position), and derive our private key from it (look at the second-to-last element). 
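To make this concrete, here is a minimal Sage-style sketch of the lattice step. This is not the author's exploit code; it assumes lists r, s and h holding the collected signature values and message hashes as integers, along with the public parameters p, q, g and y, and it uses the simplified sentinels $$s_T = s_U = 1$$.

```python
# Sketch only: r, s, h are integer lists collected from the signing oracle,
# q is the group order, and each nonce satisfies k = 256*b with b "small".
# From s = k^-1 (h + x*r) mod q we get b = u + x*t (mod q) with:
n = len(r)
t = [(ri * inverse_mod(256 * si, q)) % q for ri, si in zip(r, s)]
u = [(hi * inverse_mod(256 * si, q)) % q for hi, si in zip(h, s)]

rows = []
for i in range(n):
    rows.append([0] * i + [q] + [0] * (n - 1 - i) + [0, 0])  # the q*e_i rows
rows.append(t + [1, 0])   # the t row, sentinel s_T = 1
rows.append(u + [0, 1])   # the u row, sentinel s_U = 1
B = Matrix(ZZ, rows)

for row in B.LLL():
    if abs(row[-1]) == 1:                 # candidate for the short vector b
        x_cand = (row[-2] * row[-1]) % q  # penultimate entry is x * s_T (up to sign)
        if pow(g, x_cand, p) == y:        # check against the public key
            print("private key:", x_cand)
```

With both sentinels set to 1, the row we want is simply the one whose last entry is plus or minus 1; the sign factor handles LLL returning the negated vector.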
This trick was originally introduced in the problem description of the corresponding cryptopals challenge (the challenge is part of set 8, which you can’t find on the main cryptopals website but was released publicly anyway as a very indirect consequence of Donald Trump being elected president). cryptopals recommends setting $$s_T$$ as $$2^{-8}$$ and $$s_U$$ as $$q \cdot 2^{-8}$$ in order to make each entry of the resulting short vector around the same size. In practice it doesn’t matter too much, and you can get away with setting them both to 1. Just make sure to not set $$s_T$$ to something greater than $$1$$, because this will prohibitively increase the size of our target vector since it includes an $$xs_T$$ entry. I implemented the exploit in sage since that’s the easiest way to access an LLL() function from python code. The stackexchange post suggests that an 8-bit bias should require $$n = 20$$ signature pairs, but I needed at least 66. ## What if it was the most significant byte? The combination of the buffer overflow and the cryptographic attack makes this challenge quite cute. However, I’d argue the weird call to the reversal function in order to ensure that the null terminator ends up as the least significant byte of the nonce rather than the most significant byte makes it a little less elegant. Why would any real program do that? What would happen if that reversal function was never called? Well, instead of the least significant byte being 0, the most significant byte would be 0. This is still a bias in the nonce! Since $$q$$ is less than $$2^{160}$$, it might not be a bias of 8 bits, but it’s quite close. For our particular $$q$$ the bias ends up being ~7.75 bits, which is plenty. How would we modify our attack in this variant? Well, it’s really as simple as changing all the instances of $$2^8$$ to 1, since $$k = b$$ is already a small number in comparison to $$q$$. To try this out, I created a patched copy of the binary with the call to the nonce reversal function nopped out. I then collected the same data from this variant binary, and modified the sage exploit code to extract the private key. To try this out yourself, simply set the msb flag to True in the exploit code. I’m not sure why OOO included the reversal function at all. This problem was classified under “Fruits and Desserts”, so it was meant to be difficult, and it seems to me that the byte reversal was a pretty big giveaway. ## The Flag Once we have the private key, we can simply sign a message starting with cat, and execute it to give us the flag. (env) [defconquals2018-official]> python get_data.py interact [+] Opening connection to 3aef2bbc.quals2018.oooverflow.io on port 31337: Done [*] POW Challenge: 7rVwoiN0yN 22 [*] POW Solution: 890897 [*] Switching to interactive mode > $X cmd:$ cat r:$175672136897532857177216578242788547073729326124 s:$ 301997289336897032672653458915890188389476020087
# Changing scroll rate for page down key? [closed] I have a presentation remote that allows me to advance Mathematica slides in the SlideShow environment. The control sends a page down key. Some of my slides extend below the screen window, and page down scrolls a page at a time. Is there a way to change the scroll rate associated with a page down key-stroke? I'd like to advance the slides by half a page at a time, or a quarter, for example, or even cell-by-cell. The notebook options associated with ScrollingOptions seem not to control what I need. My system is MacOSX, running Mathematica 8.0.4. - ## closed as off-topic by Kuba, MarcoB, blochwave, m_goldberg, LouisNov 30 '15 at 19:29 This question appears to be off-topic. The users who voted to close gave this specific reason: • "The question is out of scope for this site. The answer to this question requires either advice from Wolfram support or the services of a professional consultant." – Kuba, MarcoB, blochwave, m_goldberg, Louis If this question can be reworded to fit the rules in the help center, please edit the question. If your Mac has a trackpad, simply control the scroll by a two-finger vertical swipe. It should respond in accordance with the length of the swipe. – DavidC Sep 17 '12 at 14:57 @DavidCarraher I do that, but I'd like the option of controlling the presentation while standing away from the computer, hence the presentation remote :). – JxB Sep 17 '12 at 15:15 @DavidCarraher youtube.com/watch?v=6ArqrAP_e_c – Dr. belisarius Sep 17 '12 at 15:16 @belisarius Absolutely lovely. – DavidC Sep 17 '12 at 15:43 I voted to close this questions since it is quite old so I suppose a documented answer doesn't exist. – Kuba Nov 30 '15 at 14:19
• 11 • 9 • 10 • 9 • 11 • ### Similar Content • While working on a project using D3D12 I was getting an exception being thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is using plain C so it uses the COBJMACROS. The following application replicates the problem happening in the project. #define COBJMACROS #pragma warning(push, 3) #include <Windows.h> #include <d3d12.h> #include <dxgi1_4.h> #pragma warning(pop) IDXGIFactory4 *factory; ID3D12Device *device; ID3D12DescriptorHeap *rtv_heap; int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow) { (hinst), (pinst), (cline), (cshow); HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory); hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, &device); D3D12_DESCRIPTOR_HEAP_DESC desc; desc.NumDescriptors = 1; desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV; desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE; desc.NodeMask = 0; hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap); D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap); (rtv); } The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart show that the error occurs on the instruction mov  qword ptr [rdx],rax which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you. • By lubbe75 As far as I understand there is no real random or noise function in HLSL. I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway... Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious? • Hi, I finally managed to get the DX11 emulating Vulkan device working but everything is flipped vertically now because Vulkan has a different clipping space. What are the best practices out there to keep these implementation consistent? I tried using a vertically flipped viewport, and while it works on Nvidia 1050, the Vulkan debug layer is throwing error messages that this is not supported in the spec so it might not work on others. There is also the possibility to flip the clip scpace position Y coordinate before writing out with vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I need to track down for the whole engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders. • By NikiTo Some people say "discard" has not a positive effect on optimization. Other people say it will at least spare the fetches of textures. if (color.A < 0.1f) { //discard; clip(-1); } // tons of reads of textures following here // and loops too Some people say that "discard" will only mask out the output of the pixel shader, while still evaluates all the statements after the "discard" instruction. MSN> discard: Do not output the result of the current pixel. 
<MSN As usual it is unclear, but it suggests that "clip" could discard the whole pixel(maybe stopping execution too) I think, that at least, because of termal and energy consuming reasons, GPU should not evaluate the statements after "discard", but some people on internet say that GPU computes the statements anyways. What I am more worried about, are the texture fetches after discard/clip. (what if after discard, I have an expensive branch decision that makes the approved cheap branch neighbor pixels stall for nothing? this is crazy) • By NikiTo I have a problem. My shaders are huge, in the meaning that they have lot of code inside. Many of my pixels should be completely discarded. I could use in the very beginning of the shader a comparison and discard, But as far as I understand, discard statement does not save workload at all, as it has to stale until the long huge neighbor shaders complete. Initially I wanted to use stencil to discard pixels before the execution flow enters the shader. Even before the GPU distributes/allocates resources for this shader, avoiding stale of pixel shaders execution flow, because initially I assumed that Depth/Stencil discards pixels before the pixel shader, but I see now that it happens inside the very last Output Merger state. It seems extremely inefficient to render that way a little mirror in a scene with big viewport. Why they've put the stencil test in the output merger anyway? Handling of Stencil is so limited compared to other resources. Does people use Stencil functionality at all for games, or they prefer discard/clip? Will GPU stale the pixel if I issue a discard in the very beginning of the pixel shader, or GPU will already start using the freed up resources to render another pixel?!?! # DX12 Reading from the CPU ## Recommended Posts Hello, I'm working on a system based on Structured Buffers, the idea is that they can be used in the GPU as UAV/SRV and then the data can be read back in the CPU. This system is used to do some queries in my application. First I have two main resources: + Default Resource: This will be the resource used by the GPU. + Upload Resource: In case I want to upload data from the CPU I'll use this as an intermediate + Double Buffered ReadBack Resources: I have two Read Back buffers to copy data from the GPU to the CPU. Let me show some code: const void* OpenReadBuffer() { HRESULT res = S_FALSE; if (mFirstUse) mFirstUse = false; else mCbFence.WaitUntilCompleted(renderPlatform); // Map from the last frame } { } { // Schedule a copy for the next frame mCbFence.Signal(renderPlatform); // Swap it! } This is how I create the different resources: // Default D3D12_RESOURCE_FLAGS bufferFlags = computable ? D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS : D3D12_RESOURCE_FLAG_NONE; res = device->CreateCommittedResource ( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), D3D12_HEAP_FLAG_NONE, &CD3DX12_RESOURCE_DESC::Buffer(mTotalSize,bufferFlags), nullptr, IID_PPV_ARGS(&mBufferDefault) ); res = device->CreateCommittedResource ( D3D12_HEAP_FLAG_NONE, &CD3DX12_RESOURCE_DESC::Buffer(mTotalSize), nullptr, ); res = device->CreateCommittedResource ( D3D12_HEAP_FLAG_NONE, &CD3DX12_RESOURCE_DESC::Buffer(mTotalSize), D3D12_RESOURCE_STATE_COPY_DEST, nullptr, ); I'm using a Fence to be 100% sure that it is synchronised, I could have more that 2 buffers but at the moment I would like to keep it simple. The values that I get from OpenReadBuffer() are all 0. If I debug it, it looks like the Read Back Buffers have some valid data. 
What could be the issue? Thanks! Edited by piluve ##### Share on other sites If you want to specify that you'll read the entire contents of the buffer when calling Map, then you can pass NULL as the "pReadRange" parameter. Passing a range where End <= Begin means that you won't be reading any data, which isn't what you want: "This indicates the region the CPU might read, and the coordinates are subresource-relative. A null pointer indicates the entire subresource might be read by the CPU. It is valid to specify the CPU won't read any data by passing a range where End is less than or equal to Begin." ##### Share on other sites MJP is "write", but try to not use nullptr for a range, because the debug layer warn it may be a performance issue to map a full resource and an unwanted action. You can just send { 0, size } to mute the message ##### Share on other sites @galop1n and @MJP are correct in that the range is wrong... but that's not your problem. That only matters on systems which don't have cache coherency, which I can pretty much guarantee you're not using. Since you claim that "while debugging, the data has valid values," I take that to mean that if you use breakpoints and inspect the data, you see correct contents, but if you let the app run normally, you don't. That sounds to me like your synchronization isn't actually working correctly, and when your breakpoint hits, the GPU continues executing and fills in the memory you're inspecting by the time you look at it. ##### Share on other sites Hello @MJP, @galop1n, totally agree, now that you mention it, I mostly always use the same 0,0 range, I will change it just to be nice to the API. Hi @SoldierOfLight , by debugging I was talking about a GPU debugger (like Visual Studio graphics debugger), I can see that the Read Back buffers had data, but then its al 0 (CPU side). EDIT I would like to point something, I've checking it and it looks like in most cases, this read back system is working just fine, but I have one case where it is returning invalid data. Could the problem be in other place, like the state of the resources? The thing is, Why I see valid data with the debugger? Edited by piluve ##### Share on other sites What OS version are you on? I'm currently aware of a bug where mapping a resource can cause it to incorrectly return 0s on some Windows 10 Insider builds. If you map the resource earlier and leave it mapped, does the problem go away? ##### Share on other sites @SoldierOfLight I'm using Windows 10 Pro (with the latest patch I guess), I tried mapping once the ReadBack buffers but getting same results :C. I'll dig around this one that is failing and try to find out why.
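For reference, and not part of the original thread: a minimal sketch of the mapping step under discussion, passing an explicit read range of {0, size} as galop1n suggests and an empty written range on Unmap. The member names (mBufferReadBack, mCurrIdx, mCpuCopy) are assumptions standing in for the poster's double-buffered readback resources.

```cpp
// Assumes the fence wait has already guaranteed the copy into the readback
// buffer is complete, and that d3dx12.h is included for CD3DX12_RANGE.
void* pData = nullptr;
const CD3DX12_RANGE readRange(0, mTotalSize);      // we intend to read the whole buffer
HRESULT res = mBufferReadBack[mCurrIdx]->Map(0, &readRange, &pData);
if (SUCCEEDED(res))
{
    memcpy(mCpuCopy, pData, mTotalSize);           // copy the results out while mapped
    const CD3DX12_RANGE writtenRange(0, 0);        // the CPU wrote nothing back
    mBufferReadBack[mCurrIdx]->Unmap(0, &writtenRange);
}
```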
# Help with drawing a polygon with N sides [closed] I am at university on my games development, and I have been given work to fill in an empty pixel plotter program using the methods they have supplied for it to teach us the algorithms used in 2 dimensional games. I am currently trying to draw a polygon of N Sides using a function called void PixelPlotterForm::DrawPolygon( int Sides, int X, int Y, int R, Color PixelColour ) which we have been given the parameter but not the algorithm. Here is my attempt so far: void PixelPlotterForm::DrawPolygon( int Sides, int X, int Y, int R, Color PixelColour ) { // Fill in the correct code here int x = 0; int y = 0; //float currentr = 360 / Sides; int n = 0; int r = R; for (n = 0; n < Sides; n++) { x = r * cos(2 * PI*n / Sides) + X; y = r * sin(2 * PI* n / Sides) + Y; SetViewportPixel(x, y, PixelColour); // this is where the function is called // and the pixel is set from this. } } Currently the code does not work as desired, at the moment I have only reached drawing all 4 points, but something when drawing the polygon makes the points go skew-wif so to speak. The left image is the trace that is drawn as a bounding box from the mouse, dragging it makes it larger like any image editing software, but the result is on the right, seems to set the angle off somehow, not sure why as the point should correlate to the trace. <<-- This image shows the actual result. I am also of course required to draw the outline of the polygon, so I assume that I should store in a cycle of points to create a triangle so I can draw the hypotenuse, but I am unsure how to do this. Any help would be appreciated. NOTE: unfortunately the code for the trace is hidden, I've searched the whole program so I don't unfortunately know how that works. • Why are you adding R to your angles? – kolrabi Jan 27 '16 at 9:40 • @kolrabi Not sure, Think thats a mistake, still has a similar outcome on removal though. – RNewell122 Jan 27 '16 at 9:41 • @kolrabi nevermind fixed that problem, now I just have the difficulty of drawing from point to point. – RNewell122 Jan 27 '16 at 9:49 • I wouldn't recommend showing this sort of stuff from a Uni course, it's not fair to the creator. Asking about the theory would be more ethical in my opinion. – Syntac_ Jan 27 '16 at 10:00 • The university is ok with it, and so is the lecturer. – RNewell122 Jan 27 '16 at 11:32 I understand you also had problems drawing a line from point to point. I recommend you look into "Bresenham's Line Algorithm" as it typically gives the best result. Here is a Wikipedia article about it. The article includes pseudocode for the algorithm, including one that only relies on integer arithmetic. I won't be posting it here as I encourage you to actually read it. Once you have it working it should only be a matter of cycling through the points drawing lines between them. Hopefully this helps. • I'm sitting here wondering if this is really a gamedev related question to begin with. – Nils Ole Timm Jan 27 '16 at 10:03 • I thought it was because in the next worksheet we apply these principles to drawing pixels in a 2dimensional game, sorry I should have made it more clear. – RNewell122 Jan 27 '16 at 10:07 • @NilsOleTimm If it makes you feel any better, I think game-programmers have more knowlege about graphics and graphics algorithms than your standard web-developer, system maintainer or what have you. So I don't blame him for posting here. I go by the test of "Would a game-programmer know more about this than your average programmer?". 
– Christer Jan 27 '16 at 10:10 • I go by the same test, so I guess we have a different perspective then. I think any programmer can easily solve that issue. – Nils Ole Timm Jan 27 '16 at 10:13 • @Christer funnily enough , I have already used Bresenhams line algorithm, for the line function, I did not think of it thats such a simple answer. I obviously overcomplicated it in my head. – RNewell122 Jan 27 '16 at 10:20 You're not getting the angles properly, the sum of the internal angle of a simple polygon can be calculated with the formula: π(n-2). We then need the external angle which is the angle from point to point, this can be calculated with : π-internalAngle. Assuming the first point is in the lower left and there's no rotation this code should work. void PixelPlotterForm::DrawPolygon( int Sides, int X, int Y, int R, Color PixelColour ) { // Fill in the correct code here int x = X; int y = Y; internalAngle = PI * (Sides-2) / Sides; externalAngle = PI - internalAngle for (int n = 0; n < Sides; n++) { x += R * cos(externalAngle * n); y += R * sin(externalAngle * n); SetViewportPixel(x, y, PixelColour); // this is where the function is called // and the pixel is set from this. } } • This doesn't appear to work, it offsets the polygon and then misses the first point. – RNewell122 Jan 27 '16 at 10:15 • i.gyazo.com/4241d2a63b55c10ad5a0eb299e493746.png << heres an image of the outcome. – RNewell122 Jan 27 '16 at 10:17 • Made a mistake, have edited. – Tim Jan 27 '16 at 10:19 • Seems not to work again, maybe there is a discrepancy in the function parameters I need to correct ? i.gyazo.com/2a2bc409d87d8bcddf9bfaed74ab3624.png << image outcome – RNewell122 Jan 27 '16 at 10:23 • Ahhhh, I figured out the mistake, sorry the first point is always the top point, and the coordinates work downwards, as the Y coordinates actually start 0 at the top. I should have clarified that @Tim – RNewell122 Jan 27 '16 at 10:27
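For completeness, and not from the original thread: a sketch combining the two approaches discussed above. It computes the N vertices on the circumscribed circle, offsetting the starting angle by -PI/2 so that vertex 0 sits at the top in screen coordinates (where Y grows downwards), and then joins consecutive vertices with a Bresenham-style line routine, assumed here to exist under the name DrawLine.

```cpp
void PixelPlotterForm::DrawPolygonOutline(int Sides, int X, int Y, int R, Color PixelColour)
{
    int prevX = 0, prevY = 0;
    for (int n = 0; n <= Sides; n++)                       // <= so the last edge closes the shape
    {
        double a = 2 * PI * (n % Sides) / Sides - PI / 2;  // -PI/2 puts vertex 0 at the top
        int x = X + (int)(R * cos(a));
        int y = Y + (int)(R * sin(a));                     // screen Y increases downwards
        if (n > 0)
            DrawLine(prevX, prevY, x, y, PixelColour);     // assumed Bresenham helper
        prevX = x;
        prevY = y;
    }
}
```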
Research interests & activities
The webpage for the conference on Topology, Embeddings, and Attractors can be found here. The webpage for The Navier-Stokes Equations in Venice can be found here. You can find an English translation of the classical paper by Leray (1934) due to Robert Terrell here, and a translation of the paper by Hopf due to Andreas Klockner here.
Possible topics for essays, dissertations, or theses
Continuity of attractors under perturbation. Global attractors of parametrised systems are continuous for a residual set of parameter values (Hoang et al., 2014). It would be interesting to have some "bad" examples where the set of discontinuities is large.
Assouad dimension, equi-homogeneity, and the fine structure of sets. The Assouad dimension is the largest of a sequence of measures of dimension; equi-homogeneity (Olson et al., 2014) is a related (but distinct) notion that encodes a degree of uniformity at different scales. These ideas can be used to investigate in more detail the properties of sets arising in dynamical systems (e.g. self-similar sets) and in other contexts (e.g. the set of space-time singularities of a solution of the 3D Navier-Stokes equations).
Lagrangian trajectories arising in 3D fluid flows. For any suitable weak solution of the 3D Navier-Stokes equations, the solutions of the ODE $\dot X=u(X,t)$ are unique for almost every initial point $X(0)$ (Robinson & Sadowski, 2009). This gives an alternative way to view problems of uniqueness/regularity for the 3D Navier-Stokes equations. It would be instructive to understand recent work by Jia & Sverak (2013) on possible non-uniqueness in this context.
Magnetic relaxation and the equations of MHD. A heuristic method proposed by Arnol'd and Moffatt for constructing stationary Euler flows involves studying the asymptotic behaviour of solutions of the equations of MHD in the case of zero magnetic diffusivity. There are partial results justifying this approach under the assumption of regularity of the magnetic field (Nunez, 2007) and local existence results for the MHD equations, but no satisfying general theory is currently available.
A toy scalar model of the 3D Navier-Stokes equations. The surface growth model, $u_t = -u_{xxxx} - \partial_x^2|u_x|^2$, shares many features in common with the 3D Navier-Stokes equations and provides an interesting testing ground for extending what we know for the NSE. Most NSE results have parallels for this equation, but so far not the $L^\infty(0,T;L^3)$ implies regularity result of Escauriaza et al. (2003).
Semilinear parabolic equations and the heat equation. The heat equation $u_t-\Delta u=0$ and its simplest nonlinear version $u_t-\Delta u=f(u)$ are classical problems, but open questions still remain: e.g. given the distribution function of $u(0)$, what can be said about the solutions of these two problems? Another open question is whether one can characterise those $f$ for which the semilinear problem has a unique solution (those $f$ that yield local existence have been characterised only recently by Laister et al., 2014).
Lecture notes
Open access peer-reviewed chapter - ONLINE FIRST # Assessing Ecosystem Services Delivered by Public Green Spaces in Major European Cities By Rui Alexandre Castanho, José Cabezas, José Manuel Naranjo Gómez, José Martín Gallardo, Luis Fernández-Pozo, Sema Yilmaz Genç, Sérgio Lousada and Luís Loures Submitted: October 30th 2019Reviewed: January 29th 2020Published: April 22nd 2020 DOI: 10.5772/intechopen.91415 ## Abstract In the last decades, there was a significant population growth in urban areas. In this regard, the European major cities are not an exception; in fact, they are even still more affected by that populational exodus and consequently for an urban growth. Therefore, and considering that the urban parks in the cities are not growing at the same pace, a question is raised: “Are the public green spaces in the European major cities still able to provide the needed ecosystem services to their populations?” Based on the above-mentioned question, the present chapter aims to provide the first insights and answers to this question. Contextually, the study uses a case study research (CSR) method over several European major cities. Besides, GIS tools crossing statistical data are also used to analyze the data and consequently understood and establish a state of the art regarding this relevant issue. ### Keywords • ecosystem services • landscape planning • sustainability • urban green spaces ## 1. Introduction The original landscapes of our planet have been undergoing transformations by human activities. In Europe, a large part of the original forests existing during the human hunter-gatherer stage has been replaced by agricultural territories and large cities. At the same time, there is remarkably an uneven distribution of the population that results in very low densities in some territories, rural, and very high in urban areas, where significant percentage of inhabitants has been concentrated, throughout a process that has gone developing in the last 150 years [1]. The city was the focus of growth of the states, due, in large part, to the industrialization that led to an increase in the economy, which in turn led to a very rapid expansion and a first concentration of the industries and then of the services. But this great growth caused a disorganized and chaotic development. Urban planning techniques try to eliminate and prevent urban chaos. In this context, when comparing the pre- and post-industrial revolution growth of the cities, a key difference appears “(…) compared to the old cities with clear boundaries enclosed by walls, post-industrial revolution growth leads to the invasion of the surrounding landscape [2].” The exterior goes from being a threat to the city to being an element threatened by it. The city has evolved in recent centuries toward the need to develop an urban planning concept in which the existence of green spaces became more important. The Industrial Revolution caused the exodus from the countryside to the city and the emergence of epidemics related to lack of health; together with the growing demand for leisure and free time by the population, the need for public green spaces increased. The Urban Parks Movement (eighteenth century) appears, whose objective was to recreate the presence of nature in the urban environment, in order to improve the quality of life of its citizens ([3, 4]). 
This concept resulted in the creation of the main parks, the first of them in the United Kingdom: “Victoria Park” in London and “Birckenhead Park” in Liverpool; a little later, also in London, “Hyde Park” and “St. James Park”; while in Paris the “Bois de Boulogne” and “Bois de Vincennes” were built and in Madrid “El Retiro.” Urban green spaces are urban areas in which natural or seminatural ecosystems became urban spaces by human influence [4]. They provide a connection between the urban and nature [5]. Green spaces include street trees, green roads, green roof walls, urban parks, and even abandoned unbuilt land. In fact, its creation can be from scratch, modified from existing vegetation, generated by colonization or existing as a natural enclave [6]. Vegetation in cities has multiple benefits that have been the subject of vindication and study throughout the evolution of current urbanism and that have been enriched and concretized by the contribution of research from related fields such as ecology. The presence of abundant vegetation in cities is ideal with a universal appeal, which goes beyond temporal, spatial, and cultural divisions, associating itself with the concept of environmental quality, which leads to a better quality of life. In recent years there is an important interest in the environmental benefits of green spaces. Thus, a significant number of studies attempt to demonstrate, quantify, and incorporate them into planning. However, they still coexist with the marginality which they are treated in practice [7]. The presence of natural elements and values in the city is today a fundamental condition for the environmental recovery of urban territory. The natural and urban systems are part of the same space, and their integrated management is a requirement of the regional space and a condition of sustainability of the territories and cities. In addition, the agroforestry existence in the peripheries of cities and green spaces within the urban fabric represents an increase in environmental quality, which urban planning must strengthen and improve [3, 8, 9, 10, 11]. The visual approach of the green areas constitutes a powerful tool to activate and inspire the daily life of citizens. Besides, a deeper understanding of the ecological processes that occurs in nature, along with the economic and socio-cultural, can help city managers to better integrate all the above-mentioned aspects. This approach must go beyond the superficial, appreciating the stories that landscapes tell and helping to understand the place of humans in nature [12]. Studies on the valuation of ecosystem services (ES) focused on urban areas represent a small percentage in relation to the total number of articles devoted to the subject. Furthermore, Delgado and Marín [13] analyzed the growth of publications in a 24-year interval (from 1990 to 2013), demonstrating their exponential growth, which increased from 1 article in 1991 to less than 250 in 2007 and 1500 in 2013. Of these, only 6% focused on the direct services of the ecosystems associated with urban areas. According to Ibes [14], the valuation of the ES was originally designed for non-urban systems, so that new models are necessary for a correct assimilation of the services provided by the urbanized environments. In addition, it reflects on the difficulty of finding a balance between geographical, conceptual, and spatial considerations when the ES valuation paradigm applies to urban parks. 
Therefore, bearing in mind that urban parks cannot generate all the possible ES, excluding the necessary compensation, it will lead more often in losses rather than benefits. The key components that contribute to the total economic value of ES can be divided into three main blocks [15, 16]. The first is related to the direct use and includes both (a) the provision of services (e.g., the production of plant and/or animal biomass) and (b) social and cultural services (e.g., recreational activities, sports, family). The benefits associated with urban parks are mainly framed in the second group, presenting the contributions to the first residual character in general. The second block refers to indirect services (indirect use) that involve (c) regulating (such as the control of air, water or soil quality) and (d) supporting services that are necessary for the production of the rest of services of the ecosystem (e.g., nutrient cycles, soil formation, or water cycle). The parks contribute to a greater extent in the section of regulating services, with benefits that include the improvement of the air quality or the decrease of the load of nutrients that reach the water courses and are potential causes of eutrophication. The third block is dedicated to other aspects not contemplated in the previous ones. It comprises two sections: (e) option services, referring to the possibility of using a service in the future and maintaining resilience (ability to reverse changes in the ecosystem) and (f) nonuse/exploitation of resources of ecosystem resources for cultural reasons and of preservation for future generations or their intrinsic values. The ES of urban parks contribute more to the aspects related to the second section. Several authors have evaluated the benefits of the parks valuing some specific ES. Also, Breuste et al. [17] analyzed in three megalopolis the importance from the recreational point of view (Buenos Aires and Shanghai) and climate regulation (Karachi). They demonstrated that urban parks play an extremely important role by offering ES related to recreation and contact with nature. With regard to Karachi, they highlighted the importance of parks in the regulation of extreme weather conditions. Residential areas located near parks had a considerable higher degree of thermal comfort. Setälä et al. [18] assessed the retention of heavy metals and nutrients in the soil, highlighting the role of parks especially in cities with high levels of pollution. Regarding the contribution of the ES in urban parks, Gratani et al. [19] studied and quantified four parks located in Rome to carbon sequestration. Mediterranean-type parks, such as the Romans, sequestered CO2 throughout the year highlighting the results in those in which the native species of the Mediterranean basin were dominant. The annual economic value of the CO2 elimination would be equivalent to $23,537 ha−1. Moreover, Giedych and Maksymiuk [20] studied the Warsaw parks, concluding that the ES contributed by each of them depend on the local conditions and specific characteristics of each of them, the surface being one of the key variables in the regulating services. Less abundant are the works that analyze and value the set of ES that generate concrete parks. An example would be the holistic valuation of the ES generated by Central Park (New York, USA), estimated at$ 70 million/hectare/year [21]. 
Contextually, the present chapter through a case study research method aims to analyze the green urban areas surfaces evolution in seven European major cities. ## 2. Materials and methods Initially, land use data monitored by Land Cover Corine (CLC) were obtained (https://land.copernicus.eu/pan-european/corine-land-cover) [22] on a scale of 1:100,000, with a minimum mapping unit (MCU) of 25 Ha and using polygonal graphics features that evoke land uses in Europe. Some of the used CLC nomenclature/codes used are shown in Table 1. Level 1Level 2Level 3 1 Artificial surfaces11 Urban fabric111 Continuous urban fabric 112 Discontinuous urban fabric 12 Industrial, commercial, and transport units121 Industrial or commercial units 122 Road and rail networks and associated land 123 Port areas 124 Airports 13 Mine, dump, and construction sites131 Mineral extraction sites 132 Dump sites 133 Construction sites 14 Artificial, nonagricultural vegetated areas141 Green urban areas 142 Sport and leisure facilities ### Table 1. CLC nomenclature [22]. In addition, the urban boundaries of the cities analyzed were obtained from ESRI-free data, using a layer called Europe Shapefiles. In this case, polygon features were also used. In this regard, the authors have analyzed these two layers of information – which represent two variables in the same georeferenced position. For this reason, the two layers were transformed into the same reference system, using ETRS1989 Lambert azimuthal equal area, because this projection preserves the areas and is better suited to the different cities to be analyzed. From the two polygonal cartographic layers, an intersection was made between the two. Thus, polygons corresponding to land uses that are completely included in the boundaries of cities become part of the resulting layer. Also, the parts of the polygons corresponding to the land uses that are partially included and clipped by the boundaries of the cities are also part of this resulting layer. Thus, it was possible to obtain a layer with the land-use polygons within each city. Once this layer was obtained, we proceed to measure the surface of each of these polygons obtained evocative of the land uses, but in the projection used. In order to do this the ArcGIS 10.3 software was used. Subsequently, using Microsoft Access 2016, selection queries were made. Thus, only polygons whose use was 1.4.1 corresponding to Green urban areas were chosen, that is to say, areas with vegetation urban fabric which includes parks and cemeteries with vegetation. Later, a query was carried out so that the total area dedicated to green urban areas was obtained. Therefore, seven case studies of European major cities were selected (Figure 1). After the case study selection, an analysis for the years 1990, 2000, 2006, 2012, and 2018 was carried out. Nevertheless, for the cities of London and Stockholm for 1990, there was no data. Finally, thematic maps representative of land uses were obtained for each of the years and cities, highlighting the green urban areas. ## 3. Results and discussion From the 11 classes of the CLC, the study only analyzes Level 3 (land use code 141)—regarding green urban areas (Table 1). Those results were presented in acres and were assessed for each year of the studied period (1990, 2000, 2006, 2008, 2012, and 2018) (Table 2). Contextually, the results presented in Table 2 enabled to create a graph (Figure 2). This graph shows the cities being grouped into three levels. 
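Before turning to the level-by-level comparison, a minimal sketch of the overlay-and-sum step described above is given below, using the open-source GeoPandas library instead of ArcGIS and Access; the file and column names are illustrative assumptions rather than the authors' actual data.

```python
import geopandas as gpd

# Hypothetical inputs: CLC land-cover polygons and the city boundary layer.
clc = gpd.read_file("clc_2018.shp")
cities = gpd.read_file("europe_city_boundaries.shp")

# Reproject both layers to ETRS89 / LAEA Europe (EPSG:3035), which preserves areas.
clc = clc.to_crs(epsg=3035)
cities = cities.to_crs(epsg=3035)

# Keep only land-use code 141 (green urban areas) and clip it to the city boundaries.
green = clc[clc["CODE_18"] == "141"]
green_in_cities = gpd.overlay(green, cities, how="intersection")

# Sum the clipped polygon areas per city; .area is in square metres for EPSG:3035.
ACRES_PER_M2 = 1 / 4046.8564224
green_in_cities["acres"] = green_in_cities.geometry.area * ACRES_PER_M2
print(green_in_cities.groupby("CITY_NAME")["acres"].sum())
```

Returning to the grouping of the cities into three levels: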
In the first level, we have London with the largest surface of green areas over the studied years—around 12,000 acres. On a second level, we have Stockholm, Madrid, and Paris that slightly have a surface of green urban areas superior to 4000 acres; however, any of those reach the 8000 acres. In this regard, it should be highlighted that in the first studied year (1990), Madrid was one of the cities with lowest values regarding green urban areas surfaces, and in the last studied year (2018) the Spanish capital reaches the third position—as one of the studied cities with the highest value of CLC 141. And in a third level, we have the studied cities with the lowest values of green urban areas, which are Berlin, Lisbon, and Roma, with less than 4000 acres of the land use 141—in fact, with a CLC 141 surface lower than 2000 acres. City19902000200620122018Dif.% Berlin2896.322868,462873,663102,043336,18439,8715,19 Lisbon1204,021465,011827,661783,961929,92725,9160,29 Londonn.d.11,429,7312,380,3812,195,2212,224,16794,436,95 Paris4564,595183,535212,095239,165187,85623,2613,65 Rome1654,961532,341456,551456,551455,86−199,09−12,03 Stockholmn.d.6954,176907,246901,446869,19−84,98−1,22 ### Table 2. Outcomes of the analyzed parameters of the green urban areas in European major cities (source: Authors). n.d., no data available; dif., difference between first and last year; %, percentage. Moreover, through the creation of individual graphics for each of the selected cities, it was possible to analyze in detail how the green urban area surfaces evolved over the 5 years studied (Figure 3). Through this analysis, it is possible to verify that two cities (Rome and Stockholm) are losing green urban areas in comparison with the first year analyzed (1990). On the other hand, all the other cities are gaining more green urban areas along the years. From all those cities that show an increase in the land use 141 over the years, it should be highlighted that Madrid and Lisbon show constant growth. In fact, this tendency was also identified in Berlin; however, it only starts in the year 2012 onwards—once the German capital presented a period of growth stagnation (of the land use 141) in the previous years. Besides, in Paris and London, we have been identified the opposite scenario. In Stockholm, the city was lost Green Urban Areas surface when compared to the 1990 reality; however, it was also started a similar growth process (regarding the land uses 141) in the year of 2012 – which is verified in the year of 2018; in an opposite tendency, we have the city of London. The city of London, even it has been passed through an increase of Green urban Areas in the first year studied (1990), is now facing a tendency of decrease in these green surfaces—which started in the year 2006. Regarding the results in percentage (Table 2), Roma and Stockholm have lost 12.03 and less than 1,22% of their green urban area surfaces, respectively. In contrast, the cities that gained more green urban areas have been Madrid, with 174,99%, and followed by Lisbon, Berlin, and Paris (between 60,29, 15,19, and 13,65%). Furthermore, London increases its land use 141 in less than 5%, nevertheless, with a negative tendency (Table 3). Case studiesPopulation (thousands) 1990200020102015 Stockholm1030121013602615 Rome3750371039604468 Paris21502130224012,524 London68007240860014,855 Berlin3200350034504314 Lisbon2540269027902810 ### Table 3. Demographic dynamics of the studied cities [23]. ## 4. 
Final remarks Through the present study, it is possible to understand how the green urban areas have evolved within the studied European major cities. Besides, throughout the analysis of patterns of the land use change (CLC 141) along with empirical knowledge of those cities’ territories, it was allowed us to assess the value of those Green Urban Areas within the cities. Therefore, it is possible to say that those green urban areas are not growing in the same pace as the demographic values as well as other land uses in development within these cities [24]. In this regard, and considering the relevance of the ES performed in the urban environments, we believe that in all the analyzed cities, the existing green urban areas are not able to provide the environmental needs for their inhabitants. In fact, even if those environmental needs could differ among the studied cities – once, some presents a higher number of Green Urban Areas than others as well as different demographic growth rates; all the analyzed European Major Cities shows a need for more Green Urban Areas. Additionally, the performed study enabled us to put forward some noteworthy ideas, related to the relevance of green space infrastructure in urban areas, regardless of their urban nature and of their major land use, which corroborate with the conclusions of previous studies that crossed the relevance of urban green spaces to urban sustainability and development [4, 9, 10, 25, 26, 27, 28, 29, 30, 31, 32]. In this regard, the creation of more green urban areas in these cities as well as in their metropolitan influential territories is seen as pivotal. Furthermore, guidelines should be provided for the main actors and decision-makers of the planning process to where the efforts toward a sustainable development and growth should be placed—for example to address green strategies and land use reconversion and redevelopment of urban areas. chapter PDF ## More © 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. ## How to cite and reference ### Cite this chapter Copy to clipboard Rui Alexandre Castanho, José Cabezas, José Manuel Naranjo Gómez, José Martín Gallardo, Luis Fernández-Pozo, Sema Yilmaz Genç, Sérgio Lousada and Luís Loures (April 22nd 2020). Assessing Ecosystem Services Delivered by Public Green Spaces in Major European Cities [Online First], IntechOpen, DOI: 10.5772/intechopen.91415. Available from:
# Question about a continuous function

Let $(X,d)$ be a metric space. Let $f_n : X \rightarrow \mathbb{R}$ be continuous for each $n \geq 1$. Assume that $|f_n(x)|\leq a_n$ and assume that the series $\sum_n a_n$ converges. Show that $F(x) = \sum_{n=1}^\infty f_n(x)$ defines a continuous function.

My attempt: Since $|f_n(x)|\leq a_n$ for all $n$ and since $\sum_n a_n$ converges, we know that $\sum f_n(x)$ converges. What do I do from here?

- Did you learn this: if $(f_n)_{n \in \mathbb{N}}$ is a sequence of continuous functions converging uniformly to $f$, then $f$ is continuous? – P.. Nov 23 '12 at 19:00

- You are welcome. Check this. It is for $X=\mathbb{R}$, but it is essentially the same proof for any metric space $X$. – P.. Nov 23 '12 at 19:06

$\left(\sum_{k=1}^n f_k \right)_n$ converges uniformly to $F$ since $$\left| F(x)-\sum_{k=1}^n f_k(x) \right| = \left| \sum_{k=n+1}^\infty f_k(x) \right| \leq \sum_{k=n+1}^\infty |f_k(x)| \leq \sum_{k=n+1}^\infty \underbrace{\|f_k\|_\infty}_{\leq a_k} \qquad (x \in X)\\ \Rightarrow \left\|F-\sum_{k=1}^n f_k\right\|_{\infty} \leq \sum_{k=n+1}^\infty a_k \to 0 \quad (n \to \infty)$$ Since uniform limits of continuous functions are continuous, we conclude that $F$ is continuous.

We know more than just that $\sum f_n(x)$ converges: since $|f_n(x)| \leq a_n$ for all $x \in X$, the series $\sum f_n(x)$ converges uniformly to a function $F$. It is a well-known result that the uniform limit of continuous functions is also continuous, and so we are done.

- using the popular $\epsilon/3$ argument. +1 – user17762 Nov 23 '12 at 19:04

It suffices to show that $\sum_1^N f_n(x)$ converges to $\sum f_n(x)$ uniformly (which I leave to you). It suffices to show this because if a sequence of continuous functions converges uniformly, then the limiting function is also continuous. This can be seen via the triangle inequality. Break up $\lvert f(x) - f(y) \rvert$ into $$\lvert f(x) - f(y) \rvert \leq \lvert f(x) - f_n(x) \rvert + \lvert f_n(x) - f_n(y) \rvert + \lvert f_n(y) - f(y) \rvert$$ for some sufficiently large $n$ using the uniform convergence, and then conclude using continuity of $f_n$.
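As a concrete illustration of the estimate above (an added example, not part of the original thread): take $X = \mathbb{R}$, $f_n(x) = \frac{\sin(nx)}{n^2}$ and $a_n = \frac{1}{n^2}$. Each $f_n$ is continuous, $|f_n(x)| \leq \frac{1}{n^2}$ for every $x$, and $\sum_n \frac{1}{n^2} = \frac{\pi^2}{6}$ converges, so $$\left\|F-\sum_{k=1}^n f_k\right\|_{\infty} \leq \sum_{k=n+1}^\infty \frac{1}{k^2} \leq \frac{1}{n} \to 0,$$ and $F(x) = \sum_{n=1}^\infty \frac{\sin(nx)}{n^2}$ is continuous on all of $\mathbb{R}$.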
# Tag Info

3 This was a somewhat hotly debated question in the 1980s. The debate was more-or-less ended with papers like Cheeseman's In Defense of Probability. The short answer is that Fuzzy Logic does not just assign a continuous value to sentences; what it does is assign degrees of membership in different fuzzy sets. These degrees of membership range between 0 and 1. ...

3 Dempster-Shafer Theory and Bayesian Networks were both techniques that rose to prominence within AI in the 1970s and 1980s, as AI started to seriously grapple with uncertainty in the world and move beyond the sterilized environments that most early systems worked in. In the 1970s and perhaps even earlier, it became apparent that direct applications of ...

2 Introduction: MAP finds a point estimate! As opposed to your apparently current belief, in maximum a posteriori (MAP) estimation you are looking for a point estimate (a number or vector) rather than a full probability distribution. MAP estimation can be seen as a Bayesian version of maximum likelihood estimation (MLE). Therefore, I will first ...

1 Based on my own Google research, I found the best post about introductory Bayesian statistics books and summarize the answers here. I found this post on stats.stackexchange about Bayesian statistics books; maybe it is the best recommendation for you. I read the post weeks ago and some of the books are stunning. This is my TOP 3 books from ...

1 In the expectation step, we first calculate the posterior of the latent variable $Z$, and then $Q(\theta \mid \theta^{(t)})$ is defined as the expected value of the log-likelihood of $\theta$, with respect to the current conditional distribution of $Z$ given $X$ and the current estimate $\theta^{(t)}$. In the maximization step, we update $\theta$ using the argmax on $Q$, with respect to ...

1 Bernoulli naïve Bayes: $P(x|c_k) = \prod^{n}_{i=1} p^{x_i}_{ki} (1-p_{ki})^{(1-x_i)}$. Let's examine the example of document classification. Let there be $K$ different text classes and $n$ different terms in our vocabulary. $x_i$ are boolean variables (0, 1) expressing whether the $i$-th term exists in document $x$; $x$ is a vector of dimension $n$. $P(x|c_k)$ ...

Only top voted, non community-wiki answers of a minimum length are eligible
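To make the Bernoulli naïve Bayes expression above concrete, here is a minimal sketch in C (an added illustration, not taken from any of the quoted answers; the vocabulary size and the class-conditional probabilities are invented toy values). It evaluates the log of $P(x \mid c_k)$ for a single document and class:

```c
#include <stdio.h>
#include <math.h>

/* Log-likelihood log P(x | c_k) for a Bernoulli naive Bayes class model.
 * x[i] is 1 if the i-th vocabulary term occurs in the document, 0 otherwise;
 * p[i] is the class-conditional probability that term i occurs (0 < p[i] < 1). */
static double bernoulli_nb_loglik(const int *x, const double *p, int n)
{
    double ll = 0.0;
    for (int i = 0; i < n; ++i)
        ll += x[i] * log(p[i]) + (1 - x[i]) * log(1.0 - p[i]);
    return ll;
}

int main(void)
{
    /* Toy 4-term vocabulary; probabilities are illustrative only. */
    const double p_class[4] = {0.8, 0.1, 0.5, 0.3};
    const int doc[4] = {1, 0, 1, 0};
    printf("log P(x|c_k) = %f\n", bernoulli_nb_loglik(doc, p_class, 4));
    return 0;
}
```

Working in log space avoids numerical underflow when the vocabulary is large; the class posterior is then obtained by adding $\log P(c_k)$ and normalizing across classes.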
# A uniform rod of length 5 m is placed against the wall as shown. If the coefficient of friction $\mu$ is the same at both contacts, find the minimum value of $\mu$ for the rod not to slip.

(a) $\mu=\frac{1}{2}$  (b) $\mu=\frac{1}{4}$  (c) $\mu=\frac{1}{3}$  (d) $\mu=\frac{1}{5}$

Translational equilibrium:

$\sum F_x = 0 \Rightarrow N_2 = f_1$

$\sum F_y = 0 \Rightarrow mg = f_2 + N_1$

Limiting friction:

$f_1 = \mu N_1$, $f_2 = \mu N_2$

$f_1 = N_2 = \mu N_1$, $\therefore f_2 = \mu^2 N_1$

Rotational equilibrium (clockwise moments = anticlockwise moments):

$mg \times \frac{l}{2} \cos \theta + f_1 \times l \sin \theta = N_1 \times l \cos \theta$

Solving the above equations: $\mu = \frac{1}{3}$, so option (c) is correct.
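Filling in the last step (an added worked sketch; the value $\tan\theta = \frac{4}{3}$ is an assumption, since the figure is not reproduced here, but it matches the 3-4-5 geometry suggested by the 5 m rod and reproduces the stated answer): substituting $mg = N_1(1+\mu^2)$ and $f_1 = \mu N_1$ into the moment equation and dividing through by $N_1 l \cos \theta$ gives $$\frac{1+\mu^2}{2} + \mu\tan\theta = 1 \quad\Rightarrow\quad \mu\tan\theta = \frac{1-\mu^2}{2}.$$ With $\tan\theta = \frac{4}{3}$ this becomes $3\mu^2 + 8\mu - 3 = 0$, whose positive root is $\mu = \frac{1}{3}$, i.e., option (c).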
The OpenVX Specification  a73e458

# Mean and Standard Deviation

## Detailed Description

Computes the mean pixel value and the standard deviation of the pixels in the input image (which has dimensions width and height). The mean value is computed as [R00081]: $\mu = \frac{\left(\sum_{y=0}^h \sum_{x=0}^w src(x,y) \right)} {(width * height)}$ The standard deviation is computed as [R00082]: $\sigma = \sqrt{\frac{\left(\sum_{y=0}^h \sum_{x=0}^w (\mu - src(x,y))^2 \right)} {(width * height)}}$

## Functions

vx_node VX_API_CALL vxMeanStdDevNode (vx_graph graph, vx_image input, vx_scalar mean, vx_scalar stddev)

[Graph] Creates a mean value and standard deviation node.

## ◆ vxMeanStdDevNode()

vx_node VX_API_CALL vxMeanStdDevNode ( vx_graph graph, vx_image input, vx_scalar mean, vx_scalar stddev )

[Graph] Creates a mean value and standard deviation node.

Parameters

- [in] graph: The reference to the graph [R00207].
- [in] input: The input image. VX_DF_IMAGE_U8 is supported [R00208].
- [out] mean: The VX_TYPE_FLOAT32 average pixel value [R00209].
- [out] stddev: The VX_TYPE_FLOAT32 standard deviation of the pixel values [R00210].

Returns: vx_node [R00211].

Return values: vx_node, a node reference. Any errors preventing a successful creation should be checked using vxGetStatus.
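For orientation, a minimal usage sketch follows (an added example rather than normative specification text; it assumes an OpenVX 1.1-style implementation, a 640×480 VX_DF_IMAGE_U8 input that would be populated with real pixel data elsewhere, and it omits most error handling):

```c
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* 640x480 8-bit input image; in a real application this would be
     * filled with pixel data (e.g., via vxCopyImagePatch) before processing. */
    vx_image input = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    vx_float32 mean_val = 0.0f, stddev_val = 0.0f;
    vx_scalar mean   = vxCreateScalar(context, VX_TYPE_FLOAT32, &mean_val);
    vx_scalar stddev = vxCreateScalar(context, VX_TYPE_FLOAT32, &stddev_val);

    /* Insert the mean/standard-deviation node into the graph. */
    vx_node node = vxMeanStdDevNode(graph, input, mean, stddev);

    if (vxGetStatus((vx_reference)node) == VX_SUCCESS &&
        vxVerifyGraph(graph) == VX_SUCCESS &&
        vxProcessGraph(graph) == VX_SUCCESS)
    {
        /* Read the computed scalars back to host memory. */
        vxCopyScalar(mean,   &mean_val,   VX_READ_ONLY, VX_MEMORY_TYPE_HOST);
        vxCopyScalar(stddev, &stddev_val, VX_READ_ONLY, VX_MEMORY_TYPE_HOST);
        printf("mean = %f, stddev = %f\n", mean_val, stddev_val);
    }

    vxReleaseNode(&node);
    vxReleaseScalar(&mean);
    vxReleaseScalar(&stddev);
    vxReleaseImage(&input);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```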
• 1 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 297-301 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 1 Ill. Type of Medium: Electronic Resource

• 2 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 321-327 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Aqueous solutions of phenol were oxidized in a flow reactor at temperatures between 300 and 420°C (0.89 ≤ Tr ≤ 1.07) and pressures from 188 to 278 atm (0.86 ≤ Pr ≤ 1.27). These conditions included oxidations in both near-critical and supercritical water. Reactor residence times ranged from 1.2 to 111 s. The initial phenol concentrations were between 50 and 330 ppm by mass, and the initial oxygen concentrations ranged from 0 to 1,100% excess. The oxidation experiments covered essentially the entire range of phenol conversions. Analysis of the kinetics data for phenol disappearance using a combination of the integral method and the method of excess revealed that the reaction was first order in phenol and 1/2 order in oxygen, and influenced by pressure. The global reaction order for water was taken to be nonzero, and the global rate constant was assumed to be independent of pressure so that the only effect of pressure was to alter the water concentration and hence the reaction rate. This approach led to a global reaction rate law that was 0.7 order in water and had a rate constant with an activation energy of 12.4 kcal/mol. The implications of these rate laws to the design of a commercial supercritical water oxidation reactor are also explored. Additional Material: 7 Ill. Type of Medium: Electronic Resource

• 3 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 363-376 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: A film-theory model is presented for nonisothermal gas absorption with a second-order exothermic reaction. The model accounts for the volatility of the liquid reactant and heat transfer from the liquid surface to the gas phase. The pertinent equations were solved numerically using B-spline collocation. Results from this solution show that for intermediate values of Hatta number the liquid-reactant volatility is detrimental to the enhancement of gas absorption. As Hatta number approaches zero or infinity, however, the effect of liquid-reactant volatility becomes minor.
Heat losses to the gas phase drastically reduce the interfacial temperature rise, which in turn enhances or inhibits the absorption rate depending on the effective activation energy being larger or less than zero, respectively. Approximate expressions for the enhancement factor and the interfacial temperature rise were also developed. Comparisons with the “exact” numerical solution verified the accuracy of these expressions over a reasonable spectrum of parameter values. The model developed was applied to two cases representing real conditions: the chlorination of toluene and the sulfonation of dodecylbenzene. Volatility effects are shown to be important for the former system, while the relatively nonvolatile dodecylbenzene served as a counter example. Additional Material: 16 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 4 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 397-404 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: A reasonable analytical procedure of the overall reaction rate for the phase transfer catalysis with mass transfer is discussed. Alkaline hydrolysis of n-butyl acetate with a phase transfer catalyst Aliquat 336 (tricaprylmethylammonium chloride, Q+Cl-) was chosen as a model system and carried out in an agitated vessel with a flat interface. Overall reaction rates observed were proportional to the interfacial concentration of the actual reactant Q+OH- (the ion pair consisting of quaternary ammonium cation Q+ and OH-) for the hydrolysis in the organic phase. The interfacial concentration of Q+OH- was a unique function of bulk concentrations of the catalyst and NaOH, and the ionic strength of the aqueous solution. This behavior of the overall reaction rates was explained by the proposed model solution. The reaction rate constant, evaluated by fitting the rate data to the model prediction, was 47 m3/kmol·s at 298 K. It was 70 or more times greater than that of conventional alkaline hydrolysis in the aqueous phase. Additional Material: 8 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 5 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992) ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 6 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 56-66 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The tortuosities of fibrous media in the heretofore unexplored transition and ordinary regimes are computed using a Monte Carlo scheme based on the Einstein equation for random walkers. The model structure is that of fully penetrable cylinders (FPC) in a unit simulation volume. The mean square displacement technique is combined with the first passage time distribution to accelerate the progress of the walkers at low Knudsen number. 
The results include the computation of transition regime transport coefficients for the first time. The calculated ordinary tortuosities are approximately equal to the reciprocal of the porosity over a wide range, while the transition tortuosities are shown to deviate from the reciprocal porosity with a simple dependence on Knudsen number. The limits of the transition regime are shown to correspond roughly to Knudsen numbers of 0.50 and 100, respectively. The calculated Knudsen tortuosities are shown to improve on earlier results obtained by the authors using a flux-based technique. Additional Material: 10 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 7 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 101-115 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Singularity theory, large activation energy asymptotics, and numerical methods are used to present a comprehensive study of the steady-state multiplicity features of three classical adiabatic autothermal reactor models: tubular reactor with internal heat exchange, tubular reactor with external heat exchange, and the CSTR with external heat exchange. Specifically, we derive the exact uniqueness-multiplicity boundary, determine typical cross-sections of the bifurcation set, and classify the different types of bifurcation diagrams of conversion vs. residence time. Asymptotic (limiting) models are used to determine analytical expressions for the uniqueness boundary and the ignition and extinction points. The analytical results are used to present simple, explicit and accurate expressions defining the boundary of the region of autothermal operation in the physical parameter space. Additional Material: 16 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 8 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 193-200 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: A new control scheme is presented for feedforward control of unknown disturbances in the model-predictive control (MPC) scheme. In this control scheme, a neural network is connected in parallel with the MPC controller and trained online by minimizing the MPC controller output corresponding to the unmodeled effect. It is applied to distillation column control and nonlinear reactor control to illustrate its effectiveness. The result shows that the neural feedforward controller can cope well with strong interactions, time delays, nonlinearities, and process/model mismatch. The controller also offers such advantages as fault tolerance, generalization capability by interpolation, and learning capability by randdom input patterns. Additional Material: 10 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 9 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 
502-510 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: A new technique for studying and recovering short-lived chemical intermediate species has been developed using a Couette reactor, which is an open one-dimensional reaction-diffusion system. Reaction occurs in the annulus between concentric cylinders with the inner one rotating and the outer one at rest. Fresh reagents are in contact with the ends of the annulus, but there is no net axial flow. The axial transport arising from the hydrodynamic motion is effectively diffusive, but has a diffusion coefficient 3 to 5 orders of magnitude larger than that of molecular diffusion. The oxidant (ClO2-) and reductant (I-) of an autocatalytic reaction are fed at opposite ends of the reactor. The reactants diffuse toward each other and react, forming a steady, sharp chemical front and a stable spatial concentration band of unstable intermediate species (HOCl) in the front region. Unstable intermediate species are thus stabilized at a well-defined spatial position where they can be recovered and studied. The experiments and numerical simulations demonstrate that the faster the reaction rate, the stabler the chemical front and the more effective the recovery of unstable intermediate species. Additional Material: 13 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 10 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 555-562 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: To cope with modeling uncertainties and randomness of external disturbances, a new tracking control called the natural control concept is designed. Its implementation is completely independent of the internal dynamics of a controlled system, its desired output and external disturbances. The design algorithm established ensures a prespecified exponential quality of output tracking. The theory presented in this article is applied effectively to the design of natural tracking control for a chemical reaction process described by the fourth-order, linear, state-space mathematical model. Additional Material: 13 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 11 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 219-226 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Current AC (alternating current) techniques are used often to characterize the energetics at a semiconducting solid phase/electrolyte interface. For thin layers having a strongly disordered or amorphous structure (such as oxide-passive layers anodically grown on valve metals), interpretative models currently used for crystalline semiconductors may produce misleading data.A new interpretation of the admittance data, based on recent models for amorphous semiconductors (a-Sc) Schottky barriers, is presented for passive films of Nb, W and Ti. The physical bases of the model are presented as well as its advantages and disadvantages. 
The new theory views the solid/electrolyte interface more satisfactorily and provides information on the solid-state properties and the electronic structure of the electrode useful for interpreting the electron exchange between the solid phase and redox couples in solution. Additional Material: 11 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 12 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 227-236 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: For reliable information on operating plants it is essential to design measuring points well by selecting directly measured quantities from the set of all measurable quantities.This article deals with a new method for optimizing measurement design. It is based on multiple Gauss-Jordan elimination of the system of linear mathematical model equations and solves the problem of instrumentation design in new plants as well as the problem of optimizing existing measuring systems. Optimization methods for linear objective functions and for objective functions of general type are proposed. The method also offers a complex classification of quantities (observability and redundancy). After the optimization, the problem is presolved and is ready for an optimal processing of measured data. The mathematical model is reduced to the minimum set of equations and quantities relevant to the solution of a given problem. From a numerical standpoint, the solution is efficient. Additional Material: 9 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 13 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 244-250 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The reaction between COS and aqueous solutions of primary and secondary amines has been studied by means of the stirred cell technique. Kinetic experiments at temperatures 283 to 333 K were carried out with MEA, DGA, DEA, DIPA, MMEA, AMP, and MOR. All kinetic experiments could be described by a zwitterion reaction mechanism similar to the mechanism proposed by Caplow (1968) for the reaction between CO2 and secondary amines: \documentclass{article}\pagestyle{empty}\begin{document}$$\begin{array}{l} COS + R_2 NH \leftrightarrow R_2 NH^ + COS^ - \\ R_2 NH^ + COS^ - + B \leftrightarrow R_2 NCOS^ - + BH^ + \\ \end{array}$$\end{document}Analysis of concentrated amine solutions at high COS concentrations by various analytical techniques confirmed the conclusions from the kinetic experiments. For all amines except for MEA, the overall reaction rate was found to be determined entirely by the zwitterion deprotonation rate. Additional Material: 8 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 14 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 
273-283 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The system studied in this work is a dilute solution of rod-like molecules under simple shear flow and near a hard wall. The time evolution of the probability density function is described by a diffusion equation; particle trajectories that correspond to this equation are generated by stochastic methods. Several algorithms are presented to handle the constraints imposed by the presence of the wall. In good agreement with recent experimental work on xanthan solutions, for high shear rates we observe an increase in the thickness of the depletion layer near the wall. For low to intermediate shear rates, however, we find a transient decrease of the depletion layer thickness that has not been observed experimentally. Based on the results of our simulations, we present a simple procedure to determine a few, well defined characteristic parameters from the experimental density profiles. Additional Material: 10 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 15 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 302-307 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 4 Tab. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 16 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 311-314 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 3 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 17 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 316-316 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 18 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992) ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 19 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 328-342 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: An experimental study of a semibatch reaction crystallization is presented. Dilute hydrochloric acid is fed to a stirred solution of sodium benzoate to crystallize benzoic acid. The weight mean size of the product crystals increases with increasing stirring rate, reaches a maximum, and then decreases again. 
Larger crystals may be produced if the reactant feed point is positioned close to the outlet stream of the impeller. At equal power input the influence of stirrer type is negligible. Decreasing reactant concentrations or feed rate increases the crystal size significantly. Experimental results are explained qualitatively focusing on nucleation and growth conditions and on feed point mixing. The feed point micromixing brings reactants together to generate supersaturation and allow for nucleation. Continued mixing, however, may partially dilute supersaturation before nucleation takes place or may restrict nuclei growth, thus promoting more efficient Ostwald ripening in the bulk. This may result in high bulk supersaturations which in turn hampers the dilution effects. Additional Material: 17 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 20 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 377-384 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Experimental responses from crystallization of copper sulfate pentahydrate, nickel ammonium sulfate, and soy protein in continuous MSMPR crystallizers were used to determine simultaneously crystal growth and nucleation rates and agglomeration kernels. Measured product crystal size distributions at steady state for all these systems were transformed into crystal volume coordinates to use two methods: moments analysis and optimization procedure for parameter characterization. An iterative nonlinear parameter estimation by optimization procedure was used to deduce the kinetic rate parameters in the solution of the agglomeration model in crystal volume coordinates, extended from the analysis by Liao and Hulburt (1976), from the translated data set for the product crystals. The kinetic results obtained for the copper sulfate pentahydrate system were correlated in terms of power law kinetic expressions depicting the effect of significant observable variables. Additional Material: 5 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 21 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 385-396 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The oxidation of CO by O2 and N2O over an oxidized 10 wt. % Cu-Cr/Al2O3 catalyst (Cu:Cr=1:1) has been studied by temperature-programmed reactivity measurements (400-550 K) over a wide range of partial reactant pressures, including inhibition by CO2. The CO oxidation rate is zeroth-order in oxygen and has orders between 0-1 in CO and N2O, depending on the gas-phase composition. Mechanistic information from literature combined with the kinetic data resulted in the selection of an Eley-Rideal-type of kinetic model without a priori assumptions on rate-determining processes. The model consists of the oxidation of reduced sites by O2 and/or N2O, followed by a reaction with CO, yielding a surface intermediate that releases CO2 in a consecutive step. CO2 inhibits both by reversible adsorption on oxidized and reduces sites, the latter under formation of the surface reaction intermediate. 
Apart from the surface oxidation by O2, the reaction rates of all assumed elementary processes are of the same order of magnitude and, therefore, determine the overall rate. The surface oxidation by oxygen is about four orders of magnitude larger, which explains the zeroth-order in oxygen and the observation that oxygen first reacts with CO before N2O is able to oxidize CO. The obtained activation energies of the elementary processes agree with values in the literature for corresponding systems. Additional Material: 7 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 22 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 438-444 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The model developed predicts a priori potential errors associated with the energy trace recorded by an isoperibol differential power scanning calorimeter in the measurement of heat of adsorption of H2 on Pt and Pd catalysts. The uptake of H2 by the catalyst sample was approximated by a diffusion-limited quasi-steady-state moving boundary model. This approximation is valid only if the parameter [(adsorption capacity of cat. sample)/(inlet conc. of H2)] is extremely large (∼ 24). The effect of flow rate, amount of H2 adsorbed, sink temperature, and the thermal conductivity of the adsorbate mixture was examined. Model predictions indicate that the error in the energy trace recorded by the DSC is appreciable: if a large difference exists between the thermal conductivity of the inert carrier, Ar (K = 0.017 J/m·K·s), and the adsorbate, H2 (k = 0.174 J/m·K·s); if the heat sink temperature is much lower (∼ 90 K) than the measurement temperature. However, these errors can be eliminated by matching the thermal conductivity of the inert carrier and adsorbate, such as He (k = 0.143 J/m·K·s) and H2 (k = 0.174 J/m·K·s). The results agree well with the experimental observations of Vannice et al. (1987) on high-purity Pt and Pd powder and supported Pt catalysts, if the H2 uptake by the catalyst sample in the calorimeter is small (≤2 μmol). Additional Material: 6 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 23 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 461-465 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 4 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 24 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 473-476 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 4 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 25 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 
480-480 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 26 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 489-501 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The coupled, unsteady Navier-Stokes, convective diffusion, and thermal energy equations that describe spin coating of colloidal suspensions are solved numerically. The theoretical model, absent of any adjustable parameters, is used to explore the effects of angular velocity, initial solvent weight fraction, solvent properties and spin coating protocol on the evolution of temperature and concentration profiles in the liquid film during spin coating. The predicted coated film thickness is found to be in excellent quantitative agreement with spin coating experiments performed with both hard-sphere and nonhard-sphere suspensions of monodisperse latex particles in water. The coated film thickness, determined by ellipsometry, is shown to depend on the inverse square root of the angular velocity except at high ionic strength when the dependence on angular velocity is weaker. Timescales that characterize spin coating of colloidal suspensions are shown to be quite different from those that characterize spin coating of polymer solutions, and consequently simple models for predicting the coated film thickness of polymer solutions (Bornside et al., 1991; Lawrence, 1989) are shown to be inadequate for colloidal suspensions. Rapid substrate acceleration, high rotation rates, partial saturation of the overlying gas phase, and high initial solids concentration are identified as spin coating protocols that suppress a convective instability that produces radial striations in the coated film. Additional Material: 10 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 27 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 521-534 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Disturbance rejection capabilities of different controller structures, for example, diagonal, block diagonal or full multivariable controller, are discussed. A generalized version of Relative Disturbance Gain, Generalized Relative Disturbance Gain (GRDG), is defined to evaluate the disturbance rejection capabilities of all possible controller structures. Furthermore, the relative disturbance gain array (RDGA) is introduced. Basic properties of RDGA are derived. An important one is: GRDG of all possible controller structures can be calculated directly from the array. Therefore, with RDGA, the synthesis of the controller structure can be done in a straightforward manner. Physical implications and quantitative analyses of GRDG are given. These form the basis for the synthesis. Finally, frequency-dependent GRDG is developed which evaluates the performance further based on dynamic information. Several examples are used to illustrate the synthesis of the controller structure. 
The results show that better disturbance rejection can be achieved by selecting appropriate controller structure. Additional Material: 7 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 28 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 544-554 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Experiments were carried out in bubble columns for a number of liquids at pressures between 0.1 and 2.0 MPa for two column sizes. Based on the experimental results as well as extensive literature data, the extent of the effect column dimensions have on gas holdup were determined, both at low and high pressures (which is of importance to scale-up). It was also demonstrated that none of the published empirical gas holdup equations incorporate the influence of gas density accurately. Therefore, a new improved gas hold-up equation is developed that incorporates the influence of gas and liquid properties with an average error of approximately 10%. Finally, it is also discussed to what extent theinfluence of pressure on other important design parameters such as the interfacial area, the liquid volumetric mass transfer coefficient, and gas and liquid mixing, can be estimated on the basis of empirical equations. Additional Material: 10 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 29 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 573-591 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: A theory is developed to predict the solubility of protein mixtures in solutions containing nonionic polymer. Effective protein-protein interactions due to polymer are taken to be volume-exclusion potentials derived using statistical mechanics. Statistical-mechanical perturbation theory is used to calculate chemical potentials. The effects of protein size, mole fraction and polymer concentration on solubility are explored. The theory is extended to include electrostatic interactions. The excess chemical potential of the proteins due to the charges on all species is calculated using the mean spherical approximation for a mixture of charged hard spheres. The theory predicts: the larger protein is preferentially precipitated over the smaller one; the more concentrated protein is more likely to precipitate; and increasing the charge of a particular protein reduces its ability to precipitate. Additional Material: 14 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 30 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 611-614 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 31 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 
615-618 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 3 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 32 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 626-628 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 1 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 33 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992) ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 34 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 660-670 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Adsorption of water vapor on silica gel, at influent humidities from 6 to 80% at 25 and 50°C, yielded breakthrough curves of unusual shapes. Breakthrough patterns varied from the expected sigmoidal shape at low humidity to a curve resembling the tangent function, but symmetric about the stoichiometric breakthrough time. Unusual shapes were found to be due to subtle combination of Type-IV isotherm behavior and heat effects. A mathematical model was developed to simulate the performance. The results show that complex breakthrough behavior need not be ascribed to complicated causes (such as diffusion in bidisperse pores), which require multiparameter fitting of experimental data. In fact, the effects may be predicted from properties measured in simple independent experiments, though some care is required to account for the effects accurately. Additional Material: 14 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 35 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 703-715 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Influence of the tube and particle diameter and shape, as well as their ratio, on the radial heat transport in packed beds has been studied. Heat transport experiments were performed with four different packings in three wall-cooled tubes, which differed in inner diameter only. Experimental values for the effective radial heat conductivity and wall heat-transfer coefficient for the pseudo-homogeneous two-dimensional model and the overall heat-transfer coefficient for the one-dimensional model are presented. Values were obtained for glass spheres, alumina cylinders, and alumina Raschig rings. The effective radial heat conductivity and wall heat-transfer coefficient can both be correlated as a linear function of the gas flow rate. 
The Bodenstein number for heat at fully developed turbulent flow is influenced strongly by the shape of the packing: 10.9 for glass spheres, 7.6 for alumina cylinders, and 4.2 for alumina Raschig rings. For the same packing, no significant influence is found of the tube diameter on the effective radial heat conductivity or on the wall heat-transfer coefficient. The overall heat-transfer coefficient can be described very well by the so-called “lump equation,” which gives the relations among the overall heat-transfer coefficient, effective radial heat conductivity, and wall heat-transfer coefficient. The “lump factor,” as used in the lump equation, has a best-fit experimental value of 7.4. Additional Material: 9 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 36 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 733-741 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Local instantaneous changes in heat-transfer coefficients due to the passage of gas bubbles in liquid and liquid-solid systems are measured. A special heat-transfer probe is developed and located within the bed to trace the instantaneous local heat-transfer rate during the passage of single gas bubbles. A microfoil heat flow sensor is attached to a foil heater, and the sensor-heater probe assembly can accurately measure the heat flux and the surface temperature over a small area. Signals from the sensor are amplified and interfaced with the microcomputer data acquisition system. Simultaneous visualization is performed using a high-speed video camera and a borescope to establish the correspondence between the visual and sensor signals, and hence relate the local instantaneous hydrodynamics to the heat-transfer rate. Local heat-transfer coefficient vs. time traces are analyzed in conjunction with visual signals. The heat-transfer coefficient exhibits a sharp peak in the bubble wake. In both liquid and liquid-solid systems, the observed local maximum in heat-transfer coefficient behind a rising bubble is due to the bubble-wake-induced surface renewal. Enhancement in heat transfer due to the bubble increases with the size because of increased surface renewal caused by larger bubble wake and stronger vortices. The local maximum in heat transfer, however, is more pronounced in liquid than in liquid-solid systems. Additional Material: 12 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 37 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 761-770 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The extraction of caffeine from whole coffee beans with supercritical carbon dioxide was studied in a continuous-flow extraction apparatus. Decaffeination rates were determined as a function of CO2 flow rate, temperature and pressure by continuously monitoring the caffeine in the effluent with a flame ionization detector. Soaking the raw beans in water prior to decaffeination enhanced the rate of extraction, which increased markedly with water content. Using CO2 saturated with water also increased the rate of extraction. 
The rate of decaffeination increased with pressure and temperature and was influenced by both intraparticle diffusion in the water-soaked beans and external mass transfer. A mathematical model based on a linear-driving-force approximation of mass transfer and partitioning of caffeine between the water and the supercritical CO2 describes the time-dependent process. The partition coefficient for caffeine distributed between water and supercritical CO2, the only parameter determined from the dynamic extraction rate data, increases with temperature and pressure. Additional Material: 11 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 38 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 771-780 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Following hydrogenolysis of n-hexane on an alumina-supported platinum catalyst, the surface of the metal is covered partially with carbonaceous residues or coke. The fraction of surface platinum not covered with coke has been found to be about one half by four independent techniques: titration of preadsorbed oxygen by dihydrogen, chemisorption of carbon monoxide, infrared spectroscopy of chemisorbed carbon monoxide, and hydrogenation rate of ethylene. The first of these techniques suggests itself as the simplest one for further studies of deactivation by coking of platinum catalysts. Additional Material: 8 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 39 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 797-797 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 40 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1003-1012 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The in-situ catalytic hydrodechlorination of chlorinated hydrocarbons in waste-water-generating HCl and a hydrocarbon-free chlorine is demonstrated as a viable wastewater remediation technique. Catalyst screening studies with a shaker-type hydrogenation reactor have shown that the commercial Pd/C catalyst is highly effective in hydrochlorinating various chlorinated hydrocarbons in synthetic wastewater at room temperature and near atmospheric pressure. 1, 1, 2-trichloroethane hydrodechlorination experiments in an autoclave reactor shows that initial rates are well correlated with first-order dependence of the reactant hydrocarbon adsorbed on carbon. Initial rates are also independent of hydrogen pressure, and adsorption on the carbon support is Langmuir type. Activation energies calculated at different catalyst loadings varied from 29 to 38 MJ/mol.1,1,2-trichloroethane hydrodechlorination activity is much lower for Pd/Al2O3 than Pd/C because the reactant hydrocarbon does not adsorb on alumina. When the carbon support does not readily adsorb the reactant hydrocarbon, the hydrodechlorination rates dropped significantly. 
These results confirm the role of the carbon support in providing the major path to reaction and thereby significantly increasing reaction rates compared to direct adsorption from solution onto the palladium. Additional Material: 6 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 41 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1045-1048 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The first phase equilibrium data are presented for Structure H hydrates. The data represent the initial formation of these hydrates from methane, with adamantane -  a previously determined Structure H former. Temperature and pressure conditions are consistent with hydrocarbon production/transportation/processing facilities. Structure H hydrates are shown to contain molecules indigenous to petroleum. which may not be present in natural gas. Additional Material: 7 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 42 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1105-1114 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The azo coupling of 1-naphthol with diazotized sulfanilic acid has been studied in detail focusing on the practical use of this reaction as a micromixing test reaction, as developed by Bourne and coworkers. The reaction is a fast, competitive, consecutive reaction whose final product distribution is affected greatly by mixing. Problems that occur in the isolation of the pure-dye products and quantification of the product distribution are addressed. Previously unreported information is given about the structure and properties of one of the products as well as the existence of an additional unknown product. The reaction was used to characterize the spatial heterogeneity of micromixing in a 14-L stirred-tank fermenter. Results show large differences in the product distribution dependent on the depth and radial position of the feed pipe in the tank. Additional Material: 10 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 43 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1129-1134 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 4 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 44 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1135-1138 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 4 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... 
• 45 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992) ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 46 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1206-1212 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Inclining a fluidized bed column by as little as 1.5 degrees greatly affects the bed characteristics. The bed contracts, the particle-liquid mass-transfer and heat-transfer coefficients increase by up to 30%, and the gas-liquid mass-transfer coefficient can either be increased by up to 15% or decreased by up to 20%. Additional Material: 12 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 47 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1229-1242 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: A model and algorithm are presented for the separation of mixtures where the phase distribution on the trays is extremely uncertain, as it occurs when the mixtures have two partially-miscible binary pairs and a minimum-boiling ternary azeotrope. Included are an algorithm for the consistent initialization of index-I differential/algebric equations, a novel algorithm for branch switching when the phase distribution changes at a real bifurcation point, and a reliable algorithm for phase stability analysis. Open-loop responses are presented for the dehydration of secbutanol with disecbutylether in single-stage, 12-tray, and 33-tray separators. These simulation results for the 33-tray tower are in qualitative agreement with experimental measurements for the ARCO SBA-II tower. Additional Material: 9 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 48 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1254-1278 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The modular multivariable controller (MMC) represents a multivariable controller design methodology which is based on the solution of multiobjective optimization problems using the strategy of lexicographic goal programming; priority-driven, sequential satisfaction of objectives. This article formally introduces the concept of the MMC, analyzes its static characterstics, and proposes a specific methodology for the design of steady-state MMCs. It is shown that the framework of MMC can explicitly handle all types of control objectives (for example, equality or inequality specifications on controlled outputs), and constraints on manipulations. Its priority-driven, sequential satisfaction of control objectives leads to a modular, hierarchical structure of controllers with specific objectives. 
The modular character of MMC allows the explicit maintenance, tuning, and reconfiguration of multivariable control systems, while its hierarchical structure explicitly expresses engineering decisions and trade-offs. Its static design incorporates uncertainty in process gains and automatic reconfiguration to account for failure in sensors and/or actuators. The design of an MMC for a heavy oil fractionator is presented to illustrate the controller's character and the proposed methodology for the design of static MMCs. Additional Material: 7 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 49 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1299-1301 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 3 Tab. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 50 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1303-1303 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 51 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1305-1305 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 52 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1309-1328 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: We examine the simplest homogeneous azeotropic distillation sequence of industrial relevance, where an entrainer is added to a binary azeotrope to recover both azeotropic constituents as pure products. Despite its apparent simplicity, such distillation columns can exhibit an unusual behavior not observed in zeotropic distillation: For some mixtures, separation as a function of reflux goes through a maximum. At infinite reflux, no separation is achieved.In some cases, achieving the same specifications with a larger number of trays requires a larger reflux.Sometimes the only feasible separation yields the intermediate component as a pure distillate, while the bottom product contains the light and heavy components.Sometimes the only feasible separation yields the intermediate component as a pure bottom product while the distillate contains the light and heavy components.While these unusual features can be regarded as curiosities, they are essential for proper entrainer selection and design. 
For a minimum boiling azeotrope, the existing and conflicting entrainer selection rules state that one should use a component that introduces no distillation boundary between the azeotropic constituents (Doherty and Caldarola, 1985), and either a low or high boiling component that introduces no additional azeotrope or a component which introduces new minimum boiling azeotropes (Stichlmair et al., 1989). By taking advantage of the curious aforementioned features, as well as our experience involving more than 400 mixtures, we have been able to analyze the assumptions behind these criteria, show when those assumptions break down, and therefore understand the limitations of the criteria. Additional Material: 61 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 53 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1329-1339 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Of special industrial interest is the cross-directional control of coating processes, where the cross direction refers to the direction perpendicular to the substrate movement. The objective of the controller is to maintain a uniform coating under unmeasured process disturbances. Assumptions that are relevant to coating processes found in industry are used to develop a model for control design. This model is used to derive a model predictive controller to maintain flat profiles of coating across the substrate by varying the liquid flows along the cross direction. Actuator constraints, measurement noise, model uncertainty, and the plant condition number are investigated to determine which of these limit the achievable closed-loop performance. From knowledge of how these limitations affect the performance we can make some recommendations on how to modify the plant design to improve the coating uniformity. The theory developed throughout the article is rigorously verified through experiments on a pilot plant. The controller rejects disturbances with two sampling times. The proposed controller can reduce the variance in coating thickness by as much as 80% compared to what is possible by manual control or simple control schemes. Additional Material: 9 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 54 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1369-1378 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: A knowledge based system is described that is designed to generate on-line advice for operators regarding the proper distribution of hydrogen resources in a refinery. The system uses a coupled architecture incorporating numerical computing in a knowledge based system environment. This arrangement allows for powerful and flexible problem solving. One portion of the coupled system formulates an optimization problem that is subsequently solved by an external routine. This application is particularly concerned with uncertainty that is present in some of the constraints. To deal with this uncertainty, a fuzzy approach to the optimization is taken. 
A method is presented that solves the fuzzy optimization problem using standard mathematical programming techniques. The results of the fuzzy optimization allow the crisp solution to be expanded into a neighborhood of solutions that is considered acceptable. Although this work examines a specific problem, the concepts presented are general. Additional Material: 12 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 55 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1357-1368 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The adsorption isotherms of O2, N2, CH4, CO2, SO2, and NO on five pillared clays (Zr, Al, Cr, Fe, and Ti-PILCs) are measured. The equilibrium selectivity of CH4/N2 on Al-PILC is greater than 5.0, which exceeds all known sorbents by a large margin. In addition, high SO2/CO2 equilibrium selectivities are observed on these pillared clays. The sorption characteristics of these pillared clays (PILCs) exhibit characteristic trends that are better understood with the aid of the potential energy profiles. A new semi-empirical approach is presented for the calculations of the potential energy profiles of PILCs. This approach requires the adsorption isotherms and an isotherm equation that accounts for the structural heterogeneity of the adsorbents. A comparison of the energy profiles obtained using the semi-empirical approach with the corresponding results obtained via the Kirkwood-Muller formalism, where only dispersion forces are taken into account, provides a measure of the importance of the electrostatic forces in the sorption characteristics of these PILCs. Sizable differences are observed for the potential energy profiles, indicating that the electrostatic forces are not negligible, and can significantly enhance the adsorption potential, resulting in large increases in the amounts adsorbed on these PILCs. Additional Material: 15 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 56 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1703-1715 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: One of the limitations of today's knowledge-based (KB) systems for diagnostics and supervision is a lack of adequate temporal reasoning mechanisms. Most of these systems are designed primarily to operate with the current values of the process variables and, sometimes, with their derivatives. Such simple capabilities, however, are not always sufficient to identify some complex dynamic phenomena, which in many cases leave their own unique “stamp” on the process behavior, expressed in the form of characteristic temporal shapes of the related variables. To detect and diagnose adequately the events of interest, the KB system should be able to reason about the temporal shapes of the process variables. Although during manual supervision process operators rely heavily on such characteristic shapes as reliable symptoms of underlying phenomena, their exploitation has not been considered seriously by the designers of KB control systems. 
We propose a generic methodology for qualitative analysis of the temporal shapes of continuous process variables designed to be embedded into a real-time KB environment. It is applicable to bioprocesses, as well as to other complex dynamic systems. Additional Material: 17 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 57 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1751-1760 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The experimental validation of on-line estimation of multiple specific growth rates for the bakers' yeast fed-batch process is presented. Pole placement based parameter estimation combined with an asymptotic biomass observer constitute the basic algorithm. The full process model being ill-conditioned for estimation using the available measured state variables, the use of two partial models related to two different states of the process is suggested. An alternating procedure between two sets of estimation algorithms designed from the partial models is proposed. The performance of the alternating procedure is validated both with simulated and experimental data. The accuracy of the estimates of the three specific growth rates involved in this process is verified according to two criteria based on the respiratory quotient and on the evaluation of the ethanol production/consumption rate. Additional Material: 20 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 58 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1761-1768 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Binary diffusion coefficients of some organic compounds in carbon dioxide at 313.2 K and 16.0/25.0 MPa were measured by using the Taylor-Aris tracer response technique. We propose a new correlation of Schmidt numbers as a function of solvent molar volumes for predicting binary diffusion coefficients in dence CO2 and self-diffusion coefficients of dense CO2. The correlation was also found to be valid for predicting self-diffusion coefficients of dense CH4 at Fv/A* 〈 40 or v2/(ṽ2)0 〉 1.62. Additional Material: 7 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 59 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1801-1815 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The hydrodynamics of gas-solid flow, usually referred to as circulating fluidizedbed flow, was studied in a 7.5-cm clear acrylic riser with 75-μm FCC catalyst particles. Data were obtained for three central sections as a function of gas and solids flow rates. Fluxes were measured by means of an extraction probe. Particle concentrations were measured with an X-ray densitometer. In agreement with previous investigators, these data showed the flow to be in the core-annular regime, with a dilute rising core and a dense descending annular region. 
However, unlike the previous studies conducted worldwide, the data obtained in this investigation allowed us to determine the viscosity of the suspension. The viscosity was a linear function of the volume fraction of solids. It extrapolates to the high bubbling-bed viscosities. Additional Material: 35 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 60 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1493-1498 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: Racemic leucine can be separated into d- and l-isomers by fractional extraction across microporous hollow fibers. In this extraction, an aqueous solution of the racemate is fed to the lumen of the fibers, and an octanol solution of dodecyl-l-hydroxyproline flows countercurrently outside of the fibers. The interface between feed and extractant is stabilized by filling the pores in the hollow-fiber walls with a cross-linked polyvinylalcohol gel which offers negligible resistance to mass transfer. The extraction with dodecyl-l-hydroxyproline deliberately imitates earlier studies, facilitating comparisons of hollow-fiber extraction with other techniques. The results show that the isomer yield per equipment volume of racemic separation is 100 times greater than that in a continuously rotating extractor, and 1,000 times greater than that in a conventional packed tower. Additional Material: 5 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 61 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1523-1535 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The rigorous calculation of top and bottom fractions of a multicomponent distillation is very time consuming and involved as it can only be done iteratively, and convergence problems are often encountered, especially in azeotropic systems. This article presents a method for the easy determination of possible top and bottom fractions of a ternary distillation. This method, which works for zeotropic as well as for azeotropic mixtures, is especially useful in the first steps of process synthesis and design since impossible separations can be determined and thus excluded from further analysis so that work can be concentrated on feasible processes. A very important application of the method developed in this article is to the design and analysis of processes for complete separation of binary azeotropic mixtures by use of an entrainer (for example, Azeotropic Distillation and Extractive Distillation). Knowledge of the separation regions in the distillation diagram allows for the development of a generalized process and the formulation of criteria for entrainer selection. The effectiveness of the method is demonstrated on a number of industrial important processes. Additional Material: 17 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 62 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 
1481-1484 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Additional Material: 6 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 63 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992) ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 64 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1916-1922 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The bed expansion characteristics of two-phase inverse fluidization were studied. Twelve different spheres with diameters from 1.31 to 7.24 mm and densities between 75 and 930 kg/m3 were fluidized with water. The experimental ln ε-ln U curves were parallel to those predicted by the model of Richardson and Zaki, and therefore the exponents n were similar. However, Ui, the liquid velocity at ε = 1, differed from that predicted from the standard drag curve for Ret > 130. This can be explained by the fact that the drag curve of a freely rising light sphere differs from that of a falling particle. The values of Ui calculated using this modified drag curve were in good agreement with the experimental results. The difference between the minimum fluidization velocities found experimentally and those calculated from the Ergun equation is explained by the difference in mechanical inertia of the light and heavy particles. Additional Material: 6 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 65 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S. 1979-1989 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The statistical model adsorption isotherm developed by Ruthven and coworkers, and used in the prediction and correlation of adsorption in microporous materials, especially zeolites, is critically examined from the perspective of statistical mechanics. This is done by applying the method to a class of molecular models for which the statistical thermodynamics may be solved analytically without approximation. The models considered are finite length single component and binary one-dimensional systems in which the molecules interact via square well potentials. These are among the simplest realistic yet analytically solvable molecular models of adsorption in porous materials. Our analysis clarifies the theoretical status of the Ruthven approach as well as revealing some of the weaknesses in it. Additional Material: 7 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 66 Electronic Resource Hoboken, NJ : Wiley-Blackwell AIChE Journal 38 (1992), S.
1969-1978 ISSN: 0001-1541 Keywords: Chemistry ; Chemical Engineering Source: Wiley InterScience Backfile Collection 1832-2000 Topics: Chemistry and Pharmacology , Process Engineering, Biotechnology, Nutrition Technology Notes: The study of nonlinear competitive equilibrium is of fundamental importance in understanding the behavior of proteins in preparative ion-exchange chromatographic separations. In this work we present a steric mass-action (SMA) ion-exchange equilibrium formalism, which explicitly accounts for the steric hindrance of salt counterions upon protein binding in multicomponent equilibria. An analytical solution has been derived for the calculation of isotachic effluent profiles of displaced proteins and induced salt gradients under ideal chromatographic conditions. A stability analysis has been employed to establish the order of the feed components in the displacement train. Theoretical predictions are compared to experimental results for the separation of proteins by cation-exchange displacement chromatography. These results demonstrate the efficacy of the SMA formalism in predicting complex behavior present in ion-exchange displacement systems. Furthermore, the analytical solution of ideal isotachic displacement profiles with the SMA formalism enables rapid methods development and optimization of ion-exchange displacement separations. Additional Material: 6 Ill. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 67 Electronic Resource
## Adjusted R Squared Calculator for Simple Regression

### Instructions:

Use this calculator to compute the adjusted R-Squared coefficient for a simple linear regression. Please input the data for the independent variable $$(X)$$ and...
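For readers who want the computation rather than the web form, here is a minimal sketch of the usual formula behind such a calculator, $$R^2_{adj} = 1 - (1 - R^2)\frac{n-1}{n-p-1}$$ with $$p = 1$$ predictor for a simple regression. The function name and sample data below are illustrative only and are not taken from the calculator page.

    import numpy as np

    def adjusted_r_squared(x, y):
        # Adjusted R^2 for a one-predictor (simple) least-squares regression.
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        n, p = len(y), 1                          # n observations, 1 predictor
        slope, intercept = np.polyfit(x, y, 1)    # ordinary least-squares line
        residuals = y - (slope * x + intercept)
        ss_res = np.sum(residuals ** 2)           # residual sum of squares
        ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
        r2 = 1 - ss_res / ss_tot
        return 1 - (1 - r2) * (n - 1) / (n - p - 1)

    # illustrative data, not from the calculator page
    print(adjusted_r_squared([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.1, 9.8]))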
# Dealing with Heckman correction and sampling bias in an experiment without instruments

Imagine a randomised experiment where workers are offered a job for $x$, and they can choose to reject the offer, accept $x$, or propose some counteroffer $x' > x$. An equal number of workers are offered the job in the control and treatment groups, and it is hypothesised that workers will exhibit higher "labour-market discrimination" towards the employer in the treatment compared to the control. To measure discrimination, the main analysis will be to test the number of offers that are not rejected (i.e. accepted or countered) by treatment. I would like to use the counteroffer data for a secondary analysis of discrimination, but I am worried that a comparison of the treatment and control means of the counteroffers $x'$ runs into a selection issue. For example, some individuals who reject the offer may do so because they have an extremely high willingness-to-accept, which corresponds to a high counteroffer that is not observed. My questions:

1. Can I directly compare the means of the counteroffers, or are my worries valid?
2. If there is a selection issue, a Heckman correction requires some instrument that affects whether an offer is rejected, but not the amount of the counteroffer. I don't have anything like this, and no other control variables besides the treatment. Does that mean that I cannot run a two-stage Heckman correction?
3. Is there anything else I could do to test for discrimination using the counteroffer data, and what assumptions would I need?
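To make the worry in question 1 concrete, here is a minimal simulation sketch (added for illustration; it is not part of the original question). It assumes a latent willingness-to-accept that the treatment shifts upward, and an arbitrary rule that workers whose willingness-to-accept exceeds twice the offer reject outright, so their would-be counteroffers are never observed. The distributions, the offer of 20 and the treatment shift of 5 are all invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(offer, treatment_shift, n=100_000):
        # Latent willingness-to-accept (WTA); the treatment shifts it upward.
        wta = rng.lognormal(mean=np.log(offer), sigma=0.4, size=n) + treatment_shift
        rejected = wta > 2 * offer             # assumed rule: very high WTA -> outright rejection
        counteroffer = np.maximum(wta, offer)  # what a worker would ask for
        observed = counteroffer[~rejected]     # counteroffers are only seen for non-rejectors
        return counteroffer.mean(), observed.mean()

    for label, shift in [("control", 0.0), ("treatment", 5.0)]:
        full, seen = simulate(offer=20.0, treatment_shift=shift)
        print(f"{label}: mean counteroffer if all were observed {full:.2f}, "
              f"mean among non-rejectors {seen:.2f}")

With these assumed numbers, the truncation removes more of the upper tail in the treatment group than in the control group, so a naive comparison of observed counteroffer means understates the true difference; that is the selection problem a Heckman-type correction is meant to address.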
# Does the domain of the function depend on how you write it?

What is the domain of the function $f(x,y)= \sqrt{xy}$? In this case, the domain is $D=\{(x,y) \in \mathbb R^2: xy \geq 0\}$.

Since $\sqrt{xy} = x^{0.5}y^{0.5}$, I can write the function equivalently as $f(x,y)= x^{0.5}y^{0.5}$, but in this case the domain is only $D=\{(x,y) \in \mathbb R^2: x \geq 0, y \geq 0 \}$.

So, which one is the domain finally? Is it possible that the function $f$ has a different domain depending on how you write it? This is confusing.

• Usually, you choose what domain you're applying the function over. This should be part of the function's definition. Sep 10, 2018 at 16:27
• Why do you think $x^{0.5}y^{0.5}$ is defined more restrictively than $\sqrt{xy}$? The domain is what you choose it to be, not something that can always be unambiguously inferred from the rule. Sep 10, 2018 at 20:23
• It is not in general true that $\sqrt{xy} = x^{0.5}y^{0.5}$. This equality requires certain assumptions, which might or might not be true in context. Sep 10, 2018 at 23:29
• I answered this previously. My answer there applies exactly to this question, although the question I was then answering may not be a precise duplicate. Sep 11, 2018 at 1:18

Strictly speaking, $f(x,y)=\sqrt{xy}$ is not a function at all. It becomes a function when you specify a domain and a codomain. Depending on your domain, you may have alternative ways to write the function; for example, if you choose the domain $\{(x,y)\in\mathbb R^2: xy=1\}$ you can equivalently write the function as $f(x,y)=1$; this form clearly is not equivalent to your expression if you chose as domain e.g. $\{(x,y)\in\mathbb R^2: x>0\land y>0\}$. And of course you have to make sure that your expression is defined on the complete domain, or alternatively, use it only for the corresponding part of the domain and give another expression for other parts of the domain. For example, on the domain $\{(x,y)\in\mathbb R^2: xy\ge 0\}$, you could write your function equivalently as $$f(x,y) = \begin{cases} x^{1/2}y^{1/2} & x\ge 0 \\ (-x)^{1/2}(-y)^{1/2} & \text{otherwise}\end{cases}$$

Usually when not specifying a domain, the domain is implicitly given as “wherever that expression is defined”. In that sense, the two functions you give are then not the same, as they have different domains, although they agree on the intersection of their domains. Note also that without a codomain, your function is not completely defined. For example, if you chose $\mathbb R_{\ge 0}$ as your codomain, your function (with implicitly given domain) is surjective (it reaches every point of the codomain), while with the codomain $\mathbb R$ it isn't. If not explicitly specified, usually the codomain of a function is assumed either to be its image (the minimal possible codomain, which makes the function surjective), or the largest “reasonable” set containing the image (like $\mathbb R$ for real-valued functions). Since the exact codomain is less often relevant than the domain, it is more often left unspecified.

In strictly (set-theoretical) mathematical terms, the domain is part of the definition of a function. However, it is fairly common to write down a function by giving an expression for how to calculate it, and then the natural domain of the function is the set where that expression is well-defined. As you have discovered, there are sometimes different expressions that evaluate to the same thing for some input values, but there are different sets on which the expressions are defined.
For example, $f(x)=(\sqrt{x+2})^2$ has the natural domain $[-2,\infty)$. However, everywhere in that domain it is equal to $(x+2)$. The function $g(x)=x+2$ has the natural domain $\mathbb R$. This is actually poorly explained (in US schools at least) until you get to proof-based writing. Formally, a function is two thing: a domain, and a "rule" which assigns to each element of the domain an output. If the domain is $$D$$, then for each $$x\in D$$, we assign a value $$f(x)$$. This "rule" is what we usually think of as some sort of equation like $$f(x)=x^2-\sqrt x$$. The domain can be any set, and the outputs can also be in any set. In practice, though, the domain is often implied by the context and the rule. For example, in calculus, when you see the function $$f(x)=1/x$$, it is assumed that the domain is the largest set of real numbers for which it makes sense to plug into $$f$$. In this case, any number but $$0$$ makes sense, so we can write $$D=(-\infty,0)\cup (0,\infty)$$. As you noticed, it is not always entirely obvious what the domain should be. For $$f(x,y)=\sqrt{xy}$$, it makes sense to plug in any positive numbers for $$x$$ and $$y$$, but also certain negative numbers. For example, it makes sense to plug in $$x=-1$$ and $$y=-4$$. However, it does not make sense to plug in $$x=-1$$, $$y=4$$. In this case the domain is assumed to be $$\{(x,y):xy\ge 0\}$$, or $$\{(x,y):x,y\ge 0 \text{ or }x,y\le 0\}$$. • It also has a codomain. Sep 10, 2018 at 17:13 • I strongly object to defining functions in terms of rules, as you would need to specify what a rule is. The standard definition of function is a relation $R$ such that for every element $x$ in the domain there exists an unique element $y$ in the codomain with $x R y$. Sep 10, 2018 at 20:07 • @miniBill As far as I can see, the rule is your relation; you're just using more precise terminology. Sep 10, 2018 at 21:20 • A relation is any subset of the Cartesian product. Which can be defined in a few steps from ZFC. My point is not formal. I'll explain. A rule assumes something describable, but for uncountable sets there are far more functions than finitely describable functions (which are necessarily enumerable). Sep 11, 2018 at 5:13 • Formally, there's more than one definition for "function" that is used and taught, depending on the context. Sep 11, 2018 at 13:55
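As a quick numerical check of the earlier comment that $\sqrt{xy} = x^{0.5}y^{0.5}$ only holds under extra assumptions, here is a small Python snippet (added for illustration; it is not part of the original question or answers). It relies on Python 3 returning a complex result when a negative base is raised to a fractional power.

    import math

    x, y = -1.0, -4.0

    print(math.sqrt(x * y))     # 2.0, since xy = 4 >= 0, so sqrt(xy) is defined here
    print(x ** 0.5 * y ** 0.5)  # approximately (-2+0j): i * 2i = -2, not 2
    try:
        math.sqrt(x)            # the real square root rejects a negative argument
    except ValueError as err:
        print("math.sqrt(-1.0) raised:", err)

So the two expressions agree only where both are defined over the reals, namely $x \ge 0$ and $y \ge 0$, which is exactly the point made in the answers above.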
# Solve almost every Binary Search Problem

Algorithms are an integral part of data science. While most of us data scientists don't take a proper algorithms course while studying, they are important all the same. Many companies ask data structures and algorithms as part of their interview process for hiring data scientists. Now the question that many people ask here is what is the use of asking a data scientist such questions. The way I like to describe it is that a data structure question may be thought of as a coding aptitude test. We all have given aptitude tests at various stages of our life, and while they are not a perfect proxy to judge someone, almost nothing ever really is. So, why not a standard algorithm test to judge people's coding ability? But let's not kid ourselves: they will require the same zeal to crack as your Data Science interviews, and thus, you might want to give some time to the study of algorithms. This series of posts is about fast-tracking that study and covering some essential algorithm concepts for data scientists in an easy-to-understand way. In this post, I will particularly talk about binary search.

Let us say we have a sorted array of numbers, and we want to find a number in this array. We can go the linear route that checks every number one by one and stops if it finds the number. The problem is that it takes too long if the array contains millions of elements. Here we can use binary search. This is a case of a recursion-based algorithm where we make use of the fact that the array is sorted. Here we recursively look at the middle element and see if we want to search to the left or the right of the middle element. This makes our search space go down by a factor of 2 every step, and thus the run time of this algorithm is O(log n) as opposed to O(n) for linear search.

While understanding how binary search works is easy, there are a lot of pitfalls when you go on to implement it. I myself am never able to implement binary search without a single mistake; I slip up on the equality signs or the search space. But this post by zhijun_liao was a godsend when it comes to understanding binary search. Essentially what this post suggests is not to look at binary search just as an algorithm to find an exact match to some item in a list, but rather as a search algorithm that gets us the lowest value in a range of values for which a particular condition is True. We can define the condition in our own way based on the problem.

Let's start with a template which I will use in an example to explain what I really mean above. We then only ever need to change a few things in this template, namely the condition, the range and the return statement, without worrying about the less-than and greater-than signs. Here is the template:

def binary_search(array):
    def condition(value):
        pass

    # search space could be [0, n], [1, n] etc. -- depends on the problem
    left, right = min(search_space), max(search_space)
    while left < right:
        mid = left + (right - left) // 2
        if condition(mid):
            right = mid
        else:
            left = mid + 1
    return left

So what does the above template do? Given some condition and a search space, it will give you the minimum value in the search space that satisfies the given condition. This value is the left in this code. Let me explain this with an example. Let's rephrase our problem of finding an element in a sorted array as: find the position of the first element in the sorted array that is ≥ target. Here we define three things:
1. Condition: array[value] >= target
2. Range: since array indices run from 0 to n-1, the search space is [0, n-1].
3. Return statement: we get the index of the leftmost element that is ≥ target. To answer our question we can just use an if-else check on this index.

def binary_search(array, target):
    def condition(value):
        return array[value] >= target

    n = len(array)
    left, right = 0, n - 1
    while left < right:
        mid = left + (right - left) // 2
        if condition(mid):
            right = mid
        else:
            left = mid + 1
    if array[left] == target:
        return left
    return -1

So, in the above we got the minimum index in the sorted array where the condition array[value] ≥ target is satisfied. Graphically, the condition is False on a prefix of the array and True from the first element ≥ target onwards, and binary search converges to that boundary. So, now that we understand binary search a little better, let us see how this generalises to problems, and how you can think of binary search as a search over solutions.

Koko Eating Bananas

From the problem definition on Leetcode: Koko loves to eat bananas. There are n piles of bananas, the ith pile has piles[i] bananas. The guards have gone and will come back in h hours. Koko can decide her bananas-per-hour eating speed of k. Each hour, she chooses some pile of bananas and eats k bananas from that pile. If the pile has less than k bananas, she eats all of them instead and will not eat any more bananas during this hour. Koko likes to eat slowly but still wants to finish eating all the bananas before the guards return. Return the minimum integer k such that she can eat all the bananas within h hours.

Example 1: Input: piles = [3,6,7,11], h = 8 Output: 4
Example 2: Input: piles = [30,11,23,4,20], h = 5 Output: 30
Example 3: Input: piles = [30,11,23,4,20], h = 6 Output: 23

So, how does our monkey Koko optimize her eating speed? We need to find the minimum speed at which Koko could eat so that some condition is satisfied. See the pattern? We can think of the eating speeds as a sorted array that is not given explicitly, and search for the minimum value in this array that satisfies our condition. We need to come up with three parts to solve this problem:

1. Condition: we will create a function that returns True if, for a given eating speed k, Koko would be able to finish all the bananas within h hours.
2. Range of answers: the minimum eating speed must be 1, and the maximum could be max(piles) based on the problem.
3. What to return? We should return left, as that is the minimum speed at which our condition is met.

Here is the code:

from typing import List

class Solution:
    def minEatingSpeed(self, piles: List[int], h: int) -> int:
        # Can Koko finish all the piles within h hours at speed k?
        def check(k):
            hours_taken = 0
            for n in piles:
                if n % k == 0:
                    hours_taken += n // k
                else:
                    hours_taken += n // k + 1
                if hours_taken > h:
                    return False
            return True

        left, right = 1, max(piles)
        while left < right:
            mid = left + (right - left) // 2
            if check(mid):
                right = mid
            else:
                left = mid + 1
        return left

And that is it. Here we have "Binary Searched the Answer", and it is applicable to a wide variety of problems. Other such problems you can look at are:

## Conclusion

In this post, I talked about binary search. This is one of the most popular algorithms asked in data structures interviews, and a good understanding of it might help you land your dream job. And while you can go a fair bit in data science without learning it, you can learn it just for a little bit of fun and maybe to improve your programming skills. Also take a look at my other posts in the series, if you want to learn about algorithms and data structures.
If you want to read up more on Algorithms, here is an Algorithm Specialization on Coursera by UCSanDiego , which I highly recommend to learn the basics of algorithms. Thanks for the read. I am going to be writing more beginner-friendly posts in the future too. Follow me up at Medium or Subscribe to my blog . Also, a small disclaimer — There might be some affiliate links in this post to relevant resources, as sharing knowledge is never a bad idea.
M01 Chapter Contents M01 Chapter Introduction NAG Library Manual # NAG Library Routine DocumentM01DCF Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details. ## 1  Purpose M01DCF ranks a vector of character data in ASCII or reverse ASCII order of a specified substring. ## 2  Specification SUBROUTINE M01DCF ( CH, M1, M2, L1, L2, ORDER, IRANK, IFAIL) INTEGER M1, M2, L1, L2, IRANK(M2), IFAIL CHARACTER(*) CH(M2) CHARACTER(1) ORDER ## 3  Description M01DCF uses a variant of list-merging, as described on pages 165–166 in Knuth (1973). The routine takes advantage of natural ordering in the data, and uses a simple list insertion in a preparatory pass to generate ordered lists of length at least $10$. The ranking is stable: equal elements preserve their ordering in the input data. Only the substring (L1:L2) of each element of the array CH is used to determine the rank order. ## 4  References Knuth D E (1973) The Art of Computer Programming (Volume 3) (2nd Edition) Addison–Wesley ## 5  Parameters 1:     CH(M2) – CHARACTER(*) arrayInput On entry: elements M1 to M2 of CH must contain character data to be ranked. Constraint: the length of each element of CH must not exceed $255$. 2:     M1 – INTEGERInput On entry: the index of the first element of CH to be ranked. Constraint: ${\mathbf{M1}}>0$. 3:     M2 – INTEGERInput On entry: the index of the last element of CH to be ranked. Constraint: ${\mathbf{M2}}\ge {\mathbf{M1}}$. 4:     L1 – INTEGERInput 5:     L2 – INTEGERInput On entry: only the substring (L1:L2) of each element of CH is to be used in determining the rank order. Constraint: $0<{\mathbf{L1}}\le {\mathbf{L2}}\le \mathrm{LEN}\left({\mathbf{CH}}\left(1\right)\right)$. 6:     ORDER – CHARACTER(1)Input On entry: if ${\mathbf{ORDER}}=\text{'A'}$, the values will be ranked in ASCII order. If ${\mathbf{ORDER}}=\text{'R'}$, in reverse ASCII order. Constraint: ${\mathbf{ORDER}}=\text{'A'}$ or $\text{'R'}$. 7:     IRANK(M2) – INTEGER arrayOutput On exit: elements M1 to M2 of IRANK contain the ranks of the corresponding elements of CH. Note that the ranks are in the range M1 to M2: thus, if ${\mathbf{CH}}\left(i\right)$ is the first element in the rank order, ${\mathbf{IRANK}}\left(i\right)$ is set to M1. 8:     IFAIL – INTEGERInput/Output On entry: IFAIL must be set to $0$, $-1\text{​ or ​}1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{​ or ​}1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\mathbf{1}\text{​ or ​}\mathbf{1}$ is used it is essential to test the value of IFAIL on exit. On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6). ## 6  Error Indicators and Warnings If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF). 
Errors or warnings detected by the routine: ${\mathbf{IFAIL}}=1$ On entry, ${\mathbf{M2}}<1$, or ${\mathbf{M1}}<1$, or ${\mathbf{M1}}>{\mathbf{M2}}$, or ${\mathbf{L2}}<1$, or ${\mathbf{L1}}<1$, or ${\mathbf{L1}}>{\mathbf{L2}}$, or ${\mathbf{L2}}>\mathrm{LEN}\left({\mathbf{CH}}\left(1\right)\right)$. ${\mathbf{IFAIL}}=2$ On entry, ORDER is not 'A' or 'R'. ${\mathbf{IFAIL}}=3$ On entry, the length of each element of CH exceeds $255$. ## 7  Accuracy Not applicable. The average time taken by the routine is approximately proportional to $n×\mathrm{log}n$, where $n={\mathbf{M2}}-{\mathbf{M1}}+1$. The routine relies on the Fortran intrinsic functions LLT and LGT to order characters according to the ASCII collating sequence. ## 9  Example This example reads a file of $12$-character records, and ranks them in reverse ASCII order on characters $7$ to $12$. ### 9.1  Program Text Program Text (m01dcfe.f90) ### 9.2  Program Data Program Data (m01dcfe.d) ### 9.3  Program Results Program Results (m01dcfe.r)
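The Fortran example files are not reproduced here, but the ranking behaviour described above is easy to illustrate. The following is a small Python sketch (not NAG code, and the sample data are invented) that mimics what M01DCF computes: a stable ranking, in ASCII or reverse ASCII order, determined only by the substring (L1:L2) of each element, with ranks reported in the range M1 to M2.

    def rank_by_substring(ch, m1, m2, l1, l2, order="A"):
        # Rank the elements ch(m1..m2) (1-based, inclusive) using only the
        # substring (l1:l2) of each element. Python's sort is stable, so equal
        # substrings keep their input order, matching the stability guarantee above.
        indices = list(range(m1 - 1, m2))                 # 0-based positions to rank
        key = lambda i: ch[i][l1 - 1:l2]                  # comparison key: the substring
        ordered = sorted(indices, key=key, reverse=(order == "R"))
        irank = [0] * len(ch)
        for rank, i in enumerate(ordered, start=m1):      # ranks run from m1 to m2
            irank[i] = rank
        return irank[m1 - 1:m2]

    data = ["aaaaaaZZZZZZ", "bbbbbbAAAAAA", "ccccccAAAAAA"]
    print(rank_by_substring(data, 1, 3, 7, 12, order="A"))   # [3, 1, 2]
    print(rank_by_substring(data, 1, 3, 7, 12, order="R"))   # [1, 2, 3]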
# A terribly wrong level 4 problem

This problem titled How to distribute candies is a terribly incorrect question. This question claims that the candies are identical, but the answer takes the assumption that they are all distinct. Not only has this question butchered the ratings of many correct solvers, but it is also capable of infusing a terrible misconception in the mind of anyone who attempts it. So I sincerely request the staff members to immediately take it off the rating scale and give the correct solvers the rating that they deserve, and I request all my followers to request a clarification or dispute to this problem to make it easier for the staff to know about this. 6 years, 9 months ago

Sort by:

Thanks. I've fixed it. Staff - 6 years, 9 months ago

It is terrible and to request the solution of problem. - 6 years, 9 months ago

I have posted a message regarding this on @Calvin Lin 's message-board. - 6 years, 9 months ago

Another problem Expected Random Sum Above 1. I guess the denominator should be $2e$ instead of $e^2$ to match the given answer. - 6 years, 9 months ago

I had looked at your dispute, and I believe that you missed out a crucial factor of "independence". I've started a solution discussion, and you can try to justify your claim. I've shown a possible reason why $E[S] > \frac{e}{2}$. Staff - 6 years, 9 months ago
More boring politics, but it matters. The two main recommendations of this Pittilo report are that

• Practitioners of Acupuncture, Herbal Medicine, Traditional Chinese Medicine should be subject to statutory regulation by the Health Professions Council
• Entry to the register should normally be through a Bachelor degree with Honours

For the background on this appalling report, see earlier posts.

A very bad report: gamma minus for the vice-chancellor
The Times (blame subeditor for the horrid title), and some follow up on the Times piece
The Health Professions Council breaks its own rules: the result is nonsense
Chinese medicine - acupuncture gobbledygook revealed
Consultation opens on the Pittilo report: help stop the Department of Health making a fool of itself
Why degrees in Chinese medicine are a danger to patients

The Department of Health consultation shuts on November 2nd. If you haven’t responded yet, please do. It would be an enormous setback for reason and common sense if the government were to give a stamp of official approval to people who are often no more than snake-oil salesmen. Today I emailed my submission to the Pittilo consultation to the Department of Health, at [email protected]

### The submission

I sent the following documents, updated versions of those already posted earlier.

• Submission to the Department of Health, for the consultation on the Pittilo report [download pdf]
• $2.5B Spent, No Alternative Med Cures [download pdf]
• An example of dangerous (and probably illegal) claims that are routinely made by TCM practitioners [download pdf]

I also completed their questionnaire, despite its deficiencies. In case it is any help to anyone, this is what I said:

### The questionnaire

Q1: What evidence is there of harm to the public currently as a result of the activities of acupuncturists, herbalists and traditional Chinese medicine practitioners? What is its likelihood and severity? Harm No Harm Unsure Comment The major source of harm is the cruel deception involved in making false claims of benefit to desperate patients. This applies to all three. In the case of herbal and TCM there is danger from toxicity because herbal preparations are unstandardised, so those that do contain an active ingredient are given in an unknown dose. This is irresponsible and dangerous (but would not be changed by the proposals for regulation). In addition TCM suffers from recurrent problems of contamination with heavy metals, prescription drugs and so on. Again this would not be the business of the proposed form of regulation. Q2: Would this harm be lessened by statutory regulation? If so, how? Yes No Unsure The proposed form of regulation would be no help at all. The HPC has already said that it is not concerned with whether or not the drug works, and, by implication, does not see itself as preventing false health claims (just as the GCC doesn’t do this). False claims are the responsibility of Trading Standards, who are meant to enforce the Consumer Protection Unfair Trading Regulations (May 2008), though they do not at present enforce them very effectively. Also Advertising Standards. The proposed regulation would not help, and could easily hinder public safety, as shown by the fact that the GCC has itself been referred to the Advertising Standards Authority. The questions of toxicity and contamination are already the responsibility of Trading Standards and the MHRA. Regulation by the HPC would not help at all.
The HPC is not competent to deal with such questions. Q3: What do you envisage would be the benefit to the public, to practitioners and to businesses, associated with introducing statutory regulation? Significant benefit Some benefit No benefit Unsure This question is badly formulated because the answer is different according to whether you are referring to the public, to practitioners or to businesses. The public would be endangered by the form of regulation that is proposed, as is shown very clearly by the documents that I have submitted separately. In the case of practitioners and businesses, there might be a small benefit, if the statutory regulation gave the impression that HM and TCM had government endorsement and must therefore be safe and effective. There is also one way that the regulation could harm practitioners and businesses. If the HPC received a very large number of complaints about false health claims, just as the GCC has done recently, not only would it cost a large amount of money to process the claims, but the attendant bad publicity could harm practitioners. It is quite likely that this would occur since false claims to benefit sick people are rife in the areas of acupuncture, HM and TCM. Q4: What do you envisage would be the regulatory burden and financial costs to the public, to practitioners, and to businesses, associated with introducing statutory regulation? Are these costs justified by the benefits and are they proportionate to the risks? If so, in what way? Justified Not Justified Unsure Certainly not justified. Given that I believe that the proposed form of regulation would endanger patients, no cost at all would be justified. But even if there were a marginal benefit, the cost would be quite unjustified. The number of practitioners involved is very large. It would involve a huge expansion of an existing quango, at a time when the government is trying to reduce the number of quangos. Furthermore, if the HPC were flooded with complaints about false health claims, as the GCC has been, the costs in legal fees could be enormous. Q5: If herbal and TCM practitioners are subject to statutory regulation, should the right to prepare and commission unlicensed herbal medicines be restricted to statutorily regulated practitioners? Yes No Unsure I don’t think it would make much difference. The same (often false) ideas are shared by all HM people and that would continue to be the same with or without SR. Q6: If herbal and TCM practitioners are not statutorily regulated, how (if at all) should unlicensed herbal medicines prepared or commissioned by these practitioners be regulated? They could carry on as now, but the money that would have been spent on SR should instead be used to give the Office of Trading Standards and the MHRA the ability to exert closer scrutiny and to enforce more effectively laws that already exist. Present laws, if enforced, are quite enough to protect the public. Q7: What would be the effect on public, practitioners and businesses if, in order to comply with the requirements of European medicines legislation, practitioners were unable to supply manufactured unlicensed herbal medicines commissioned from a third party? Significant effect Some effect No effect Unsure European laws,especialliy in food area, are getting quite strict about the matters of efficacy. The proposed regulation, which ignores efficacy, could well be incompatible with European law, if not now, then soon. 
This would do no harm to legitimate business though it might affect adversely businesses which make false claims (and there are rather a lot of the latter). Q8: How might the risk of harm to the public be reduced other than by orthodox statutory regulation? For example by voluntary self-regulation underpinned by consumer protection legislation and by greater public awareness, by accreditation of voluntary registration bodies, or by a statutory or voluntary licensing regime? Voluntary self-regulation Accreditation of voluntary bodies Statutory or voluntary licensing Unsure I disagree with the premise, for reasons given in detail in separate documents. I believe that ‘orthodox statutory regulation’, if that means the Pittilo proposals, would increase, not decrease, the risk to the public. Strengthening the powers of Trading Standards, the MHRA and such consumer protection legislation would be far more effective in reducing risk to the public than the HPC could ever be. Greater public awareness of the weakness of the evidence for the efficacy of these treatments would obviously help too, but can’t do the job on its own. Q10: What would you envisage would be the benefits to the public, to practitioners, and to businesses, for the alternatives to statutory regulation outlined at Question 8? It depends on which alternative you are referring to. The major benefit of enforcement of existing laws by Trading Standards and/or the MHRA would be (a) to protect the public from risk, (b) to protect the public from health fraud and (c) almost certainly lower cost to the tax payer. Q11: If you feel that not all three practitioner groups justify statutory regulation, which group(s) does/do not and please give your reasons why/why not? Acupuncture Herbal Medicine TCM Unsure None of them. The differences are marginal. In the case of acupuncture there has been far more good research than for HM or TCM. But the result of that research is to show that in most cases the effects are likely to be no more than those expected of a rather theatrical placebo. Furthermore the extent to which acupuncture has a bigger effect than no-acupuncture in a NON-BLIND comparison, is usually too small and transient to offer any clinical advantage (so it doesn’t really matter whether the effect is placebo or not, it is too small to be useful). In the case of HM, and even more of TCM, there is simply not enough research to give much idea of their usefulness, with a small handful of exceptions. This leads to a conclusion that DH seems to have ignored in the past. It makes absolutely no sense to talk about “properly trained practitioners” without first deciding whether the treatments work or not. There can be no such thing as “proper training” in a discipline that offers no benefit over placebo. It is a major fault of the Pittilo recommendations that they (a) ignore this basic principle and (b) are very over-optimistic about the state of the evidence. Q12: Would it be helpful to the public for these practitioners to be regulated in a way which differentiates them from the regulatory regime for mainstream professions publicly perceived as having an evidence base of clinical effectiveness? If so, why? If not, why not? Yes No Unsure It might indeed be useful if regulation pointed out the very thin evidence base for HM and TCM but it would look rather silly. The public would say how can it be that the DH is granting statutory regulation to things that don’t work? 
Q13: Given the Government’s commitment to reducing the overall burden of unnecessary statutory regulation, can you suggest which areas of healthcare practice present sufficiently low risk so that they could be regulated in a different, less burdensome way or de-regulated, if a decision is made to statutorily regulate acupuncturists, herbalists and traditional Chinese medicine practitioners? Yes No Unsure As stated above, the.only form of regulation that is needed, and the only form that would protect the public, is through consumer protection regulations, most of which already exist (though they are enforced in a very inconsistent way). Most statutory regulation is objectionable, not on libertarian grounds, but because it doesn’t achieve the desired ends (and is expensive). In this case of folk medicine, like HM and TCM, the effect would be exactly the opposite of that desired as shown in separate documents that I have submitted to the consultation. Q14: If there were to be statutory regulation, should the Health Professions Council (HPC) regulate all three professions? If not, which one(s) should the HPC not regulate? Yes No Unsure The HPC should regulate none of them. It has never before regulated any form of alternative medicine and it is ill-equipped to do so. Its statement that it doesn’t matter that there is very little evidence that the treatments work poses a danger to patients (as well as being contrary to its own rules). Q15: If there were to be statutory regulation, should the Health Professions Council or the General Pharmaceutical Council/Pharmaceutical Society of Northern Ireland regulate herbal medicine and traditional Chinese medicine practitioners? HPC GPC/PSNI Unsure Neither. The GPC is unlikely to care about whether the treatments work any more than the RPSGB did, or the GCC does now. The problems would be exactly the same whichever body did it. Q16: If neither, who should and why? As I have said repeatedly, it should be left to Trading Standards, the MHRA and other consumer protection regulation. Q17: a) Should acupuncture be subject to a different form of regulation from that for herbalism and traditional Chinese medicine? If so, what? Yes No Unsure b) Can acupuncture be adequately regulated through local means, for example through Health and Safety legislation, Trading Standards legislation and Local Authority licensing? Yes No Unsure (a) No -all should be treated the same. Acupuncture is part of TCM (b) Yes Q18. a) Should the titles acupuncturist, herbalist and [traditional] Chinese medicine practitioner be protected? b) If your answer is no which ones do you consider should not be legally protected? Yes No Unsure No. It makes no sense to protect titles until such time as it has been shown that the practitioners can make a useful contribution to medicine (above placebo effect). That does not deny that placebos may be useful at times. but if that is all they are doing, the title should be ‘placebo practitioners’. Q19: Should a new model of regulation be tested where it is the functions of acupuncture, herbal medicine and TCM that are protected, rather than the titles of acupuncturist, herbalist or Chinese medicine practitioner? Yes No Unsure No. This makes absolutely no sense when there is so little knowledge about what is meant by the ” functions of acupuncture, herbal medicine and TCM”.Insofar as they don’t work (better than placebo), there IS no function. 
Any attempt to define function when there is so little solid evidence (at least for HM and TCM) is doomed to failure. Q20: If statutory professional self-regulation is progressed, with a model of protection of title, do you agree with the proposals for “grandparenting” set out in the Pittilo report? Yes No Unsure No. I believe the Pittilo report should be ignored entirely. The whole process needs to be thought out again in a more rational way. Q22: Could practitioners demonstrate compliance with regulatory requirements and communicate effectively with regulators, the public and other healthcare professionals if they do not achieve the standard of English language competence normally required for UK registration? What additional costs would occur for both practitioners and regulatory authorities in this case? Yes No Unsure No. It is a serious problem, in TCM especially, that many High Street practitioners speak hardly any English at all. That adds severely to the already considerable risks. There would be no reliable way to convey what was expected of them. It would be absurd for the taxpayer to pay for them to learn English for the purposes of practising TCM (of course there might be the same case as for any other immigrant for teaching English on social grounds). Q23: What would the impact be on the public, practitioners and businesses (financial and regulatory burden) if practitioners unable to achieve an English language IELTS score of 6.5 or above are unable to register in the UK? Significant impact Some impact No impact Unsure The question is not relevant. The aim of regulation is to protect the public from risk (and it should be, but isn’t, an aim to protect them from health fraud). It is not the job of regulation to promote businesses. Q24: Are there any other matters you wish to draw to our attention? I have submitted three documents via [email protected]. The first of these puts the case against the form of regulation proposed by Pittilo, far more fluently than is possible in a questionnaire. Another shows examples of what is actually taught in degrees in acupuncture, HM and TCM. They show very graphically the extent to which the Pittilo proposals would endanger the public, if they were to be implemented. Jump to follow-up I’m perfectly happy to think of alternative medicine as being a voluntary, self-imposed tax on the gullible (to paraphrase Goldacre again). But only as long as its practitioners do no harm and only as long as they obey the law of the land. Only too often, though, they do neither. When I talk about law, I don’t mean lawsuits for defamation. Defamation suits are what homeopaths and chiropractors like to use to silence critics. Heaven knows, I’ve become accustomed to being defamed by people who are, in my view, fraudsters, but lawsuits are not the way to deal with it. I’m talking about the Trading Standards laws. Everyone has to obey them, and in May 2008 the law changed in a way that puts the whole health fraud industry in jeopardy. The gist of the matter is that it is now illegal to claim that a product will benefit your health if you can’t produce evidence to justify the claim. I’m not a lawyer, but with the help of two lawyers and a trading standards officer I’ve attempted a summary. The machinery for enforcing the law does not yet work well, but when it does, there should be some very interesting cases. The obvious targets are homeopaths who claim to cure malaria and AIDS, and traditional Chinese Medicine people who claim to cure cancer.
But there are some less obvious targets for prosecution too. Here is a selection of possibilities to savour.. • Universities such as Westminster, Central Lancashire and the rest, which promote the spreading of false health claims • Hospitals, like the Royal London Homeopathic Hospital, that treat patients with mistletoe and marigold paste. Can they produce any real evidence that they work? • Edexcel, which sets examinations in alternative medicine (and charges for them) • Ofsted and the QCA which validate these exams • Skills for Health and a whole maze of other unelected and unaccountable quangos which offer “national occupational standards” in everything from distant healing to hot stone therapy, thereby giving official sanction to all manner of treatments for which no plausible evidence can be offered. • The Prince of Wales Foundation for Integrated Health, which notoriously offers health advice for which it cannot produce good evidence • Perhaps even the Department of Health itself, which notoriously referred to “psychic surgery” as a profession, and which has consistently refused to refer dubious therapies to NICE for assessment. The law, insofar as I’ve understood it, is probably such that only the first three or four of these have sufficient commercial elements for there to be any chance of a successful prosecution. That is something that will eventually have to be argued in court. But lecanardnoir points out in his comment below that The Prince of Wales is intending to sell herbal concoctions, so perhaps he could end up in court too. ### The laws We are talking about The Consumer Protection from Unfair Trading Regulations 2008. The regulations came into force on 26 May 2008. The full regulations can be seen here, or download pdf file. They can be seen also on the UK Statute Law Database. The Office of Fair Trading, and Department for Business, Enterprise & Regulatory Reform (BERR) published Guidance on the Consumer Protection from Unfair Trading Regulations 2008 (pdf file), Statement of consumer protection enforcement principles (pdf file), and The Consumer Protection from Unfair Trading Regulations: a basic guide for business (pdf file). Has The UK Quietly Outlawed “Alternative” Medicine? On 26 September 2008, Mondaq Business Briefing published this article by a Glasgow lawyer, Douglas McLachlan. (Oddly enough, this article was reproduced on the National Center for Homeopathy web site.) “Proponents of the myriad of forms of alternative medicine argue that it is in some way “outside science” or that “science doesn’t understand why it works”. Critical thinking scientists disagree. The best available scientific data shows that alternative medicine simply doesn’t work, they say: studies repeatedly show that the effect of some of these alternative medical therapies is indistinguishable from the well documented, but very strange “placebo effect” ” “Enter The Consumer Protection from Unfair Trading Regulations 2008(the “Regulations”). 
The Regulations came into force on 26 May 2008 to surprisingly little fanfare, despite the fact they represent the most extensive modernisation and simplification of the consumer protection framework for 20 years.” The Regulations prohibit unfair commercial practices between traders and consumers through five prohibitions:- • General Prohibition on Unfair Commercial Practices (Regulation 3) • Prohibition on Misleading Actions (Regulation 5) • Prohibition on Misleading Omissions (Regulation 6) • Prohibition on Aggressive Commercial Practices (Regulation 7) • Prohibition on 31 Specific Commercial Practices that are in all Circumstances Unfair (Schedule 1). One of the 31 commercial practices which are in all circumstances considered unfair is “falsely claiming that a product is able to cure illnesses, dysfunction or malformations”. The definition of “product” in the Regulations includes services, so it does appear that all forms of medical products and treatments will be covered. Just look at that! One of the 31 commercial practices which are in all circumstances considered unfair is “falsely claiming that a product is able to cure illnesses, dysfunction or malformations” Section 5 is equally powerful, and also does not contain the contentious word “cure” (see note below). Misleading actions 5.—(1) A commercial practice is a misleading action if it satisfies the conditions in either paragraph (2) or paragraph (3). (2) A commercial practice satisfies the conditions of this paragraph— (a) if it contains false information and is therefore untruthful in relation to any of the matters in paragraph (4) or if it or its overall presentation in any way deceives or is likely to deceive the average consumer in relation to any of the matters in that paragraph, even if the information is factually correct; and (b) it causes or is likely to cause the average consumer to take a transactional decision he would not have taken otherwise. These laws are very powerful in principle. But there are two complications in practice. One complication concerns the extent to which the onus has been moved on to the seller to prove the claims are true, rather than the accuser having to prove they are false. That is a lot more favourable to the accuser than before, but it’s complicated. The other complication concerns enforcement of the new laws, and at the moment that is bad. ### Who has to prove what? That is still not entirely clear. McLachlan says “If we accept that mainstream evidence based medicine is in some way accepted by mainstream science, and alternative medicine bears the “alternative” qualifier simply because it is not supported by mainstream science, then where does that leave a trader who seeks to refute any allegation that his claim is false? Of course it is always open to the trader to show that his alternative therapy actually works, but the weight of scientific evidence is likely to be against him.” On the other hand, I’m advised by a Trading Standards Officer that “He doesn’t have to refute anything! The prosecution have to prove the claims are false”. This has been confirmed by another Trading Standards Officer who said “It is not clear (though it seems to be) what difference is implied between “cure” and “treat”, or what evidence is required to demonstrate that such a cure is false “beyond reasonable doubt” in court.
The regulations do not provide that the maker of claims must show that the claims are true, or set a standard indicating how such a proof may be shown.” The main defence against prosecution seems to be the “Due diligence defence”, in paragraph 17. Due diligence defence 17. —(1) In any proceedings against a person for an offence under regulation 9, 10, 11 or 12 it is a defence for that person to prove— (a) that the commission of the offence was due to— (i) a mistake; (ii) reliance on information supplied to him by another person; (iii) the act or default of another person; (iv) an accident; or (v) another cause beyond his control; and (b) that he took all reasonable precautions and exercised all due diligence to avoid the commission of such an offence by himself or any person under his control. If “taking all reasonable precautions” includes being aware of the lack of any good evidence that what you are selling is effective, then this defence should not be much use for most quacks. Douglas McLachlan has clarified, below, this difficult question ### False claims for health benefits of foods A separate bit of legislation, European regulation on nutrition and health claims made on food, ref 1924/2006, in Article 6, seems clearer in specifying that the seller has to prove any claims they make. Article 6 Scientific substantiation for claims 1. Nutrition and health claims shall be based on and substantiated by generally accepted scientific evidence. 2. A food business operator making a nutrition or health claim shall justify the use of the claim. 3. The competent authorities of the Member States may request a food business operator or a person placing a product on the market to produce all relevant elements and data establishing compliance with this Regulation. That clearly places the onus on the seller to provide evidence for claims that are made, rather than the complainant having to ‘prove’ that the claims are false. On the problem of “health foods” the two bits of legislation seem to overlap. Both have been discussed in “Trading regulations and health foods“, an editorial in the BMJ by M. E. J. Lean (Professor of Human Nutrition in Glasgow). “It is already illegal under food labelling regulations (1996) to claim that food products can treat or prevent disease. However, huge numbers of such claims are still made, particularly for obesity ” “The new regulations provide good legislation to protect vulnerable consumers from misleading “health food” claims. They now need to be enforced proactively to help direct doctors and consumers towards safe, cost effective, and evidence based management of diseases.” In fact the European Food Standards Agency (EFSA) seems to be doing a rather good job at imposing the rules. This, predictably, provoked howls of anguish from the food industry There is a synopsis here. “Of eight assessed claims, EFSA’s Panel on Dietetic Products, Nutrition and Allergies (NDA) rejected seven for failing to demonstrate causality between consumption of specific nutrients or foods and intended health benefits. EFSA has subsequently issued opinions on about 30 claims with seven drawing positive opinions.” “. . . EFSA in disgust threw out 120 dossiers supposedly in support of nutrients seeking addition to the FSD’s positive list. 
If EFSA was bewildered by the lack of data in the dossiers, it needn’t have been, as industry freely admitted it had in many cases submitted such hollow documents to temporarily keep nutrients on-market.” Or, on another industry site, “EFSA’s harsh health claim regime “By setting an unworkably high standard for claims substantiation, EFSA is threatening R&D not to mention health claims that have long been officially approved in many jurisdictions.” Here, of course, “unworkably high standard” just means real genuine evidence. How dare they ask for that! ### Enforcement of the law 19. —(1) It shall be the duty of every enforcement authority to enforce these Regulations. (2) Where the enforcement authority is a local weights and measures authority the duty referred to in paragraph (1) shall apply to the enforcement of these Regulations within the authority’s area. Nevertheless, enforcement is undoubtedly a weak point at the moment. The UK is obliged to enforce these laws, but at the moment it is not doing so effectively. A letter in the BMJ from Rose & Garrow describes two complaints under the legislation in which it appears that a Trading Standards office failed to enforce the law. They comment “. . . member states are obliged not only to enact it as national legislation but to enforce it. The evidence that the government has provided adequate resources for enforcement, in the form of staff and their proper training, is not convincing. The media, and especially the internet, are replete with false claims about health care, and sick people need protection. All EU citizens have the right to complain to the EU Commission if their government fails to provide that protection.” This is not a good start. A lawyer has pointed out to me “that it can sometimes be very difficult to get Trading Standards or the OFT to take an interest in something that they don’t fully understand. I think that if it doesn’t immediately leap out at them as being false (e.g. “these pills cure all forms of cancer”) then it’s going to be extremely difficult. To be fair, neither Trading Standards nor the OFT were ever intended to be medical regulators and they have limited resources available to them. The new Regulations are a useful new weapon in the fight against quackery, but they are no substitute for proper regulation.” Trading Standards originated in Weights and Measures. It was their job to check that your pint of beer was really a pint. Now they are being expected to judge medical controversies. Either they will need more people and more training, or responsibility for enforcement of the law should be transferred to some more appropriate agency (though one hesitates to suggest the MHRA after their recent pathetic performance in this area).
The fact remains that the UK is obliged to enforce the law and presumably it will do so eventually. When it does, alternative medicine will have to change very radically. If it were prevented from making false claims, there would be very little of it left apart from tea and sympathy. ### Follow-up New Zealand must have similar laws. Just as I was about to post this I found that in New Zealand a “couple who sold homeopathic remedies claiming to cure bird flu, herpes and Sars (severe acute respiratory syndrome) have been convicted of breaching the Fair Trading Act.” They were ordered to pay fines and court costs totalling $23,400. A clarification from Douglas McLachlan On the difficult question of who must prove what, Douglas McLachlan, who wrote Has The UK Quietly Outlawed “Alternative” Medicine?, has kindly sent the following clarification. “I would agree that it is still for the prosecution to prove that the trader committed the offence beyond a reasonable doubt, and that burden of proof is always on the prosecution at the outset, but I think if a trader makes a claim regarding his product and the best scientific evidence available indicates that that claim is false, then it will be on the trader to substantiate the claim in order to defend himself. How will the trader do so? Perhaps the trader might call witness after witness in court to provide anecdotal evidence of their experiences, or “experts” that support their claim – in which case it will be for the prosecution to explain the scientific method to the Judge and to convince the Judge that its Study evidence is to be preferred. Unfortunately, once human personalities get involved things could get clouded – I could imagine a small time seller of snake oil having serious difficulty, but a well funded homeopathy company engaging smart lawyers to quote flawed studies and lead anecdotal evidence to muddy the waters just enough for a Judge to give the trader the benefit of the doubt. That seems to be what happens in the wider public debate, so it’s easy to envisage it happening in a courtroom.” The “average consumer”. (3) A commercial practice is unfair if— (a) it contravenes the requirements of professional diligence; and (b) it materially distorts or is likely to materially distort the economic behaviour of the average consumer with regard to the product. It seems, therefore, that what matters is whether the “average consumer” would infer from what is said that a claim was being made to cure a disease. The legal view cited by Mojo (comment #2, below) is that expressions such as “can be used to treat” or “can help with” would be considered by the average consumer as implying successful treatment or cure. The drugstore detox delusion. A nice analysis of “detox” at Science-based Pharmacy
# Saul Albert
#### Archive for ‘June, 2012’
UPDATE: I’ve now found there is a better way to do this, which I’ve documented here. A large part of my research is going to involve conversation analysis, which has a rather beautiful transcription style developed by the late Gail Jefferson to indicate pauses, overlaps, and prosodic features of speech in text. There are a few LaTeX packages out there for transcription, notably Gareth Walker’s ‘convtran’ latex styles. However, they’re not specifically developed for CA-style transcription, and don’t feel flexible enough for the idiosyncrasies of many CA practitioners. So, without knowing a great deal about LaTeX (or CA for that matter), I spent some time working through a transcript from Pomerantz, A. (1984). Agreeing and disagreeing with assessments: Some features of preferred/dispreferred turn shapes. In J. M. Atkinson & J. Heritage (Eds.), Structures of social action: Studies in Conversation Analysis (pp. 57-102). Cambridge: Cambridge University Press. Here’s an image version from page 78: Here’s how I figured that in LaTeX:
\begin{table*}[!ht]
\hfill{}
\texttt{
\begin{tabular}{@{}p{2mm}p{2mm}p{150mm}@{}}
& D: & 0:h (I k-)= \\
& A: & =Dz that make any sense to you? \\
& C: & Mn mh. I don' even know who she is. \\
& A: & She's that's, the Sister Kerrida, \hspace{.3mm} who, \\
& D: & \hspace{76mm}\raisebox{0pt}[0pt][0pt]{ \raisebox{2.5mm}{[}}'hhh \\
& D: & Oh \underline{that's} the one you to:ld me you bou:ght.= \\
& C: & \hspace{2mm}\raisebox{0pt}[0pt][0pt]{ \raisebox{2.5mm}{[}} Oh-- \hspace{42mm}\raisebox{0pt}[0pt][0pt]{ \raisebox{2mm}{\lceil}} \\
& A: & \hspace{60.2mm}\raisebox{0pt}[0pt][0pt]{ \raisebox{3.1mm}{\lfloor}}\underline{Ye:h} \\
\end{tabular}
\hfill{}
}
\caption{ Evaluation of a new artwork from (JS:I. -1) \cite[p.78]{Pomerantz1984} .}
\label{ohprefix}
\end{table*}
Here’s the result, which I think is perfectly adequate for my needs, and now I know how to do it, it shouldn’t take too long to replicate for other transcriptions: I had to make a few changes to the document environment to get this to work (see the preamble sketch at the end of this post), including: • \usepackage[T1]{fontenc} to make sure that the double dashes — were interpreted as a long dash while in the texttt environment. • I also had to do \renewcommand{\tablename}{Datum} to rename the “Table” to “Datum” – because I’m only using the table for formatting (shades of html positioning 1990’s style). • \usepackage{caption} to suppress caption printing where I wanted the datum printed without a legend (using \caption* instead of \caption). The above example is designed to break into a full page centre-positioned spread from a two-column article layout, so those directives are probably not relevant to using it in the flow of text or in two-columns, but I found the (texttt) fixed width font (which, because of the evenly spaced letters, seems to make it easier to read the transcription as a timed movement from left to right) was too large to fit into one column without making it unreadably small. I hope this is useful to someone. If I find a better way of doing this (with matrices and avm as I’ve been advised), I’ll update this post. Any pointers are also much appreciated as I think I’m going to be doing a lot more of this in the next few years. There are other horrors in here, and it was a really annoying way to spend a day, but this method seems to get me as far as I need to go right now. Many thanks to Chris Howes for holding my hand through this.
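For convenience, here is a minimal sketch of those preamble adjustments gathered in one place (it simply collects the three commands mentioned in the bullet points above):
\usepackage[T1]{fontenc}   % so that -- inside the texttt environment comes out as a dash
\usepackage{caption}       % provides \caption* for printing a datum without a legend
\renewcommand{\tablename}{Datum} % the table float is only being used for positioning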
I’m sitting in my office overlooking Mile End listening to a 20 year-old recording of two people sitting in their kitchen, chatting over the sound of BBC Radio 3 about their friends, their weekend, what’s on TV, and about how prim and proper Swiss people are. It feels like being magically transported back to 1993, when the British National Corpus recruited 124 men and women of balanced ages and demographically assigned social classes and asked them to carry around tape recorders to capture the conversations they had with friends, family, neighbours and co-workers every day. Over 700 hours of conversation were recorded and painstakingly transcribed and annotated to enable researchers to analyse an immense corpus of naturalistic language data (still representing only 10% of the total data in the BNC, mostly comprised of written books and journals and transcribed broadcasts). All this data has been used as a primary resource by computational linguists, natural language researchers, sociologists and all kinds of researchers measuring their models of language learning and production against the empirical evidence. However, for the most part, only the text transcriptions of this data rather than the audio itself have been easily accessible to researchers until very recently. In the last year, the Oxford Phonetics Lab has produced a British National Corpus Spoken Audio Sampler, after digitising, cataloguing and analysing the mountain of audio casettes that were hidden away in the British Library Sound Archive. They are soon going to make the entire “Audio BNC” available online to anyone who wants to listen to the original recordings on which so much research has been based, and the director Professor John Coleman kindly made selected recordings available to me as a beta tester. Using Matthew Purver’s SCoRE BNC search tool, I’ve been able to do a full-text search of the Audio BNC, and find naturalistic examples of conversations on specific topics (I’m looking for people talking about art, design, fashion, architecture, or otherwise engaging in aesthetic discussions), and then just dip into their lives at those specific moments. It is fascinating. The sense of omnipotence is almost intoxicating, especially because sitting here, listening and reading along with the original 1990’s transcriptions, I get a strong sense how much has changed in terms of the knowledge production tools available to researchers since then. The text transcriptions I’m reading are full of instances in which the transcriber says the speech is <unclear>, where references and names of things being referred to are omitted. Especially as I’m looking for people talking about art, I’ve found that almost all of the names of artists, musicians or other cultural references made by people in conversation are labelled <unclear> – understandably as how can the transcriber be expected to have a familiarity with relatively obscure painters from Swiss art history? With just a few contextual references, Google and Wikipedia make it trivial to identify about 90% of these <unclear> instances. Similarly, pressing my android phone to my headphone speakers and running Shazam, I’m able to identify what music they’re listening to on the radio in their kitchen while they chat. 
One of the most powerful things about the Audio BNC being released today is the opportunity to apply contemporary search and analysis tools to finding instances of naturalistic conversation from a huge range of contexts and situations involving different professional, social demographic and cultural groups, and then drop in and listen to what’s going on. Having pored over the transcriptions of these people’s speech, it’s a fantastic revelation to hear their accents, intonations, and get a sense of the detail of how they do the work of ‘being ordinary’ in the privacy of their homes and intimate relationships, then in public, then at the office. It’s the ultimate fly-on-the-wall experience, and it feels like sitting in front of a new telescope, suddenly able to inspect in great detail specific areas of a previously vague and undifferentiated view of a distant galaxy.
# Upgrading from Windows Home Server to Windows Server 2012 Essentials A couple of weeks back I started the upgrade from Windows Home Server to Windows Server 2012 Essentials. I never bothered moving to Windows Home Server 2011, the lack of Drive Extender was the deal breaker for me. But with WHSv1 falling out of support, it had to be done, that plus all the drives were almost full, which was having a noticeable impact on performance sometimes, so time to do something about it. Plus getting something a bit more modern was much desired! For background I put together the current box from an Asus T3-P5G31 around 2008 or 2009, does the job nicely, small and compact. Only downside is there's only space for two 3.5 inch drives, I have a third in the 5 1/4 inch bay. I was running it with a 1TB, 1.5TB and 2TB drives. First up Windows Server 2012 Essentials isn't cheap, it is insanely expensive to use as a home server, at around ten times the price of WHSv1. But as someone who has run a server at home since Windows 2000, hosting commercially successful websites from it, along with Small Business Server 2003, just so I could get push e-mail to my phone, it was more a returning to the norm. But undoubtedly the price is a deal breaker, the product is clearly aimed at the small business. I ordered two Western Digital Red 3TB drives to go along with it. There was mixed opinion on the internet as to if the board with its ICH7 controller would support 3TB drives. So I ordered a Transcend PDC3 SATA board, to add 2 more SATA ports, and definite support for 3TB drives, if the on-board controller didn't. Also added 2 USB 3 ports, juicy bonus, or so I thought, more on that later. Before doing the install, I backed up all the data onto one of the 3TB drives by plugging it into another machine and copying everything over the network. Data safe, ready to go to work. I ended up installing the system on an old 500GB drive I had lying around, then using the two 3TB drives to mirror data on the server - I ran into a setback when Storage Spaces had to wipe the data from a drive to use it in a Storage Space - d'oh, so I had to copy the data to another drive, and then back again after creating the Storage Space - adding about 20 hours to the process. I then used the 2TB and 1.5TB drives together in a simple volume to hold File History backups, and also image-based backups. So yes, I have two drives sticking out of the case, but I needed the storage. Alternatively you could run with USB drives in nice tidy enclosures, but I don't need tidy. I had already tested it out in virtual machines so had few unexpected surprises. It installed without a hitch. By default the connector software joins the client machines to the domain, I don't need this and run my home network as a workgroup, there is a workaround. Run the following on an elevated command prompt before installing the connector software: reg add "HKLM\SOFTWARE\Microsoft\Windows Server\ClientDeployment" /v SkipDomainJoin /t REG_DWORD /d 1 Boom, joins the computer without messing with your local user profiles. It installs the Launchpad and Dashboard software, and away you go basically. I however ran into some issues. The largest of which was the server becoming unresponsive after about 12 hours, upon closer inspection it seemed the Transcend SATA card's USB driver had a memory leak! Luckily I'm not using the USB ports, so I disabled the driver. This is something you should be aware of if you're using this card in a similar setup. 
Another minor issue is that the connector installation adds a service that periodically changes your DNS servers to your server; if you need to prevent this from happening you can disable the service (Windows Server LAN configuration). Lastly, clients seemed to forget their network credentials every session, which would prevent File History from running unless you accessed the server and entered them again - it seems the Launchpad software changes the credentials from permanent to session only. Disabling the Launchpad software resolves this - there seem to be no major side effects - backups still run normally, but you lose server alerts and the ability to trigger backups manually from the client (you can still do it via the Dashboard) - no big deal in my opinion. Presumably this is a side effect of running as a workgroup rather than a domain. All in all no major problems, just be aware it is way more expensive, and way more complicated to set up, well relative to Windows Home Server, certainly not compared to older versions of Windows Server. But alas it seems the Home Server market has been abandoned by Microsoft, however Windows 8 Pro does support Storage Spaces, so using Windows 8 on a home server isn't unreasonable, in fact it's very possible. And yes my Terraria and Freelancer servers continue to operate normally from it!
# Update on the new Windows Home Server
Following on from my previous post on the subject of my new server, it's been running fine for a week. Here's the thing sat next to the old server. Much smaller, and much more likely to survive the journey to Guildford - I've actually decided to use screws to hold this one together, not cellotape and blu-tac. Although I'm sure I'll be swearing at it when I need to swap out some disks. The only real downside to using such a smaller case is the number of disks it can support. There are only two 3.5 inch bays with this particular case, and one DVD-ROM drive bay - which I plan to use to put an extra disk in, as having a DVD-ROM drive would be a bit pointless. But if push came to shove Windows Home Server is quite happy using USB drives too. Here's the exact build for those interested:
Asus T3-P5G31 barebones
Intel Pentium Dual-Core E5200
Arctic Cooling Freezer 7 Low Profile Fan
OcUK 4GB 677 DDR2
Western Digital Caviar Green 1TB
Other drives were harvested from the old server, but I'll probably end up adding a 1.5TB drive at some point. The new Western Digital drives are pretty quiet, but they're still fairly loud while seeking. Not as quiet as the Hitachi P7K500 I use in my desktop, which is pretty much silent while doing anything, including seeking. Temperatures aren't bad considering it only has one fan other than the one in the PSU, which is on the CPU - no chipset fans (which always get worn out after a few years). The two cores float between 36° and 44°, and the two drives in there at the moment float between 39° and 43°. The CPU fan happily runs at around 1400 RPM; I've only twice heard it spin up to about 2000 RPM and then only for a couple of minutes, usually when the server is munching through some backups.
# New server under construction
Today I'm putting together a new server. It's based on an Asus T3 barebones system, I've got a 2.5Ghz dual core Pentium for it, and 4GB of RAM. As well as some of the new low power Western Digital disks.
This will be replacing my 9 year old system which has faithfully been running almost nonstop based on a 1.4Ghz Athlon Thunderbird, with 1.5GB of RAM and a collection of aging hard disks; it has been running Windows Home Server and a Virtual Machine running Small Business Server flawlessly, so hopefully the new system will be just as reliable. All together it came to about £400, including Windows Home Server. On the plus side it should be using 25-50% of the energy of my existing server. Meaning it'll pay for itself in just a couple of years. Considering how cheap hardware is nowadays this really is a fantastic time to be replacing older energy-hungry systems with new, smaller, faster and more efficient systems, something businesses should really be looking at to reduce their energy bills. If everything goes to plan, my old server will be retired sometime tomorrow.
# Windows Home Server review
I picked up a copy of Windows Home Server a couple of weeks back, saw it on Overclockers and ordered it with a new 500GB hard drive. I was originally going to have this finished a day or two after installing, but things got in the way. Microsoft announced Windows Home Server back at the CES in January, it was very well received and I managed to get onto the beta program a couple of months later, so I come at this review having used it for 6 months or so already. Windows Home Server is aimed towards people with 2 or more PCs (a maximum of 10 clients are officially supported). Its three main features I would say are:
1) Network storage, it exposes standard network shares for file storage, you can create your own, change which users have permissions etc. If you have multiple hard drives you can set it to duplicate all of your files to protect against a hard disk failure. From an end user perspective all the hard disks will appear as one headless drive, and Windows Home Server will manage things in the background.
2) Backup, with the Connector software you can back up your computers to Home Server (which actually uses surprisingly little space as it's cluster-based backup, not file-based). By default it will back up the entire machine, so in the case of drive failure you can simply restore the whole image to a new hard drive.
3) Online access to your files, being a stripped down version of SBS 2003, it's got IIS and it does get used to provide a web front end to access all the files on the network shares. You can also use remote desktop to connect to the Home Server itself, or any PC on the network (Home Server will forward the packets to get around NATs). It supports UPnP, so if your router supports it too it can set all this up automatically. You even get a subdomain name to help locate your machine so you don't need to remember your IP address.
It also supports a number of other things, media sharing like WMP11 or Windows Media Connect, you can stream media to another device like an Xbox 360. Also it allows 3rd parties to develop add-ons which can provide more functionality. There are already plenty of them released; one for example uses the web front end to make a public or private photo gallery. Another is a bit torrent client. To read more about what Home Server does, check out Microsoft's website. On with my experiences... It was installed on my server machine in the cupboard, which was formerly running the beta version of Windows Home Server, and prior to that Windows XP and Windows 2000.
The specs are as follows, 1.4Ghz Athlon Thunderbird, clocked at 1.0Ghz (at 1.4Ghz it crashes due to the 180 watt PSU), with 1GB of RAM, an 11 year old 2MB video card and a bunch of hard drives. This is also the machine which runs Windows SBS 2003 in a Virtual Machine, which I use to handle my e-mail, which I wrote about in detail here. So although the machine does have 1GB of RAM, half of that is set aside for SBS 2003. There were no problems encountered using Virtual Server on the final version of Windows Home Server. So although this product looks simple and is geared towards anybody using it, you can still log on to the desktop and do some really powerful things with it. The OEM package that I had came with three discs, 1 DVD being the actual install disc, and 2 CDs one containing the Connector software for the clients, and 1 containing a bootable disc that you can use to recover a machine from a backup. Installing it was simple enough, it uses the same installer as Windows XP and Server 2003, although it does have some new swishy Vista style graphics. It took may be a little over an hour on my machine in total. Once installed, you need to give it a password, this is used to login to the client-side control panel provided by the Connector application, dubbed the Windows Home Server Console. Using this, you can setup individual machines, create new user accounts - typically these should match usernames and passwords on the client machines. Create shared folders, and see the storage pool to add or remove hard drives to the machine. Under settings we find more advanced options. We can alter almost everything from here, what time backups start and end, required password strengths, and setting up remote access. Speaking of which this is the webpage you're greeted with when attempting to login. Once logged in, you can access any of your files stored on the server. You can upload and download files over the web front end, open the Console over remote desktop, or connect to the server's desktop, or any other machines on your network. It also supports instance search so you can quickly find anything you're looking for, this also works locally on the network using Vista's search functionality. Pros: It's simple and does what it says without any hassle. Yet if you're more of a power user and want to install an FTP client, or DHCP server - although it isn't an officially supported scenario you can go ahead and do that. Plenty of good addons released so far, no doubt with plenty more under development. System requirements are low, and work on 5 year old hardware easily. Cons: No Connector support for 64-bit Windows yet, so you lose the fancy backup features and the ability to access the Console. But the network shares still work fine, and if you really need to, you can use remote desktop to connect to the server and access the Console from there. There are some ways around this, but I hear this is currently under development. Considering all of that, I give Windows Home Server 5 out of 5. Highly recommended to anybody who wants a backup solution for multiple PCs, or who wants an uber-network attached storage device. You can buy it in both OEM form for hardware you've already got or you can buy it on machines like the HP Media SmartServer. # Windows Home Server Connector on x64 It's nice to see Microsoft leading the field in support for 64-bit operating systems. 
Windows Home Server Connector, which is the little application installed on the clients which handles backups and opens the Home Server Console, doesn't install on x64. Officially the team says they don't have the resources to support x64 and so there will be no support at this time. What I want to know is why Microsoft isn't putting the money into the team to support x64. I don't know about you, but I want the transition to 64-bit completed as fast as possible, and that means all vendors pulling their weight writing compatible software. We're running out of RAM fast, there are games around now which chew up over 2GB quite happily, we need to migrate soon. Having Microsoft, who was among the first (with AMD) in the x86 space to start pushing 64-bit, dragging their feet with Home Server is somewhat annoying. You can force the install of the Connector software on x64 by using the following on an elevated command prompt:
msiexec /i \path\to\whsconnector.msi WHSMSI="RUNSETUP"
But backups simply won't function properly, and it is not officially supported. What I would like to see a few months down the road after release is x64 support. I'd also like to see a stripped down Exchange plug-in for Home Server, with a web front end hooked in to the Home Server web page too.
## Not signed in Want to take part in these discussions? Sign in if you have an account, or apply for one below ## Site Tag Cloud Vanilla 1.1.10 is a product of Lussumo. More Information: Documentation, Community Support. • CommentRowNumber1. • CommentAuthorAndrew Stacey • CommentTimeOct 14th 2009 It occured to me that it would be useful to have a bookmarklet so that when one is on an n-lab page, clicking on it takes you here and starts a new discussion in the 'latest changes' category with the name of the n-lab page automatically filled in as the discussion title and the first line of the input (surrounded by square brackets). I have no idea how to write these things, but they don't look hard. I'm happy to have a go, but if someone knows a little about javascript and could hack it together in seconds then that would be great. In non-javascript terms, the code would be: 1. Look at URL of current page, extract last component, call this the "title". 2. Go to http://www.math.ntnu.no/~stacey/Vanilla/nForum/post.php 3. Select 'latest changes' category 4. Insert "title" as the discussion topic 5. Insert [[title]] as the first line of the content For a bonus, I'd imagine this being activated when looking at the page just after editing it. So the person's name will be at the foot of the page. Grab that and put that as the last line of the content of the post on the n-forum. • CommentRowNumber2. • CommentAuthorUrs • CommentTimeOct 14th 2009 Very good idea. I had been thinking of something like that but didn't dare to "request" it :-) • CommentRowNumber3. • CommentAuthorMike Shulman • CommentTimeOct 14th 2009 That would be very cool, but I don't yet know how to write such things either, and I don't have time to learn right now. But good luck if you attempt it! • CommentRowNumber4. • CommentAuthorAndrew Stacey • CommentTimeOct 14th 2009 Bother, I was hoping you'd know. Anyone know where Mike Stay hangs out? He wrote the greasemonkey script for the n-cafe, didn't he? That was javascript, I think. • CommentRowNumber5. • CommentAuthorUrs • CommentTimeOct 14th 2009 This comment is invalid XHTML+MathML+SVG; displaying source. <div> <blockquote> Anyone know where Mike Stay hangs out? </blockquote> <p>For his contact information see <a href="http://math.ucr.edu/~mike/">here</a>.</p> <p>I suppose you know that he's working at/for Google?</p> </div>
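A rough sketch of what such a bookmarklet might look like, following the five steps in comment #1. The category id and the query-parameter names are assumptions for illustration only; Vanilla's post.php would still need to be taught (or patched) to read them and pre-fill the form.
javascript:(function () {
  // Step 1: take the last component of the current n-Lab page URL as the title.
  var title = decodeURIComponent(window.location.pathname.split('/').pop().replace(/\+/g, ' '));
  // Steps 2-5: open the nForum post page; the parameter names below are hypothetical.
  var url = 'http://www.math.ntnu.no/~stacey/Vanilla/nForum/post.php'
          + '?CategoryID=5'                                     // assumed id of the 'latest changes' category
          + '&DiscussionName=' + encodeURIComponent(title)      // discussion topic
          + '&Body=' + encodeURIComponent('[[' + title + ']]'); // first line of the post
  window.open(url);
})();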
Algebra: A Combined Approach (4th Edition)
$y$ will be four times greater.
$y=kx^2$. When $x$ is doubled, $k(2x)^2=4kx^2$. Since $kx^2=y$, $4kx^2=4y$, so $y$ will be four times greater.
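More generally, the same computation shows what happens for any scale factor $c$: replacing $x$ by $cx$ in $y=kx^2$ gives $k(cx)^2=c^2kx^2=c^2y$, so multiplying $x$ by $c$ multiplies $y$ by $c^2$; doubling ($c=2$) multiplies $y$ by $4$.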
Restore Microsoft SQL Server database Introduction Although the SQL plug-in provides a basic ability to backup and restore SQL databases, there are important considerations when planning an effective backup/restore strategy. This document will focus on restoring SQL databases, a process which is typically more involved than backing them up. Where applicable, backup options and best practices will be called out when they have important ramifications on the restore process. Relocation There are two ways in which a SQL database can be restored: • To the SQL server from which it was backed up (in-place) This is the default behavior when you restore databases. If you do not change the restore to client (and you have not uninstalled and installed the SQL Server again on that client), you will be doing an in-place restore back to the original SQL Server. • To a SQL server other than the one from which it was backed up (relocated) If you change the default restore to client (or you have uninstalled and installed the SQL Server again on that client), you will be doing a relocated restore to an alternate SQL Server. Regardless of the method used, instance names must remain the same. The SQL plug-in requires that an instance on the target SQL Server be named exactly the same as the instance from which the backup was taken. If this is not the case, an error will occur during the restore attempt. There is currently no way to specify an alternate instance name at restore time. This is most likely to be an issue when doing a relocated restore since the alternate SQL Server may not have an instance of the same name. In this case, make sure you create (if one does not already exist) an appropriately named instance before attempting the restore. To ensure success in the widest range of conditions, the SQL plug-in applies relocation logic when restoring SQL databases. This means: • The SQL Server is queried to determine where databases should be stored on that particular server. • A unique folder will be created, at the location identified above, for each database restored. This folder will be named for the database whose files it will contain. This ensures that multiple databases with the same-named database files can be restored successfully. This relocation logic behavior can be disabled via the registry key on the client computer to which you will be restoring. If you disable relocation logic, the restore will only succeed if the exact original path to the database files is available on the destination SQL Server. This will usually only be true if you restore back to the original SQL Server. Therefore, relocated restores will likely fail if this logic is disabled. 
To disable relocation logic for all databases on all servers, set the following registry key/value pair:
HKEY_LOCAL_MACHINE\SOFTWARE\Revinetix\RVX-Backup\Plugins\MsSQL
DisableRelocate=true
To disable relocation logic for all databases on a particular server, set the following registry key/value pair ([ServerID] is a 0-index-based value; the Name key can be used to identify the server):
HKEY_LOCAL_MACHINE\SOFTWARE\Revinetix\RVX-Backup\Plugins\MsSQL\Servers\[ServerID]
DisableRelocate=true
To disable relocation logic for a particular database on a particular server, set the following registry key/value pair ([ServerID] and [DatabaseID] are 0-index-based values; the Name key can be used to identify the server/database):
HKEY_LOCAL_MACHINE\SOFTWARE\Revinetix\RVX-Backup\Plugins\MsSQL\Servers\[ServerID]\Databases\[DatabaseID]
DisableRelocate=true
To remove disabling in all cases, set the key value to false, or delete the DisableRelocate key entirely.
Restore order
If you are restoring one or two specific user (non-system) databases, you can simply restore them without concern for the order in which they are restored. But if you also need to restore system databases, need to restore all databases, or are attempting to recreate a server from a backup, the order (and method) you use can be very important. The SQL plug-in does not enforce any particular order, as it is not built to be a SQL Server restoration tool. It will back up/restore databases, but you will need to be aware of certain SQL Server requirements when it comes to data protection via backup/restore of SQL databases. Consider the following when restoring SQL databases:
• System databases should be restored first, individually, and in this order:
  • master
  • msdb
  • model
  • distribution
• User databases should be restored after all system databases; grouping and order are not a factor.
Let's say you need to restore all databases from a backup, including system and user databases. Perform the restore as follows:
1. Restore the master system database:
  1. Browse the job (or set of jobs) and select ONLY the master entry.
  2. Restore the selected database.
  3. Verify that the job restored successfully.
  4. Verify that the SQL Server is running properly after the restore.
2. Restore the msdb system database:
  1. Browse the job (or set of jobs) and select only the msdb entry.
  2. Restore the selected database.
  3. Verify that the job restored successfully.
  4. Verify that the SQL Server is running properly after the restore.
3. Restore the model system database:
  1. Browse the job (or set of jobs) and select only the model entry.
  2. Restore the selected database.
  3. Verify that the job restored successfully.
  4. Verify that the SQL Server is running properly after the restore.
4. Restore the distribution system database:
  1. Browse the job (or set of jobs) and select only the distribution entry.
  2. Restore the selected database.
  3. Verify that the job restored successfully.
  4. Verify that the SQL Server is running properly after the restore.
5. Restore the user databases:
  1. Browse the job (or set of jobs) and select the remaining entries (do not select any of the system databases you have already restored).
  2. Restore the selected database(s).
  3. Verify that the job(s) restored successfully.
  4. Verify that the SQL Server is running properly after the restore.
This procedure can be modified to exclude any databases that you either do not have in your backup, or do not care to restore.
The most important issue is the order and grouping of the restores for the respective databases. It is not recommended that you simply select the entire SQL: folder and restore it all at once. This will likely result in conflicts between the system and user databases at restore time. The restore procedure discussed here assumes that you are restoring to a running SQL Server. This implies that the system databases are at a minimal state of functionality that is required for a SQL Server instance to start up. If this is not the case, you will have to re-install the instance, repair the system databases using the Microsoft prescribed methods, or restore and attach the databases manually. Refer to the link below for more information on these topics. The following Microsoft articles are available for more information on specific SQL Server backup/restore topics:
Recovery models
A database's designated recovery model has a big impact on its backup/restore strategy. Primarily, this has to do with how SQL logs are handled. Here is how SQL backup types are mapped to backup levels used by the CFA:
SQL backup type    Backup level
Full               Full
Differential       Differential
The recovery models impact the backup/restore process in the following ways:
Simple
Databases using this recovery model can only be backed up using the full or differential level. Log (incremental) backups are not supported using this model. As a result, incremental backups are promoted to differential for databases using this recovery model. When a full or differential backup of a simple recovery model database is taken, the transaction logs are automatically rolled into the database (committed) and truncated before the backup. Restoring these types of databases is a straightforward affair; no additional steps or considerations are needed.
Full/Bulk-Logged
Databases using this recovery model not only support log (incremental) backups, they require them. The transaction logs are truncated when a log (incremental) backup is taken, but not when a full or differential backup is taken. In fact, the logs can grow indefinitely until a log (incremental) backup occurs or some other event (called a checkpoint) is triggered. It is the implication for restores that is most important for this recovery model. In SQL 2005 and later, databases are required to have all log transactions backed up before the database can be overwritten by a restore operation. If this requirement is not fulfilled, the following error can be seen when attempting a restore:
The tail of the log for the database [DatabaseName] has not been backed up. Use BACKUP LOG WITH NORECOVERY to back up the log if it contains work that you do not want to lose. Use the WITH REPLACE or WITH STOPAT clause of the RESTORE statement to just overwrite the contents of the log.
The current version of the SQL plug-in does not support running a log (incremental) backup automatically at restore time. But you have some options for dealing with this problem; they are described below.
Handling tail log backups at restore time
If a database using the Full/Bulk-Logged recovery model causes a tail log error during a restore attempt, you have a few choices:
1. Run a manual, incremental backup of the database you are restoring on the destination server. Then, with as little delay as possible, restore the database. The reason for this is that you need to have an empty transaction log in order for the restored database to overwrite the existing one.
Running the incremental backup first results in the log being truncated before the restore attempt, thus avoiding the tail log error.
2. Force the restored database to overwrite the existing one. By setting a specific registry entry on the destination client computer, the database can be overwritten during a restore attempt. Keep in mind, however, that any outstanding transactions in the log of the destination database will be lost. Do this only if you do not need the latest transactions from the destination database to be backed up. To force a database to be overwritten at restore time, set the following registry key/value pair ([ServerID] and [DatabaseID] are 0-index-based values; the Name key can be used to identify the server/database):
HKEY_LOCAL_MACHINE\SOFTWARE\Revinetix\RVX-Backup\Plugins\MsSQL\Servers\[ServerID]\Databases\[DatabaseID]
ReplaceOnRestore = true
3. Change the database to use the Simple recovery model. This will only work for backups/restores going forward since existing backups would still be affected by the existing recovery model. To fully understand the scope of this type of change, refer to the links below for more information on recovery models.
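For reference, a minimal T-SQL sketch of the tail-log backup that the SQL Server error message quoted earlier asks for, followed by a restore that overwrites the existing database. The database and file names are placeholders, and the plug-in normally drives its backups and restores through its own interface, so this only illustrates what is happening underneath:
-- Back up the tail of the log on the destination server first,
-- so the existing database can be overwritten without losing logged work.
BACKUP LOG [YourDatabase]
    TO DISK = N'D:\Backups\YourDatabase_tail.trn'
    WITH NORECOVERY;
-- The restore can then replace the existing database.
RESTORE DATABASE [YourDatabase]
    FROM DISK = N'D:\Backups\YourDatabase_full.bak'
    WITH REPLACE, RECOVERY;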
Time and Boundaries • 11.6k :smirk: • 20.4k the universal law just says two masses will accelerate towards each other. So your explanation would amount to saying that two masses accelerate towards each other because they accelerate towards each other. • 11.1k Newton's first law explicitly says that the motion of a body will remain constant unless acted on by a force. I think "acted on by a force" implies causation doesn't it? In Newtonian physics gravity is a force, and acceleration is caused. • 621 Gravity is just a name for the acceleration of any two masses towards each other. Are you talking about gravitational attraction? $g = 9.8\ \mathrm{m/s}^2$ You hold this equation in contempt? What’s causing precise acceleration? Respective masses curving spacetime. Is this the language you respect? • 20.4k Are you talking about gravitational attraction? You hold this equation in contempt? Why would you suppose that? An odd response. What the Universal Law of Gravitation says is that the force between two masses is inversely proportional to the square of the distance between them. That number you posited is the proportion. The force is the product of the mass and the acceleration. $F=ma$. "Cause" does not appear anywhere in those equations. • 621 Saying gravity causes acceleration is just saying the acceleration between two masses causes the acceleration between two masses. Is it a stretch, a distortion, a misread to say your above quote is affirmation of my main point? Gravity causes acceleration (in free fall) ⇒ acceleration of mass ≡ gravity, as $g = 9.8\ \mathrm{m/s}^2$, or $F = ma$. My Main Point Gravity and acceleration-due-to-gravity are, in a certain sense, as one. They are conjoined as a unified concept: gravity-and-acceleration. Thus cause and effect are, in the same sense, as one, save one stipulation: temporal sequencing. • 621 ..."time" is neither "temporal" nor a "phenomenon". (I think you're confusing (your) maps with the territory.) What’s the critical operation between cause and effect when considered as conjunction: time? No. IMO, wrong, or incoherent, question (i.e. misuse of terms). Are there any observable boundaries time cannot merge? More incoherence. "Time" is a metric (i.e. parameter), ucarr, not a force or agent. Do you think the forward-flowing of history comprises the physical phenomena populating our empirical experiences? In the below quote, are you referring to the commingling of the forward-flowing of history with the metric that tracks it mathematically? (I think you're confusing (your) maps with the territory.) • 20.4k I dunno. I guess I give up, having not been able to follow what it is you might be claiming. In Newtonian physics gravity just is an acceleration of a mass due to another mass. Saying gravity causes that acceleration is circular. If that is all you have to say, then fine. But you then add something odd about temporal sequences. Newtonian physics is pretty clean, making use of mathematical equations rather than causal statements. While we can to some extent treat the equations as causal links, that's perhaps a bit muddled. So we can say that gravity causes stuff to fall, but that's a shorthand for a failure to explain the acceleration between masses rather than an explanation. pretty much ended the mistaken notion that cause requires time. It's a topic that has been discussed here before, leading quickly to partisan stances. • 2.8k I'm with Victor Toth on this one. Gravity is a force.
The force is counteracted by the upward force of the plane before the parachutist leaps away. Then it's force due to gravity counterbalanced by force due to air resistance as he falls, each force changing a bit with distance, which is determined by time's passage. The effect of him falling is determined by several "causes", including jumping out of the plane and the force of gravity. I don't think "cause and effect" is relegated to the junkpile of philosophy (or physics, for that matter) because at some infinitesimal scale it's hard to discern which is which. I agree with in that regard. (I model mathematical causal chains - in time - as compositions of functions. A result (effect) at a time t is, say, z. The next temporal step, and the scale of time can vary, is to compute s, where s=f(z), then after that, r, where r= g(s), and so on. There's a whole theory herein. But I think it more realistic to assume several functions act on z, not just one. Like differing forces. So each step - and these are associated with intervals of time - has as outcome the influence of a number of "forces", rather than a single function.) Sorry, got carried away with a current research topic of mine. Maybe it's relevant here. • 621 (I model mathematical causal chains as compositions of functions. A result (effect) at a time t is, say, z. The next temporal step is to compute s, where s=f(z), then after that, r, where r= g(s), and so on. There's a whole theory herein. But I think it more realistic to assume several functions act on z, not just one. Like differing forces. So each step - and these are associated with intervals of time - has as outcome the influence of a number of "forces", rather than a single function.) In the above quote, jgill elaborates with detail and clarity what I've been trying to claim more vaguely and superficially. The above quote gives us a description of phenomenal reality, known empirically to all of us. It is a complex mix of the physical and the conceptual. Cause and effect and time are deeply partial to each other as an interweave, and this interweave has for its signature the forward-flowing of history. I model mathematical causal chains as compositions of functions. The gist of my claim herein is that the above quote describes our fluidly transforming world as an ongoing continuity of boundary crossings, boundary mergers, Venn Diagram overlapping and transcendence of boundaries. Time and its signature, the forward-flowing of history, will bleed through anything, whether physical or conceptual: the drop of water, in time, bores through the great stone; the black hole, in time, evaporates, releasing phenomena only seemingly lost forever. • 11.6k forward-flowing of history "Forward-flowing" is a cognitive illusion and intuitive way of talking about asymmetric change. "History" represents time-as-past-tense-narrative (i.e. a ghost story). Particle physicists refer to worldlines (or many-worlds branchings) and statistical mechanics refer to entropy gradients. I still don't see what your musings, ucarr, have to do with philosophy. What's the philosophical itch you're trying to get us to scratch? State it plainly. • 621 I still don't see what your musings, ucarr, have to with philosophy. What's the philosophical itch you're trying to get us to scratch? State it plainly. Have your seen my quote directly above yours? Do you think the forward-flowing of history comprises the physical phenomena populating our empirical experiences? 
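A minimal sketch of the composition-of-functions picture jgill describes, with made-up "forces" acting on the state at each time step (the specific functions are illustrative only, not anything proposed in the thread):

```python
# Illustrative only: a causal chain as iterated composition, where several
# "forces" (functions) act on the current state z at each time step.

def step(state, forces):
    """Combine the contribution of every force acting on the current state."""
    return sum(f(state) for f in forces) / len(forces)

forces = [
    lambda z: 0.9 * z,        # a damping influence
    lambda z: z + 1.0,        # a constant push
    lambda z: 0.1 * z ** 2,   # a weak nonlinear effect
]

z = 1.0                       # the "effect" at time t
history = [z]
for _ in range(5):            # five temporal steps: z -> s -> r -> ...
    z = step(z, forces)
    history.append(z)
print(history)
```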
"Forward-flowing" is a cognitive illusion and intuitive way of talking about asymmetric change. "History" represents time-as-past-tense-narrative (i.e. a ghost story). Particle physicists refer to worldlines (or many-worlds branchings) and statistical mechanics refer to entropy gradients. I take your above quote for an answer to my question above it. No doubt my appointment with the dentist tomorrow, when seen as asymmetric change representing time-as-past-tense-narrative (i.e. a ghost story) with reference to world lines (or many-worlds branchings) and statistical mechanics referring to entropy gradients, holds formally very little in common with my vision of getting a filling in my back molar. No. I haven't entered such descriptions into my daily planner. Having said that, I think I understand your cutting-edge scientific vision of forward movement is pertinent to the concepts and details of my narrative. If I'm right, then you exaggerate when claiming "It's clear as mud to me." • 11.6k After I posted. It's clear as mud to me. • 2.8k I'm not sure my little exposition should be a reference point. That's how I perceive change over time. I can also go backwards in time, showing there need not be a conflict between infinite regression and first causes. Particle physicists refer to worldlines (or many-worlds branchings) and statistical mechanics refer to entropy gradients Where a lot of that begins is the Schrödinger equation, which is fundamentally a partial differential equation with the independent variable t = time. When solutions are computed, all of a sudden mystical superpositions and wave collapses occur with experimentations. Why is time so vital here? • 11.6k Don't hold me to this but I vaguely recall that Heisenberg et al's matrix mechanics (re: possible-states of observables) provides a non-mystical, though experimentally equivalent, alternative to Schrödinger's wave mechanics (re: particles as classical waves). Something about Feynmann's path-integrals plays a decisive role in extending the scope of matrices, doesn't it? Yeah, I don't know wtf I'm talking about, jgill, but somebody with real QM chops is bound to come along who can talk mathematical physics to a mathematician. :sweat: • 7.4k "what is causing galaxies to deviate from the predictions of our models?" Such causes get posited as new elements of a model a in many subfields uncovering the nature of these causes becomes a major, or the major topic of research, e.g. dark matter and dark energy. They do, because they speak the same language we do. I'm not saying causation is denied, but what is the focus? You said it yourself - things that "deviate from the predictions of our model". Anomalies. But in the mathematical models, there is no variable or constant 'cause' or 'effect'. Nor do the models cause the universe to obey them. The world is orderly and disorderly and mathematics describes the order and the disorder. When there is an anomaly there is work to be done revising the model, or refining the instruments. Causation drops out of the conversation because it has no function. It is not a particle, or a field, or a force, or a dimension or a measurement... It's not anything, but an old fashioned way of thinking that we still use. To look for the cause of an anomaly not understood is to look for some new thing; it is not to look for causation. Causation is a fancy word for 'the way things go' and that is why there is the temporal aspect. 
• 768 I am genuinely curious about this widespread world of physics where cause is not referenced. I read a lot of physics and causes are mentioned constantly. Things like do-calculus were invented for the natural sciences. Bayesian inference is generally couched in causal language. The Routledge Guide to Philosophy of Physics, which is an excellent reference guide BTW, mentions cause 787 times, causal 586 times. Some of these references are indeed arguments against cause, but not most. In general, arguments against causation are nuanced, and not eliminitivist at any rate. "Cause isn't in mathematical equations," certainly isn't taken as gospel in the philosophy of causation (I'm currently in the middle of "Causation: A Users Guide). Why can mathematics not represent causes, but it can represent state changes and processes with a defined start and end point? Where I've seen arguments against cause related to physics, it's been in popular science books in the context of arguments for a block universe. The block universe is hardly something all physicists accept, and if authors are putting their best arguments for such a view into their books, they seem to have more motivations in philosophy than in physics. To be sure, this is partly because debates on the nature of causation generally aren't considered a topic for physics articles, and one's popular science books are a good place to get into more speculative discussions. But I certainly don't see the "cause is antiquated," view writ large on the natural sciences as a whole, or even just physics. Instruction on elements of physics being time symmetric is not an argument that physics itself is time symmetric, it demonstrably is not. I would be less skeptical of the block universe if the motivation behind some key arguments for it didn't seem to come from philosophers' anxiety over how their propositions could have truth values given some form of presentism. Davies, who I generally like, goes for one of these. It's frustrating because these are presented with an air of certitude (he says something like "one must be a solipsist to disagree") when in fact there is by no means only one way to view SR vis-á-vis the reality of local becoming. These examples amount to attacks on the Newtonian time the audience is expected to be familiar with, and then propose the block universe as the only solution (Putnam does something similar). The issue can also be resolved by seeing time as degenerate in SR, with time bifurcating into co-ordinate time and proper time . This distinction gets muddled in many retellings of twin paradoxes though. Of the views on time I like best in modern physics is the view that events in the past exist, and exist(ed) just at the local time they occured, while "now" is defined locally by the simultaneity of local interacting processes. I see no reason to jettison the overwhelming empirical evidence for time's passage when there exists fully coherent models that don't require eternalism. Cause is trickier because people mean many things by cause. Just like time now has to be split into many different types of precisely defined time (and even these might not be enough, some physicists think Minkowski Spacetime is doomed as a flawed model), we probably need some sort of precisely formulated definition of causality. In the philosophy of physics, the transfer of conserved quantities is the leading definition of causation from what I've seen, but there are information theoretic definitions too. 
• 768 A world line in an objects' 3D path rendered with a time dimension, nothing more. A world line can also be used to describe the history of a path for an observer. We talk about time in statistical mechanics all the time. Even in a model of quantum foundations like consistent histories, where there is no one true state of affairs at time T, a classical history emerges from decoherence/collapse. Physicists don't talk about time in SR/GR because you need to specify which types of time you are referring to. This doesn't disprove the reality of an arrow of time or local becoming, except inasmuch as philosophers have used the model to construct paradoxes, or pseudoparadoxes depending on who you ask, that call them into question. The funny thing is that the alleged paradoxes and the arguments that allegedly rebut them haven't really moved since the 1940s; they just get restated. Someone who wants to refute Davies can cite Gödel or Robb who were actually replying to people in their time... and so maybe time is illusory or circular... The things you mentioned don't have anything to do with history being a "cognitive illusion." The apparent "arrow of time," is one of the big questions in physics, not something that has been solved and written off as illusory by any means. Some physicists speculate that time is somehow "illusory," although the nature of this illusion is generally fairly nuanced and not grounded in cognitive science. When they do so, they tend to be doing more philosophy than physics, although the use of specialized terms certainly confuses this fact. That time, and thus history, can't flow and that things do not "move" "forwards" and "backwards" in time is more well established. These are bad analogies that lead to apparent paradox. So, "forward flowing of history," is probably best to avoid. • 11.1k The gist of my claim herein is that the above quote describes our fluidly transforming world as an ongoing continuity of boundary crossings, boundary mergers, Venn Diagram overlapping and transcendence of boundaries. All this does is show the deficiency of systems theory as a means for modeling the world. The reality of these "boundary crossings" implies that there is many things which cannot be classified as being proper to one system or another. Initially, this may not appear as a problem, but when it comes to mapping causation, we need to distinguish between what is within the system, and what is acting on the system, as a causal force. As in my reply to Banno, above, inertial continuity is modeled as internal, therefore non-causal, and external influence is modeled as a causal force of change. So for example, someone in another thread suggested to me that we could model an atom as a system. However, the natural state of atoms is to exist within complex molecules, where parts (electrons for example) are shared. If two atoms share an electron, and the atoms themselves are being modeled as distinct systems, then in each model, the shared atom is both an internal part of the inertial continuity of the system, and also a part of the other system, thereby acting as a causal force of change on that same system. In other words, from this 'systems' perspective, the electron must be understood as both a part of the inertial continuity of the system, and a causal force of change to the system (being a part of an external system), at the same time. 
• 768 You might be interested in information theoretic, holographic principal-based workarounds for this problem if you're not already aware of them. Since information is only exchanged across any systems' (however defined) 2D surface, we can model them purely relationally. One interpretation of this is that information content is relative between systems, with these relationships formalized using the concept of symmetry and group theory. Example: for many enzyme reactions, a chemicals' being composed of isotopes or not is indiscernible for both systems and thus irrelevant to describing the interaction. This was best expressed in brilliant dissertation that made it into Springer Frontiers and got rave reviews, before the author seemingly disappeared, which is a shame. Verdal's book sort of goes with this, in his explanation of information only existing relationally between parts of the universe, but he seems to reverse on this later in the book to use the old "amount of bits stored by each particle," calculation to make some points about quantum information. I think the arbitrary nature of system boundaries is akin to other problems in the sciences and even humanities. For example, in semiotic analysis/communications, a physical entity, say a group of neurons, might act as object, symbol, and interpretant during the process, depending on the level of analysis that is used. But at a certain part, the ability of any one component to convey aspects of the total message breaks down. E.g., a single logic gate can't hold the number "8," itself. Certain relationships only exist at higher levels of emergence, like your example of shared electrons. Causation, in such models, would likely be interpreted in terms of computation or information exchange, and I'd argue that current theories of computation and communications would actually make it extremely difficult to differentiate these two models at the formal level. IMO, something like the concept of levels of abstraction in computer science is needed for this sort of problem, but I can't fathom how to formalize it in a manner that isn't arbitrary. Subjective is fine. Entropy is subjective (see the Gibbs Paradox) but not arbitrary. Arbitrariness seems like a problem however. • 621 So for example, someone in another thread suggested to me that we could model an atom as a system. However, the natural state of atoms is to exist within complex molecules, where parts (electrons for example) are shared. If two atoms share an electron, and the atoms themselves are being modeled as distinct systems, then in each model, the shared atom is both an internal part of the inertial continuity of the system, and also a part of the other system, thereby acting as a causal force of change on that same system. In other words, from this 'systems' perspective, the electron must be understood as both a part of the inertial continuity of the system, and a causal force of change to the system (being a part of an external system), at the same time. I think the arbitrary nature of system boundaries is akin to other problems in the sciences and even humanities. For example, in semiotic analysis/communications, a physical entity, say a group of neurons, might act as object, symbol, and interpretant during the process, depending on the level of analysis that is used. But at a certain part, the ability of any one component to convey aspects of the total message breaks down. E.g., a single logic gate can't hold the number "8," itself. 
Certain relationships only exist at higher levels of emergence, like your example of shared electrons. Your above quotes for me are introductions to detailed examinations of topics in physics, each of which, in the elaboration of specialization, would easily engage the entire careers of physicist-specialists. My label of convenience for the theme connecting and focusing pertinent issues within Time and Boundaries is Boundary Ontology. Under this category the focus is on such questions as: How do we measure the surface of a material object? In the scale of human experience, this question is perhaps mundane. Is that the case at the scale of the elementary particles? How about the scale of the expanding universe? What does it mean for spacetime to expand and yet have no outer boundary? Speaking mathematically, clearly topology has a key role to play herein. For example: topology might offer a rational approach to a definition of the soul: a surface invariant to unlimited manifolding of a set. Is system the limit of entropic expansion? Is universe the limit of system? These are, I think, important boundary ontology questions. Is there a possible general mathematical definition of what constitutes the boundary of a system? Can boundaries be defined for cognitive inter-relations, thereby establishing a hybrid interweaving the cognitive_physical? Finally, there's the supreme challenge of the sine qua non of boundary ontology puzzles: Origin Boundary Ontology. First principle, first cause, etc, will need more than three spatial dimensions + time for practical elaboration. • 2.8k I don't know wtf I'm talking about, jgill, but somebody with real QM chops is bound to come along who can talk mathematical physics to a mathematician. :sweat: Real-life Q-physicists have been chased away, I fear. Kenosha Kid tried to get some sympathy for the Transactional approach, but had unsatisfactory experiences and left the room to play his guitar. I know very, very little about Q-theory beyond the elementary stuff. Feynman's path integral I can follow if I take the simplified version involving time splitting. In my old age I dabble in very elementary mathematics (in the professional sense), finding the road I am on challenging enough. :cool: • 11.6k :up: • 11.1k Interesting. Why do you say that entropy is subjective? Is it because a system's boundary is arbitrary? • 768 Not just that. Again take a box with a partition in it, with gas A on one side, gas B on the other side, and both gases are at the same temperature and pressure. If gas A and B are different gases, there is an entropy that arises once the gases are mixed. If the gases are the same, no additional entropy is calculated. The additional entropy from mixing does not depend on the character of the gases; it only depends on the fact that the gases are different. The two gases may be arbitrarily similar, but the entropy from mixing does not disappear unless they are the same gas - a paradoxical discontinuity... As a central example in Jaynes' paper points out, one can develop a theory that treats two gases as similar even if those gases may in reality be distinguished through sufficiently detailed measurement. As long as we do not perform these detailed measurements, the theory will have no internal inconsistencies. (In other words, it does not matter that we call gases A and B by the same name if we have not yet discovered that they are distinct.) If our theory calls gases A and B the same, then entropy does not change when we mix them. 
If our theory calls gases A and B different, then entropy does increase when they are mixed. This insight suggests that the ideas of "thermodynamic state" and of "entropy" are somewhat subjective. I don't agree with the use of the term "arbitrary" in the Wiki article, at least not in an important sense. This paradox has a special place in my heart because when I began reading a lot more on statistical mechanics and doing problems on it I realized this problem myself somewhat early on. I thought to myself "holy shit, maybe I could be really good at this, look what I uncovered, this is air tight too!" I finally got over the fear of someone stealing my great insight and posted a question in Stack Exchange. Within a few hours someone asked, "do you mean the Gibbs Paradox?" Yeah, someone had the idea first, over a century ago, pretty much as soon as Boltzmann published. So much for my genius lol. I felt better about this after reading Max Tegmark describe "discovering" decoherence as a first year PhD student, only to learn he'd been scooped by several years. At least that was somewhat close in time though. • 11.1k Again take a box with a partition in it, with gas A on one side, gas B on the other side, and both gases are at the same temperature and pressure. If gas A and B are different gases, there is an entropy that arises once the gases are mixed. If the gases are the same, no additional entropy is calculated. The additional entropy from mixing does not depend on the character of the gases; it only depends on the fact that the gases are different. The two gases may be arbitrarily similar, but the entropy from mixing does not disappear unless they are the same gas - a paradoxical discontinuity... I suggest that this is an illusion created by the terms of the example. If each individual molecule of compartment A is marked as A, and each individual molecule of B is marked as B, then even if the two compartments each contain the same type of gas, the combining will appear the same as if they are different gases, because they are marked as different. There is no paradox, just an illusion. In the case of two distinct gases, an act of mixing is required, and this requires time and energy. In the case of the gases being the same, it appears like the gases have already mixed as soon as the separation is removed. That's just an illusion, mixing has not occurred, as marking the molecules would reveal. • 768 In the case of two distinct gases, an act of mixing is required, and this requires time and energy. In the case of the gases being the same, it appears like the gases have already mixed as soon as the separation is removed. That's just an illusion, mixing has not occurred, as marking the molecules would reveal. Yes, that was sort of Gibbs' original point in the case of ideal gasses. You need a non-extensive entropy to deal with that the problem. Jayne's big point is summed up in the introduction: " We argue that, on the contrary, phenomenological thermodynamics, classical statistics, and quantum statistics are all in just the same logical position with regard to extensivity of entropy; they are silent on the issue, neither requiring it nor forbidding it." And, counter intuitively, non-extensive entropy actually tends to model many real systems better (e.g. Tsallis entropy). Jaynes paper does a better job explaining why this was generally been considered a genuine paradox. Distinguishability is, in an important sense for predicting/describing physical interactions, relational. 
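For concreteness, the textbook quantity at stake in the Gibbs paradox discussion above is the mixing entropy of two ideal-gas samples of $N$ molecules each, at equal temperature and pressure (a standard result, not quoted from the thread):

$$\Delta S_{\text{mix}} = 2 N k_B \ln 2 \quad \text{(distinguishable gases)}, \qquad \Delta S_{\text{mix}} = 0 \quad \text{(identical gases)},$$

and the paradox is precisely that the left-hand value does not shrink as the two gases are made more and more similar; it drops discontinuously to zero only when they are counted as the same.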
• 621 I guess I give up, having not been able to follow what it is you might be claiming. My central mission in this conversation is to define time in terms of boundaries and their inter-relationships. My central premise is that time is a type of general boundary modulator; perhaps it is the general boundary modulator. For an example of what I mean, consider: once you were a boy in single digits; now you are a man in double digits. How did this change happen? Typically, we say, "Time passed and you, making your various rites of passage: birth, first steps, first words, first date, graduation, first job, marriage, etc., moved on, growing older." Well, do you think these rites of passage are moving you along through one boundary after another? Do you think passage through all of these boundaries has been actuated -- maybe I should rather say, facilitated -- by time? • 20.4k What you have to say is too muddled to have any reverberation. • 621 What you have to say is too muddled to have any reverberation. Thanks for the weigh-in. Dialogue is divine, even when it's not. You think my thinking untidy. The hard trick in slinking behind low expectations: maintaining enough public interest to avoid wholesale dismissal. Invective trumps silence, especially when it's instructive. Against obverse inclination, you've been doing your job of examination: unselfish. Hostile interest is intriguing because -- I'm off topic... Back to chasing reverberation. Goal: sustain your pithy judgments.
## Algebra and Trigonometry 10th Edition $x=17; y=-11; z=-3$ Plug the value of $z=-3$ into the second equation. $3y-8(-3)=-9 \implies y=-11$ Now, we will plug the value of $y$ and $z$ into the expression of $x$ to find the value of $x$. We get: $x-(-11)+2(-3)=22 \implies x=17$ Thus, $x=17; y=-11; z=-3$
Wikipedia's Strickland affair: en.wikipedia.org/wiki/Wikipedi Strickland is a new Nobel prize winner who, before the prize, had never been put up for promotion to full professor (embarrassing her employer) and had two attempts at creating a Wikipedia article shot down (which should have, but apparently didn't, embarrassed the people who did it). This trio of op-eds examines how everyone responded, how we can do better, or (in the third case) why the writer thinks we shouldn't try to do better. @11011110 wow, that third one is quite a thing. Pedantic arguments about semantic points the author has invented out of whole cloth are *so* convincing 🙄
# Rsa Find D Given N And E Calculator Public Key and Private Key. To make things look and feel real, I will demonstrate all steps needed to factorize and recover a private key. There are several ways to calculate the average of a group of numbers. Need a simple and nice feeling Java Applet math calculator? Click here. You have to be creative in finding the common difference for these types of problems. This confidence interval calculator allows you to perform a post-hoc statistical evaluation of a set of data when the outcome of interest is the absolute difference of two proportions (binomial data, e. Given that I don't like repetitive tasks, my decision to automate the decryption was quickly made. Please enter the necessary parameter values, and then click 'Calculate'. Therefore, factoring is at least as hard as RSA (but might be harder). Calculate Power, Current, Voltage or Resistance. The Probability Calculator computes the probability of one event, based on probabilities of other events. i =1 The fact that the difference is squared is the reason the technique is called linear least-squares regression. E(X|X +Y = n) = λ1n λ1 +λ2. c to the power of d, mod n, equals Bob's original message, m. The number n is called modulus. Additional Examples of Finding RSA Decryption Keys Fold Unfold. GitHub Gist: instantly share code, notes, and snippets. Sample size is calculated using the formula: n = (Z 2 × P(1 – P))/e 2. Once you know the factors p and q, it is relatively easy to calculate d and decrypt ciphertext. In mathematics, there are n! ways to arrange n objects in sequence. Also experiment with other financial calculators, or explore hundreds of other calculators addressing math, fitness, health, and many more. Or Use trial and error method to calculate d ed = 1 mod ϕ(n) ϕ(n)=(p. 11 When nis large, the function y= xmod nis a trapdoor one-way function. This calculator is used to add and subtract angles in the form Degrees - Minutes - Seconds (DMS). Similarly an encrypted message C is decrypted by the message C → Cd (mod n). The ciphertext C is. CONJECTURE 6. 70, A and B are disjoint I like to use what's called a joint probability distribution. For small populations n can be adjusted so that n(adj) = (Nxn)/(N+n). I could go to the trouble of finding two points and computing the slope, or of plugging zero in for x and solving for the y -intercept value, but it's simpler to just solve for " y = ". Density is a measure of how much mass is contained in a given unit volume (density = mass/volume). Compute the ex-pected number of successes in the. If the number of degrees is a whole number, the decimal point is optional. i =1 The fact that the difference is squared is the reason the technique is called linear least-squares regression. Hope any one want to do computation like (a^b mode n) effectively find it useful. The decryption key (d,n) is kept private by the user. Online 2D and 3D plotter with root and intersection finding, easy scrolling, and exporting features. Speed Calculator is online 3 in 1 tool. The encryption key (e,n) is made public. Decrypting a message consists of calculating c d mod N. How to Find Standard Deviation on the TI–84. RSA has stood the test of nearly 40 years of attacks, making it the algorithm of choice for encrypting Internet credit-card transactions, securing e-mail, and authenticating phone calls. Get best practices & research here. 3 should be used. Rather, use , and reduce intermediate results modulo 187 whenever they g square-a et bigge nd-mult r than iply 187. 
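A minimal sketch of the step just described, "once you know the factors p and q, it is relatively easy to calculate d": the extended Euclidean algorithm gives the modular inverse of e modulo φ(n). The primes and exponent below are toy values chosen for illustration, not numbers from the article.

```python
# Toy illustration: recover the private exponent d from p, q and e.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(e, phi):
    """Modular multiplicative inverse of e modulo phi (requires gcd(e, phi) = 1)."""
    g, x, _ = egcd(e, phi)
    if g != 1:
        raise ValueError("e and phi(n) must be coprime")
    return x % phi

p, q, e = 11, 13, 7              # toy primes and public exponent
n, phi = p * q, (p - 1) * (q - 1)
d = modinv(e, phi)               # private exponent
assert (e * d) % phi == 1
print(n, e, d)                   # 143 7 103
```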
Euler’s Totient function ?(n) for an input n is count of numbers in {1, 2, 3, …, n} that are relatively prime to n, i. Recipient share of expenditures m. RSA NetWitness Platform 11. Click any of the examples below to see the algebra solver in action. But using e and d , user b can quickly factor N. Below you can see the formula for torsion spring constant and an example of how the formula works. Factorization means just that, finding the factors of a number. Biz & IT — Locking the bad guys out with asymmetric encryption It makes online shopping, banking, and secure communications possible. It is simple to use these specially designed calculators: Enter the values of the known variables in the text boxes; Leave the text box empty or the variable you want to solve for. In other words, it is possible to have n An matrices A and B such that eA+B 6= e eB. E[Y] = Z 1 1 E[YjX = x]fX(x)dx Now we review the discrete case. RSA RSA RSA Key generation RSA Encryption RSA Decryption A Real World Example RSA Security 25. me, this is an RSA private key. In general, the way A acts on \mathbf{x} is complicated, but there are certain cases. Main TVM functions of a BAII Plus Financial Calculator The calculator is also a quick method of double checking your formula calculations. Therefore, the essence of security for RSA is that given only the public key e and n, it takes a lot of computation to discover the private key d. Calculate a date 90 days from now, 60 days before today, or any N days prior to or after the current date, counting all days or only business days. The markup equation or markup formula is given below in several different formats. The Consumer Price Index (CPI) and inflation for October 2019 is scheduled for release by the U. e=31 pq=3599 Factoring 3599 gives 59 61 so I thought it. This arithmetic sequence calculator (also called the arithmetic series calculator) is a handy tool for analyzing a sequence of numbers that is created by adding a constant value each time. Solutions for Diff. The address of an element is given by listing the row number then the column number. Federal Share n. n is called the public modulus, e the public exponent and d the private exponent. In other words, pick d such that de - 1 can be evenly divided by (p-1)(q-1), the totient, or ϕ(n). However this is often not true for exponentials of matrices. Find out more Find out what's new at RSA. Z-Transforms, Their Inverses Transfer or System Functions Professor Andrew E. Use the formula a n = a 1 + (n – 1)d to set up two equations that use the given information. Conditional Probability. Use this information to factor N. Torque is a pseudo-vector that measures the tendency of a force to rotate an object about some axis. At this point we have all we need for the public/private keys. The PARTY2 can be given the public keys of e and n, so that PARTY2 can encrypt the message with them. Please enter the necessary parameter values, and then click 'Calculate'. Differentiate both sides of the equation, getting = D ( e 4x) + D ( e 5y) ,. Are n=221 and e=5 valid numbers for RSA. For example, if a product costs $100, then the selling price with a 25% markup would be$125. Include Y in place of e and its descendants in X. Decrypting RSA cipher text when given N e and d. n = pq = 11. 
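The trial-and-error search described above ("find d such that 11 × d = 1 + k·288") can be written out directly; the numbers are the ones used in that passage.

```python
# Scan for d with e*d ≡ 1 (mod phi), using the passage's numbers.
e, phi = 11, 288
d = next(d for d in range(1, phi) if (e * d) % phi == 1)
print(d)          # 131, since 11 * 131 = 1441 = 1 + 5 * 288
```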
by Alexey Samoshkin OpenSSL Command Cheatsheet Most common OpenSSL commands and use cases When it comes to security-related tasks, like generating keys, CSRs, certificates, calculating digests, debugging TLS connections and other tasks related to PKI and HTTPS, you’d most likely end up using the OpenSSL tool. This form is familiar from being used as stop sign. To decrypt ciphertext message C, raise it to another power d modulo n. Discover Cooper. DI Management Home > Mathematics > RSA: The problem: given d and e, can we factorize N? Surprisingly, there isn't a. Given two numbers, a (the dividend) and n (the divisor), a modulo n (abbreviated as a mod n) is the remainder from the division of a by n. Lemma 3 in this post guarantees that d exists and is unique (and also explains what a modular multiplicative inverse is). Expressed mathematically, we need to find a number d such that: 11 × d = 1 + n(288) Using trial and error, we find that 131 works because 11 × 131 = 1 + 5(288). The algorithm capitalizes on the fact that there is no efficient way to factor very large (100-200 digit) numbers. Lets start by extracting n and e from the public key. RSA and Factoring: We do not know if RSA is as hard as factoring. CONJECTURE 6. The parts of the key should each be a single hex number, while the cryptotext should be a sequence of bytes. You need to enable JavaScript to run this app. Digital Signatures In the non-digital world, Alice would sign the document. Instead of computing c d (mod n), Alice first chooses a secret random value r and computes (r e c) d (mod n). How long time it will take depends on file size, your own download speed and the server's upload speed. Bob will send or give the encrypted message to Alice. Q = n(e-) × F Q = 2. Arithmetic sequences are very helpful to identify because the formula for the nth term of an arithmetic sequence is always the same: a n = a 1 + (n - 1)d. it is always easy to calculate n; given n, it is very difficult to compute pand q. Interestingly, though n is part of the public key, difficulty in factorizing a large prime number ensures that attacker cannot find in finite time the two primes (p & q) used to obtain n. Given that I don't like repetitive tasks, my decision to automate the decryption was quickly made. What is the Smallest RSA Private Key Why is there, at all, such a thing? Why is it not 42? We have M≡me mod n We need to find d≡e. This is also called public key cryptography, because one of the keys can be given to anyone. You can help protect yourself from scammers by verifying that the contact is a Microsoft Agent or Microsoft Employee and that the phone number is an official Microsoft global customer service number. N is called the RSA modulus, e is called the encryption exponent, and d is called the decryption exponent. Now that you have found the gross profit, let’s look at the markup percentage calculation:. The public and private exponents will often be expressed as a fraction of the modulus. Raw Event Data Storage. Naturally occurring vitamin E exists in eight chemical forms (alpha-, beta-, gamma. N is called the RSA modulus, e is called the encryption exponent, and d is called the decryption. A bunch of buttons, a little screen and a lot of punching in numbers to get a result. Just type matrix elements and click the button. Randomly choose two prime numbers pand q. There are very many encryption algorithms but I am describing the Rivest, Shamir, Adleman (RSA) Algorithm. For small populations n can be adjusted so that n(adj) = (Nxn)/(N+n). 
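The encrypt/decrypt round trip referred to throughout this passage (raise M to the power e modulo n, then raise C to the power d modulo n) is a one-liner with Python's three-argument pow. The key below reuses the toy values from the earlier sketch and is of course far too small to be secure.

```python
# Toy round trip: c = m^e mod n, then m = c^d mod n.
n, e, d = 143, 7, 103        # toy key (p = 11, q = 13)
m = 42                       # message, must satisfy m < n
c = pow(m, e, n)             # encryption
assert pow(c, d, n) == m     # decryption recovers the message
print(c)                     # 81
```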
If you answer yes, obtain the corresponding d. Student t-Value Calculator. Test by clicking to calculate the result with the default numbers. It then occurred to me (and a head slapped followed), that I have fairly recently published a library for Javascript RSA encryption which includes private and public key generation for RSA encryption. Unobligated balance of Federal funds (line d minus g) Program Income: l. n the set or population. Both the RSA-encrypted symmetric key and the symmetrically-encypted message are transmitted to Alice. This number is also called combination number or n choose k or binomial coefficient or simply combinations. A form of the permutation problem that students commonly see is the "committee" problem. Author: Minh Van Nguyen This tutorial uses Sage to study elementary number theory and the RSA public key cryptosystem. BYJU'S online find the value of x calculator tool makes the calculations faster and easier where it displays the output in a fraction of seconds. Yagle, EECS 206 Instructor, Fall 2005 Dept. Tech support scams are an industry-wide issue where scammers trick you into paying for unnecessary technical support services. 64 and n = 256, you probably won’t be able to simply look it up in a table. Investigate your GPS options and if you can get the display into decimal degrees like 117. Thus n=33, e=3 and d=7. Milling operations remove material by feeding a workpiece into a rotating cutting tool with sharp teeth, such as an end mill or face mill. This will calculate the decoding number d. F 2 = P 2 (π d 2 2 / 4) (2) where. RSA - Given n, calculate p and q? This may be a stupid question & in the wrong place, but I've been given an n value that is in the range of 10 42. The program is operated by entering two geographic points and then pressing the Calculate button. Example 1 Find the derivative f '(x), if f is given by f(x) = 4 cos (5x - 2) Solution to Example 1 Let u = 5x - 2 and f(u) = 4 cos u, hence. Computer security vendor RSA, maker of two-factor authentication SecurID, has been hacked by unknown parties. Note: The percent function will also work if you enter the number first and then the percentage you want i. The formula to calculate a growth rate given a beginning and ending population is:. , if gcd(a, m) = 1). The Rivest-Shamir-Adleman (RSA) algorithm is one of the most popular and secure public-key encryption methods. GCD Calculator Instructions. The algorithm for creating a decryption key is as. by Alexey Samoshkin OpenSSL Command Cheatsheet Most common OpenSSL commands and use cases When it comes to security-related tasks, like generating keys, CSRs, certificates, calculating digests, debugging TLS connections and other tasks related to PKI and HTTPS, you’d most likely end up using the OpenSSL tool. 2 (1978) standards which are based on metric units. In April 1977, RSA was invented by three people from Massachusetts Institute of Technology, including two computer scientists, Ron Rivest, Adi Shamir, and a mathematician Leonard Adleman, and was later publicized in August of the same year. Your online GED ® account is your one-stop shop for passing the GED ® test. It’s free to set up, and you’ll find study materials, tips, and classes. Solutions for almost all most important equations involving one unknown. Be aware that this is menu item A if you have a TI-84 calculator, but it is menu item 0 on a TI-83 calculator. 2 Perform encryption and decryption using the RSA algorithm, as in Figure 9. 
She chooses - p=13, q=23 - her public exponent e=35 • Alice published the product n=pq=299 and e=35. In fact, they asserted that even with the best factoring methods and the fastest computers available at the time, it would take over 40 quadrillion years. Asymmetric actually means that it works on two different keys i. We are experts in probability distribution calculators. The syntax for the binomial probability density function command is binompdf(n,p,x). Calculator Menu | Beam Deflection Calculators. RSA is an encryption algorithm, used to securely transmit messages over the internet. RSA has stood the test of nearly 40 years of attacks, making it the algorithm of choice for encrypting Internet credit-card transactions, securing e-mail, and authenticating phone calls. Not sure if this is the correct place to ask a cryptography question, but here goes. Amortization Schedule Calculator. It must be large enough such that the numbers p and q cannot be extracted from it - 512 bits at least i. You will need to find two numbers e and d whose product is a number equal to 1 mod r. In an RSA system the public key of a given user is e=31 and n=3599 How can I calculate the private key HELP ! Your public key is (E,PQ). "Vitamin E" is the collective name for a group of fat-soluble compounds with distinctive antioxidant activities [1]. d dx xn = nxn−1. Knowing this information can help you make a more informed choice regarding when to collect Social Security retirement benefits. The public key is the number pair (n,e). odd integer d between 3 and n-1. The other key must be kept private. This is often computed using the Extended Euclidean Algorithm, since e and ϕ(n) are relatively prime and d is to be the modular multiplicative inverse of e. This calculation is based on the Normal distribution, and assumes you have more than about 30 samples. In order to determine a rate law we need to find the values of the exponents n, m, and p, and the value of the rate constant, k. The secret key also consists of n and a d with the property that e × d is a multiple of φ(n) plus one. Date/calendar related services – Overview; Calendar Generator – Create a calendar for any year. Calculate Your Body Mass Index. Similarly, logs with different constant bases are equivalent. I had an issue of non-credit of my advance tax for AY 2019-20 which got resolved thru 'e-Nivaran' within time frame. Use the formula a n = a 1 + (n – 1)d to set up two equations that use the given information. This method computes points in elliptic curves, which are represented by formulas such as y² ≡ x³ + ax + b (mod n) where n is the number to factor. About This Calculator. When: b y = x. n = pq = 11. Mechanics and Machine Design, Equations and Calculators. And so to find 24% of $412, we are taught to change 24% to the decimal. The calculator will generate all the work with detailed explanation. At E*TRADE, you're in full control of your financial future. The EOQ Economic Order Quantity model is used to minimize these inventory related costs. 1 F2013abn 3 Reinforced concrete is a composite material, and the average density is considered to be 150 lb/ft3. In April 1977, RSA was invented by three people from Massachusetts Institute of Technology, including two computer scientists, Ron Rivest, Adi Shamir, and a mathematician Leonard Adleman, and was later publicized in August of the same year. So far, we have identified our one way function , which is given by modular exponentiation. 
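Alice's toy key near the top of this passage (n = 299, e = 35) is small enough to attack exactly as the surrounding text suggests: trial division recovers p and q, after which d follows. This is only a sketch of that idea; pow(e, -1, phi) needs Python 3.8 or later.

```python
# Recover Alice's private exponent from her published (n, e) by factoring n.
from math import isqrt

n, e = 299, 35
p = next(k for k in range(2, isqrt(n) + 1) if n % k == 0)
q = n // p                          # p, q = 13, 23
phi = (p - 1) * (q - 1)             # 264
d = pow(e, -1, phi)                 # 83, the private exponent
assert (e * d) % phi == 1
print(p, q, d)
```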
Rather, use , and reduce intermediate results modulo 187 whenever they g square-a et bigge nd-mult r than iply 187. In the RSA key generation steps, what if two entities select a common factor to generate n (i. Additional Examples of Finding RSA Decryption Keys Fold Unfold. Exact Differential Equations • Integrating Factors Exact Differential Equations In Section 5. In the diagram, the angle is the angle = 180 degrees between the r and F vectors when they are drawn from the same origin. Shamir, and L. if the calculator shows that a certain eyepiece gives 100x in your telescope, and you add a 2x Barlow, the resulting magnification will be 200x (100 x 2). This wikiHow teaches you how to find the standard deviation for list of numbers on a TI-84 graphing calculator. In an RSA system the public key of a given user is e=31 and n=3599 How can I calculate the private key HELP ! Your public key is (E,PQ). Where M is the message block integer, C is the ciphertext block integer, and the private key is made up of the two numbers (d, n). The factors of e are 1 and 3, thus 1 is the highest common factor of them. Not only that, but this is all available online. To do this, we can use Euclid’s Extended Algorithm, but for simplicity let’s use this Modular Multiplicative Inverse calculator. Are n=221 and e=5 valid numbers for RSA. Given two numbers, a (the dividend) and n (the divisor), a modulo n (abbreviated as a mod n) is the remainder from the division of a by n. The residual is taken as a measure of the abstract parameter e i , or true error. The private key (d) is the inverse of e modulo PHI. All formula entries begin with an equal sign (=). 36-705 Brief Review of Basic Probability I assume you already know basic probability. We use α to denote the size of the public exponent (e = Nα), and β or δ to denote the size of the private exponent (d = Nβ or d = Nδ) depending on the context. It is infeasible to determine d given e and n. ) and you can get travel time having average speed and distance. 20, P(B) = 0. Thus n=33, e=3 and d=7. The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ a e mod n, where e is the (public) encryption exponent, is the function b ↦ b d mod n, where d, the (private) decryption. _____ Exercise 3 You can use this key for other powers as well. It is named after Ron Rivest, Adi Shamir, and Leonard Adleman who published it at MIT in 1977. c to the power of d, mod n, equals Bob's original message, m. If you call. The public key is the pair [e,n] and the private key is the pair [d,n]. Digital Signatures In the non-digital world, Alice would sign the document. This follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n. Sample size is calculated using the formula: n = (Z 2 × P(1 – P))/e 2. A C K N O W L E D G M E N T S I w ould like to express m y sincere appreciation to m y parents first and forem ost for sending m e through school for the past 19 years and alw ays m aking sure I w as aim ing high and giving m y best efforts. And, record the private key, D. Cities by ZIP Code™ For more rapid delivery, please use the recommended or recognized city names whenever possible for this ZIP Code ™. 3 presents the documentation in a unified map of product documentation and videos, including software, hardware, and RSA content. 1 An Algorithm for Modular Exponentiation 38. A simple RSA implementation in Python. 
We also deliver, on a regular basis, insights via blogs, webcasts, newsletters and more so you can stay ahead of cyber threats. N ^ S (y i-y i) 2 is smallest, given the data. The author, Samuel Chukwuemeka aka Samdom For Peace gives credit to Our Lord, Jesus Christ. The smaller d is, the faster this operation goes. For instance, the expression "7 mod 5" would evaluate to 2 because 7 divided by 5 leaves a. RSA is a cryptosystem which is known as one of the first practicable public-key cryptosystems and is widely used for secure data transmission. Expert Answer. Enter the root degree (n) and number (x) and press the = button. Important Note: The distance calculator on this page is provided for informational purposes only. Montgomery College’s talented and award-winning faculty, made up of academic leaders and industry experts, are both engaging educators and helpful guides. We can do the same with digital signatures. Taking a look at what you linked to in a reply to a question comment: Page on stuyctf. RSA Conference conducts information security events around the globe that connect you to industry leaders and highly relevant information. We will do some of the easier cases now, and discuss the rest later. by Alexey Samoshkin OpenSSL Command Cheatsheet Most common OpenSSL commands and use cases When it comes to security-related tasks, like generating keys, CSRs, certificates, calculating digests, debugging TLS connections and other tasks related to PKI and HTTPS, you’d most likely end up using the OpenSSL tool. Copersmith gave an attack witch allows to find m inspite using the salling (with e=3). An alternative, but equivalent definition for real powers can be given once the exponential function and the natural logarithm have been introduced. Factorial There are n! ways of arranging n distinct objects into an ordered sequence. If you'd like to see how we perform the calculation, view the page source. And, record the private key, D. Scientific Calculator. “What is a modulo?” you may ask – well, if you take two numbers and then divide the first number by the second number then the remainder is called the modulo. The probability that event B occurs, given that event A has already occurred is P(B|A) = P(A and B) / P(A) This formula comes from the general multiplication principle and a little bit of. where: Z = value from standard normal distribution corresponding to desired confidence level (Z=1. For small populations n can be adjusted so that n(adj) = (Nxn)/(N+n). choose a cleartext message call it m – in the form of a number less than n 2. Please input the function and its derivative, then specify the options below. rsatool calculates RSA (p, q, n, d, e) and RSA-CRT (dP, dQ, qInv) parameters given either two primes (p, q) or modulus and private exponent (n, d). Show, in the style of the trace given with the code, how the entropy-optimal sort first partitions the array B A B A B A B A C A D A B R A. Include Y in place of e and its descendants in X. When I added this info, and looked for n == 50, the chart indicates around 1. Public Key Protocol Key-management is the main problem with symmetric algorithms - Bob and Alice have to somehow agree on a key to use. A simple RSA implementation in Python. Below you can see the formula for torsion spring constant and an example of how the formula works. Fortunately,$n$is fairly small in this case, and factors easily upon trial division into$m = 83. 
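On the rsatool remark above (recovering p and q when the modulus and private exponent are known): the standard trick is to write e·d − 1 = 2^s·t with t odd and hunt for a nontrivial square root of 1 modulo n. A sketch reusing Alice's toy key follows; this is the textbook probabilistic method, not rsatool's actual code.

```python
# Factor n given (n, e, d): e*d - 1 is a multiple of the group order, so a
# random base usually exposes a nontrivial square root of 1 mod n.
import math
import random

def factor_from_private_key(n, e, d):
    t = e * d - 1
    while t % 2 == 0:             # strip factors of two: e*d - 1 = 2^s * t
        t //= 2
    while True:
        g = random.randrange(2, n - 1)
        shared = math.gcd(g, n)
        if shared > 1:            # lucky hit: g already shares a factor with n
            return shared, n // shared
        x = pow(g, t, n)
        while x != 1:             # square repeatedly; g^(e*d - 1) is 1 mod n
            y, x = x, pow(x, 2, n)
            if x == 1 and y != n - 1:
                p = math.gcd(y - 1, n)   # y is a nontrivial square root of 1
                return p, n // p

print(factor_from_private_key(299, 35, 83))   # (13, 23) or (23, 13)
```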
Remark 1: This attack shows that anyone with a knowledge of the public parameters a, b and g can form a multiple x' of x. Step 3: Profit With our two constant, we can begin encrypting and decrypting. Given an eigenvalue of a 3 by 3 matrix, find a basis of the eigenspace corresponding to that eigenvalue. EXAMPLE: If you have the equation: 2X 3 - 4X 2 - 22X + 24 = 0. PARTY1, using d and n can then decrypt the. Old Mutual offers a wide range of affordable and comprehensive insurance, investment and corporate solutions as well as financial advice. General Use the arrows to move around the screen. Prime Numbers Generator and Checker (a. Note: The percent function will also work if you enter the number first and then the percentage you want i. The function is used, among other things, to find the number of way "n" objects can be arranged. Additional Examples of Finding RSA Decryption Keys Fold Unfold. Today RSA Link implemented a new way of presenting documentation to help RSA NetWitness® Platform customers find the information they need quickly and easily. 3 should be used. First, a reminder of the RSA algorithm and what my program implements: Take two distinct, large primes p and q Ideally these have a similar byte-length Multiply p and q and store the result in n. [Back to Contents] Definitions of Trigonometric Functions Draw a unit circle with center O. It is relatively easy to calculate M^e mod n and Cd for all values of M < n. Important Note: The distance calculator on this page is provided for informational purposes only. So 7mg of tetracaine were given. In particular, for this d, the following holds for all m: m = (me)d mod n. d is kept as the private key exponent. The public key has modulus. This is a little tool I wrote a little while ago during a course that explained how RSA works. In public key cryptosystems there are two keys, a public one. Find the eigenvalues and eigenvectors of a given 2 by 2 matrix. To decrypt ciphertext message C, raise it to another power d modulo n. 24 , and multiply times 412. 718281828 and raised to the power of x it has its own derivative. Again, this right triangle calculator works when you fill in 2 fields in the triangle angles, or the triangle sides. We choose p= 11 and q= 13. The function is used, among other things, to find the number of way “n” objects can be arranged. Modulo Definition. Generate the private key. Open Command Prompt and compile & Run. First, a reminder of the RSA algorithm and what my program implements: Take two distinct, large primes p and q Ideally these have a similar byte-length Multiply p and q and store the result in n. The easiest, and most common, is the case that n is a positive integer. It is based on the difficulty of factoring the product of two large prime numbers. We let n = pq be the product of two primes and e be a number with gcd(e;ϕ(n))=1, so that the RSA public key is given by the pair (n;e). Then click Calculate. Not only that, but this is all available online. The Rehabilitation Services Administration (RSA), through its many programs and projects, provides an array of discretionary grants and other funding opportunities to serve individuals with disabilities and their families. Calculator Soup is a free online calculator. I wanted an estimate for n == 40, which your chart indicated would be around 2850secs. It is often useful to designate the infinite possibilities by what is called the Taylor Series. 
F 2 = rod force (lb, N) P 2 = pressure in the cylinder (opposite rod) (psi, bar) Hydraulic Force Calculator Imperial Units. This work is derived from Euclid's Elements starting at Book III Proposition 1. Now you have all the information needed to use the mL/hr to dose/hr calculator. I need someone who can help me in RSA Cryptography. Factorial There are n! ways of arranging n distinct objects into an ordered sequence. General Use the arrows to move around the screen. For example 5!= 5*4*3*2*1=120. Cohen's d = 2t /√ (df) r Y l = √(t 2 / (t 2 + df)) Note: d and r Y l are positive if the mean difference is in the predicted direction. Calculator Instructions for Statistics Using the TI-83, TI-83 plus, or TI-84 I. Expressed in formulas, the following must apply: e × d = 1 (mod φ(n)). In the RSA system, each user sets up his or her own public and private keys. Boneh and Durfee improved this attack to recover private exponents that are. A solution for x p can be obtained from by replacing N d by N a, N a by N d, e s,n by e s,p, and e s,p by e s,n. RSA algorithm is an asymmetric cryptography algorithm. Fill in the public and private exponents and the modulus (e, d, and n) as well as the cryptotext. Public Key Cryptography Overview Decryption attacks on RSA • RSA Problem: Given a positive integer n that is a product of two distinct large primes p and q, a. Since an amplifier can produce a limited amount of voltage (limited by the internal power supply's design), the power output is limited when driving a given load (i. Information. Enter a population size and a sample size to calculate the theoretical margin of error, plus or minus in percentage points, 95% of the time, on questions where. PARTY1, using d and n can then decrypt the. RSA Encryption - Tutorial. In RSA encryption system, public. Note, too, that O(log n) is exactly the same as O(log(nc)).
# Thread: Trouble on Integral Test 1. ## Trouble on Integral Test Hi guys! I was doing a problem for my homework and I was doing great until the end. I got the integral correct and proved that the series converged but I'm having trouble understanding something. In the attached picture: I just need some explanation from at the point the limit is set up from t -> infinity and why they then set up another limit from s -> 0 and why for that latter portion they somehow got an e8. Thanks for all your help guys! 2. ## Re: Trouble on Integral Test That is an s, not an 8. It is $e^s$. 3. ## Re: Trouble on Integral Test If \displaystyle \begin{align*} s = \frac{11}{t} \end{align*} then as \displaystyle \begin{align*} t \to \infty , \, s \to 0 \end{align*}... 4. ## Re: Trouble on Integral Test omg, i legitimately couldn't tell. thanks so much!
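For the record, the substitution the replies point at works out as follows (the $e^{11/t}$ form is inferred from the posted substitution $s = 11/t$, since the original integrand isn't reproduced in the thread):

$$s=\frac{11}{t}, \qquad t\to\infty \;\Longrightarrow\; s\to 0^{+}, \qquad \lim_{t\to\infty} e^{11/t} = \lim_{s\to 0^{+}} e^{s} = e^{0} = 1.$$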
Let $a$ and $b$ be the solutions of the quadratic equation $2x^2 - 8x + 7 = 0$. Find $\frac{1}{2a} + \frac{1}{2b}$.

Guest Feb 16, 2018

#1

Let $a$ and $b$ be the solutions of the quadratic equation $2x^2 - 8x + 7 = 0$. Find $\frac{1}{2a} + \frac{1}{2b}$.

$$\text{Let } \alpha \text{ and } \beta \text{ be the solutions of } 2x^2 - 8x + 7 = 0.\\ \frac{1}{2\alpha} + \frac{1}{2\beta}=\frac{\beta + \alpha}{2\alpha\beta}$$

Now you can do this the long way and work out what the roots are, but I expect you are supposed to know this:

$$\boxed{\text{If } ax^2+bx+c=0 \text{ and the roots are } \alpha \text{ and } \beta, \text{ then } \alpha + \beta = \frac{-b}{a} \text{ and } \alpha\beta = \frac{c}{a}.}$$

$$\alpha + \beta = \frac{-b}{a} = \frac{8}{2} \qquad \text{and} \qquad \alpha\beta = \frac{c}{a} = \frac{7}{2}$$

$$\frac{1}{2\alpha} + \frac{1}{2\beta} = \frac{\beta + \alpha}{2\alpha\beta} = \frac{8}{2}\div\frac{2\times 7}{2} = 4 \div 7 = \frac{4}{7}$$

Melody Feb 16, 2018
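A quick numerical cross-check of the $\frac{4}{7}$ above (plain Python; the variable names are just for this sketch):

```python
import math

# roots of 2x^2 - 8x + 7 = 0 via the quadratic formula
A, B, C = 2, -8, 7
disc = math.sqrt(B * B - 4 * A * C)
r1 = (-B + disc) / (2 * A)
r2 = (-B - disc) / (2 * A)

print(1 / (2 * r1) + 1 / (2 * r2))   # 0.571428..., i.e. 4/7
```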
# Detection of the Electric Charge of a Black Hole

By the "No Hair Theorem", three quantities "define" a black hole: Mass, Angular Momentum, and Charge. The first is easy enough to determine: look at the radius of the event horizon and you can use the Schwarzschild formula to compute the mass. Angular Momentum can be found using the cool little ergosphere Penrose "discovered". However, I don't know how to determine the charge of the black hole. How can an electromagnetic field escape the event horizon of a Reissner-Nordström black hole? Is there any experiment we could theoretically do to a black hole to determine its charge?

How are we to look at the radius of the event horizon? We cannot measure the Schwarzschild radius. – Newman Aug 17 '11 at 20:43

We could probably measure it by the effects of gravitational lensing, or just simply the gravitational pull. – Benjamin Horowitz Aug 17 '11 at 23:20

A charged black hole does produce an electric field. In fact, at great distances (much larger than the horizon), the field strength is $Q/(4\pi\epsilon_0 r^2)$, just like any other point charge. So measuring the charge is easy.

As for how the electric field gets out of the horizon, the best answer is that it doesn't: it was never in the horizon to begin with! A charged black hole formed out of charged matter. Before the black hole formed, the matter that would eventually form it had its own electric field lines. Even after the material collapses to form a black hole, the field lines are still there, a relic of the material that formed the black hole.

A long time ago, back when the American Journal of Physics had a question-and-answer section, someone posed the question of how the electric field gets out of a charged black hole. Matt McIrvin and I wrote an answer, which appeared in the journal. It pretty much says the same thing as the above, but a bit more formally and carefully.

Actually, I just noticed a mistake in what Matt and I wrote. We say that the Green function has support only on the past light cone. That's actually not true in curved spacetime: the Green function has support in the interior of the light cone as well. But fortunately that doesn't affect the main point, which is that there's no support outside the light cone.

"Even after the material collapses to form a black hole, the field lines are still there, a relic of the material that formed the black hole." So, the field lines end on the event horizon? Where is the charge? Inside or on the horizon? – Georg Jul 12 '11 at 9:21

Well, when we picture electric field lines, we picture them at a moment in time, so the answer to this question depends on a choice of a time coordinate (or at least of a particular foliation of spacetime into constant-time slices). If you use Schwarzschild coordinates for this, then yes, the field lines end on (or just barely outside of) the horizon. There's a good reason for this: in Schwarzschild coordinates, infalling matter appears to get "stuck" at the horizon, not crossing it until $t=\infty$. (I say "appears to" because this is just an artifact of a coordinate singularity.) – Ted Bunn Jul 12 '11 at 14:36

The Aharonov-Bohm effect (where an electrically charged particle is affected by the electric and magnetic fields even though it travels only in a region where these fields are zero, i.e. outside of a solenoid) demonstrates that from the point of view of quantum mechanics, the underlying fields are not the electromagnetic field, but instead the electromagnetic 4-potential.
This says that forces are not enough to define the physics; one must also use potentials (energies). So maybe the question should be not how the electric field gets out of the black hole, but instead how the electric / electromagnetic potential gets out.

The electromagnetic potential $A^\mu$ is not uniquely defined. There is a gauge freedom; one can always add the gradient of a function of space and time to get a different electromagnetic potential $A'^{\mu}$ that has the same electric and magnetic field: $A'^{\mu} = A^\mu + \partial^\mu \Gamma$. For a charged black hole, this means that there can be a non-zero magnetic potential $A^j$ (which still gives a zero magnetic field).

Anyway, the point is that the question of "how does the electric field get out of a black hole" has an analog in the quantum mechanics of flat space-time: "how does the Aharonov-Bohm effect work?" In both cases, there seems to be a global requirement that things be consistent even though it appears that there might logically be no relationship.

To detect an electric field or an electromagnetic potential we use a small test charge. In the two cases, we consider interactions between "restricted" electrons and "test" electrons. The restricted electrons are stuck inside the black hole, or are on paths inside a solenoid that generate essentially no electric or magnetic fields external to the solenoid. The test electrons detect the electric field outside the black hole, and the electromagnetic potential outside the solenoid.
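A complementary way to see that the charge is observable from outside, beyond the far-field Coulomb argument above, is that in the Reissner-Nordström solution the charge enters the metric itself, so it affects orbits and lensing just as the mass does. As a standard textbook sketch (SI factors written out; sign and unit conventions vary between references):

$$ds^2 = -f(r)\,c^2\,dt^2 + \frac{dr^2}{f(r)} + r^2\,d\Omega^2, \qquad f(r) = 1 - \frac{2GM}{c^2 r} + \frac{G\,Q^2}{4\pi\epsilon_0\,c^4\,r^2}.$$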
# Transformation of a Taylor series: “doubling” the derivative order

Suppose a function $f(z)$ has a convergent Taylor expansion: $$f(z)=\sum_{n=0}^{\infty} c_n \frac{z^n}{n!}$$ Are there general tools to compute $$g(z) = \sum_{n=0}^{\infty} c_{2n} \frac{z^n}{n!}=\sum_{n=0}^{\infty} \, f^{(2n)}(0)\frac{z^n}{n!} \text{ ?}$$ I came across this general problem in a more specific context explained in this question. One possible way is to do an integral transform $$\dfrac{1}{s} \int_0^{\infty} e^{-z/s} f(z) dz$$ to remove the factorials, then replace $s \to \sqrt{s}$ and do the inverse transform. However, this easily leads to very complicated integrals, and does not seem to work for my particular problem.
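If it helps, one general identity along these lines (a formal sketch, assuming $z > 0$ and that $f$ grows slowly enough for term-by-term integration of the series to be justified) trades the factorials by integrating against a Gaussian kernel rather than an exponential one:

$$\frac{1}{\sqrt{4\pi z}}\int_{-\infty}^{\infty} e^{-t^2/(4z)}\, t^{2n}\, dt = \frac{(2n)!}{n!}\, z^n \quad\Longrightarrow\quad g(z) = \frac{1}{\sqrt{4\pi z}}\int_{-\infty}^{\infty} e^{-t^2/(4z)}\, f(t)\, dt,$$

since the odd-order terms of $f$ integrate to zero against the even kernel. This is essentially the Gauss–Weierstrass (heat kernel) transform of $f$, and it may or may not converge for the particular $f$ at hand.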