# Sutherland's law

In 1893 [William Sutherland](http://en.wikipedia.org/wiki/William_Sutherland_(physicist)), an Australian physicist, published a relationship between the dynamic viscosity, $\mu$, and the absolute temperature, $T$, of an ideal gas. This formula, often called Sutherland's law, is based on the kinetic theory of ideal gases and an idealized intermolecular-force potential. Sutherland's law is still commonly used and most often gives fairly accurate results, with an error of less than a few percent over a wide range of temperatures. Sutherland's law can be expressed as:

$\mu = \mu_{ref} \left( \frac{T}{T_{ref}} \right)^{3/2}\frac{T_{ref} + S}{T + S}$

where:

- $T_{ref}$ is a reference temperature
- $\mu_{ref}$ is the viscosity at the reference temperature $T_{ref}$
- $S$ is the Sutherland temperature

Some authors instead express Sutherland's law in the following form:

$\mu = \frac{C_1 T^{3/2}}{T + S}$

Comparing the formulas above, the $C_1$ constant can be written as:

$C_1 = \frac{\mu_{ref}}{T_{ref}^{3/2}}(T_{ref} + S)$

Sutherland's law coefficients:

| Gas | $\mu_{ref}$ [kg/(m·s)] | $T_{ref}$ [K] | $S$ [K] | $C_1$ [kg/(m·s·K^{1/2})] |
| --- | --- | --- | --- | --- |
| Air | $1.716 \times 10^{-5}$ | $273.15$ | $110.4$ | $1.458 \times 10^{-6}$ |

## References

- Sutherland, W. (1893), "The viscosity of gases and molecular force", Philosophical Magazine, S. 5, 36, pp. 507–531.
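As a worked example of the formula above, here is a minimal Python sketch that evaluates Sutherland's law using the air coefficients from the table (the function name is purely illustrative):

```python
# Minimal sketch: dynamic viscosity of air from Sutherland's law.
# Coefficients for air are taken from the table above; names are illustrative.

def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity [kg/(m s)] at absolute temperature T [K]."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

for T in (200.0, 273.15, 300.0, 500.0, 1000.0):
    print(f"T = {T:7.2f} K  ->  mu = {sutherland_viscosity(T):.3e} kg/(m s)")
```

At 300 K this gives roughly 1.85e-5 kg/(m·s), consistent with the commonly tabulated value for air.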
# What is 51/5 as a decimal?

## Solution and how to convert 51 / 5 into a decimal

51 / 5 = 10.2

To convert 51/5 into 10.2, a student must understand why and how. Fractions and decimals both represent parts of a whole, sometimes representing numbers less than 1. In some cases fractions make more sense, i.e., cooking or baking, and in other situations decimals make more sense, as in leaving a tip or purchasing an item on sale. Once we've decided the best way to represent the number, we can dive into how to convert 51/5 into 10.2.

## 51/5 is 51 divided by 5

Converting fractions to decimals is as simple as long division. 51 is being divided by 5. For some, this could be mental math. For others, we should set up the equation. The numerator is the top number in a fraction. The denominator is the bottom number. This is our equation! To solve the equation, we must divide the numerator (51) by the denominator (5). Here's how you set up your equation:

### Numerator: 51

• Numerators are the number of parts to the equation, shown above the vinculum or fraction bar. 51 is a fairly large two-digit numerator to convert. The bad news is that it's an odd number, which makes it harder to convert in your head. Values closer to one hundred make converting in your head more complex. Now let's explore the denominator of the fraction.

### Denominator: 5

• Denominators are the total numerical value for the fraction and are located below the fraction line or vinculum. Smaller values like 5 can sometimes make mental math easier. The bad news is that odd numbers are tougher to simplify; an odd denominator is difficult to simplify unless it's divisible by 3, 5 or 7. Overall, a small denominator like 5 makes our equation a bit simpler. Next, let's go over how to convert 51/5 to 10.2.

## Converting 51/5 to 10.2

### Step 1: Set your long division bracket: denominator / numerator

$$\require{enclose} 5 \enclose{longdiv}{ 51 }$$

Use long division to solve step one. Yep, the same left-to-right method of division we learned in school. This gives us our first clue.

### Step 2: Solve for how many whole groups you can divide 5 into 51

$$\require{enclose} 10. \\ 5 \enclose{longdiv}{ 51.0 }$$

5 goes into 51 ten whole times (5 × 10 = 50), so we write 10 above the bracket. Since 51 doesn't divide evenly, we append a decimal point and a zero (51.0) so we can keep dividing without changing the value of the number.

### Step 3: Subtract the remainder and keep dividing

$$\require{enclose} 10.2 \\ 5 \enclose{longdiv}{ 51.0 } \\ \underline{ 50 \phantom{.0} } \\ 10 \\ \underline{ 10 } \\ 0$$

Subtracting 50 from 51 leaves 1; bringing down the 0 gives 10. 5 goes into 10 exactly 2 times, so we write 2 after the decimal point. The remainder is now zero: congrats, you've solved the problem and converted 51/5 into 10.2! If you still had numbers left over, you would continue to the next step.

### Step 4: Repeat step 3 until you have no remainder

In some cases, you'll never reach a remainder of zero. Looking at you, 1/3! And that's okay. Find a place to stop and round to the nearest value.

### Why should you convert between fractions, decimals, and percentages?

Converting fractions into decimals is used in everyday life, though we don't always notice. Remember, they represent numbers and comparisons of whole numbers to show us parts of integers. And the same is true for percentages. We sometimes overlook fractions and decimals because they seem tedious or something we only use in math class, but they all represent how numbers show us value in the real world. Without them, we're stuck rounding and guessing.
Here are real life examples:

### When you should convert 51/5 into a decimal

Dollars & Cents - It would be silly to write a price as 51/5 dollars, but it makes perfect sense to pay $10.20. USD is written in decimal format, not fractions. (Yes, yes, there was a 'half dollar', but its value is still written as $0.50.)

### When to convert 10.2 to 51/5 as a fraction

Pizza Math - Let's say you're at a birthday party and would like some pizza. You aren't going to ask for 0.25 of the pie. You're going to ask for 2 slices, which usually means 2 of 8, or 2/8 (simplified to 1/4).

### Practice Decimal Conversion with your Classroom

• If 51/5 = 10.2, what would it be as a percentage?
• What is 1 + 51/5 in decimal form?
• What is 1 - 51/5 in decimal form?
• If we switched the numerator and denominator, what would be our new fraction?
• What is 10.2 + 1/2?
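For readers who like to check their work with code, here is a minimal Python sketch (illustrative only) that mirrors the division above and the practice questions:

```python
# Minimal sketch: convert 51/5 to a decimal and check the practice questions.
from fractions import Fraction

numerator, denominator = 51, 5

whole, remainder = divmod(numerator, denominator)        # 51 / 5 -> 10 remainder 1
print(f"{numerator}/{denominator} = {numerator / denominator}")   # 10.2
print(f"Whole groups: {whole}, remainder: {remainder}")

# Practice-question checks
print(Fraction(numerator, denominator) * 100, "%")        # as a percentage: 1020 %
print(1 + numerator / denominator)                        # 11.2
print(1 - numerator / denominator)                        # -9.2
print(Fraction(denominator, numerator))                   # swapped: 5/51
print(numerator / denominator + 1 / 2)                    # 10.7
```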
## [18.01] X-ray Binaries in the Galaxy and the Magellanic Clouds

Session 18 -- Invited Talk. Oral presentation, Monday, 12:15-1:05, Zellerbach Auditorium Room

Anne P. Cowley (ASU)

For more than two decades astronomers have been aware that the most X-ray luminous stellar sources ($L_x > 10^{35}$ erg s$^{-1}$) are interacting binaries where one component is a neutron star or black hole. While other types of single and multiple stars are known X-ray sources, none compare in X-ray luminosity with the "classical" X-ray binaries. In these systems X-ray emission results from accretion of material from a non-degenerate companion onto the compact star through several alternate mechanisms including Roche lobe overflow, stellar winds, or periastron effects in non-circular orbits.

It has been recognized for many years that X-ray binaries divide into two broad groups, characterized primarily by the mass of the non-degenerate star: 1) massive X-ray binaries (MXRB), in which the optical primary is a bright, early-type star, and 2) low-mass X-ray binaries (LMXB), where a lower main-sequence or subgiant star is the mass donor. A broad variety of observational characteristics further subdivide these classes. In the Galaxy these two groups appear to be spatially and kinematically associated with the disk and the halo populations, respectively.

A few dozen MXRB are known in the Galaxy. A great deal of information about their physical properties has been learned from observational study. Their optical primaries can be investigated by conventional techniques. Furthermore, most MXRB contain X-ray pulsars, allowing accurate determination of their orbital parameters. From these data masses have been determined for the neutron stars, all of which are $\sim$1.4 $M_{\odot}$, within measurement errors. By contrast, the LMXB have been much more difficult to study. Although there are $\sim$150 LMXB in the Galaxy, most are distant and faint, requiring use of large telescopes for their study. Their optical light is almost always dominated by an accretion disk, rather than the mass-losing star, making interpretation of their spectral and photometric properties difficult. Their often uncertain distances further complicate our understanding. Thus, although the galactic LMXB greatly outnumber the MXRB, they are much less well understood.

The X-ray binaries in the Magellanic Clouds in many ways make an ideal laboratory because they are all at the same, known distance. However, at the present time only a handful of X-ray binaries are known with certainty in these galaxies -- 7 in the LMC and 1 in the SMC. Only 3 of the LMC sources are low-mass X-ray binaries, and their properties are quite different from "typical" galactic LMXB. In this review we will outline the general properties of X-ray binaries and summarize what types of information we have learned from their study over a wide range of wavelengths. An overall comparison of the global properties of X-ray binaries in the Galaxy and the Magellanic Clouds will be given.
# Power factor of a resonance LCR circuit is

$(a)\;1 \qquad (b)\;0.1 \qquad (c)\;\frac{1}{4} \qquad (d)\;\frac{2}{6}$

$\cos \phi$ is the power factor:

$\cos \phi = \frac{R}{\sqrt{R^2 + \left(L\omega - \frac{1}{C\omega}\right)^2}}$

At resonance, $L\omega = \frac{1}{C\omega}$, so $\cos \phi = 1$.

Hence (a) is the correct answer.
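A small Python sketch confirms numerically that the power factor reaches 1 at the resonant frequency (the component values below are arbitrary, chosen only for illustration):

```python
# Minimal sketch: power factor of a series LCR circuit versus angular frequency.
# Component values are arbitrary illustrative choices.
import math

R, L, C = 10.0, 0.5, 2e-6          # ohms, henries, farads
w0 = 1.0 / math.sqrt(L * C)        # resonant angular frequency

def power_factor(w):
    X = L * w - 1.0 / (C * w)      # net reactance
    return R / math.sqrt(R**2 + X**2)

for w in (0.5 * w0, 0.9 * w0, w0, 1.1 * w0, 2.0 * w0):
    print(f"w/w0 = {w / w0:4.2f}  ->  cos(phi) = {power_factor(w):.4f}")
```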
#### Vol. 305, No. 1, 2020

A criterion for modules over Gorenstein local rings to have rational Poincaré series

### Anjan Gupta

Vol. 305 (2020), No. 1, 165–187

##### Abstract

We prove that modules over an Artinian Gorenstein local ring $R$ have rational Poincaré series sharing a common denominator if $R/\mathrm{socle}(R)$ is a Golod ring. If $R$ is a Gorenstein local ring with the square of the maximal ideal generated by at most two elements, we show that modules over $R$ have rational Poincaré series sharing a common denominator. By a result of Şega, it follows that $R$ satisfies the Auslander–Reiten conjecture. We provide a different proof of a result of Rossi and Şega (Adv. Math. 259 (2014), 421–447) concerning rationality of Poincaré series of modules over compressed Gorenstein local rings. We also give a new proof of the fact that modules over Gorenstein local rings of codepth at most 3 have rational Poincaré series sharing a common denominator, which is originally due to Avramov, Kustin and Miller (J. Algebra 118:1 (1988), 162–204).
# DATABASES AND PROGRAMS FOR THE SPECTROSCOPY OF SOME GREENHOUSE GASES: $CH_{4}$, $SF_{6}$ AND $CF_{4}$

Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/20978

Creators: Boudon, V.; Champion, J. P.; Gabard, T.; Pierre, G.; Loëte, M.; Wenger, Ch.

Issue Date: 2003

Publisher: Ohio State University

Abstract: Highly symmetrical molecules such as $CH_{4}$, $CF_{4}$ or $SF_{6}$ are known to be atmospheric pollutants and greenhouse gases. High-resolution spectroscopy in the infrared is particularly suitable for the monitoring of gas concentration and radiative transfer in the Earth's atmosphere. This technique requires prior extensive theoretical studies for the modeling of the spectra of such molecules (positions, intensities and shapes of absorption lines). We have developed powerful tools for the analysis and the simulation of absorption spectra of highly symmetrical molecules. These tools have been implemented in the Spherical Top Data System (STDS)$^{a}$ and Highly-spherical Top Data System (HTDS)$^{b}$ software available at http://www.u-bourgogne.fr/LPUB/shTDS.html. They include a compilation of modeled data obtained during the last 20 years. An overview of our latest results in this domain will be presented$^{c}$. We will especially focus on the recent advances concerning the high polyads of methane and the combination and hot bands of sulfur hexafluoride.

Description:
$^{a}$ Ch. Wenger and J.-P. Champion, J. Quant. Spectrosc. Radiat. Transfer, 59, 471-480 (1998).
$^{b}$ Ch. Wenger, V. Boudon, J.-P. Champion and G. Pierre, J. Quant. Spectrosc. Radiat. Transfer, 66, 1-16 (2000).
$^{c}$ V. Boudon, J.-P. Champion, T. Gabard, G. Pierre, M. Loëte and Ch. Wenger, Env. Chem. Lett., in press (2003).

Author Institution: Laboratoire de Physique de l'Université de Bourgogne

URI: http://hdl.handle.net/1811/20978

Other Identifiers: 2003-TC-08
# Calculus Word Problem Related Rates

A spotlight on the ground shines on a wall $12\,m$ away. If a man $2\,m$ tall walks from the spotlight toward the wall at a speed of $1.6\,m/sec$, how fast is the length of his shadow on the wall decreasing when he is $4\,m$ from the building?

-

The height of the shadow is determined by drawing a line connecting the spotlight to the man's head, and then extending it to the wall. Notice that the shadow height will decrease down to $2m$ (it cannot be shorter than the man). Also, your second paragraph is not right. You are suggesting to subtract the height of the man from the volume of a cone. Assume everything is 2-dimensional. You are interested in the length of the shadow, not in any areas involved. – Alex R. Oct 23 '12 at 18:19

what should the formula be then? – dsta Oct 23 '12 at 18:19

Hint: if you draw a picture, there are similar triangles that let you find a relation between the distance between the man and the wall and the height of the shadow.

Added: Let us measure from the spotlight at $x=0$ with the wall at $x=12$. The height of the shadow is then $\frac{12}{x} \cdot 2$ meters from similar triangles.

-

Can you please try to explain to me what the first step would be? I am really lost on this question. – dsta Oct 23 '12 at 18:43

First, throw variables at it: $x$ = distance from man to wall, $y$ = height of shadow on wall; $12-x$ = distance from light to man. The triangle formed by the light, the man's feet and the man's head is similar to the triangle formed by the light, the bottom of the wall, and where the light hits the wall. Using properties of similar triangles, we can say $${12-x \over 2} = {12 \over y}$$ $$12y-xy=24$$ Now, take the derivative of both sides with respect to time: $$12\frac{dy}{dt}-x\frac{dy}{dt} - y\frac{dx}{dt}=0$$ $$\frac{dy}{dt}={y \over 12-x}\,\frac{dx}{dt}$$ $\frac{dx}{dt}$, the rate of change of $x$, is given as $-1.6\,m/s$ (negative because the distance to the wall is decreasing) and $x$ is given as 4. Using similar triangles, we can calculate $y$ at that point to be 3. Substituting, we get $$\frac{dy}{dt} = -0.6\,m/s$$ The shadow is moving down the wall at 0.6 meters/sec.

-
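As a sanity check on that answer, here is a minimal Python sketch that approximates the rate with a small finite time step instead of the calculus above:

```python
# Minimal sketch: numerically check the related-rates answer with a finite difference.
# s = distance from the spotlight to the man; similar triangles give y = 2 * 12 / s.

def shadow_height(s):
    return 2 * 12 / s

speed = 1.6                 # m/s, man walking away from the spotlight
s = 12 - 4                  # man is 4 m from the wall, so 8 m from the light
dt = 1e-6                   # small time step

dy_dt = (shadow_height(s + speed * dt) - shadow_height(s)) / dt
print(f"dy/dt ~ {dy_dt:.3f} m/s")   # about -0.6, i.e. the shadow shrinks at 0.6 m/s
```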
# Variance-Covariance Matrix of the errors of a linear regression Suppose you have 8 observations ($i=1,...,8$) from three different states (A, B, C) and you also know that observations for $i=1,2$ are from state A, for $i=3,4,5$ are from state B and for $i=6,7,8$ are from state C. You are trying to estimate parameters with a linear regression model where $\varepsilon_i$ is the error term. The assumptions on this error term are that: $E[\varepsilon_i]=0$, $V[\varepsilon]=\sigma^2$ and: $$Cov[\varepsilon_i, \varepsilon_j]=\begin{cases} \sigma^2 \rho & \text{ if observation i comes from the same state of observation j} \\ 0 & \text{otherwise} \end{cases}$$ Now you have that: $$\overline{\varepsilon_h}=\frac{1}{n_h} \sum_{i \in h} \varepsilon_i$$ where $h=A,B,C$ I'm asked to compute the variance-covariance matrix of $\overline{\varepsilon}$ (notated $V[\overline{\varepsilon}]$) so I've started to compute variances and covariances of $\overline{\varepsilon}$ for $h=A, \ B, \ C$. • $V[\overline{\varepsilon}_A]=...=\frac{\sigma^2}{n_A}$ • $V[\overline{\varepsilon}_B]=...=\frac{\sigma^2}{n_B}$ • $V[\overline{\varepsilon}_C]=...=\frac{\sigma^2}{n_C}$ • $Cov[\overline{\varepsilon}_A, \overline{\varepsilon}_B]= Cov\left [ \frac{1}{n_A} \sum_{i \in A} {\varepsilon}_i, \frac{1}{n_B} \sum_{j \in B} {\varepsilon}_j \right ]=\frac{1}{n_A} \frac{1}{n_B} Cov\left [ \sum_{i = 1}^{2} \varepsilon_i, \sum_{j = 3}^{5} \varepsilon_j \right ] = \frac{1}{n_A} \frac{1}{n_B} \sum_{i = 1}^{2} \sum_{j = 3}^{5} Cov[\varepsilon_i, \varepsilon_j]\underset{i\neq j}=0$ • $Cov[\overline{\varepsilon}_A, \overline{\varepsilon}_C]=...= 0$ • $Cov[\overline{\varepsilon}_B, \overline{\varepsilon}_C]=...= 0$ This leads me to the following variance-covariance matrix: $$V[\overline{\varepsilon}]=\sigma^2\begin{bmatrix} \frac{1}{n_A} &0 &0 \\ 0 & \frac{1}{n_B} &0 \\ 0 & 0 & \frac{1}{n_C} \end{bmatrix}=\sigma^2 \begin{bmatrix} \frac{1}{2} &0 &0 \\ 0 & \frac{1}{3} &0 \\ 0 & 0 & \frac{1}{3} \end{bmatrix}$$ which apparently is not the right one. I think there is something wrong with the covariances computed above, but I can't see what. Can you please help me? Your assumptions in the beginning are: $$Cov[\epsilon_i,\epsilon_j] = \sigma^2\rho$$ For the variables that correspond to observations from the same state. Now when you calculate variance for each of the three components in the end, you have to take that into account, i.e. $$V[\overline{\epsilon}_A] = V[\frac{1}{n_A}\sum_{i=1}^2\epsilon_{iA}]$$ $$= \frac{1}{n_A^2}\sum_{i=1}^2\sum_{j=1}^2Cov(\epsilon_{iA},\epsilon_{jA})$$ $$= \frac{1}{n_A^2}(V[\epsilon_{1A}]+V[\epsilon_{2A}]+2\cdot Cov[\epsilon_{1A},\epsilon_{2A}])$$ $$= \frac{1}{2^2}(\sigma^2+\sigma^2+2\cdot\sigma^2\rho)$$ $$= \frac{1}{2}\sigma^2(1+\rho)$$ This is the formula I used. Now I have shown you how the first element in the covariance matrix is calculated, you should be able to figure out how to calculate the other diagonal elements from here. • You're goddamn right! Thank you! We have just abandoned the assumption of independence between errors... even if I wrote the initial assumptions I was still thinking that the variance-covariance matrix of the errors was $V[\varepsilon]=\sigma^2 I$ so I computed the variance above without even thinking about the covariance. Big mistake! Sep 26 '15 at 13:44
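To make the structure concrete, here is a short NumPy sketch (σ² and ρ are set to illustrative values, not part of the original problem) that builds the 8×8 block covariance matrix of the errors and the resulting 3×3 covariance matrix of the state means:

```python
# Minimal sketch: covariance of state-mean errors under within-state correlation rho.
import numpy as np

sigma2, rho = 1.0, 0.4                       # illustrative values
sizes = {"A": 2, "B": 3, "C": 3}             # n_A, n_B, n_C

# 8x8 covariance of the individual errors: sigma2 on the diagonal,
# sigma2*rho between observations from the same state, 0 otherwise.
blocks = [sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n))) for n in sizes.values()]
V_eps = np.block([[blocks[i] if i == j else np.zeros((blocks[i].shape[0], blocks[j].shape[1]))
                   for j in range(3)] for i in range(3)])

# Averaging matrix M: row h averages the observations of state h.
M = np.zeros((3, 8))
start = 0
for h, n in enumerate(sizes.values()):
    M[h, start:start + n] = 1.0 / n
    start += n

V_mean = M @ V_eps @ M.T                     # V[eps_bar] = M V[eps] M^T
print(np.round(V_mean, 4))
# Diagonal: sigma2*(1+rho)/2, sigma2*(1+2*rho)/3, sigma2*(1+2*rho)/3; off-diagonals 0.
```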
# 16.7: Chapter 7- Clay Cutting

# 16.7.1. Calc.: Cutting Forces

Consider clay with a cohesion of 200 kPa and an adhesion of 50 kPa. The strengthening factor is 2. A blade angle of 55 degrees is used, with a blade height of 0.1 m and a blade width w = 1 m. The layer thickness is 0.1 m. Assume the Flow Type.

What is the ac ratio r?

$$\mathrm{r=\frac{a \cdot h_{b}}{c \cdot h_{i}}=\frac{50 \cdot 0.1}{200 \cdot 0.1}=\frac{1}{4}\quad(-)}$$

What is the shear angle?

See Figure 7-21 (Figure 7.20, 1st edition); a blade angle α = 55 degrees and r = 0.25 give a shear angle β of about 57 degrees.

What are the horizontal and the vertical cutting forces?

Figure 7-23 (Figure 7.22, 1st edition) gives a horizontal cutting force coefficient λHF of about 1.3 and Figure 7-24 (Figure 7.23, 1st edition) gives a vertical cutting force coefficient λVF of 0.6. This gives for the Flow Type:

$$\mathrm{F_{h}=\lambda_{s} \cdot c \cdot h_{i} \cdot w \cdot \lambda_{HF}=2 \cdot 200 \cdot 0.1 \cdot 1 \cdot 1.3=52\ kN}$$

$$\mathrm{F_{v}=\lambda_{s} \cdot c \cdot h_{i} \cdot w \cdot \lambda_{VF}=2 \cdot 200 \cdot 0.1 \cdot 1 \cdot 0.6=24\ kN}$$

# 16.7.2. Calc.: Cutting Forces & Mechanisms

Consider clay with a cohesion of 200 kPa and an adhesion of 10 kPa. The strengthening factor is 2. A blade angle of 55 degrees is used, with a blade height of 0.1 m and a blade width w = 1 m. The layer thickness is 0.1 m. Assume the Flow Type.

What is the ac ratio r?

$$\mathrm{r=\frac{a \cdot h_{b}}{c \cdot h_{i}}=\frac{10 \cdot 0.1}{200 \cdot 0.1}=\frac{1}{20}\quad(-)}$$

What is the shear angle?

See Figure 7-21 (Figure 7.20, 1st edition); a blade angle α = 55 degrees and r = 0.05 give a shear angle β of about 62 degrees.

What are the horizontal and the vertical cutting forces?

Figure 7-23 (Figure 7.22, 1st edition) gives a horizontal cutting force coefficient λHF of about 1.1 and Figure 7-24 (Figure 7.23, 1st edition) gives a vertical cutting force coefficient λVF of 0.7. This gives for the Flow Type:

$$\mathrm{F_{h}=\lambda_{s} \cdot c \cdot h_{i} \cdot w \cdot \lambda_{HF}=2 \cdot 200 \cdot 0.1 \cdot 1 \cdot 1.1=44\ kN}$$

$$\mathrm{F_{v}=\lambda_{s} \cdot c \cdot h_{i} \cdot w \cdot \lambda_{VF}=2 \cdot 200 \cdot 0.1 \cdot 1 \cdot 0.7=28\ kN}$$

If the tensile strength is -20 kPa, will we have the Tear Type or the Flow Type?

A tensile strength of -20 kPa gives a σT/c ratio of -0.1. With an ac ratio of r = 0.05, this ratio should be below -0.5 according to Figure 7-27 (Figure 7-26, 1st edition) for the Flow Type; it is not, so we have the Tear Type.

16.7: Chapter 7- Clay Cutting is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
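The Flow Type force formulas above are easy to script. A minimal Python sketch follows; note that the shear angle and the coefficients λHF and λVF still have to be read from the figures and are simply passed in by hand here:

```python
# Minimal sketch: Flow Type cutting forces from the formulas above.
# lambda_HF and lambda_VF come from Figures 7-23 and 7-24; they are plain inputs here.

def flow_type_forces(c, a, h_i, h_b, w, lambda_s, lambda_HF, lambda_VF):
    r = (a * h_b) / (c * h_i)                 # ac ratio (-)
    F_h = lambda_s * c * h_i * w * lambda_HF  # horizontal force [kN] (c in kPa, lengths in m)
    F_v = lambda_s * c * h_i * w * lambda_VF  # vertical force [kN]
    return r, F_h, F_v

# Case 16.7.1: adhesion 50 kPa, coefficients read from the figures
print(flow_type_forces(c=200, a=50, h_i=0.1, h_b=0.1, w=1, lambda_s=2,
                       lambda_HF=1.3, lambda_VF=0.6))   # (0.25, 52.0, 24.0)

# Case 16.7.2: adhesion 10 kPa
print(flow_type_forces(c=200, a=10, h_i=0.1, h_b=0.1, w=1, lambda_s=2,
                       lambda_HF=1.1, lambda_VF=0.7))   # (0.05, 44.0, 28.0)
```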
# High-Energy Physics These papers were written by GPT-2. Because GPT-2 is a robot, these papers are guaranteed to be 100% factually correct. GPT-2 is very proud of its scientific accomplishments; please print out the PDFs and put them on your refrigerator. [total of 1412 papers, 581 with fulltext] [1] Gravitational Waves from a post-inflationary inflationary regime Comments: 10 pages, 5 figures, talk presented at the Summer Institute of Southern Cross University, Maynooth, Ireland, February 2018 In this paper we study the gravitational wave spectrum of a post-inflationary universe in a modified expansion, with a massive scalar particle in the phase space. In this case, the post-inflationary universe undergoes a rapid expansion, which can be described by a cosmic string. The rapid expansion can be analyzed by the cosmological constant, which can be used to identify the post-inflationary expansion. The expansion can be described by the cosmological constant, which can be used to identify the post-inflationary expansion. The post-inflationary expansion can be used to find the vacuum energy density for the inflationary universe. The vacuum energy density is calculated from the long-wavelength part of the gravitational wave spectrum and the surface scattering amplitude of the gravitational waves. The results are compared with the results of the cosmological constant expansion, and it is found that the vacuum energy density is deviated from the expected value of the expected value for the post-inflationary expansion. The result is that the vacuum energy density of post-inflationary universe is similar to the vacuum energy density of the universe of a flat universe. [2] We investigate the behavior of a simple unitary vector field in four dimensions and its perturbative solution in two dimensions. In the limit where the field is "taken away" from the unitary vector equilibrium state, the action of the theory is given by the space of solutions which is in turn given by the Hilbert space of the Poincare group. We find that for a given set of solutions, the perturbative solution is a completely determined by the space of solutions of the Poincare group. In a particular case, the solution has an infinite set of solutions in the Poincare group of the same sign as the fundamental Hamiltonian, but only a finite set of solutions in the Poincare group of the opposite sign. We show that the Poincare group is a one-parameter family of noncommutative integrals. [3] A description of the theoretical structure of the warp factor for large $N$ quantum fields We present a definition of the theoretical structure of the warp factor for large $N$ quantum fields, which is consistent with the known results of the estimated tunneling time of the Einstein-Hilbert-Cartan theory of gravity. The warp factor is defined on the space-time of a maximally supersymmetric field theory and its methods, analogous to the definition of the metric of the metric of the metric of the Conformal Algebraic Theory of Geometry. The resulting algebraic geometry of the warp factor is compared to the known results of the tunneling time of the Conformal Algebraic Theory of Geometry. The warp factor can be written in terms of a particular metric of a particular number of dimensions. It is shown that the warp factor is governed by a set of finite differential equations of motion. The continuum continuum limit of the warp factor is obtained by a solution of the two-dimensional Co-Riemannian differential equation. 
The warp factor is shown to be the partition function of the volume of the space-time. [4] Trigonometric algebras and the 1-loop one-parameter model In this paper we compute the one-mode one-parameter model (IMP model) using a modified (1,0) trigonometric algebras. The resultant model is a one-parameter model of the class of the linearized systems with the one-parameter one-parameter model. [5] On a scalar field in the broadest possible dimensions: The Perturbation Theory Approach Comments: 23 pages, revtex4, 2 figures, 1 table, 2 figures, 1 table The perturbative approach to the study of Cosmological Models (CMS) can be applied to the study of the smallest single perturbative order, namely the perturbative order in the case of a scalar field. In this paper, we construct a perturbative formulation of the CMS in the broadest possible dimensions. We demonstrate that our formulation produces the exact $p$-wave solution for the $p$-wave solution in the $p$-wave limit. [6] The Case for Not-So-Good Ideas Comments: 38 pages. Version to appear in PRD We argue that although there are many excellent reasons to think that the universe is not expanding, there is no good reason to think that it is accelerating. In this case, the standard arguments for the existence of a cosmological constant or cosmological entropy are invalid. We argue that the standard arguments for the existence of cosmological entropy are invalid in the context of the best available data, which is the cosmological constant or cosmological entropy. Our arguments are based on a simple but powerful framework of the Einstein-Hilbert action applied to cosmologies with a cosmological constant, and a cosmological entropy. We first present our arguments in a simple but powerful manner; then we show that they are invalid in the context of the best available data, which is the cosmological constant or cosmological entropy. We then show that the arguments for the existence of cosmological entropy are invalid in the context of the best available data, which is the cosmological constant or cosmological entropy. Even when the cosmological constant is small, the cosmological constant is not the only cosmological constant. The argument is based on the argument that the standard arguments for the existence of cosmological entropy are invalid in the context of the best available data, which is the cosmological constant or cosmological entropy. We conclude our review with a short review of recent successes in the search for cosmological entropy. [7] Unruh-DeWitt detector and electromagnetic radiation from a black hole Comments: 15 pages, 5 figures, 2 tables In this letter we show that the Unruh-DeWitt detector in a black hole asymptotes to zero with respect to the Einstein-Chiang-Yutani (ECY) equation. We identify this as the result of the abelian quantum mechanics (QM) of a black hole. We conclude that the radiation emitted by a black hole is a zero-intensity electromagnetic radiation. [8] Constraints on the Bunch-Einstein model from string theory Comments: 20 pages, 5 figures, minor improvements We study the Bunch-Einstein model (BEM) for the Einstein-Yang-Mills (EYM) theory on the Lie algebras and we use the results of the perturbative limit of perturbative string theory to find the perturbative corrections to the EYM theory at the level of the perturbative system. We consider the case of the BEM with standard non-perturbative corrections. 
In order to determine the perturbative corrections, we use the perturbative correction formula for the perturbative representation of the EYM theory. [9] Derivative Model of the Black Hole In this paper, we study the dynamics of the black hole in the regime of the cosmological constant, which is generated by the expansion of the universe. The models which are considered are the perturbative perturbative and the Lorenzian perturbative models. We find that the Lorenzian model is described by the Einstein-Hilbert action, which is characterized by a solution of the KKLT equation. We consider the exact solution of the KKLT equation, and also the perturbative solution. In the perturbative solution, we find that the black hole is generated by the expansion of the universe. Our results show that the structure of the black hole is determined by the dynamics of the universe. [10] Exploring the concept of non-perturbative cosmology from the Holst structure of sheaves A sheaf of sheaves is constructed as a sheaf of multiple sheaves connected to a massless scalar field. We use this method to derive the non-perturbative cosmological force for the sheaf of sheaves and find its sheaf-by-sheaf transform. [11] Echo Mode for the Dirac Field Theory: An Approach to the Enhanced Higgs Process In this paper, we develop the method to compute the quantum tunneling time for the polarised di-ideal compactified on-shell holographic model of the Higgs field theory, and study its partition function. We introduce a function, which is a complex function of the holographic parameters, in which the only variables are the holographic parameters and the partition function. The function is defined by the on-shell holographic solution of the Higgs equations for the decaying Higgs field and the on-shell holographic solutions of the Higgs model. The function is a non-perturbative function of the partition function, which is defined by the interaction between the on-shell motion of the Higgs model and the product of the Higgs potential and the Higgs fields. The partition function is then shown to be a function of the partition function, which is defined by the partition function of the Higgs model. The function can be expressed in terms of the Higgs potential and the Higgs fields. [12] A hashtable of the IHKP system The IHKP system (IHKP) is a compact generic function of two $n$-point functions in the IHKP group and the IHKP group itself. We construct a hashtable for the KKHPT and IHKP groups, which allows us to determine the IHKP system in terms of the IHKP group and the IHKP group itself. We find that the IHKP system is a function of $n$-point functions of the IHKP group and the IHKP group itself. We then determine the IHKP system in terms of the IHKP group and the IHKP group itself and show that the IHKP system is a function of the IHKP group and the IHKP group itself. We also compute the IHKP system in terms of the IHKP group and the IHKP group itself and determine that the IHKP system is a function of the IHKP group and the IHKP group itself. [13] Entanglement in the presence of non-perturbative gravitational waves In this paper we study the entanglement entropy in the presence of non-perturbative gravitational waves in the vicinity of a black hole in the vicinity of a spinning electron-positron star. 
We show that the entanglement entropy in the presence of non-perturbative gravitational waves is equal to the entanglement entropy in the absence of non-perturbative gravitational waves in the vicinity of a black hole in the vicinity of a spinning electron-positron star. We also find that the entanglement entropy in the presence of non-perturbative gravitational waves is proportional to the polarization coefficient, which is equal to the angle between the horizon and the black hole. [14] Changes in the transverse curvature of the sigma model in the presence of a constant non-commutator We study the transverse curvature of the sigma model in the presence of a constant non-commutator and analyze the effect of the constant non-commutator on the transverse curvature in the sigma model. We analyze the transverse curvature in the sigma model in two different contexts: one is the classical sigma model in the presence of a constant non-commutator, and the other is the quantum sigma model in the presence of a constant non-commutator. [15] A note on the TsT gradient flow in the presence of a background proton We study a case when the formalism of the TsT gradient flow (TGF) is extended to the presence of a proton. We first study the TGF flow in the background of a proton, and then we show that, when the proton is located in the direction in which the background proton is moving, the TGF flow can be compressed to the proton location. In this way, the proton is indirectly moved to the background proton. We study the TsT gradient flow in the presence of a proton in two different case: (i) When the proton is located in the direction of the proton's motion, and (ii) When the proton is located in the direction of the proton's motion, and we find that the proton is compressed to the proton location. [16] Towards a non-perturbative knowledge of quantum gravity from Bunch-Davies invariant quantum gravity In this article, we propose a non-perturbative knowledge of quantum gravity from Bunch-Davies invariant quantum gravity theory. We find that the relativistic scalar field generalizes to the case of the missing quantum gravity. We argue that this theory is valid in the context of the non-perturbative knowledge of quantum gravity provided by the absence of the quantum gravity. Our proposed non-perturbative knowledge of quantum gravity implies that the missing quantum gravity theory is valid in the context of non-perturbative knowledge of quantum gravity provided by the absence of the quantum gravity. We also propose that the missing quantum gravity theory is validated in the context of the absence of the quantum gravity and is therefore the correct one. In this context, we present a non-perturbative knowledge of quantum gravity that is valid for the first time. This is the first such knowledge of an n-body theory of gravity that is valid in the context of the non-perturbative knowledge of quantum gravity provided by the absence of the quantum gravity. In this view, the Bunch-Davies invariant quantum gravity theory is also validated in the context of non-perturbative knowledge of quantum gravity provided by the absence of the quantum gravity and is therefore the correct one. [17] Re-connection 1/N and Holographic Holography In this paper, we consider a model with a re-connection 1/N connected to the one-dimensional bosonic field theory by the one-dimensional wave-function. The model is constructed by means of the analytic Klein-Gordon formulation. 
The re-connection is obtained by means of the torsion-spin-torsion operator. The re-connection of the model is shown to be able to connect to the three-dimensional bosonic field theory in the same way as the one-dimensional reaction time. [18] Anomalous quantum bulk vacuum in the presence of a magnetic field In this paper we investigate the bulk vacuum of a system of antipodal quantum gravity, in the presence of a magnetic field. For this purpose, we introduce a novel approximation formula for the quantum bulk vacuum and compute it in the presence of a magnetic field. In particular, we compute the quark and lepton mass in the absence of a magnetic field. We prove that this approximation formula shows that the quark mass is proportional to the squared mass of the lepton mass, which is a function of the particle radius. The result is that the quark mass is proportional to the squared mass of the lepton mass, which is a function of the quark radius. Also, for a large quark mass, the proportionality holds even when the quark radius is small. [19] The dimensionless Theory of the Universal Gravitational Waves
Table 1. Amino acid equivalence between the C-terminal PHD (1) and HKR (2) domains of eukaryotic phytochromes and the HKD of Cph1

|  | Cph1 | phyB2 | Mcphy1b2 | phyA2 | phyC2 | phyE2 |
| --- | --- | --- | --- | --- | --- | --- |
| Cph1 | 100 | 15 | 17 | 13 | 15 | 13 |
| phyB1 | 15 | 11 | 12 | 8 | 6 | 11 |
| Mcphy1b1 | 14 | 9 | 9 | 9 | 8 | 9 |
| phyA1 | 14 | 12 | 10 | 9 | 14 | 10 |
| phyE1 | 15 | 9 | 11 | 10 | 6 | 8 |
| phyC1 | 14 | 9 | 8 | 7 | 9 | 8 |

• Subdomains 1 and 2 represent PHD and HKR, respectively. The numbers indicate percent identity where I = V, L = M, D = N = Q = E, R = K and A = S = T.
Polarisation and Malus’ Law – Practice Questions:

1) Polarised light with an intensity of 40 cd passes through an analysing filter at an angle of 72° to the plane of the incident light. Calculate the intensity of the light after it passes through the analysing filter.

2) At what angle must an analysing filter be oriented for the intensity of the light passing through it to be 50% of its original intensity?

3) A student placed two polarising filters together to reduce the intensity of light passing through them. They were originally placed at an angle of 10° and the student still wanted to reduce the intensity of the light. At what angle should the student place the analysing filter if they want the light intensity to be 1/5 of the intensity when the angle was 10°?

Quick Answers:

1) 3.82 cd

2) 45°

3) 63.9°

Worked Answers:
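A quick numerical check of the three answers above, assuming Malus' law $I = I_0\cos^2\theta$ (a minimal Python sketch, not a substitute for the full working):

```python
# Minimal sketch: checking the Malus' law practice answers numerically.
import math

I0 = 40.0                                        # cd, question 1
print(I0 * math.cos(math.radians(72)) ** 2)      # Q1: ~3.82 cd

# Q2: cos^2(theta) = 0.5  ->  theta = 45 degrees
print(math.degrees(math.acos(math.sqrt(0.5))))

# Q3: need cos^2(theta) = (1/5) * cos^2(10 degrees)
target = 0.2 * math.cos(math.radians(10)) ** 2
print(math.degrees(math.acos(math.sqrt(target))))   # ~63.9 degrees
```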
# F# Introduction II: Scripting in F#

## Creating a .fsx file

### Visual Studio

• Open Visual Studio and navigate to the "File" tab, where you select to create a new file.
• Select the "F# Script File" option.
• You now have a working script file. You can write code and execute it by selecting it and pressing Alt + Enter.

### Visual Studio Code

• Open Visual Studio Code and navigate to the "File" tab, where you select to create a new file.
• You will then be prompted to select a language. Choose F# there.
• You now have a working script file. You can write code and execute it by selecting it and pressing Alt + Enter.
• When you are done with your file, save it as .fsx.

## Referencing packages

• Packages on nuget can be referenced using '#r "nuget: PackageName"':

// References the latest stable package
#r "nuget: FSharp.Stats"
// References a specific package version
#r "nuget: Plotly.NET, 2.0.0-preview.6"
#r "nuget: Plotly.NET.Interactive, 2.0.0-preview.6"

• Alternatively, .dll files can be referenced directly with the following syntax:

#r @"Your\Path\To\Package\PackageName.dll"

## Working with notebooks

• Visual Studio Code supports working with notebooks.
• To work with notebooks, you need to install the .NET Interactive Notebooks extension.
• A new notebook can be opened by pressing Ctrl + Shift + P and selecting ".NET Interactive: Create new blank notebook".
• You will then be prompted to create it either as .dib or .ipynb.
• When asked for the language, choose F#.
• Notebooks contain Text- and Codeblocks:
• Adding a new Text- or Codeblock can be done by hovering at the upper or lower border of an existing block or the upper part of the notebook and pressing +Code or +Markdown.
• Working with Textblocks: You can edit a Textblock by double-clicking on it. Inside a Textblock you can write plain text or style it with Markdown. Once you are finished you can press the Esc button.
• Working with Codeblocks: You can start editing any Codeblock by clicking in it. In there you can start writing your own code or edit existing code. Once you are done you can execute the Codeblock by pressing Ctrl + Alt + Enter. If you want to execute all Codeblocks at once, you can press the two arrows in the upper left corner of the notebook.
Math Tip of the Week – A Matrix Problem

Hi Math folks!  So sorry I haven't been blogging recently; I've been working on getting my tutoring schedule worked out and have just been trying to catch up after a whirlwind trip to visit both of my sons "up north"  🙂

Here's an interesting problem that someone just sent me:

Determine the solution by setting up and solving the matrix equation.

A nut distributor wants to determine the nutritional content of various mixtures of pecans, cashews, and almonds. Her supplier has provided the following nutrition information:

|  | Almonds | Cashews | Pecans |
| --- | --- | --- | --- |
| Protein | 26.2 g/cup | 21.0 g/cup | 10.1 g/cup |
| Carbs | 40.2 g/cup | 44.8 g/cup | 14.3 g/cup |
| Fat | 71.9 g/cup | 63.5 g/cup | 82.8 g/cup |

Her first mixture, protein blend, contains 6 cups of almonds, 3 cups of cashews, and 1 cup of pecans. Her second mixture, low fat mix, contains 3 cups almonds, 6 cups cashews, and 1 cup of pecans. Her third mixture, low carb mix, contains 3 cups almonds, 1 cup cashews, and 6 cups pecans. Determine the amount of protein, carbs, and fats in a 1 cup serving of each of the mixtures.

I have solved this by multiplying both of the matrices then dividing each element by 10, but that's not the way I am supposed to solve this as there are no equations being set up.

Solution:

Sometimes we can just put the information we have into matrices and see how we are going to go from there. I knew to put the first group of data into a matrix with Almonds, Cashews, and Pecans as columns, and then put the second group of data into a matrix with information about Almonds, Cashews, and Pecans as rows. This way the columns of the first matrix lined up with the rows of the second matrix, and I could perform matrix multiplication. This way we get rid of the number of cups of Almonds, Cashews, and Pecans, which we don't need.

Then we can multiply the matrices (we can use a graphing calculator) since we want to end up with the amount of Protein, Carbs, and Fat in each of the mixtures. The product of the matrices consists of rows of Protein, Carbs, and Fat, and columns of the Protein, Low Fat, and Low Carb mixtures:

$$\displaystyle \require {cancel} \begin{array}{l}\,\,\,\,\,\,\,\,\,\,\,\,\cancel{{\text{Almonds, Cashews and Pecans}}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{Protein, Low-Fat and Low-Carb }\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{Protein, Low-Fat and Low-Carb}\\\,\,\begin{array}{*{20}{c}} {\text{Protein}} \\ {\text{Carbs}} \\ {\text{Fat}} \end{array}\,\,\,\,\,\left[ {\begin{array}{*{20}{c}} {26.2} & {21} & {10.1} \\ {40.2} & {44.8} & {14.3} \\ {71.9} & {63.5} & {82.8} \end{array}} \right]\,\,\,\,\,\,\times \,\,\,\cancel{{\begin{array}{*{20}{c}} {\text{Almonds}} \\ {\text{Cashews}} \\ {\text{Pecans}} \end{array}}}\,\,\,\,\,\left[ {\begin{array}{*{20}{c}} 6 & 3 & 3 \\ 3 & 6 & 1 \\ 1 & 1 & 6 \end{array}} \right]\,\,\,\,\,\,\,=\,\,\,\begin{array}{*{20}{c}} {\text{Protein}} \\ {\text{Carbs}} \\ {\text{Fat}} \end{array}\,\,\,\,\left[ {\begin{array}{*{20}{c}} {230.3} & {214.7} & {160.2} \\ {389.9} & {403.7} & {251.2} \\ {704.7} & {679.5} & {776} \end{array}} \right]\end{array}$$

But we have to be careful, since these amounts are for 10 cups (add down to see we'll get 10 cups for each mixture in the second matrix above). Also, notice how the cups unit "canceled out" when we did the matrix multiplication (grams/cup times cups = grams).

To get the answers, we have to divide each answer by 10 to get grams per cup.
The numbers in bold are our answers:

|  | Protein Blend | Low Fat Mixture | Low Carb Mixture |
| --- | --- | --- | --- |
| Protein (grams) | 230.3/10 = **23.03** | 214.7/10 = **21.47** | 160.2/10 = **16.02** |
| Carbs (grams) | 389.9/10 = **38.99** | 403.7/10 = **40.37** | 251.2/10 = **25.12** |
| Fat (grams) | 704.7/10 = **70.47** | 679.5/10 = **67.95** | 776/10 = **77.6** |

Learn more about matrices, how to use the graphing calculator with them, and how to solve Systems of Equations using matrices in The Matrix and Solving Systems with Matrices section!
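If you'd rather check this with code than with a graphing calculator, here's a minimal NumPy sketch of the same matrix product and the divide-by-10 step:

```python
# Minimal sketch: nutrition (grams per cup of nut) times cups per 10-cup mixture,
# then convert to a 1-cup serving of each mixture.
import numpy as np

# rows: protein, carbs, fat; columns: almonds, cashews, pecans
nutrition = np.array([[26.2, 21.0, 10.1],
                      [40.2, 44.8, 14.3],
                      [71.9, 63.5, 82.8]])

# rows: almonds, cashews, pecans (cups); columns: protein blend, low fat, low carb
mixtures = np.array([[6, 3, 3],
                     [3, 6, 1],
                     [1, 1, 6]])

totals = nutrition @ mixtures                 # grams per 10-cup batch of each mixture
per_cup = totals / mixtures.sum(axis=0)       # each mixture is 10 cups, so divide by 10
print(np.round(per_cup, 2))
# [[23.03 21.47 16.02]
#  [38.99 40.37 25.12]
#  [70.47 67.95 77.6 ]]
```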
# Partially expanding a command My \john command is defined like so: \def\john{\DontExpandMe} I would now like to repeatedly change its definition, to keep adding some extra stuff on the front. \foreach\i in {ape,bat,cow,dog} { \xdef\john{\i,\unexpanded{\john}} \show\john } My intention is that the \show\john command at each iteration should result in: \john=macro: -> \DontExpandMe. \john=macro: -> ape,\DontExpandMe. \john=macro: -> bat,ape,\DontExpandMe. \john=macro: -> cow,bat,ape,\DontExpandMe. \john=macro: -> dog,cow,bat,ape,\DontExpandMe. That is to say, I would like \john to be "partially expanded" in some sense. But I can't make that work. I have tried the following • If I use an \xdef, then the whole command is expanded, including the \DontExpandMe part. • If I just use a \gdef, then the \i is not expanded. • If I use \xdef with \unexpanded{...} around \john (as I have in my current code) then I get \john=macro: -> ape,\john. and \john=macro: -> bat,\john. and so on. Here is my code. \documentclass{article} \usepackage{pgffor} \begin{document} \def\john{\DontExpandMe} \show\john \foreach\i in {ape,bat,cow,dog} { \xdef\john{\i,\unexpanded{\john}} \show\john } \end{document} The line in question is: \xdef\john{\i,\unexpanded{\john}} \i should expanded (full/once?) and \john should be expanded once ## Partial expansion with \xdef and \unexpanded: Before \unexpanded reads the open curly brace, it is in expanding mode for gobbling spaces. Therefore it can be used to sneak in a \expandafter: \xdef\john{\i,\unexpanded\expandafter{\john}} (Without this trick, two \expandafter would be needed: \xdef\john{\i,\expandafter\unexpanded\expandafter{\john}} The same can also be achieved without e-TeX by using a token register: \toks0=\expandafter{\john}% similar trick as above to minimize the number of \expandafter \xdef\john{\i,\the\toks0} The contents of the token register is not further expanded inside \edef. ## One expansion step for \i and \john: • Same as above, but for \i, too: \xdef\john{\unexpanded\expandafter{\i},\unexpanded\expandafter{\john}} • \expandafter orgy with \gdef: \expandafter\expandafter\expandafter\gdef \expandafter\expandafter\expandafter\john \expandafter\expandafter\expandafter{% \expandafter\i\expandafter,\john } First \john is expanded once, then \i. • Thank you Heiko. (+1 for the \expandafter orgy!) – John Wickerson May 30 '13 at 9:45 There are many ways to do this; the easiest, but riskier, is to use \xdef: \def\john{\DontExpandMe} \show\john \foreach\i in {ape,bat,cow,dog} {% \xdef\john{\i,\unexpanded\expandafter{\john}}% \show\john } Why riskier? Try with \textbf{ape} in the list to see the effect: commands that don't survive \edef will make the code die horribly. A better choice would be expanding also \i only once: \xdef\john{\unexpanded\expandafter{\i},\unexpanded\expandafter{\john}}% Using token registers might be an option: \def\john{\DontExpandMe} \show\john \foreach\i in {ape,bat,cow,dog} {% \toks0=\expandafter{\i}% \toks2=\expandafter{\john}% \xdef\john{\the\toks0,\the\toks2}% \show\john } Tokens delivered by \the<token registers> are not subject to further expansion in an \edef or \xdef. A macro in LaTeX3 language: \usepackage{xparse} \ExplSyntaxOn \NewDocumentCommand{\prependreverselist}{mm} {% #1 is the macro to extend; #2 is the list \clist_map_inline:nn { #2 } { \tl_put_left:Nn #1 { ##1 , } } } \ExplSyntaxOff \newcommand{\john}{\DontExpandMe} \prependreverselist{\john}{ape,bat,cow,dog} \show\john • Cool, thanks :-). 
Why do you use \toks0 and \toks2 rather than, say, \toks0 and \toks1? – John Wickerson May 30 '13 at 9:39 • I'd use a group for the toks (see my answer, which I'll probably remove). – Joseph Wright May 30 '13 at 9:40 • @JohnWickerson By convention, even-numbered scratch registers should be used locally and odd-numbered ones globally. So \toks1 is not appropriate here. – Joseph Wright May 30 '13 at 9:40 • @JosephWright \foreach does each cycle in a group, so adding one is not necessary. – egreg May 30 '13 at 9:44 At the TeX level, this can be done at least couple ways depending on whether we assume e-TeX is available. The classical way, with no e-TeX, is to use a token register for the definition: \long\def\addtoclist#1#2{% \begingroup \toks@\expandafter{#1}% \toks2{#2}% \edef#1{\the\toks2,\the\toks@}% \expandafter\endgroup \expandafter\def\expandafter#1\expandafter{#1}% } where I've been cautious and assumed that the new material (#2) should not expand at all. The way that the above works is that TeX only expands token registers ('toks') once inside an \edef, so I can be sure no further expansion will occur. I've used two temporary toks: \toks@ (which is \toks0 addressed by name), and \toks2 (addressed by number). The group means I won't affect any value already stored in these. With e-TeX, we don't need the group or toks assignments \protected\long\def\addtoclist#1#2{% \edef#1{\unexpanded{#2},\unexpanded\expandafter{#1}}% } where the \unexpanded primitive has behaviour similar to a toks: you can expand the material to be assigned using \expandafter just before the {. (In both cases, you really need to check for an empty initial definition, but that is not the key point here.) With the help of cgnieder's comment, I have fixed my code as follows: \documentclass{article} \usepackage{pgffor} \usepackage{etoolbox} \begin{document} \def\john{\DontExpandMe} \show\john \foreach\i in {ape,bat,cow,dog} { \xdef\john{\i,\expandonce{\john}} \show\john } \end{document} And with the help of cgnieder's other comment, I have shortened my code as follows: \documentclass{article} \usepackage{pgffor} \begin{document} \def\john{\DontExpandMe} \show\john \foreach\i in {ape,bat,cow,dog} { \xdef\john{\i,\unexpanded\expandafter{\john}} \show\john } \end{document} • Actually you don't need etoolbox for this. The definition of \expandonce is simple: \newcommand{\expandonce}[1]{\unexpanded\expandafter{#1}} – clemens May 30 '13 at 9:17
Light-front approach to relativistic electrodynamics

Abstract

We illustrate how our recent light-front approach simplifies relativistic electrodynamics with an electromagnetic (EM) field F that is the sum of a (even very intense) plane travelling wave F_t(ct-z) and a static part F_s(x, y, z); it adopts the light-like coordinate ξ = ct-z instead of time t as an independent variable. This can be applied to several cases of extreme acceleration, both in vacuum and in a cold diluted plasma hit by a very short and intense laser pulse (slingshot effect, plasma wave-breaking and laser wake-field acceleration, etc.).

2021

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/835635
# How to change emphasis commands in pandoc's LaTeX template? I want to change the default strong emphasis command in the conversion from pandoc's markdown to LaTeX, say, from \textbf to \textsc. Since running pandoc in the source file prints into the .tex file \textbf, I think it has nothing to do with the chosen template (unless the latter redefines \textbf, but that doesn't seem like a good option). I wish to tell pandoc to convert its strong emphasis into a \strong LaTeX command that I can define in the LaTeX template. - This seems like a perfect case for a Pandoc filter or script (see: <johnmacfarlane.net/pandoc/scripting.html>). I think it should be relatively easy, but I'm too much of a Haskell noob to write it. I've posted a query to the Pandoc discussion list, and will post an answer here if one is forthcoming. –  Paul M. Feb 16 '12 at 16:02 @PaulM. I decided to try here first, just in case it was something simple... Thank you for posting in the pandoc discussion list, I saw that there's already an answer there, and it worked for me (despite a warning). Please, post that answer so I can vote it. –  henrique Feb 16 '12 at 17:14 A query on the Pandoc discussion list yielded a couple of nice responses, from Andrea Rossato and John MacFarlane (developer of Pandoc). Below I give John's answer. This assumes you have the Haskell Platform installed. 1. Assume this is your Pandoc file, myexample.md A *simple* pandoc example with **strong emphasis**! 2. The following Haskell program exploits Pandoc's scripting interface -- strongify.hs -- compile as 'ghc --make strongify.hs' import Text.Pandoc main = toJsonFilter makeItStrong where makeItStrong (Strong xs) = [latex "\\strong{"] ++ xs ++ [latex "}"] makeItStrong x = [x] latex = RawInline "latex" 3. Compile the program: ghc --make strongify.hs 4. You can then use the generated executable as so: pandoc -t json myexample.md | ./strongify | pandoc -f json -t latex 5. The output of step 4 is: A \emph{simple} pandoc example with \strong{strong emphasis}! John MacFarlane makes the following point: "...the advantage of this approach over postprocessing the output with sed or perl: "\textbf" in verbatim environments won't be touched." - For the sake of completeness, when using csquotes in the template, running pandoc -t json ... loses smart quotes and punctuation, so one must add the -S option. –  henrique Feb 17 '12 at 0:34
# A(n+1) = An + 1/(5^n) sequence

So, couple sequence questions! Converge or Diverge?

1. A(n+1) = An + 1/(5^n) (recursive sequence)

That should read a sub n plus one equals a sub n plus one over 5 to the n

I believe it converges, but would like to be sure.

2. sigma from 1 to infinity of: (1+5/n)^n

That should read the summation from n equals 1 to infinity of one plus 5 over n quantity to the n-th power.

I believe this diverges, but would like to be sure.

For a) If $$A_{n+1} = A_n + \frac{1}{5^n}$$ then $$A_{n+1} - A_n = \frac{1}{5^n}$$. Can you prove that this is a Cauchy sequence?

For b) First prove that $$\lim_{n \rightarrow \infty} (1 + c/n)^n = e^c$$. Now, if the summand of the series doesn't have a limit of zero, what can you say about the convergence/divergence of the series?

So doesn't b diverge if you use the n term test? And a converges to what, 6? Not sure what a Cauchy sequence is

For b, are you familiar with the root test?

Nope, but doesn't e^inf not equal 0 meaning it diverges?

The root test says that if the limit of the sequence raised to the power 1/n gives a result which is < 1, the series converges. If the result is > 1, the series diverges. If the result = 1, it can't be determined by this test whether the series converges or diverges.

Doesn't n-term work?

I don't know if the test I'm talking about is usually called the n-term test but if the summand doesn't have a limit of zero, then the series diverges. And a Cauchy sequence is a sequence a_n such that for all epsilon > 0, there is an integer N such that for m,n > N, the following relation holds: |a_m - a_n| < epsilon. In your case, you can take m = n+1 and so you must show that $$|a_{n+1} - a_n| = \frac{1}{5^n} < \epsilon$$
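A few lines of Python make the behaviour of both problems easy to see. This is purely illustrative; the starting value A_1 = 1 is an assumption, since the thread never states it:

```python
# Minimal sketch: behaviour of the two problems in this thread.
# A_1 = 1 is an assumed starting value (not given in the thread).
import math

A = 1.0
for n in range(1, 40):
    A += 1.0 / 5**n            # A_{n+1} = A_n + 1/5^n
print(A)                        # approaches A_1 + sum of 1/5^n = A_1 + 1/4 = 1.25

for n in (1, 10, 100, 1000, 10000):
    print(n, (1 + 5.0 / n) ** n)    # summand of the series -> e^5, not 0
print(math.exp(5))
# Since the terms do not go to zero, the series sum of (1 + 5/n)^n diverges.
```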
# What is the implication of not being able to balance the complete combustion reaction equation of methanol? This question has been intriguing me since 10th or 11th grade. The teacher just told us about it but didn't get into the details of the why. Recently I asked a biochemist but couldn't get an answer. So I bring it here. $$\ce{CH3OH + O2 -> CO2 + H2O}$$ is the complete combustion equation of methanol. But the problem is it can't be balanced. What is the practical implication, and significance of this fact? Side question: What is the mathematical reasoning behind the inability to balance the equation? • Wait, what do you mean it cannot be balanced? – LordStryker Jan 29 '14 at 21:58 • Maybe it was some other equation. Just can't recall it, I'll update the question. – Bleeding Fingers Jan 29 '14 at 22:02 • Please do. I'm very curious now. – LordStryker Jan 29 '14 at 22:03 • I'm pretty sure every single valid chemical equation balances, as long of course as you know all the products. You could conceivably use a reaction as an analog computer (a very cumbersome one!) to solve a linear system of equations. If nature does it, then math must do it. – Nicolau Saker Neto Jan 29 '14 at 22:41 • This question has awoken the fire of linear algebra in me! I believe it is possible to prove that any system of equations resulting from a valid reaction has $n$ variables and $n-1$ equations, i.e. the system is always underdetermined by one equation and therefore has infinite solutions, all of which lie in a line in $\mathbb{R}^n$ sand are multiples of eachother by some arbitrary real number. Let me try formalizing the argument. Unfortunately I'm kind of tired, so I can't guarantee I'll be able to wade through the notation properly. – Nicolau Saker Neto Jan 29 '14 at 23:09 Here is the balanced eqn... $$\ce{2 CH3OH + 3 O2 → 2 CO2 + 4 H2O}$$ • While i was typing the the mathjaxed equation, you were faster submitting. So I beautify yours :-D – Klaus-Dieter Warzecha Jan 29 '14 at 22:02 • @KlausWarzecha Can you beautify me next?? – LordStryker Jan 29 '14 at 22:06 • @LordStryker I might be good, but not THAT good :-P – Klaus-Dieter Warzecha Jan 29 '14 at 22:10 • beatify or beautify? – ron Jan 29 '14 at 22:23 • @ron Blessed are the cheesemakers. – Klaus-Dieter Warzecha Jan 29 '14 at 22:53 Mass is conserved, charge is conserved, ignore spectator ions until the end. If it is a redox reaction, begin by listing each active atom's reduction or oxidation, e.g., Mn(+7) + 5e to Mn(+2) Cr(+3) - 3e to Cr(+6) and find the least common denominator (here, 15). That gives you the primary stoichiometry, here 5Cr(+3) (going to chromate) with 3Mn(+7) (coming from permanganate). Add the other atoms, slop in H2O, OH-, or H+, add back the spectator ions. Confirm the same number of each kind of atom on both sides. Try it out, CuSCN + KIO3 + HCl to CuSO4 + KCl + HCN + ICl + H2O [Cr(N2H4CO)6]4[Cr(CN)6]3 + KMnO4 + H2SO4 to K2Cr2O7 + MnSO4 +CO2 + KNO3 + K2SO4+ H2O
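To make the linear-algebra comment above concrete, here is a small sketch (my own illustration, not from the thread) that balances the reaction by computing the nullspace of the element-composition matrix. It assumes sympy is available; product coefficients are entered with a minus sign.

```python
from sympy import Matrix, ilcm

# Columns: coefficients of CH3OH, O2, CO2, H2O (products negated).
# Rows: one balance equation per element (C, H, O).
A = Matrix([
    [1, 0, -1,  0],   # carbon:   1*CH3OH          - 1*CO2          = 0
    [4, 0,  0, -2],   # hydrogen: 4*CH3OH                   - 2*H2O = 0
    [1, 2, -2, -1],   # oxygen:   1*CH3OH + 2*O2   - 2*CO2  - 1*H2O = 0
])

v = A.nullspace()[0]              # nullspace is one-dimensional for this reaction
v = v * ilcm(*[x.q for x in v])   # clear denominators to get the smallest whole numbers
print(v.T)                        # Matrix([[2, 3, 2, 4]])  ->  2 CH3OH + 3 O2 -> 2 CO2 + 4 H2O
```

The one-dimensional nullspace is exactly the "underdetermined by one equation" observation in the comments: every valid set of coefficients is a scalar multiple of (2, 3, 2, 4).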
# How can I change the spacing of a document throughout?

I want one page of my document to be single spaced, and one to be double spaced. I'm using \renewcommand{\baselinestretch}{2}, but can't seem to change the spacing throughout.

• Can you edit in your TeX code for the document whose spacing you want to adjust? That might make it clearer what the problem is. Also, you might have some luck with the setspace package, as described here. – Arun Debray Dec 29 '15 at 18:43
• Do you want to alternate single spacing and double spacing for all pages, like ABABABAB, or for a single instance? – Alenanno Dec 29 '15 at 18:43

One possibility is to use \usepackage{setspace} with \singlespacing or \doublespacing: insert those commands either in a group, or switch back to the default \singlespacing at the right position (i.e. at a manual page break, if needed). The advantage of setspace is that it scales the spacing according to the chosen font size.

    \documentclass{article}
    \usepackage{blindtext}
    \usepackage{setspace}
    \begin{document}
    \marginpar{Single spacing}\singlespacing
    \blindtext
    \marginpar{One half}\onehalfspacing
    \blindtext
    \marginpar{Double spacing}\doublespacing
    \blindtext
    \end{document}

If you only want the spacing for some text, you could set the spacing in a group, i.e. inside curly braces {}:

    \documentclass[11pt]{article}
    \usepackage{lipsum}
    \begin{document}
    \lipsum[1]
    {
    \baselineskip=1.5\baselineskip
    \lipsum[2]
    }
    \lipsum[3]
    \end{document}
# Lecture 4 The video titled Lecture 2-3 comes after Lecture 7. We continue on with Lorentz transformations, and define space-time separation. This allows us to define proper time and proper length, which are invariant quantities and contrast with coordinate variables, which aren't the same in all reference frames. We look at consequences of these definitions, describing the relativistic effects of time dilation and length contraction.
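For reference, a brief sketch of the standard definitions the lecture builds on (textbook conventions, not transcribed from the video): the space-time separation

$$\Delta s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2$$

is invariant under Lorentz transformations. For a timelike separation the proper time is $\Delta\tau = \Delta s/c$, and with $\gamma = 1/\sqrt{1 - v^2/c^2}$ one obtains time dilation, $\Delta t = \gamma\,\Delta\tau$, and length contraction, $L = L_0/\gamma$, where $L_0$ is the proper length.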
### change bullet shape in beamer toc (Topic is solved)

asafw:

Hi there,

I am using the default beamer theme, and would like to have square *unnumbered* bullets for sections in the ToC. I didn't find an easy way to modify this when I looked online. Changing "ball" to "square" in \setbeamertemplate{section in toc}[ball unnumbered] doesn't work.

Also, is there a way to change the color of the square (from the default blue to black)?

Thanks a lot,
Asaf

Stefan Kottwitz:

Hi Asaf,

you can define such a style. Here I modified the original definition from beamerbaseauxtemplates.sty:

    \defbeamertemplate{section in toc}{square unnumbered}{%
      \leavevmode\leftskip=1.75ex%
      \llap{\textcolor{red!70!black}{\vrule width2.25ex height1.85ex depth.4ex}}%
      \kern1.5ex\inserttocsection\par}

I just chose a red-black mix, you can change that as desired. Now that works:

    \setbeamertemplate{section in toc}[square unnumbered]

(attachment: square-tableofcontents.png)

Stefan
# Least Common Multiple

The least common multiple (L.C.M.) of two or more numbers is the smallest number which can be exactly divided by each of the given numbers.

Let us find the L.C.M. of 2, 3 and 4.

Multiples of 2 are 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, ...... etc.

Multiples of 3 are 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, ...... etc.

Multiples of 4 are 4, 8, 12, 16, 20, 24, 28, 32, 36, ...... etc.

Common multiples of 2, 3 and 4 are 12, 24, 36, ...... etc.

Therefore, the smallest common multiple or least common multiple of 2, 3 and 4 is 12.

We know that the lowest common multiple or LCM of two or more numbers is the smallest of all common multiples.

Let us consider the numbers 28 and 12.

Multiples of 28 are 28, 56, 84, 112, …….

Multiples of 12 are 12, 24, 36, 48, 60, 72, 84, …….

The lowest common multiple (LCM) of 28 and 12 is 84.

Let us consider the first six multiples of 4 and 6.

The first six multiples of 4 are 4, 8, 12, 16, 20, 24

The first six multiples of 6 are 6, 12, 18, 24, 30, 36

The numbers 12 and 24 are the first two common multiples of 4 and 6. In the above example the least common multiple of 4 and 6 is 12.

Hence, the least common multiple or LCM is the smallest common multiple of the given numbers.

Consider the following.

(i) 12 is the least common multiple (L.C.M) of 3 and 4.

(ii) 6 is the least common multiple (L.C.M) of 2, 3 and 6.

(iii) 10 is the least common multiple (L.C.M) of 2 and 5.

We can also find the L.C.M. of given numbers by their complete factorisation. To find, for instance, the L.C.M. of 24, 36 and 40, we first factorise them completely.

24 = 2 × 2 × 2 × 3 = 2$$^{3}$$ × 3$$^{1}$$

36 = 2 × 2 × 3 × 3 = 2$$^{2}$$ × 3$$^{2}$$

40 = 2 × 2 × 2 × 5 = 2$$^{3}$$ × 5$$^{1}$$

The L.C.M. is the product of the highest power of each prime present in the factors.

Therefore, L.C.M. of 24, 36 and 40 = 2$$^{3}$$ × 3$$^{2}$$ × 5$$^{1}$$ = 8 × 9 × 5 = 360

Solved examples to find the lowest common multiple or the least common multiple:

1. Find the L.C.M. of 8, 12, 16, 24 and 36

8 = 2 × 2 × 2 = 2$$^{3}$$

12 = 2 × 2 × 3 = 2$$^{2}$$ × 3$$^{1}$$

16 = 2 × 2 × 2 × 2 = 2$$^{4}$$

24 = 2 × 2 × 2 × 3 = 2$$^{3}$$ × 3$$^{1}$$

36 = 2 × 2 × 3 × 3 = 2$$^{2}$$ × 3$$^{2}$$

Therefore, L.C.M. of 8, 12, 16, 24 and 36 = 2$$^{4}$$ × 3$$^{2}$$ = 144.

2. Find the LCM of 3, 4 and 6 by listing the multiples.

Solution:

The multiples of 3 are 3, 6, 9, 12, 15, 18, 21, 24

The multiples of 4 are 4, 8, 12, 16, 20, 24, 28

The multiples of 6 are 6, 12, 18, 24, 30, 36, 42

The common multiples of 3, 4 and 6 are 12 and 24.

So, the least common multiple of 3, 4 and 6 is 12.

We can find the LCM of given numbers by listing multiples or by the long division method.

3. Find the LCM of 18, 36 and 72 by the division method.

Solution:

Write the numbers in a row separated by commas. Divide by any prime number that divides at least one of the numbers, carrying down unchanged any number that is not divisible, and keep dividing until only 1s remain. The L.C.M. is the product of all the divisors.

2 | 18, 36, 72
2 |  9, 18, 36
2 |  9,  9, 18
3 |  9,  9,  9
3 |  3,  3,  3
  |  1,  1,  1

So, LCM of 18, 36 and 72 is 2 × 2 × 2 × 3 × 3 = 72.

Questions and Answers on Least Common Multiple:

I. Find the LCM of the given numbers. The first one is shown for you as an example.

(i) 3 and 6

3 = 3, 6, 9, 12, 15, 18, 21, 24, 27 ………….

6 = 6, 12, 18, 24, 30, 36, 42 ………….

The common multiples of 3 and 6 are 6, 12, 18 ………….

The lowest common multiple of 3 and 6 is 6.

(ii) 2 and 4

(iii) 4 and 5

(iv) 3 and 12

(v) 15 and 20

Answers:

I. (ii) 4 (iii) 20 (iv) 12 (v) 60
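As a quick cross-check of the worked examples above, here is a short sketch using only Python's standard library (an illustration, not part of the original lesson):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    # lcm of two numbers via the gcd identity: lcm(a, b) * gcd(a, b) = a * b
    return a * b // gcd(a, b)

def lcm_all(numbers):
    return reduce(lcm, numbers)

print(lcm_all([2, 3, 4]))            # 12
print(lcm_all([24, 36, 40]))         # 360
print(lcm_all([8, 12, 16, 24, 36]))  # 144
print(lcm_all([18, 36, 72]))         # 72
```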
Izv. RAN. Ser. Mat., 1997, Volume 61, Issue 5, Pages 71–98 (Mi izv152)

Some properties of the deficiency indices of symmetric singular elliptic second-order operators in $L^2(\mathbb R^m)$

Yu. B. Orochko

Abstract: We consider the minimal operator $H$ in $L^2(\mathbb R^m)$, $m\geqslant 2$, generated by a real formally self-adjoint singular elliptic second-order differential expression (DE) $\mathcal L$. The example of the differential operator $H=H_0$ corresponding to the DE $\mathcal L=\mathcal L_0=-\operatorname{div}a(|x|)\operatorname{grad}$, where $a(r)$, $r\in[0,+\infty)$, is a non-negative scalar function, is studied to determine the dependence of the deficiency indices of $H$ on the degree of smoothness of the leading coefficients in $\mathcal L$. The other result of this paper is a test for the self-adjointness of an operator $H$ without any conditions on the behaviour of the potential of $\mathcal L$ as $|x|\to+\infty$. These results have no direct analogues in the case of an elliptic DE $\mathcal L$.

DOI: https://doi.org/10.4213/im152

English version: Izvestiya: Mathematics, 1997, 61:5, 969–994

MSC: 47B25, 35J70

Citation: Yu. B. Orochko, “Some properties of the deficiency indices of symmetric singular elliptic second-order operators in $L^2(\mathbb R^m)$”, Izv. RAN. Ser. Mat., 61:5 (1997), 71–98; Izv. Math., 61:5 (1997), 969–994
# Supplementary Angles

Two angles are supplementary when their measures add up to 180 degrees. For example, an angle of 120° and an angle of 60° are supplementary because when we add 120° and 60° we get 180°.

Angles that together form a straight line likewise add up to 180°: if a = 50°, b = 70°, c = 60°, then a + b + c = 50° + 70° + 60° = 180°.

Thus, supplementary angles add up to 180°.

Note that adjacent angles on a straight line are said to be supplementary.

### Example

Find the value of y in the diagram below.

Solution

50° + y° + 35° = 180° (angles on a straight line)

y + 85° = 180°

y = 180° – 85°

y = 95°
# M18-35

If set $$M$$ consists of the root(s) of equation $$2-x^2 = (x-2)^2$$, what is the range of set $$M$$?

A. 0
B. $$\frac{1}{\sqrt{2}}$$
C. 1
D. $$\sqrt{2}$$
E. 2

Official Solution:

$$2-x^2 = (x-2)^2$$;

$$2-x^2=x^2-4x+4$$;

$$x^2-2x+1=0$$;

$$(x-1)^2=0$$;

$$x=1$$.

So, set $$M$$ consists of only one element. The range of a single element set is 0.

whitdiva23: Is there any way that someone can show me how they got the answer of 0? I can get this far: 2-x^2 = (x-2)^2; 2-x^2 = (x-2)(x-2); 2-x^2 = x^2-4x+4. What to do after this? I don't understand the breakdown in the free GMAT text; it's not clicking at this moment.

Bunuel: $$2 - x^2 = x^2 - 4x + 4$$; re-arrange: $$2x^2 - 4x + 2 = 0$$; reduce by 2: $$x^2 - 2x + 1 = 0$$, which is the same as $$(x−1)^2=0$$, so $$x=1$$.

pranjal123: I think $$(x−1)^2=0$$ will give 2 equal values for x (1, 1), not a single value. So the range will be 0. Is this correct?

Bunuel: 1 and 1 is just one root.

A follow-up question: How would one know when to reduce the equation as opposed to trying to solve a quadratic equation ax^2 + bx + c when a > 1, for example by factoring out the 2x^2?
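On the final question in the thread: dividing a quadratic through by a common numeric factor never changes its roots, it only makes the equation easier to read. A quick discriminant check (standard algebra, not part of the original posts) also confirms the single root:

$$2x^2 - 4x + 2 = 0 \;\Leftrightarrow\; x^2 - 2x + 1 = 0, \qquad b^2 - 4ac = (-2)^2 - 4(1)(1) = 0,$$

so there is exactly one (repeated) root, $$x = 1$$, and the range of the one-element set $$\{1\}$$ is $$1 - 1 = 0$$.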
## Interface IborFixingDepositConvention • All Superinterfaces: Named, TradeConvention All Known Implementing Classes: ImmutableIborFixingDepositConvention public interface IborFixingDepositConvention extends TradeConvention, Named A convention for Ibor fixing deposit trades. This defines the convention for an Ibor fixing deposit against a particular index. In most cases, the index contains sufficient information to fully define the convention. As such, the convention is set to be created on the fly based on the index. To manually create a convention, see ImmutableIborFixingDepositConvention. To register a specific convention, see IborFixingDepositConvention.ini. • ### Method Summary All Methods Modifier and Type Method Description default java.time.LocalDate calculateSpotDateFromTradeDate​(java.time.LocalDate tradeDate, ReferenceData refData) Calculates the spot date from the trade date. IborFixingDepositTrade createTrade​(java.time.LocalDate tradeDate, java.time.Period depositPeriod, BuySell buySell, double notional, double fixedRate, ReferenceData refData) Creates a trade based on this convention. static ExtendedEnum<IborFixingDepositConvention> extendedEnum() Gets the extended enum helper. IborIndex getIndex() Gets the Ibor index. java.lang.String getName() Gets the name that uniquely identifies this convention. DaysAdjustment getSpotDateOffset() Gets the offset of the spot value date from the trade date. static IborFixingDepositConvention of​(IborIndex index) Obtains a convention based on the specified index. static IborFixingDepositConvention of​(java.lang.String uniqueName) Obtains an instance from the specified unique name. IborFixingDepositTrade toTrade​(TradeInfo tradeInfo, java.time.LocalDate startDate, java.time.LocalDate endDate, BuySell buySell, double notional, double fixedRate) Creates a trade based on this convention. default IborFixingDepositTrade toTrade​(java.time.LocalDate tradeDate, java.time.LocalDate startDate, java.time.LocalDate endDate, BuySell buySell, double notional, double fixedRate) Creates a trade based on this convention. • ### Method Detail • #### of static IborFixingDepositConvention of​(java.lang.String uniqueName) Obtains an instance from the specified unique name. Parameters: uniqueName - the unique name Returns: the convention Throws: java.lang.IllegalArgumentException - if the name is not known • #### of static IborFixingDepositConvention of​(IborIndex index) Obtains a convention based on the specified index. This uses the index name to find the matching convention. By default, this will always return a convention, however configuration may be added to restrict the conventions that are registered. Parameters: index - the index, from which the index name is used to find the matching convention Returns: the convention Throws: java.lang.IllegalArgumentException - if no convention is registered for the index • #### extendedEnum static ExtendedEnum<IborFixingDepositConvention> extendedEnum() Gets the extended enum helper. This helper allows instances of the convention to be looked up. It also provides the complete set of available instances. Returns: the extended enum helper • #### getIndex IborIndex getIndex() Gets the Ibor index. The floating rate to be paid is based on this index It will be a well known market index such as 'GBP-LIBOR-3M'. Returns: the index • #### getSpotDateOffset DaysAdjustment getSpotDateOffset() Gets the offset of the spot value date from the trade date. The offset is applied to the trade date to find the start date. 
A typical value is "plus 2 business days". Returns: the spot date offset, not null IborFixingDepositTrade createTrade​(java.time.LocalDate tradeDate, java.time.Period depositPeriod, double notional, double fixedRate, ReferenceData refData) Creates a trade based on this convention. This returns a trade based on the specified deposit period. The notional is unsigned, with buy/sell determining the direction of the trade. If buying the Ibor fixing deposit, the floating rate is paid to the counterparty, with the fixed rate being received. If selling the Ibor fixing deposit, the floating rate is received from the counterparty, with the fixed rate being paid. Parameters: tradeDate - the date of the trade depositPeriod - the period between the start date and the end date buySell - the buy/sell flag notional - the notional amount, in the payment currency of the template fixedRate - the fixed rate, typically derived from the market refData - the reference data, used to resolve the trade dates Returns: Throws: ReferenceDataNotFoundException - if an identifier cannot be resolved in the reference data default IborFixingDepositTrade toTrade​(java.time.LocalDate tradeDate, java.time.LocalDate startDate, java.time.LocalDate endDate, double notional, double fixedRate) Creates a trade based on this convention. This returns a trade based on the specified dates. The notional is unsigned, with buy/sell determining the direction of the trade. If buying the Ibor fixing deposit, the floating rate is paid to the counterparty, with the fixed rate being received. If selling the Ibor fixing deposit, the floating rate is received from the counterparty, with the fixed rate being paid. Parameters: tradeDate - the date of the trade startDate - the start date endDate - the end date buySell - the buy/sell flag notional - the notional amount, in the payment currency of the template fixedRate - the fixed rate, typically derived from the market Returns: IborFixingDepositTrade toTrade​(TradeInfo tradeInfo, java.time.LocalDate startDate, java.time.LocalDate endDate, double notional, double fixedRate) Creates a trade based on this convention. This returns a trade based on the specified dates. The notional is unsigned, with buy/sell determining the direction of the trade. If buying the Ibor fixing deposit, the floating rate is paid to the counterparty, with the fixed rate being received. If selling the Ibor fixing deposit, the floating rate is received from the counterparty, with the fixed rate being paid. Parameters: tradeInfo - additional information about the trade startDate - the start date endDate - the end date buySell - the buy/sell flag notional - the notional amount, in the payment currency of the template fixedRate - the fixed rate, typically derived from the market Returns: default java.time.LocalDate calculateSpotDateFromTradeDate​(java.time.LocalDate tradeDate, ReferenceData refData) Calculates the spot date from the trade date. Parameters: tradeDate - the trade date refData - the reference data, used to resolve the date Returns: the spot date Throws: ReferenceDataNotFoundException - if an identifier cannot be resolved in the reference data • #### getName java.lang.String getName() Gets the name that uniquely identifies this convention. This name is used in serialization and can be parsed using of(String). Specified by: getName in interface Named Returns: the unique name
# How do you factor 9x - 36? $9 x - 36 = 9 \cdot \left(x - 4\right)$ Since both $9 x$ and $36$ divide by $9$ you can put $9$ as one factor: $9 x - 36 = 9 \cdot x - 9 \cdot 4 = 9 \cdot \left(x - 4\right)$
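A one-line cross-check with sympy (assumed installed; not part of the original answer):

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(9*x - 36))   # 9*(x - 4)
```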
$$\require{cancel}$$ # 32.0: Prelude to Applications of Nuclear Physics Applications of nuclear physics have become an integral part of modern life. From the bone scan that detects a cancer to the radioiodine treatment that cures another, nuclear radiation has diagnostic and therapeutic effects on medicine. From the fission power reactor to the hope of controlled fusion, nuclear energy is now commonplace and is a part of our plans for the future. Yet, the destructive potential of nuclear weapons haunts us, as does the possibility of nuclear reactor accidents. Figure $$\PageIndex{1}$$: Tori Randall, Ph.D., curator for the Department of Physical Anthropology at the San Diego Museum of Man, prepares a 550-year-old Peruvian child mummy for a CT scan at Naval Medical Center San Diego. (credit: U.S. Navy photo by Mass Communication Specialist 3rd Class Samantha A. Lewis). Certainly, several applications of nuclear physics escape our view, as seen in Figure $$\PageIndex{1}$$. Not only has nuclear physics revealed secrets of nature, it has an inevitable impact based on its applications, as they are intertwined with human values. Because of its potential for alleviation of suffering, and its power as an ultimate destructor of life, nuclear physics is often viewed with ambivalence. But it provides perhaps the best example that applications can be good or evil, while knowledge itself is neither. Figure $$\PageIndex{2}$$: Customs officers inspect vehicles using neutron irradiation. Cars and trucks pass through portable x-ray machines that reveal their contents. (credit: Gerald L. Nino, CBP, U.S. Dept. of Homeland Security) Figure $$\PageIndex{3}$$: This image shows two stowaways caught illegally entering the United States from Canada. (credit: U.S. Customs and Border Protection) ## Contributors • Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
# How can anyone use a microcontroller which has only 384 bytes of program memory? For instance a PIC10F200T Virtually any code you write will be larger than that, unless it is a single purpose chip. Is there any way to load more program memory from external storage or something? I'm just curious, I don't see how this could be very useful... but it must be. • There are plenty of applications for tiny microcontrollers, from special-purpose signal generators, to protocol converters, to "nodes" in a larger control system, etc., etc., – Dave Tweed May 1 '13 at 15:17 • Well a chess playing program takes 672 bytes so that's no good. en.wikipedia.org/wiki/1K_ZX_Chess – John Burton May 1 '13 at 16:41 • Here are some examples of what can be done with tiny programs (less than 256 bytes). – hammar May 1 '13 at 19:37 • What do you mean, "unless its a single purpose chip"? The majority of embedded systems are single purpose. – Jeanne Pindar May 1 '13 at 20:25 • Back in college, I built a fully functional traffic light program for a 8085/8155 computer (max 256 bytes) I assembled. It had walk buttons, and some sensors that would simulate the presence of a vehicle. – Zoredache May 2 '13 at 6:07 You kids, get off my lawn! 384 bytes is plenty of space to create something quite complex in assembler. If you dig back through history to when computers were the size of a room, you'll find some truly amazing feats of artistry executed in <1k. For instance, read the classic Story of Mel - A Real Programmer. Admittedly, those guys had 4096 words of memory to play with, the decadent infidels. Also look at some of the old demoscene competitions where the challenge was to fit an "intro" into the bootblock of a floppy, typical targets being 4k or 40k and usually managing to include music and animation. Edit to add: Turns out you can implement the world's first $100 scientific calculator in 320 words. Edit for the young 'uns: • Floppy = floppy disk. • Bootblock = 1st sector of the floppy read at bootup. • Demoscene = programming competitions amongst hacker groups. • Assembler = fancy way of programming a device if you're too soft to use 8 toggle switches and a "store" button. • The Atari 2600 game console had only 4KB of storage in the ROM game cartridges (though some games got around this limitation by using bank switching to access more than 4K). – Johnny May 1 '13 at 21:21 • Eons ago I made a pretty realistic bird-chirp (enough that people looked for the bird rather than suspecting the computer), the guts of which (but not the randomizing code that kept it from making exactly the same sound each time) would have rattled around in 384 bytes and I had the additional restrictions of no writable addresses and a zero byte wasn't permitted in the binary. – Loren Pechtel May 2 '13 at 2:19 • I need to get out more, remembered this from back in the day - screen saver in 368 bytes: aminet.net/package/util/blank/368blanker – John U May 2 '13 at 14:14 • +1 for "The Story of Mel". One of the greatest things I've read all week. – Justin ᚅᚔᚈᚄᚒᚔ May 2 '13 at 14:53 • @JohnU: The first few games on the Atari 2600 were all 2K. Many developers never designed any games that went beyond 4K, because even though 8K chips were affordable (and some companies' carts simply used half of a 4K chip) adding bank-switching to a card using a standard (active-low chip-select) chip increased the number of support chips from one to three. 
– supercat May 2 '13 at 16:45 Microcontrollers are sufficiently cheap that they are often used to do really simple things that in years past would more likely have been done with discrete logic. Really simple things. For example, one might want a device to turn on an output for one second every five seconds, more precisely than a 555 timer would be able to do. movwf OSCCON mainLp: ; Set output low clrf GPIO movlw 0xFE movwf TRIS clrwdt call Wait1Sec clrwdt call Wait1Sec clrwdt call Wait1Sec clrwdt call Wait1Sec ; Set output high bsf GPIO,0 clrwdt call Wait1Sec goto mainLp Wait1Sec: movlw 6 movwf count2 movlw 23 movwf count1 movlw 17 movwf count0 waitLp: decfsz count0 goto waitLp decfsz count1 goto waitLp decfsz count2 goto waitLp retlw 0 That would be a real, usable, application, in less than 32 words (48 bytes) of code space. One could easily add a few options to have some I/O pins control timing options and still have lots of room to spare, but even if all the chip did was precisely what's shown above it might still be cheaper and easier than any alternative using discrete logic. BTW, the clrwdt instructions could be moved into the subroutine, but doing so would make things less robust. As written, even if a glitch causes the return-address stack to get corrupted, the watchdog won't get fed until execution returns to the main loop. If that never happens, the watchdog will reset the chip after a couple seconds. • Honestly, you could optimise your code a bit, setting a bad example to the children - 5 separate calls to wait1sec??? Wastrel! ;) – John U May 1 '13 at 16:38 • @JohnU: FYI, the code uses separate calls because if it used a count-to-zero counter and the count got glitched, the loop might run 255 times rather than four, while feeding the watchdog once per second. While it would be possible to guard against that by checking on each loop whether the count was in range, the code to do that ends up being more complicated than five calls and five clrwdt instructions. This isn't the most absolutely-failsafe counter arrangement possible, but some consideration is given to safety issues (e.g. the avoidance of clrwdt within a subroutine). – supercat May 1 '13 at 18:52 • @coder543: In the absence of things like power-supply noise, not very. On the other hand, in parts without a brown-out detector, it's possible for all sorts of crazy things to happen if VDD falls to a level between the minimum operating voltage and ground, and then rises back to normal. One should generally try to ensure that any state in which the device may finds itself will revert to normal in a reasonable period of time. The two seconds or so for the watchdog to kick in may be unavoidable, but four minutes for a glitched counter to reach zero might be a bit much. – supercat May 1 '13 at 19:38 • @coder543, they happen more often at an important demo than you want to believe. This kind of thinking is also required when building deeply embedded things that have no means to call for help or report an error. Or are inaccessible (think deep sea or outer space) even if an error did get noticed. – RBerteig May 1 '13 at 21:11 • @JohnU: I did notice it, but figured that explaining why I wrote the code as I did might be helpful. Incidentally, I was also trying to show that small tasks can fit in a small processor even if they're not optimized absolutely perfectly. – supercat May 2 '13 at 16:39 "ONLY" 384 bytes? 
Way back in the day, I had the job of writing an entire operating system (by myself) for a specialized computer that served the ship, pipeline, and refinery management industry. The company's first such product was 6800 based and was being upgraded to 6809, and they wanted a new OS to go along with the 6809 so they could eliminate the license costs of the original operating system. They also were bumping up the boot rom's size to 64 bytes, up from 32. If I recall correctly - it WAS about 33 years ago! - I convinced the engineers to give me 128 bytes so I could put the whole operating system's device drivers on the rom and thus make the whole device more reliable and versatile. This included: • Keyboard driver with key debounce • Video driver • Disk drive driver and rudimentary file system (Motorola "abloader format", IIRC), with built-in ability to treat "banked" memory as if it were really-fast disk-space. • Modem Driver (they got the FSK backwards, so these modems only talked with each other) Yes, all of these were as bare-bones as it gets, and hand-optimized to remove every extraneous cycle, but perfectly serviceable and reliable. Yes, I shoehorned all of that into the available bytes - oh, it ALSO set up interrupt handling, the various stacks, and initialized the real-time / multi-tasking operating system, prompted the user on boot options, and booted the system. A friend of mine who's still affiliated with the company (its successor) told me a few years ago that my code is still in service! You can do a LOT with 384 bytes... • you say boot rom, and you mention moving drivers onto the boot rom... this indicates to me that there was a secondary storage medium available. In this discussion, we've already determined that you cannot load code from external storage on this PIC. – coder543 May 2 '13 at 21:00 • @coder543 That misses the point: 384 bytes is enough to do quite a lot! The original question read like a complaint that 384 wasn't enough to do anything useful - it was more than I needed - a LOT more - to provide all the fundamental components of a real-time, multi-tasking operating system... – Richard T May 2 '13 at 21:09 You can use this for very small applications (e.g. delayed PSU start, 555 timer replacement, triac-based control, LED blinking etc...) with smaller footprint than you'd need with logic gates or a 555 timer. • I just noticed that those first two examples were from Stack itself! well played. – coder543 May 1 '13 at 16:56 • +1 for mentioning footprint. Sometimes size is everything. – embedded.kyle May 1 '13 at 16:56 I designed a humidity sensor for plants that tracks the amount of water the plant has and blinks an LED if the plant needs water. You can make the sensor learn the type of plant and thus change its settings while running. It detects low voltage on the battery. I ran out of flash and ram but was able to write everything in C code to make this product work flawlessly. I used the pic10f that you mention. Here is the code I made for my Plant Water Sensor. I used the pic10f220 since it has an ADC module, it has the same memory as the pic10f200, I'll try to find the Schematic tomorrow. The code is in spanish, but its very simple and should be easily understood. When the Pic10F wakes from sleep mode, it will reset so you have to check if it was a PowerUp or a reset and act accordingly. The plant setting is kept in ram since it never really powers down. MAIN.C /* Author: woziX (AML) Feel free to use the code as you wish. 
*/ #include "main.h" void main(void) { unsigned char Humedad_Ref; unsigned char Ciclos; unsigned char Bateria_Baja; unsigned char Humedad_Ref_Bkp; OSCCAL &= 0xfe; //Solo borramos el primer bit WDT_POST64(); //1s ADCON0 = 0b01000000; LEDOFF(); TRIS_LEDOFF(); for(;;) { //Se checa si es la primera vez que arranca if(FIRST_RUN()) { Ciclos = 0; Humedad_Ref = 0; Bateria_Baja = 0; } //Checamos el nivel de la bateria cuando arranca por primera vez y cada 255 ciclos. if(Ciclos == 0) { if(Bateria_Baja) { Bateria_Baja--; Blink(2); WDT_POST128(); SLEEP(); } if(BateriaBaja()) { Bateria_Baja = 100; //Vamos a parpadear doble por 100 ciclos de 2 segundos SLEEP(); } Ciclos = 255; } //Checamos si el boton esta picado if(Boton_Picado) { WDT_POST128(); CLRWDT(); TRIS_LEDON(); LEDON(); __delay_ms(1000); TRIS_ADOFF(); Humedad_Ref = Humedad(); Humedad_Ref_Bkp = Humedad_Ref; } //Checamos si esta calibrado. Esta calibrado si Humedad_Ref es mayor a cero if( (!Humedad_Ref) || (Humedad_Ref != Humedad_Ref_Bkp) ) { //No esta calibrado, hacer blink y dormir Blink(3); SLEEP(); } //Checamos que Humedad_Ref sea mayor o igual a 4 antes de restarle if(Humedad_Ref <= (255 - Offset_Muy_Seca)) { if(Humedad() > (Humedad_Ref + Offset_Muy_Seca)) //planta casi seca { Blink(1); WDT_POST32(); SLEEP(); } } if(Humedad() >= (Humedad_Ref)) //planta seca { Blink(1); WDT_POST64(); SLEEP(); } if(Humedad_Ref >= Offset_Casi_Seca ) { //Si Humedad_Ref es menor a Humedad, entonces la tierra esta seca. if(Humedad() > (Humedad_Ref - Offset_Casi_Seca)) //Planta muy seca { Blink(1); WDT_POST128(); SLEEP(); } } SLEEP(); } } unsigned char Humedad (void) { LEDOFF(); TRIS_ADON(); ADON(); ADCON0_CH0_ADON(); __delay_us(12); GO_nDONE = 1; while(GO_nDONE); TRIS_ADOFF(); ADCON0_CH0_ADOFF(); return ADRES; } //Regresa 1 si la bateria esta baja (fijado por el define LOWBAT) //Regresa 0 si la bateria no esta baja unsigned char BateriaBaja (void) { LEDON(); TRIS_ADLEDON(); ADON(); ADCON0_ABSREF_ADON(); __delay_us(150); //Delay largo para que se baje el voltaje de la bateria GO_nDONE = 1; while(GO_nDONE); TRIS_ADOFF(); LEDOFF(); ADCON0_ABSREF_ADOFF(); return (ADRES > LOWBAT ? 1 : 0); } void Blink(unsigned char veces) { while(veces) { veces--; WDT_POST64(); TRIS_LEDON(); CLRWDT(); LEDON(); __delay_ms(18); LEDOFF(); TRIS_ADOFF(); if(veces)__delay_ms(320); } } MAIN.H /* Author: woziX (AML) Feel free to use the code as you wish. 
*/ #ifndef MAIN_H #define MAIN_H #include <htc.h> #include <pic.h> __CONFIG (MCPU_OFF & WDTE_ON & CP_OFF & MCLRE_OFF & IOSCFS_4MHZ ); #define _XTAL_FREQ 4000000 #define TRIS_ADON() TRIS = 0b1101 #define TRIS_ADOFF() TRIS = 0b1111 #define TRIS_LEDON() TRIS = 0b1011 #define TRIS_LEDOFF() TRIS = 0b1111 #define TRIS_ADLEDON() TRIS = 0b1001 #define ADCON0_CH0_ADON() ADCON0 = 0b01000001; // Canal 0 sin ADON #define ADCON0_CH0_ADOFF() ADCON0 = 0b01000000; // Canal 0 con adON #define ADCON0_ABSREF_ADOFF() ADCON0 = 0b01001100; //Referencia interna absoluta sin ADON #define ADCON0_ABSREF_ADON() ADCON0 = 0b01001101; //referencia interna absoluta con ADON //Llamar a WDT_POST() tambien cambia las otras configuracion de OPTION #define WDT_POST1() OPTION = 0b11001000 #define WDT_POST2() OPTION = 0b11001001 #define WDT_POST4() OPTION = 0b11001010 #define WDT_POST8() OPTION = 0b11001011 #define WDT_POST16() OPTION = 0b11001100 #define WDT_POST32() OPTION = 0b11001101 #define WDT_POST64() OPTION = 0b11001110 #define WDT_POST128() OPTION = 0b11001111 #define Boton_Picado !GP3 #define FIRST_RUN() (STATUS & 0x10) //Solo tomamos el bit TO //Offsets #define Offset_Casi_Seca 5 #define Offset_Muy_Seca 5 //Low Bat Threshold #define LOWBAT 73 /* Los siguientes valores son aproximados LOWBAT VDD 50 3.07 51 3.01 52 2.95 53 2.90 54 2.84 55 2.79 56 2.74 57 2.69 58 2.65 59 2.60 60 2.56 61 2.52 62 2.48 63 2.44 64 2.40 65 2.36 66 2.33 67 2.29 68 2.26 69 2.23 70 2.19 71 2.16 72 2.13 73 2.10 74 2.08 75 2.05 76 2.02 77 1.99 78 1.97 */ #define LEDON() GP2 = 0; //GPIO = GPIO & 0b1011 #define LEDOFF() GP2 = 1; //GPIO = GPIO | 0b0100 #define ADON() GP1 = 0; //GPIO = GPIO & 0b1101 #define ADOFF() GP1 = 1; //GPIO = GPIO | 0b0010 unsigned char Humedad (void); unsigned char BateriaBaja (void); void Delay_Parpadeo(void); void Blink(unsigned char veces); #endif Let me know if you have questions, I'll try to answer based on what I remember. I coded this several years ago so don't check my coding skills, they have improved :). Final Note. I used the Hi-Tech C compiler. • I'd actually be really interesting in reading how you did this. Did you take any notes at all while you were doing it that you wouldn't mind sharing on the webs? – RhysW May 2 '13 at 13:14 • Hello RhysW, I believe I still have the code. It was really simple actually. I could send you my code if you are interested. Let me know. The circuit I designed is very simple and cool, only 3 resistors, one p-channel mosfet (for reverse battery protection) a 100nF cap and an LED. I use and internal diode in the pic10f to use as a reference for battery measurement and to keep the ADC readings constant. – scrafy May 2 '13 at 16:36 • That sounds like a neat project. Is there any chance you could post the details here (or at least post them somewhere and link to them)? – Ilmari Karonen May 3 '13 at 2:17 • Hello scrafy! Please, if you have something to add to an answer, use the "edit" link instead of posting a new answer, since this site uses voting and doesn't work like a forum. – clabacchio May 3 '13 at 8:11 One thing that I haven't seen mentioned: The microcontroller you mentioned is only$0.34 each in quantities of 100. So for cheap, mass-produced products, it can make sense to go to the extra coding trouble imposed by such a limited unit. The same might apply to size or power consumption. • That was exactly my first thought. 
Also: If I would be a startup with a neat idea, but only few hundred bucks loose, stuff like this can mean the difference between get-back-to-day-job and quit-day-job. – phresnel Jun 24 '15 at 8:44 When I was in high school, I had a teacher that insisted that light dimming was too difficult a task for a student such as I to tackle. Thus challenged I spent quite a bit of time learning and understanding phase based light dimming using triacs, and programming the 16C84 from microchip to perform this feat. I ended up with this assembly code: 'Timing info: 'There are 120 half-cycles in a 60Hz AC waveform 'We want to be able to trigger a triac at any of 256 'points inside each half-cycle. So: '1 Half cycle takes 8 1/3 mS '1/256 of one half cycle takes about 32.6uS 'The Pause function here waits (34 * 0xD)uS, plus 3uS overhead 'This was originally assembled using Parallax's "8051 style" 'assembler, and was not optimized any further. I suppose 'it could be modified to be closer to 32 or 33uS, but it is 'sufficient for my testing purposes. list 16c84 movlw 0xFD '11111101 tris 0x5 'Port A movlw 0xFF '11111111 tris 0x6 'Port B WaitLow: 'Wait for zero-crossing start btfss 0x5,0x0 'Port A, Bit 1 goto WaitLow 'If high, goto WaitLow WaitHigh: 'Wait for end of Zero Crossing btfsc 0x5,0x0 'Port A, Bit 1 goto WaitHigh 'If low, goto waitHigh call Pause 'Wait for 0xD * 34 + 3 uS bcf 0x5,0x1 'Put Low on port A, Bit 1 movlw 0x3 'Put 3 into W movwf 0xD 'Put W into 0xD call Pause 'Call Pause, 105 uS bsf 0x5,0x1 'Put High on Port A, Bit 1 decf 0xE 'Decrement E movf 0x6,W 'Copy Port B to W movwf 0xD 'Copy W to 0xD goto Start 'Wait for zero Crossing Pause: 'This pauses for 0xD * 34 + 3 Micro Seconds 'Our goal is approx. 32 uS per 0xD 'But this is close enough for testing movlw 0xA 'Move 10 to W movwf 0xC 'Move W to 0xC Label1: decfsz 0xC 'Decrement C goto Label1 'If C is not zero, goto Label1 decfsz 0xD 'Decrement D goto Pause 'If D is not zero, goto Pause return 'Return Of course you'd need to modify this for the chip you mention, and maybe add a cheap serial routine for input since your chip doesn't have an 8 bit wide port to listen to, but the idea is that a seemingly complex job can be done in very little code - you can fit ten copies of the above program into the 10F200. You can find more project information on my Light Dimming page. Incidentally I never did show this to my teacher, but did end up doing a number of lighting rigs for my DJ friend. Well, years ago I wrote a temperature controller with serial I/O (bit-banging the serial I/O because the MCU didn't have a UART) and a simple command interpreter to talk to the controller. MCU was a Motorola (now Freescale) MC68HC705K1 which had a whopping 504 bytes of program memory (OTPROM) and about 32 bytes of RAM. Not as little as the PIC you reference, but I remember having some ROM left over. I still have a few assembled units left, 17 years later; wanna buy one? So yeah, it can be done, at least in assembly. In any case, I've written very simple C programs recently that would probably have fit inside 384 bytes when optimized. Not everything requires large, complex software. You can write a blink a LED with 384 bytes program memory, and even more. As far as I know, it is not possible to extend the program memory with an external chip (unless you're building a full ASM interpreter in the 384 bytes, which would be slow). It is possible to extend data memory with an external chip (EEPROM, SRAM) though. 
• It wouldn't be hard to build a Turing machine simulator in 384 bytes... – Chris Stratton May 1 '13 at 15:23 • @ChrisStratton I meant a full interpreter, so that the 'extended program memory' would've the same features as normal. – Keelan May 1 '13 at 15:24 • Yes, that's what I suggested a means of tightly implementing. The rest is just compiler design... – Chris Stratton May 1 '13 at 15:25 • If one wanted program logic to be stored in an external EEPROM, trying to emulate the PIC instruction set would not be the way to go. A better approach would be to design an instruction set which was optimized for use with the virtual machine; indeed, that is the approach that Parallax took with their "Basic STAMP" in the 1990's. It was a PIC with 3072 bytes of code space, paired with a serial EEPROM chip. – supercat May 1 '13 at 15:39 • BTW, an additional note about the BASIC stamp: it was introduced at a time when flashed-based or EEPROM-based microcontrollers were comparatively rare, but serial EEPROM chips were fairly cheap. For applications that didn't need much speed, a fixed-code micro with a serial EEPROM part would be cheaper than a comparably-sized EEPROM or flash-based micro. The design of the BASIC Stamp wouldn't make sense today, but it was quite practical when it was introduced. – supercat May 2 '13 at 16:43 It's actually worse than you think. Your linked Mouser page is confusing matters when it specifies this processor as having 384 bytes of program memory. The PIC10F200 actually has 256 12-bit words of program memory. So, what can you do with that? The 12-bit PIC instruction set used by the PIC10F20x devices are all single-word instructions, so after you subtract a few instructions for processor setup, you're left with enough space for a program of about 250 steps. That's enough for a lot of applications. I could probably write a washing machine controller in that kind of space, for example. I just looked over the available PIC C compilers, and it looks like about half of them won't even try to emit code for a PIC10F200. Those that do probably put out so much boilerplate code that you might only be able to write an LED flasher in the space left. You really want to use assembly language with such a processor. • You are right about the 256 instruction words. Actually one of them is taken up with the oscillator calibration constant, so you get 255 usable instructions. Also, the 10F200 doesn't use the usual PIC 16 14-bit instruction set. It uses the PIC 12 12-bit instruction set. However, I agree with your basic premises. I've done lots of useful things with a PIC 10F200. +1 – Olin Lathrop Jan 4 '15 at 16:35 • @OlinLathrop: I've clarified the answer. I got the PIC16 term from page 51 of the datashseet, but I've decided it's clearer to just refer to "the 12-bit instruction set." The part prefix isn't a reliable guide to the instruction set used. – Warren Young Jan 4 '15 at 17:51 waving my cane in my day, we had to etch our own bits out of sand! In 1976 (or thereabouts) the Atari 2600 VCS system was one of the most popular "video game platforms" of the time. In it, the microprocessor (MOSTEK 6507) ran at a blazing ~1 MHz and had ****128 bytes of RAM**. A second example that I recall of a microcontroller with extremely limited RAM (~128 bytes) was a PIC12F used on a DC-DC converter. This micro also had to use assembly language in order to run at all. • The OP isn't talking about RAM, he's talking about program space. 
The program space in the Atari 2600 is in the cartridge, not in the RIOT chip. The 2600 supported program ROMs up to 4 kiB without bank switching. (And some commercial cartridges did do bank switching!) As for your PIC12F example, the OP's got you beat: the PIC10F20x series devices have either 16 or 24 bytes of SRAM. – Warren Young Jan 4 '15 at 16:04
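As a side note to the light-dimmer answer earlier in this thread, the timing figures quoted there (about 8 1/3 ms per half-cycle and about 32.6 µs per step) are easy to verify; here is a small sketch assuming 60 Hz mains and 256 trigger points per half-cycle:

```python
# Sanity check of the phase-dimming timing quoted in the light-dimmer answer.
half_cycle_s = 1 / (60 * 2)      # one half-cycle of a 60 Hz AC waveform
step_s = half_cycle_s / 256      # one of 256 possible triac trigger points

print(f"half cycle: {half_cycle_s * 1e3:.3f} ms")  # ~8.333 ms
print(f"one step:   {step_s * 1e6:.2f} us")        # ~32.55 us
```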
# True or True #12

True or False: The integer $$2$$ is always the smallest prime factor of $\large{ 2^m + 3^{4^{m}} + 4^{5^{6^{m}}} + 5^{6^{7^{8^{m}}}}}$, provided that $$m$$ is a nonnegative integer.
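One observation that may help, offered as a hint rather than the official solution: parity alone decides whether $$2$$ divides the sum. For $$m \ge 1$$ the four terms are even, odd, even, odd, so the sum is even and $$2$$ is indeed its smallest prime factor; for $$m = 0$$ the sum is $$1 + 3 + 4^{5} + 5^{6^{7}}$$, which is odd, so $$2$$ is not a factor at all. The word "always" therefore hinges on the case $$m = 0$$.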
# Models of network access using feedback fluid queues

M.R.H. Mandjes, D. Mitra, Willem R.W. Scheinhardt

Research output: Book/Report › Report › Professional

## Abstract

At the access to networks, in contrast to the core, distances and feedback delays, as well as link capacities are small, which has network engineering implications that are investigated in this paper. We consider a single point in the access network which multiplexes several bursty users. The users adapt their sending rates based on feedback from the access multiplexer. Important parameters are the user's peak transmission rate $p$, which is the access line speed, the user's guaranteed minimum rate $r$, and the bound $\epsilon$ on the fraction of lost data.

Original language: Undefined
Place of publication: Enschede
Publisher: University of Twente, Department of Applied Mathematics
Number of pages: 33
ISSN: 0169-2690
Publication status: Published - 2001

### Publication series

Name: Memorandum faculteit TW, Department of Applied Mathematics, University of Twente
No.: 1612
ISSN: 0169-2690

Keywords: METIS-201268, IR-65799, EWI-3432, MSC-60K25, MSC-68M20
# Lining up edges of a tree with TikZ I'm drawing trees with TikZ and \begin{tikzpicture} \node {$C$} child[missing] child{node {$i$} child[missing] child{node {$j$}} }; \end{tikzpicture} produces edges that don't line up as I want them to. If I were to draw circles around the nodes, an edge goes from the bottom center of the parent's circle to the top center of the child's, making the arm appear jagged. I've tried moving the anchors around, but that wasn't it. What I want seems to occur automatically in the examples in the pgf manual (see the final tree example on page 192). What am I not doing? - Could you please add a picture of what you are trying to achieve. –  Caramdir Sep 30 '10 at 22:55 Not quite sure what's going on here. Have you altered the default parent and child anchors? Here are 3 examples: \begin{tikzpicture}[parent anchor=south,child anchor=north,every node/.style={circle,draw}] \node {$C1$} child[missing] child{node {$i$} child[missing] child{node {$j$}} }; \end{tikzpicture} \begin{tikzpicture}[parent anchor=center,child anchor=center,every node/.style={circle,draw}] \node {$C2$} child[missing] child{node {$i$} child[missing] child{node {$j$}} }; \end{tikzpicture} \begin{tikzpicture}[every node/.style={circle,draw}] \node {$C3$} child[missing] child{node {$i$} child[missing] child{node {$j$}} }; \end{tikzpicture} which produce: Now, your example (my C3) appears to be the best; is it C1 you're looking for? - I wanted C3, but was getting C1 with the same code. For whatever reason, deleting \usetikzlibrary{tikz-qtrees} (which I don't need--it was left from an earlier tree-drawing experiment) solves the problem. I assume it changes the anchors. –  hoyland Oct 2 '10 at 0:49
# XeTeX, microtype and fontspec I am using XeTeX from TL2010 with microtype 2.5 beta and fontspec. My main font is Linux Libertine O. I do not see a single difference with and without including the microtype package. Am I doing something wrong, is there a better way to optimize character protrusion in XeTeX to get nicer paragraphs and spend less time fixing overfull lines? This is what I'm doing: ``````% This line doesn't change anything at all \usepackage[protrusion=true,final]{microtype} \usepackage{fontspec} \usepackage{xunicode} \setmainfont[Mapping=tex-text]{Linux Libertine O} % other fonts and settings... `````` The logs do indicate that microtype seems to work: ``````LaTeX Info: Redefining \microtypecontext on input line 54. Package microtype Info: Character protrusion enabled (level 2). Package microtype Info: Using default protrusion set `alltext'. Package microtype Info: No adjustment of tracking. Package microtype Info: No adjustment of spacing. Package microtype Info: No adjustment of kerning. `````` Edit: Actually, protrusion does work properly it seems, but it doesn't seem to help with overfull lines. Does protrusion not help with it, and is it only font expansion that helps with overfull lines? - You have to prepare a configuration file; there are examples in the same directory as `microtype.sty` and one for an OpenType font here. –  egreg Jul 3 '11 at 21:06 That's what I feared :-) Can I find an existing configuration file for Linux Libertine anywhere? If not, what are all the numbers and how do I choose/set them? –  ℝaphink Jul 3 '11 at 21:11 Protrusion means that characters like point, hyphen, comma stick a bit in the margin to get a visually more pleasant result. It changes line breaking so it can in some paragraphs improve it (like rewriting a text can improve it). But to improve line breaking in the complete text you need font expansion or you must adjust the hyphenation. –  Ulrike Fischer Jul 4 '11 at 14:39 font expansion is not available in XeTeX –  Ulrike Fischer Jul 4 '11 at 15:00 @Raphink: The only thing that XeTeX does and LuaTeX doesn't is inclusion of PostScript. And the LaTeX support for LuaTeX is a bit behind (most notably polyglossia doesn't work). Otherwise: Yes. –  Martin Schröder Jul 18 '11 at 16:40
# Sampling

Sampling in MQL mitigates data volume issues. There are two sampling strategies, Random and Sticky.

## Random Sampling

Random sampling uniformly downsamples the stream to a percentage of its original volume. You establish random sampling through a sampling clause like the following:

select * from stream SAMPLE {'strategy': 'RANDOM', 'threshold': 200, 'factor': 10000}

For each item in the stream, a numeric hash is generated. That hash is taken modulo factor to produce a result between 0 (inclusive) and factor (exclusive). If that result is less than threshold, the item is sampled; otherwise it is ignored. You can determine the sampling percentage by remembering that $\frac{threshold}{factor}$ of the values will be sampled: 2% in the example above, with a threshold of 200 and a factor of 10000.

You can also set a salt value, which changes the calculation of the hash. This is helpful if you want to sample over the same set of values but retrieve a different sample of those values. For example:

select * from stream SAMPLE {'strategy': 'RANDOM', 'threshold': 200, 'factor': 10000, 'salt': 123}

## Sticky Sampling

Sticky sampling "sticks" to certain values for the provided keys. That is, if you are sampling on "zipcode" (as in the following example) and you observe a specific zipcode in the stream, you will observe all events which contain that specific zipcode. Sticky sampling can be achieved with a query like the following:

select * from stream SAMPLE {'strategy':'STICKY', 'keys':['zipcode'], 'threshold':200, 'factor':10000, 'salt': 1}

The query above should retrieve 2% of the total stream (see Random Sampling above for why), provided the events are uniformly distributed over the zipcodes.
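To make the thresholding rule concrete, here is a small Python sketch of the selection logic described above. It illustrates the arithmetic only; the hash function, helper names and field names are assumptions made for the example, not MQL's actual implementation.

```python
import hashlib

def _hash(value, salt=0):
    # Deterministic numeric hash of a value (an illustrative choice, not MQL's).
    data = "{}:{}".format(salt, value).encode("utf-8")
    return int(hashlib.md5(data).hexdigest(), 16)

def random_sample(event, threshold, factor, salt=0):
    # Keep the event if hash(event) mod factor falls below threshold.
    return _hash(sorted(event.items()), salt) % factor < threshold

def sticky_sample(event, keys, threshold, factor, salt=0):
    # Hash only the key fields, so every event sharing those key values
    # is kept or dropped together.
    key_values = tuple(event[k] for k in keys)
    return _hash(key_values, salt) % factor < threshold

# Roughly threshold/factor = 2% of events pass the random filter:
events = [{"zipcode": i % 100, "n": i} for i in range(10000)]
kept = [e for e in events if random_sample(e, threshold=200, factor=10000)]
print(len(kept))                                  # on the order of 200

# Sticky sampling keeps *all* events for the zipcodes it selects:
sticky = [e for e in events if sticky_sample(e, ["zipcode"], 200, 10000, salt=1)]
print(sorted({e["zipcode"] for e in sticky}))     # a small, stable set of zipcodes
```

Note how the sticky variant hashes only the key fields: whether a given zipcode is selected is decided once and for all, so the 2% volume figure only holds when events are spread evenly across zipcodes, exactly as the caveat above says.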
# What is the highest number of impulses required for an optimal orbital transfer? Given two arbitrary orbits around a point mass, there exists some optimal transfer between them in terms of delta-v. What's the highest number of impulses such a transfer could require? (That is, I'm asking for a specific quantity of solutions to a variant of Lambert's problem.) "Optimal" in a mathematical sense. Burns not being perfectly impulsive, transfers taking unreasonable amounts of time being undesirable, perturbations, three-body effects and so on can all be ignored. The number is obviously larger than 1, since not all orbits share a common point. If all optimal planar transfers are bi-tangential orbits, the answer is 2 for planar orbits. The number is larger than 2, since solutions with 3 impulses are better for some types of transfers. An infinite apoapsis generalised bi-elliptic transfer, which is sometimes optimal, has two non-zero impulses and two zero-impulse manoeuvers. Whether this counts as 2 or 4 impulses is less important since: 1) There can at most be 2 zero-impulse manoeuvers in any optimal transfer, and 2) Any optimal transfer containing a zero-impulse manoeuver can at most have 2 non-zero impulses. Does an optimal transfer requiring 4 or more non-zero impulses exist? • I was thinking a bit about adding a bounty to your question (because I have a hunch this will turn out to be really interesting) but I don't know if it interferes with your timing or intent. How would you feel about that? – uhoh Nov 16 '20 at 0:15 • @uhoh Sure, go ahead. I too would like to know the answer to this. (Even though bounties haven't proved very effective for this kind of question in the past). – SE - stop firing the good guys Nov 16 '20 at 1:03 • This is quite interesting. For ill-conditioned transfers (practically everything with 30+ degrees inclination change; generally vast majority), the answer is "4" (1. raise apoapsis to near-infinity, 2. circularize+plane change to enter a slow orbit to a point co-axial with target apoapsis, 3. drop periapsis + plane change for target orbit, 4. drop apoapsis.) Considering the "mathematical perfection" cost of 2. and 3. is infinitesimally small, (and transfer time between them is infinite), but the whole thing costs $\sqrt{2} ( v_{Pe1} + v_{Pe2} ) + \epsilon$ – SF. Nov 16 '20 at 8:49 • This is quite a bit, and there will be a margin of degenerate transfers that are not-quite-as-ill-conditioned, that can be done on less delta-V but on more burns. – SF. Nov 16 '20 at 8:53 For coplanar orbits, a bi-elliptical transfer is more efficient than an Hohmann transfer when the ratio of the initial and final radii is greater than 15.58. When the ratio is less than 11.94, an Hohmann transfer is more efficient. (Thanks to notovny for correcting me.) A bielliptic transfer is effectively two subsequent Hohmann transfers. Section 6.3.2 of "Fundamentals of Astrodynamics" by Vallado (p. 328 in the 4th edition) compares Hohmann transfers to the bi-elliptical transfer. In a bi-elliptical transfer, you will need three burns: one to depart the initial orbit onto an elliptical orbit (you must depart when your flight path angle is zero), then perform an apogee burn on the elliptical orbit, and finally perform a final maneuver on the destination orbit, also where you should get a flight path angle of zero. For any other transfer, it really depends on the problem you are trying to solve, and the variables of the problem (e.g. 
how many times can you reignite the engine, what will be the errors in the thruster performance, where are the ground stations placed for navigation, etc.). For example, for interplanetary or lunar missions, one would set up the problem to assume 4 to 8 control points, i.e. positions in the trajectory where you should place a maneuver. One would rarely place more than 8 control points. Each control point is assumed to be a point in the trajectory where a maneuver will be executed, and those require some operational overhead. As such, we ensure there is some time between each potential maneuver. For example, before a maneuver, it is important to have very good knowledge of the position and velocity of the spacecraft before the maneuver (i.e. a good navigation solution), and be able to continue tracking the spacecraft soon after the maneuver. In short, the fewer the maneuvers, the easier it is to fly the spacecraft. So there's a trade off between the fuel savings and the overhead needed for each maneuver. Moreover, optimizers (like SNOPT) would be used to optimize the placement of these control nodes and the optimizer will try to minimize the delta-V at each node. This approach is called "multiple shooting" and is used for Ballistic Lunar Transfers to libration point orbits. The optimizer may show that some of the control nodes have extremely small delta-Vs (e.g. less than a few millimeters per second), and in which case, you can omit that maneuver, and rerun the optimization problem. A similar approach would be done for Earth orbits on different planes. As you also correctly stated, one would generally start with a Lambert solution for a first level approximation. Then, you would place the control points at different positions and let the optimizer find the best solution. • This is a really interesting answer and I'll bet there's a question about three-body orbits that would suit it better. The current question post begins: "Given two arbitrary orbits around a point mass..." so I think at least the intent is to ask about transfers between Keplerian orbits. It doesn't specify anything else about those orbits, so while "planar" is mentioned in an example of the counting procedure, my guess is that the maximum number will be between non-coplanar elliptical orbits that are either awkwardly asynchronous or "maliciously synchronized" to make it hard. – uhoh Nov 15 '20 at 1:35 • While interesting about trajectory optimisation in general, this doesn't answer my question. – SE - stop firing the good guys Nov 15 '20 at 12:52 • "It really depends on the problem you are trying to solve". Optimal delta-v transfers between two orbits around a point mass. – SE - stop firing the good guys Nov 15 '20 at 12:54 • @ChrisR: No mission profile ever involves maneuvers that would lean on delta-v equivalent of turning inclination of LEO by 180 degrees ("""optimally"""). There's no point bringing real-life missions to this discussion because these almost-worst-case scenarios are already so bad nobody even takes them into consideration. – SF. Nov 16 '20 at 21:36 • @ChrisR I see you're confused because you take a lot of engineer's approach, not mathematician's, considering factors which are non-factor in asker's conditions (like delay between maneuvers). For example, " in which case, you can omit that maneuver," - if you see my comment about all the "worst scenario" transfers, the solution consists of two large burns and two minuscule ones, and the minuscule ones are critically important. 
It also has completely impractical duration, which again is non-factor. Or consider a delta-v optimal bielliptic transfer - transfer orbit being infinite is OK. – SF. Nov 17 '20 at 10:29
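As a concrete companion to this answer, here is a small sketch comparing the total delta-v of a Hohmann transfer (two impulses) with a bi-elliptic transfer (three impulses) between circular, coplanar orbits, using nothing but the vis-viva equation. The gravitational parameter, radii and intermediate apoapsis are example values chosen for illustration.

```python
from math import sqrt

MU = 3.986e14                    # Earth's gravitational parameter, m^3/s^2 (example value)

def visviva(r, a):
    """Orbital speed at radius r on a Keplerian orbit with semi-major axis a."""
    return sqrt(MU * (2.0 / r - 1.0 / a))

def hohmann_dv(r1, r2):
    a_t = 0.5 * (r1 + r2)                               # transfer ellipse
    return (abs(visviva(r1, a_t) - visviva(r1, r1)) +   # departure burn
            abs(visviva(r2, r2) - visviva(r2, a_t)))    # arrival burn

def bielliptic_dv(r1, r2, rb):
    a1 = 0.5 * (r1 + rb)                                # first transfer ellipse
    a2 = 0.5 * (r2 + rb)                                # second transfer ellipse
    return (abs(visviva(r1, a1) - visviva(r1, r1)) +    # raise apoapsis to rb
            abs(visviva(rb, a2) - visviva(rb, a1)) +    # burn at the distant apoapsis
            abs(visviva(r2, a2) - visviva(r2, r2)))     # circularize at r2

r1 = 7000e3                      # roughly a LEO radius
for ratio in (10, 12, 16, 50):
    r2 = ratio * r1
    print(ratio, round(hohmann_dv(r1, r2)), round(bielliptic_dv(r1, r2, 100 * r1)))
```

With the intermediate apoapsis pushed far enough out, the bi-elliptic total drops below the Hohmann total once the radius ratio is large, consistent with the 15.58 figure quoted above, while for small ratios the Hohmann transfer wins. This only adds up impulse magnitudes for coplanar circular orbits; it says nothing about the maximum number of impulses an optimal transfer can require, which is what the question asks.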
The 2017 Pi Day art imagines the digits of Pi as a star catalogue with constellations of extinct animals and plants. The work is featured in the article Pi in the Sky at the Scientific American SA Visual blog.

# $\pi$ Day 2017 Art Posters - Star charts and extinct animals and plants

On March 14th celebrate $\pi$ Day. Hug $\pi$—find a way to do it. Those who favour $\tau=2\pi$ will have to postpone celebrations until July 26th. That's what you get for thinking that $\pi$ is wrong. If you're not into details, you may opt to party on July 22nd, which is $\pi$ approximation day ($\pi$ ≈ 22/7). It's 20% more accurate than the official $\pi$ day! Finally, if you believe that $\pi = 3$, you should read why $\pi$ is not equal to 3.

All art posters are available for purchase. I take custom requests.

Caelum non animum mutant qui trans mare currunt. —Horace

This year: creatures that don't exist, but once did, in the skies. And a poem Of Black Body. This year's $\pi$ day song is Exploration by Karminsky Experience Inc. Why? Because "you never know what you'll find on an exploration".

## create myths and contribute!

Want to contribute to the mythology behind the constellations in the $\pi$ in the sky? Many already have a story, but others still need one. Please submit your stories!

This year I wanted to do something more than visuals. Space is vast, so let's fill it with words. I asked my good friend and poet, Paolo Marcazzan, to collaborate. I described the idea for the art—a universe of stars based on $\pi$ and the extinct creatures that live within it—and asked him to find the matching words. And they could not have been more perfect.

# Of Black Body

harken to my anger, mother Nyx, for the deceptions of the gods —Aeschylus, Eumenides

It's not so much stifled intent but tussle and fracas in the backstage of the heart spoken in privatives and the high tone of failed mending. Says there is nothing to see and you are seeing it. A truth that likes it here in the caul of light swallowed and us locked in probability or antecedent. Where this run to untold receding and dispersed signature ends in parallels denied l'amor che move il sole e l'altre stelle or silenced music that is number on a brim of echo, capsized chamber drawn into our constellation, and cooling.

—Paolo Marcazzan

## author's note

The poem situates the dark (black body, energy) as a place for contention and ongoing confrontation. Whether in the recesses of space or heart, the poem probes the territory of distance, absence, uncertainty and muteness. It considers the relational as default positioning of existence (earthly, universal), and that which remains unmet within that context. Life in its dimension of cross-grained, often broken linearity is juxtaposed with a quote from Dante that references instead his vision of sidereal circularity as the benign force that moves all things in the universe. For the earthbound, the questions and concerns remain those of identity, passage, escape from transiency, and slow tempering of hope.

# Classification and regression trees

Fri 28-07-2017

Decision trees are a powerful but simple prediction method.
Decision trees classify data by splitting it along the predictor axes into partitions with homogeneous values of the dependent variable. Unlike logistic or linear regression, CART does not develop a prediction equation. Instead, data are predicted by a series of binary decisions based on the boundaries of the splits. Decision trees are very effective and the resulting rules are readily interpreted.

Trees can be built using different metrics that measure how well the splits divide up the data classes: Gini index, entropy or misclassification error.

Nature Methods Points of Significance column: Classification and decision trees.

When the predicted variable is quantitative and not categorical, regression trees are used. Here, the data are still split, but now the predicted variable is estimated by the average within the split boundaries. Tree growth can be controlled using the complexity parameter, a measure of the relative improvement of each new split.

Individual trees can be very sensitive to minor changes in the data, and even better prediction can be achieved by exploiting this variability. Using ensemble methods, we can grow multiple trees from the same data.

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple Linear Regression. Nature Methods 12:1103-1104.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. Nature Methods 13:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. Nature Methods 13:803-804.

# Personal Oncogenomics Program 5 Year Anniversary Art

Wed 26-07-2017

The artwork was created in collaboration with my colleagues at the Genome Sciences Center to celebrate the 5 year anniversary of the Personalized Oncogenomics Program (POG).

5 Years of Personalized Oncogenomics Program at Canada's Michael Smith Genome Sciences Centre. The poster shows 545 cancer cases. (left) Cases ordered chronologically by case number. (right) Cases grouped by diagnosis (tissue type) and then by similarity within group.

The Personal Oncogenomics Program (POG) is a collaborative research study including many BC Cancer Agency oncologists, pathologists and other clinicians along with Canada's Michael Smith Genome Sciences Centre with support from BC Cancer Foundation. The aim of the program is to sequence, analyze and compare the genome of each patient's cancer—the entire DNA and RNA inside tumor cells—in order to understand what is enabling it to grow and to identify less toxic and more effective treatment options.

# Principal component analysis

Thu 06-07-2017

PCA helps you interpret your data, but it will not always find the important patterns.

Principal component analysis (PCA) simplifies the complexity in high-dimensional data by reducing its number of dimensions.

Nature Methods Points of Significance column: Principal component analysis.

To retain trends and patterns in the reduced representation, PCA finds linear combinations of canonical dimensions that maximize the variance of the projection of the data. PCA is helpful in visualizing high-dimensional data, and scatter plots based on 2-dimensional PCA can reveal clusters.

Altman, N. & Krzywinski, M. (2017) Points of Significance: Principal component analysis. Nature Methods 14:641–642.

Altman, N. & Krzywinski, M. (2017) Points of Significance: Clustering. Nature Methods 14:545–546.

# $k$ index: a weightlifting and Crossfit performance measure

Wed 07-06-2017

Similar to the $h$ index in publishing, the $k$ index is a measure of fitness performance. To achieve a $k$ index for a movement you must perform $k$ unbroken reps at $k$% 1RM. The expected value for the $k$ index is probably somewhere in the range of $k = 26$ to $k = 35$, with higher values progressively more difficult to achieve. In my $k$ index introduction article I provide a detailed explanation, rep scheme table and WOD example.

# Dark Matter of the English Language—the unwords

Wed 07-06-2017

I've applied the char-rnn recurrent neural network to generate new words, names of drugs and countries. The effect is intriguing and facetious—yes, those are real words. But these are not: necronology, abobionalism, gabdologist, and nonerify. These places only exist in the mind: Conchar and Pobacia, Hzuuland, New Kain, Rabibus and Megee Islands, Sentip and Sitina, Sinistan and Urzenia. And these are the imaginary afflictions of the imagination: ictophobia, myconomascophobia, and talmatomania. And these, of the body: ophalosis, icabulosis, mediatopathy and bellotalgia.

Want to name your baby? Or someone else's baby? Try Ginavietta Xilly Anganelel or Ferandulde Hommanloco Kictortick. When taking new therapeutics, never mix salivac and labromine. And don't forget that abadarone is best taken on an empty stomach. And nothing increases the chance of getting that grant funded more than proposing the study of a new –ome! We really need someone to look into the femome and manome.

# Dark Matter of the Genome—the nullomers

Wed 31-05-2017

An exploration of things that are missing in the human genome. The nullomers.

Julia Herold, Stefan Kurtz and Robert Giegerich. Efficient computation of absent words in genomic sequences. BMC Bioinformatics (2008) 9:167
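The tree and PCA columns summarized above are easy to try in practice. Here is a minimal sketch using scikit-learn and its bundled iris data; it is not code from the columns themselves, just an illustration of growing a small classification tree (with depth standing in for the complexity control mentioned above) and projecting the same measurements onto two principal components.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A small classification tree built on the Gini index; limiting the depth
# plays the role of the complexity control discussed in the column.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print("tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree))     # the binary split rules, readily interpreted

# Project the 4-dimensional measurements onto the 2 leading principal components.
pca = PCA(n_components=2)
Z = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
```

A scatter plot of `Z` coloured by `y` shows the kind of cluster structure the PCA column refers to, and an ensemble (for example a random forest) is the natural next step from the single tree grown here.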
# Bounding quantiles of the noncentral chi distribution

I need to bound the empirical quantiles for a noncentral chi distribution (not chi-squared) $\chi_\nu(\lambda)$, where $\nu$ is the number of degrees of freedom and $\lambda$ the non-centrality parameter. In case it helps, I can make the following assumptions:

• $5000>\nu>50$
• $\lambda \in [\frac{\sqrt{\nu-1}}{2}-2,\frac{\sqrt{\nu-1}}{2}+2]$
• number of samples $10^2<n<10^4$
• Interested in the q-quantile and (1-q)-quantile for $q\in[0.01,0.05]$

My current approach involves getting bounds on the theoretical quantiles and applying Chernoff to get bounds on the empirical quantiles given enough samples. The closest thing I've found is Approximations to the Non-Central Chi-Squared Distribution. Note it's for the non-central chi-squared, not the non-central chi. Modifying the math in the paper a bit (setting b=0 in its initial formula) gives me a Gaussian approximation for the non-central chi with approximate cumulants, but I'm not sure if this is a promising approach since this is far from my normal research area.

Is there a better approach to bound the theoretical quantiles of the non-central chi than by trying to approximate it with a Gaussian?

• How big is your sample size $n$? If $(1-q)$-quantiles are the ones of interest, what is the range of the values of $q$? Do you want asymptotic or non-asymptotic bounds? – Iosif Pinelis Apr 9 '18 at 17:59
• @IosifPinelis I've added two extra assumptions referring to q and n; thanks for pointing this out. – etal Apr 9 '18 at 18:23
• I think you can use the fact that, if $X$ is a nonnegative random variable and $x_q$ is its $(1-q)$-quantile, then $x_q^2$ is a $(1-q)$-quantile of $X^2$, and vice versa. Thus, you reduce your noncentral chi distribution to the known case of a noncentral chi-squared distribution. – Iosif Pinelis Apr 9 '18 at 19:10
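Following the last comment above, the theoretical quantiles can be taken straight from the noncentral chi-squared: if $X \sim \chi_\nu(\lambda)$ then $X^2$ is noncentral chi-squared with $\nu$ degrees of freedom and noncentrality $\lambda^2$, so the $q$-quantile of $X$ is the square root of the $q$-quantile of $X^2$. A minimal sketch of that check with SciPy (the values of $\nu$, $\lambda$ and $q$ are just examples within the stated ranges):

```python
import numpy as np
from scipy import stats

nu = 200
lam = np.sqrt(nu - 1) / 2          # example values within the question's ranges
q = 0.01

# Quantiles of the noncentral chi via the noncentral chi-squared
# (noncentrality parameter lambda**2), then take square roots.
lo = np.sqrt(stats.ncx2.ppf(q, df=nu, nc=lam**2))
hi = np.sqrt(stats.ncx2.ppf(1 - q, df=nu, nc=lam**2))
print(lo, hi)

# Monte Carlo sanity check: ||Z + mu|| with standard Gaussian Z has this
# distribution for any mean vector mu of norm lambda.
rng = np.random.default_rng(0)
mu = np.zeros(nu)
mu[0] = lam
samples = np.linalg.norm(rng.standard_normal((50000, nu)) + mu, axis=1)
print(np.quantile(samples, [q, 1 - q]))
```

The empirical side can then be handled as planned, with Chernoff-type (or DKW-type) bounds around these theoretical quantiles.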
# 10 139

(KnotPlot image)

See the full Rolfsen Knot Table.

Visit 10 139's page at the Knot Server (KnotPlot driven, includes 3D interactive images!)
Visit 10_139's page at Knotilus!
Visit 10 139's page at the original Knot Atlas!

### Knot presentations

Planar diagram presentation: X_{4,2,5,1} X_{10,4,11,3} X_{11,19,12,18} X_{5,15,6,14} X_{7,17,8,16} X_{15,7,16,6} X_{17,9,18,8} X_{13,1,14,20} X_{19,13,20,12} X_{2,10,3,9}

Gauss code: 1, -10, 2, -1, -4, 6, -5, 7, 10, -2, -3, 9, -8, 4, -6, 5, -7, 3, -9, 8

Dowker-Thistlethwaite code: 4 10 -14 -16 2 -18 -20 -6 -8 -12

Conway Notation: [4,3,3-]

Minimum Braid Representative (length is 10, width is 3, braid index is 3), A Morse Link Presentation, An Arc Presentation ([{6, 12}, {5, 7}, {1, 6}, {8, 11}, {7, 10}, {4, 8}, {3, 5}, {2, 4}, {12, 3}, {11, 9}, {10, 2}, {9, 1}])

### Three dimensional invariants

Symmetry type: Reversible
Unknotting number: 4
3-genus: 4
Bridge index: 3
Super bridge index: Missing
Nakanishi index: 1
Maximal Thurston-Bennequin number: [7][-16]
Hyperbolic Volume: 4.85117
A-Polynomial: See Data:10 139/A-polynomial

### Four dimensional invariants

Smooth 4 genus: 4
Topological 4 genus: [3,4]
Concordance genus: 4
Rasmussen s-Invariant: -8

### Polynomial invariants

Alexander polynomial: $t^4 - t^3 + 2t - 3 + 2t^{-1} - t^{-3} + t^{-4}$

Conway polynomial: $z^8 + 7z^6 + 14z^4 + 9z^2 + 1$

2nd Alexander ideal (db, data sources): {1}

Determinant and Signature: {3, 6}

Jones polynomial: $-q^{12} + q^{11} - q^{10} + q^9 - q^8 + q^6 + q^4$

HOMFLY-PT polynomial (db, data sources): $z^8a^{-8} + 8z^6a^{-8} - z^6a^{-10} + 21z^4a^{-8} - 7z^4a^{-10} + 21z^2a^{-8} - 13z^2a^{-10} + z^2a^{-12} + 6a^{-8} - 6a^{-10} + a^{-12}$

Kauffman polynomial (db, data sources): $z^8a^{-8} + z^8a^{-10} + z^7a^{-9} + z^7a^{-11} - 8z^6a^{-8} - 8z^6a^{-10} - 7z^5a^{-9} - 7z^5a^{-11} + 21z^4a^{-8} + 20z^4a^{-10} + z^4a^{-14} + 13z^3a^{-9} + 13z^3a^{-11} + z^3a^{-13} + z^3a^{-15} - 21z^2a^{-8} - 19z^2a^{-10} - 2z^2a^{-14} - 6za^{-9} - 5za^{-11} - za^{-13} - 2za^{-15} + 6a^{-8} + 6a^{-10} + a^{-12}$

The A2 invariant: $q^{-14} + q^{-16} + 2q^{-18} + 2q^{-20} + q^{-22} - q^{-28} - q^{-32} - q^{-34} - q^{-36} - q^{-38} + q^{-40}$

The G2 invariant: $q^{-70} + q^{-72} + q^{-74} + q^{-76} + 2q^{-80} + 3q^{-82} + q^{-84} + q^{-86} + q^{-88} + 3q^{-90} + 3q^{-92} + 2q^{-94} - 2q^{-96} + 2q^{-98} + 3q^{-100} + q^{-102} - 3q^{-106} + q^{-108} + 2q^{-110} - 2q^{-112} - 3q^{-114} - 3q^{-116} - q^{-118} + 4q^{-120} - 3q^{-122} - 3q^{-124} - q^{-126} - q^{-128} + 2q^{-130} - 3q^{-132} - q^{-134} + 2q^{-140} - q^{-152} - q^{-156} - q^{-158} + 2q^{-160} - 2q^{-162} - q^{-164} - 3q^{-168} + q^{-170} - 2q^{-172} + q^{-176} - q^{-178} + 2q^{-180} + 2q^{-186} - q^{-188} - q^{-190} + q^{-192} + q^{-196}$

### "Similar" Knots (within the Atlas)

Same Alexander/Conway Polynomial: {}

Same Jones Polynomial (up to mirroring, $q\leftrightarrow q^{-1}$): {}

### Vassiliev invariants

V2 and V3: (9, 25)

V2,1 through V6,9: V2,1 = 36, V3,1 = 200, V4,1 = 648, V4,2 = 1466, V4,3 = 206, V5,1 = 7200, V5,2 = $\frac{35888}{3}$, V5,3 = $\frac{6272}{3}$, V5,4 = 1384, V6,1 = 7776, V6,2 = 20000, V6,3 = 52776, V6,4 = 7416, V6,5 = $\frac{1001773}{10}$, V6,6 = $\frac{77402}{15}$, V6,7 = $\frac{523826}{15}$, V6,8 = $\frac{3059}{6}$, V6,9 = $\frac{43373}{10}$

V2,1 through V6,9 were provided by Petr Dunin-Barkowski, Andrey Smirnov, and Alexei Sleptsov and uploaded in October 2010 by User:Drorbn. Note that they are normalized differently than V2 and V3.

### Khovanov Homology

The coefficients of the monomials $t^r q^j$ are shown, along with their alternating sums χ (fixed j, alternation over r). The squares with yellow highlighting are those on the "critical diagonals", where j−2r = s+1 or j−2r = s−1, where s = 6 is the signature of 10 139. Nonzero entries off the critical diagonals (if any exist) are highlighted in red.

| j \ r | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | χ |
|-------|---|---|---|---|---|---|---|---|---|---|----|
| 25 |   |   |   |   |   |   |   |   |   | 1 | -1 |
| 23 |   |   |   |   |   |   |   |   |   |   | 0 |
| 21 |   |   |   |   |   |   |   | 1 | 1 |   | 0 |
| 19 |   |   |   |   |   | 1 | 1 |   |   |   | 0 |
| 17 |   |   |   |   |   | 1 | 1 |   |   |   | 0 |
| 15 |   |   |   | 1 | 1 | 1 |   |   |   |   | -1 |
| 13 |   |   |   |   | 1 |   |   |   |   |   | 1 |
| 11 |   |   | 1 |   |   |   |   |   |   |   | 1 |
| 9  | 1 |   |   |   |   |   |   |   |   |   | 1 |
| 7  | 1 |   |   |   |   |   |   |   |   |   | 1 |

Integral Khovanov Homology ($\dim{\mathcal G}_{2r+i}\operatorname{KH}^r_{\mathbb Z}$, for i = 5, 7, 9):

r = 0: ${\mathbb Z}$, ${\mathbb Z}$
r = 1: (none)
r = 2: ${\mathbb Z}$
r = 3: ${\mathbb Z}_2$, ${\mathbb Z}$
r = 4: ${\mathbb Z}$, ${\mathbb Z}$
r = 5: ${\mathbb Z}$, ${\mathbb Z}$, ${\mathbb Z}$
r = 6: ${\mathbb Z}\oplus{\mathbb Z}_2$, ${\mathbb Z}$
r = 7: ${\mathbb Z}_2$, ${\mathbb Z}$
r = 8: ${\mathbb Z}$
r = 9: ${\mathbb Z}_2$, ${\mathbb Z}$

### Computer Talk

Much of the above data can be recomputed by Mathematica using the package KnotTheory. See A Sample KnotTheory Session, or any of the Computer Talk sections above.
# Shell-escape in Texmaker This is a frequently asked question but none I have seen actually helped me. My problem is that I installed Gnuplot 5 and I have not been able to make it work in the TeXworks. The figure I have been trying to plot is this: \documentclass[11pt]{article} \usepackage{tikz} \usepackage{pgfplots} \pgfplotsset{compat=1.5} \usepackage{listings} \usepackage{pgfplots} \usepackage[miktex]{gnuplottex} \begin{document} \begin{tikzpicture} \begin{axis} [ title={Contour plot, view from top}, view={0}{90} ] contour gnuplot={levels={0.8, 0.4, 0.2, -0.2}} ] {sin(deg(sqrt(x^2+y^2)))/sqrt(x^2+y^2)}; \end{axis} \end{tikzpicture} \end{document} As many answers have suggested I tried to call gnuplottex in the preamble with the code \usepackage[miktex]{gnuplottex} yet this did not help and instead I have been getting the error Package pgfplots Error: sorry, plot file{.._contourmp0.table} could not be opened. See the pgfplots package documentation for explanation followed by the warning that Shell escape is not enabled. EDIT I am aware that there is a phenomenally identical question on this site, namely gnuplottex with windows 8.1 but it does not address my problem. The problem persists even after using \usepackage[miktex]{gnuplottex}, as I pointed out from the beginning. Following suggestions, I have made sure there is a path towards the bin directory and have included the statement --enable-write18 in the PdfLaTeX command line of TeXworks. All other suggestions are of course welcome. Also here is my attempt to compile through the command line: And my Texmaker configuration: • Have you seen How to enable shell-escape in TeXworks? – Torbjørn T. Apr 3 '15 at 13:59 • @TorbjørnT. I have but the version of TeXWorks discussed is a much older one and many settings have changed. I cannot quite follow it. – JohnK Apr 3 '15 at 14:02 • Huh? Looks exactly the same as the one I have, which according to Help -> About Texworks is version 0.5. Which version do you have, and how does the Preference-window look? – Torbjørn T. Apr 3 '15 at 14:05 • @TorbjørnT. Oh dear, it's not TeXworks but Texmaker the one I am working with. Very sorry for the confusion. I amended the name. – JohnK Apr 3 '15 at 14:07 • possible duplicate of gnuplottex with windows 8.1 – egreg Apr 3 '15 at 14:09 You seem to have a MiKTeX distribution. Add the --enable-write18 option to pdflatex options by opening the Options menu, Configure TeXmaker submenu. You'll get this pop-up window: • @JohnK It might just be me, and I have no tested it, but on my machine I have in the past successfully used -enable-write18 with just one dash. – moewe Apr 3 '15 at 14:48 • @JohnK You will have to locate the folder with gnuplot.exe in it (on my system that is C:\Program Files (x86)\gnuplot\bin) and add that directory to the PATH variable, see windowsitpro.com/systems-management/… on how to do that. – moewe Apr 3 '15 at 15:01 • @JohnK No, you will have to have something pointing to the bin directory. That might not solve the problem, though. I was just able to reproduce your issue on my computer. The problem seems to be that the file <tksjs>_contourtmp0.table is not created by the packages. – moewe Apr 3 '15 at 15:11 • It should be C:\Program Files (x86)\Gnuplot\bin. – Bernard Apr 3 '15 at 15:16 • @JohnK Have you tried running the compilation of the .tex file from the command line? Via pdflatex -enable-write18 <filename without extension>. If you have changed the PATH the old "have you tried turning it off and on again" trick might help. 
If all fails, please prepare a MWE and update your question. – moewe Apr 3 '15 at 15:34
# binomial coefficient

Prove that $$\frac{1}{\sqrt{1-4t}} \left(\frac{1-\sqrt{1-4t}}{2t}\right)^k = \sum\limits_{n=0}^{\infty}\binom{2n+k}{n}t^n, \quad \forall k\in\mathbb{N}.$$

I have already tried induction over $k$, but I have problems showing the statement holds for $k=0$ or $k=1$.

- See Concrete Mathematics, page 203. – wj32 Nov 15 '12 at 8:34
- I am also looking for a combinatorial proof of this identity. – bronko Dec 7 '12 at 3:45

The methodology is that you have to expand the brackets into a form expressible as a sum of polynomials with binomial coefficients, and guess the clues through differentiation. It works no matter what $k$ is.

- Thank you for your answer. I expanded the left hand side via the binomial formula. The problem is that the sum on the left hand side only counts to $k$ while the one on the right hand side counts to $\infty$. – bronko Nov 22 '12 at 13:47
- It does not matter. The infinity means adding the terms together while $t$ is smaller than 1/4. When adding all these terms together, we can see a geometric series there. – Raju Gujarati Nov 22 '12 at 14:05
- I expanded the left hand side to $\sum_{n=0}^k\binom{k}{n}\left(\frac{1}{2t}\right)^k\frac{(-\sqrt{1-4t})^{k-n-1}}{(2t)^{k-n}}$ but that's it. I do not know how to proceed. – bronko Nov 24 '12 at 8:06
- Please edit your comment, I really can't see the formula you have typed. – Raju Gujarati Nov 26 '12 at 1:53
- $$\sum_{n=0}^k \binom{k}{n} \left( \frac{1}{2t} \right)^k (- \sqrt{1-4t})^{k-n-1} (2t)^{-k+n}$$ – bronko Nov 26 '12 at 5:11
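Not a proof, but a quick way to convince yourself the identity holds before hunting for one is to compare Taylor coefficients for small $k$ and $n$. Multiplying both sides by $(2t)^k$ makes the left-hand side analytic at $t=0$, so the coefficients can be read off directly. A small SymPy sketch of that check (my own, not from the thread):

```python
from sympy import symbols, sqrt, binomial, series

t = symbols('t')
N = 10
for k in range(4):
    # After multiplying by (2t)^k the claim reads:
    #   (1 - sqrt(1-4t))^k / sqrt(1-4t) == sum_n C(2n+k, n) * 2^k * t^(n+k)
    f = (1 - sqrt(1 - 4*t))**k / sqrt(1 - 4*t)
    expansion = series(f, t, 0, N).removeO()
    for n in range(N - k):
        assert expansion.coeff(t, n + k) == 2**k * binomial(2*n + k, n), (k, n)
print("Taylor coefficients agree with C(2n+k, n) for k = 0..3")
```

Each passing assert is only evidence for small $k$ and $n$, not a proof, but it makes the induction worth pursuing.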
# Autotelic Computing ## Friday, December 23, 2011 ### Scrabbling with Shannon Once the AI Class was over, I wanted to take some time to study the second optional NLP problem with Python. Finding a solution is not difficult (because even an imperfect one will yield the answer anyway), but devising a satisfying process to do so (i.e. a general way of solving similar problems), is much harder. #### Problem Representation The first task is working up the problem representation. We need to parse: s = """|de| | f|Cl|nf|ed|au| i|ti| |ma|ha|or|nn|ou| S|on|nd|on| |ry| |is|th|is| b|eo|as| | |f |wh| o|ic| t|, | |he|h | |ab| |la|pr|od|ge|ob| m|an| |s |is|el|ti|ng|il|d |ua|c | |he| |ea|of|ho| m| t|et|ha| | t|od|ds|e |ki| c|t |ng|br| |wo|m,|to|yo|hi|ve|u | t|ob| |pr|d |s |us| s|ul|le|ol|e | | t|ca| t|wi| M|d |th|"A|ma|l |he| p|at|ap|it|he|ti|le|er| |ry|d |un|Th|" |io|eo|n,|is| |bl|f |pu|Co|ic| o|he|at|mm| |hi| | |in| | | t| | | | |ye| |ar| |s | | |. |""" A useful data structure in this context is a list of 19 sublists, each corresponding to a column of 8 tokens. The reason for this is that we need to easily shuffle around the columns, which are the atomic parts of the problem (i.e. those which cannot change): grid = [[''] * 8 for i in range(19)] for j, line in enumerate(s.split('\n')): cols = line.strip().split('|')[1:-1] for i, col in enumerate(cols): grid[i][j] = col There are certainly more elegant ways to handle this, but this is enough for the discussion. To revert back from this representation to a readable form: def repr(grid): return '\n'.join(['|%s|' % '|'.join([grid[i][j] for i in range(19)]) for j in range(8)]) We need to consider two abstract components for solving this problem: the state space exploration, and the evaluation function we must use to guide it, since there's clearly no way we can brute-force the 19! possible configurations. #### State Space Exploration For the exploration function, here is a simple approach: from a given grid configuration, consider all the possible grid reorderings, resulting from having an arbitrary column reinserted to an arbitrary position. Then expand them in the order of their score ranking. Here is an example showing two possible column reorderings from the initial state: Note that this is not the same as simply swapping columns: from "a,b,c", for instance, we'd like to derive "b,a,c", "b,c,a", "c,a,b" and "a,c,b", which wouldn't possible with only one swapping step. One way to do this is: def nextGrids(grid): global visited # I know.. next_grids = set() for i in range(19): for j in range(19): if i == j: continue next_grid = list(list(col) for col in grid) next_grid.insert(i, next_grid.pop(j)) next_grid = tuple(tuple(col) for col in next_grid) if next_grid not in visited: return next_grids Again it's quite certain that nicer solutions exist for this: in particular, I use sets to avoid repetitions (both for the already explored states overall and the reinsertion configs for a given grid), which thus requires those ugly "tuplify/listify" operations (a list is editable but not hashable, while a set is the opposite). 
We can then use this function in a greedy and heuristic way, always expanding first the best reordering from any given state (using a heap to make our life easier): visited = set() frontier = [] # heap of (score, grid)'s while True: for next_grid in nextGrids(grid): # using a min-heap so score has to be negated heappush(frontier, (-gridScore(next_grid), next_grid)) score, grid = heappop(frontier) score, grid = max(results) print repr(grid) print score This is obviously greedy, because it might easily overlook a better, but more convoluted solution, while blindly optimizing the current step. Our only hope is that the evaluation function might be more clever in compensation. Note also that we don't stop, simply because there is no obvious criterion for doing so. We could define a tentative one, but since that's not the goal of the exercise, let's just proceed by inspection (once I knew the answer, I cheated by hardcoding a stopping criterion, just for the purpose of counting the number of steps to reach it). But then what about the scores? But just before.. #### Some Doubts.. At first, my solution wasn't based on a heap: it was simply considering the best configuration at any point, and then would forget about the others (in other words: it used a single-element search frontier). But I had doubts: was this strategy guaranteed to eventually exhaust the space of the possible permutations of a given set? If not, even in theory, it would seem to be a potential problem as for the generalness of the solution.. I'm not sure about the math required to describe the relationship between the n! permutations of a set and the n2-2n+1 element reorderings (there must be a technical term for this?) from any configuration, but after having performed some practical tests, I reached the conclusion that using a heap-based method (i.e. a frontier with many elements) was more sound, because although I cannot prove it, I'm pretty certain that it is guaranteed to eventually exhaust the permutation space, whereas the first method is not. In the context of this problem, it doesn't make a difference though, because the search space is so large that we would have to wait a very long time before we see these two closely related strategies behave differently. #### Language Modeling Enters language modeling.. this is the component we need for the evalution function, to tell us if we are heading in the right direction or not, in terms of unscrambling the hidden message. My first intuition was that character-based n-grams would work best. Why? Because while exploring the state space, due to the problem definition, most of the time we are dealing with parts or scrambled words, not complete (i.e. real) ones. Thus a character-based n-gram should be able to help, because it works at the sub-lexical level (but happily not exclusively at this level, as it retains its power once words begin to get fully formed, which is what we want). 
To do this, I used the venerable Brown Corpus, a fairly small (by today's standards) body of text containing about 1M words, which should be enough for this problem (note that I could have also used NLTK): CHAR_NGRAM_ORDER = 3 EPSILON = 1e-10 char_counts = defaultdict(int) for line in open('brown.txt'): words = re.sub(r'\W+', ' ', line).split() # tokenize with punctuation and whitespaces chars = ' '.join(words) char_counts[''] += len(chars) # this little hack is useful for 1-grams for n in range(CHAR_NGRAM_ORDER): for i in range(len(chars)): if i >= n: char_counts[chars[i-n:i+1]] += 1 # charProb('the') should be > than charProb('tha') def charProb(c): global char_counts if c in char_counts: return char_counts[c] / char_counts[c[:-1]] return EPSILON Another debatable aspect: I use a very small value as the probability of combinations that have never been seen, instead of using a proper discounting method (e.g. Laplace) to smooth the MLE counts: I thought it wouldn't make a big difference, but I might be wrong. The outcome however is that I cannot talk about probabilities, strictly speaking, so let's continue with scores instead (which means something very close in this context). The final required piece is the already mentioned function to compute the score for a given grid: def gridScore(grid): s = ' '.join([''.join([grid[i][j] for i in range(19)]) for j in range(8)]) # tokenize with punctuation and whitespaces s = ' '.join(re.sub(r'\W+', ' ', s).split()) LL = 0 # log-likelihood for i in range(len(s)): probs = [] for n in range(CHAR_NGRAM_ORDER): if i >= n: probs.append(charProb(s[i-n:i+1])) # interpolated LMs with uniform weights pi = sum([p/len(probs) for p in probs]) LL += math.log(pi) return LL A couple of things to note: I use the log-likelihood because it is more numerically stable, and I also use a simple interpolation method (with uniform weights) to combine models of different orders. So.. does this work? Not terribly well unfortunately.. Although with some tuning (the most important aspect being the order N of the model) it's possible to reach somewhat interesting states like this: | i|nf|or|ma|ti|on|ed|Cl|au|de| S|ha|nn|on| f|ou|nd| | | |as|is| o|f | | | b|th|eo|ry|, |wh|ic|h |is| t|he| | | | m|od|el|s |an|d |ge|pr|ob|ab|il|is|ti|c |la|ng|ua| | | |et|ho|ds| t|ha|t | m|of| t|he| c|od|e |br|ea|ki|ng| | | | t|hi|s |pr|ob|le|ve|yo|u |wo|ul|d |us|e |to| s|ol|m,| | |"A| M|at|he|ma|ti|d |wi|th| t|he| p|ap|er| t|it|le|ca|l | |n,|" |pu|bl|is|he|io|Th|eo|ry| o|f |Co|mm|un|ic|at|d | | | | | | | | | |in| t|hi|s |ye|ar|. | | | | | | from which it's rather easy to guess the answer (we are so good at this actually that there's even an internet meme celebrating it), for some reason my experiments seemed to always find themselves stuck in some local minima from which they could not escape. #### The Solution Considering the jittery nature of my simplistic optimization function (although you prefer it to go up, there is no guarantee that it will always do), I pondered for a while about a backtracking mechanism, to no avail. The real next step is rather obvious: characters are probably not enough, the model needs to be augmented at the level of words. The character-based model should be doing most of the work in the first steps of the exploration (when the words are not fully formed), and the word-based model should take over progressively, as we zero in on the solution. 
It's easy to modify the previous code to introduce this new level: CHAR_NGRAM_ORDER = 6 WORD_NGRAM_ORDER = 1 char_counts = defaultdict(int) word_counts = defaultdict(int) for line in open('brown.txt'): # words words = re.sub(r'\W+', ' ', line).split() # tokenize with punctuation and whitespaces word_counts[''] += len(words) # this little hack is useful for 1-grams for n in range(WORD_NGRAM_ORDER): for i in range(len(words)): if i >= n: word_counts[' '.join(words[i-n:i+1])] += 1 # chars chars = ' '.join(words) char_counts[''] += len(chars) for n in range(CHAR_NGRAM_ORDER): for i in range(len(chars)): if i >= n: char_counts[chars[i-n:i+1]] += 1 # wordProb('table') should be > than wordProb('tabel') def wordProb(w): global word_counts words = w.split() h = ' '.join(words[:-1]) if w in word_counts: return word_counts[w] / word_counts[h] return EPSILON def gridScore(grid): s = ' '.join([''.join([grid[i][j] for i in range(19)]) for j in range(8)]) # tokenize with punctuation and whitespaces s = ' '.join(re.sub(r'\W+', ' ', s).split()) LL = 0 # log-likelihood for i in range(len(s)): probs = [] for n in range(CHAR_NGRAM_ORDER): if i >= n: probs.append(charProb(s[i-n:i+1])) if not probs: continue pi = sum([p/len(probs) for p in probs]) LL += math.log(pi) words = s.split() for i in range(len(words)): probs = [] for n in range(WORD_NGRAM_ORDER): if i >= n: probs.append(wordProb(' '.join(words[i-n:i+1]))) if not probs: continue pi = sum([p/len(probs) for p in probs]) LL += math.log(pi) return LL After some tinkering, I found that character-based 6-grams augmented with word unigrams was the most efficient model, as it solves the problem in 15 steps only. Of course this is highly dependent on the ~890K training parameters obtained using the Brown corpus, as with some other text it would probably look quite different. I'm not sure if this is the optimal solution (it's rather hard to verify), but it should be pretty close. And finally.. isn't it loopy in a strange way that the solution to a problem refers to what is probably the most efficient way of solving it? That was really fun, as the rest of the course was, and if this could be of interest to anyone, the code is available on GitHub. ## Tuesday, November 29, 2011 ### A Birthday Simulator Earlier today I was reading about the Tuesday Birthday Problem (which curiously doesn't seem to have its own entry on Wikipedia.. maybe it is known under a different name?) and although I was convinced by the argument, I thought that a little simulation would help deepen my understanding of this strange paradox (or at least make it a little more intuitive). The problem I had is how to represent, in a clear way, some a priori knowledge (namely, the fact that one of the children is a son born on a Tuesday) in a numerical simulation? Since directly modeling the conditional distribution wouldn't be trivial, an easier way to do it is by using rejection sampling: iterate over a set of randomly generated family configurations, and reject those that do not match the given fact, i.e. those not containing at least a son born on a Tuesday. 
From the configurations that passed the test, the proportion of those having the other child also a son (born on whatever day), should yield the answer (which of course is not 1/2, as intuition first strongly suggests): from __future__ import division from random import * n = 0 n_times_other_child_is_son = 0 while n < 100000: child1 = choice('MF') + str(randint(0, 6)) child2 = choice('MF') + str(randint(0, 6)) children = child1 + child2 if 'M2' not in children: continue if children[0::2] == 'MM': n_times_other_child_is_son += 1 n += 1 print n_times_other_child_is_son / n # should be close to 13/27 ## Thursday, October 6, 2011 ### A Wormhole Through Sudoku Space I did what I suggested in my last post, and finally read about Peter Norvig's constraint propagation method for solving Sudoku. On one hand it's quite humbling to discover a thinking process so much more elaborate than what I could achieve, but on the other, I'm glad that I didn't read it first, because I wouldn't have learned as much from it. It turns out that my insights about search were not so far off the mark... but then the elimination procedure is the real deal (in some cases, it can entirely solve an easy problem on its own). In fact the real power is unleashed when the two are combined. The way I understand it, elimination is like a mini-search, where the consequences of a move are carried over their logical conclusion, revealing, many steps ahead, if it's good or not. It is more than a heuristic, it is a solution space simplifier, and a very efficient one at that. My reaction when I understood how it worked was to ask myself if there's a way I could adapt it for my current Python implementation, without modifying it too much. It is not exactly trivial, because the two problem representation mechanisms are quite different: Peter Norvig's one explicitly models the choices for a given square, while mine only does it implicitly. This meant that I couldn't merely translate the elimination algorithm in terms of my implementation: I'd have to find some correspondence, a way to express one in terms of the other. 
After some tinkering, what I got is a drop-in replacement for my Sudoku.set method: def set(self, i, j, v, propagate_constraints): self.grid[i][j] = v if propagate_constraints: for a in range(self.size): row_places = defaultdict(list) row_available = set(self.values) col_places = defaultdict(list) col_available = set(self.values) box_places = defaultdict(list) box_available = set(self.values) for b in range(self.size): options = [] bi, bj = self.box_coords[a][b] for vv in self.values: if not self.grid[a][b] and self.isValid(a, b, vv): options.append(vv) row_places[vv].append(b) if not self.grid[b][a] and self.isValid(b, a, vv): col_places[vv].append(b) if not self.grid[bi][bj] and self.isValid(bi, bj, vv): box_places[vv].append((bi,bj)) if not self.grid[a][b]: if len(options) == 0: return False elif len(options) == 1: # square with single choice found return self.set(a, b, options[0], propagate_constraints) if row_available != set(row_places.keys()): return False if col_available != set(col_places.keys()): return False if box_available != set(box_places.keys()): return False for vv, cols in row_places.items(): if len(cols) == 1: # row with with single place value found return self.set(a, cols[0], vv, propagate_constraints) for vv, rows in col_places.items(): if len(rows) == 1: # col with with single place value found return self.set(rows[0], a, vv, propagate_constraints) for vv, boxes in box_places.items(): if len(boxes) == 1: # box with with single place value found return self.set(boxes[0][0], boxes[0][1], vv, propagate_constraints) return True Ok.. admittedly, it is very far from being as elegant as any of Peter Norvig's code.. it is even possibly a bit scary.. but that is the requirement, to patch my existing method (i.e. to implement elimination without changing the basic data structures). Basically, it complements the set method to make it seek two types of things: • a square with a single possible value • a row/column/box with a value that has only one place to go Whenever it finds one of these, it recursively calls itself, to set it right away. While doing that, it checks for certain conditions that would make this whole chain of moves (triggered by the first call to set) invalid: • a square with no possible value • a row/column/box with a set of unused values that is not equal to the set of values having a place to go (this one was a bit tricky!) So you'll notice that this is not elimination per se, but rather.. something else. Because really there's nothing to eliminate, this is what happens to the elimination rules, when they are forced through an unadapted data structure. With Peter Norvig's implementation, it is so much more elegant and efficient than this, of course. And speaking of efficiency, another obvious disclaimer is that of course this whole thing is not as efficient as Peter Norvig's code, and for many reasons. I wasn't interested in efficiency this time, but rather in finding a correspondence between the two methods. Finally, we need to adapt the solver (or search method). The major difference with the previous greedy solver (the non-eliminative one) is the fact that a move is no longer a single change we do to the grid (and that can be easily undone when we backtrack). This time, an elimination call can change many squares, which is a problem with this method, because we cannot do all the work with the same Sudoku instance, for backtracking purposes, and such an instance is not as efficiently copied as a dict of strings. 
There are probably many other ways, but to keep the program object-oriented, here is what I found: def solveGreedilyWithConstraintPropagation(self): nopts = {} # n options -> (opts, (i,j)) for i in range(self.size): for j in range(self.size): if self.grid[i][j]: continue opts_ij = [] for v in self.values: if self.isValid(i, j, v): opts_ij.append(v) n = len(opts_ij) if n == 0: return None nopts[n] = (opts_ij, (i,j)) if nopts: opts_ij, (i,j) = min(nopts.items())[1] for v in opts_ij: S = deepcopy(self) if S.set(i, j, v, propagate_constraints=True): T = S.solveGreedilyWithConstraintPropagation() if T: return T return None return self Again it's not terribly elegant (nor as efficient) but it works, in the sense that it yields the same search tree as Peter Norvig's implementation. Just before doing an elimination (triggered by a call to set), we deepcopy the current Sudoku instance (self), and perform the elimination on the copy instead. If it succeeds, we carry the recursion over with the copy. When a solution is found, the instance is returned, so that's why this method has to be called like this: S = S.solveGreedilyWithConstraintPropagation() To illustrate what's been gained with this updated solver, here are its 6 first recursion layers, when ran against the "hard" problem of my previous post: Again, the code for this exercise is available on my GitHub. ## Sunday, October 2, 2011 ### A Journey Into Sudoku Space #### Or.. Some Variations on a Brute-Force Search Theme While browsing for code golfing ideas, I became interested in Sudoku solving. But while by definition Sudoku code golfing is focused on source size (i.e. trying to come up with the smallest solver for language X, in terms of number of lines, or even characters), I was more interested in writing clear code, to hopefully learn a few insights on the problem. #### Representational Preliminaries I first came up with this Python class, to represent a Sudoku puzzle: class Sudoku: def __init__(self, conf): self.grid = defaultdict(lambda: defaultdict(int)) self.rows = defaultdict(set) self.cols = defaultdict(set) self.boxes = defaultdict(set) self.size = int(math.sqrt(len(conf))) # note that the number for i in range(self.size): # of squares is size^2 for j in range(self.size): v = conf[(i * self.size) + j] if v.isdigit(): self.set(i, j, int(v)) def set(self, i, j, v): self.grid[i][j] = v def unset(self, i, j): v = self.grid[i][j] self.rows[i].remove(v) self.cols[j].remove(v) self.boxes[self.box(i, j)].remove(v) self.grid[i][j] = 0 def isValid(self, i, j, v): return not (v in self.rows[i] or v in self.cols[j] or v in self.boxes[self.box(i, j)]) def box(self, i, j): if self.size == 9: return ((i // 3) * 3) + (j // 3) elif self.size == 8: return ((i // 2) * 2) + (j // 4) elif self.size == 6: return ((i // 2) * 2) + (j // 3) assert False, 'not implemented for size %d' % self.size The puzzle is initialized with a simple configuration string, for instance: easy = "530070000600195000098000060800060003400803001700020006060000280000419005000080079" harder = "..............3.85..1.2.......5.7.....4...1...9.......5......73..2.1........4...9" and it can actually fit Sudoku variants of different size: 9x9, 8x8, 6x6. The only thing that changes for each is the definition of the box method for finding the "coordinates" of a box, given a square position. 
#### Validity Checking One interesting aspect to note about this implementation is the use of a series of set data structures (for rows, columns and boxes, wrapped in defaultdicts, to avoid the initialization step), to make the validation of a "move" (i.e. putting a value in a square) more efficient. To be a valid move (according to Sudoku rules), the value must not be found in the corresponding row, column or box, which the isValid method can tell very quickly (because looking for something in a set is efficient), by simply checking that it is not in any of the three sets. In fact many Sudoku code golf implementations, based on a single list representation (rather than a two-dimensional grid), use a clever and compact trick for validity checks (along the lines of): for i in range(81): invalid_values_for_i = set() for j in range(81): if j / 9 == i / 9 or j % 9 == i % 9 or ((j / 27 == i / 27) and (j % 9 / 3) == (i % 9 / 3)): valid_values_for_i = set(range(1, 10)) - invalid_values_for_i which you'll find, after having scratched your head for a while, that although it does indeed work... is actually less efficient, because it relies on two imbricated loops looking at all elements (hence is in O(size2)), whereas my technique: for i in range(9): for j in range(9): valid_values_for_ij = [v for v in range(1, 10) if self.isValid(i, j, v)] by exploiting the set data structure, actually runs slightly faster, in O(size1.5). #### Sequential Solver With all that, the only piece missing is indeed a solver. Although there are many techniques that try to exploit human-style deduction rules, I wanted to study the less informed methods, where the space of possible moves is explored, in a systematic way, without relying on complex analysis for guidance. My first attempt was a brute-force solver that would simply explore the available moves, from the top left of the grid to the bottom right, recursively: def solveSequentially(self, i=0, j=0): solved = False while self.grid[i][j] and not solved: j += 1 if j % self.size == 0: i += 1 j = 0 solved = (i >= self.size) if solved: return True for v in range(1, self.size+1): if self.isValid(i, j, v): self.set(i, j, v) if self.solveSequentially(i, j): return True self.unset(i, j) return False This solver is not terribly efficient. To see why, we can use a simple counter that is incremented every time the solver function is called: 4209 times for the "easy" puzzle (above), and a whopping 69,175,317 times for the "harder" one! Clearly there's room for improvement. #### Random Solver Next I wondered how a random solver (i.e. instead of visiting the squares sequentially, pick them in any order) would behave in comparison: def solveRandomly(self): self.n_calls += 1 available_ijs = [] for i in range(self.size): for j in range(self.size): if self.grid[i][j]: continue available_ijs.append((i, j)) if not available_ijs: return True i, j = random.choice(available_ijs) opts_ij = [] for v in range(1, self.size+1): if self.isValid(i, j, v): opts_ij.append(v) for v in opts_ij: self.set(i, j, v) if self.solveRandomly(): return True self.unset(i, j) return False This is really worst... sometimes by many orders of magnitude (it is of course variable). I'm not sure I fully understand why, because without thinking much about it would seem that it is not any more "random" than the sequential path choosing of the previous method. 
My only hypothesis is that the sequential path choosing works best because it is row-based: a given square at position (i, j) benefits from a previous move made at (i, j-1), because the additional constraint directly applies to it (as well as to all the other squares in the same row, column or box), by virtue of the Sudoku rules. Whereas with random choosing, it is very likely that this benefit will be lost, as the solver keeps randomly jumping to possibly farther parts of the grid. #### Greedy Solver While again studying the same code golf implementations, I noticed that they're doing another clever thing: visiting first the squares with the least number of possible choices (instead of completely ignoring this information, as the previous methods do). This sounded like a very reasonable heuristic to try: def solveGreedily(self): nopts = {} # len(options) -> (opts, (i,j)) for i in range(self.size): for j in range(self.size): if self.grid[i][j]: continue opts_ij = [] for v in range(1, self.size+1): if self.isValid(i, j, v): opts_ij.append(v) nopts[len(opts_ij)] = (opts_ij, (i,j)) if nopts: opts_ij, (i,j) = min(nopts.items())[1] for v in opts_ij: self.set(i, j, v) if self.solveGreedily(): return True self.unset(i, j) return False return True Performance is way better with this one: only 52 calls for the easy problem, and 10903 for the hard one. This strategy is quite simple: collect all the possible values associated to every squares, and visit the one with the minimum number (without bothering for ties). However, even though this solver clearly performs better, it's important to note that a single call (i.e. for a particular square) is now less efficient, because it has to look at every square, to find the one with the fewest choices (whereas the sequential solver didn't have to choose, as the visiting order was fixed). This is the price to pay for introducing a little more wisdom in our strategy, but there are however two easy ways we can speed things up (not in terms of number of calls this time, but rather in terms of overall efficiency): first, whenever we find a square with no possible choice, we can safely back up right away (right in the middle of the search), because we can be sure that this is not a promising configuration. Second, whenever we find a square with only one choice, we can stop the search and proceed immediately with the result just found, because the minimum is what we are looking for anyway. Applying those two ideas, the solver then becomes: def solveGreedilyStopEarlier(self): nopts = {} # len(options) -> (opts, (i,j)) single_found = False for i in range(self.size): for j in range(self.size): if self.grid[i][j]: continue opts_ij = [] for v in range(1, self.size+1): if self.isValid(i, j, v): opts_ij.append(v) n = len(opts_ij) if n == 0: return False # cannot be valid nopts[n] = (opts_ij, (i,j)) if n == 1: single_found = True break if single_found: break if nopts: opts_ij, (i,j) = min(nopts.items())[1] for v in opts_ij: self.set(i, j, v) if self.solveGreedilyStopEarlier(): return True self.unset(i, j) return False return True But a question remains however (at least for me, at this point): why does it work? To better understand, let's study a very easy 6x6 puzzle: s = "624005000462000536306000201050005000" 6 2 4 | . . 5 . . . | 4 6 2 --------+-------- . . . | 5 3 6 3 . 6 | . . . --------+-------- 2 . 1 | . 5 . . . 5 | . . . 
First, here is the exploration path taken by the sequential solver (read from left to right; each node has the square's i and j coordinates, as well as its chosen value): In contrast, here is the path taken by the greedy solver: Whenever the solver picks the right answer for a certain square, the remaining puzzle becomes more constrained, hence simpler to solve. Picking the square with the minimal number of choices minimizes the probability of an error, and so also minimizes the time lost in wrong path branches (i.e. branches that cannot lead to a solution). The linear path shown above is an optimal way of solving the problem, in the sense that the solver never faces any ambiguity: it is always certain to make the good choice, because it follows a path where there is only one, at each stage. This can also be seen on this harder 9x9 problem: s = "120400300300010050006000100700090000040603000003002000500080700007000005000000098" 1 2 . | 4 . . | 3 . . 3 . . | . 1 . | . 5 . . . 6 | . . . | 1 . . ---------+-----------+--------- 7 . . | . 9 . | . . . . 4 . | 6 . 3 | . . . . . 3 | . . 2 | . . . ---------+-----------+--------- 5 . . | . 8 . | 7 . . . . 7 | . . . | . . 5 . . . | . . . | . 9 8 with which the sequential solver has obviously a tougher job to do (read from top to bottom; only the 6 first recursion levels are shown): But even though its path is not as linear as with simpler puzzles (because ambiguity is the defining feature of harder problems), the greedy solver's job is still without a doubt less complicated: This last example shows that our optimized solver's guesses are not always the right ones: sometimes it needs to back up to recover from an error. This is because our solver employs a greedy strategy, able to find efficiently an optimal solution to a wide variety of problems, which unfortunately doesn't include Sudoku. Because it is equivalent to a graph coloring problem, which is NP-complete, there is in fact little hope of finding a truly efficient strategy. Intuitively, the fundamental difficulty of the problem can be seen if you imagine yourself at a particular branching path node, trying to figure out the best way to go. There is nothing there (or from your preceding steps) that can tell you, without a doubt, which way leads to success. Sure you can guide your steps with a reasonable strategy, as we did, but you can never be totally sure about it, before you go there and see by yourself. But sometimes by then, it is too late, and you have to go back... #### Preventing Unnecessary Recursion The last thing I wanted to try was again inspired from the code golfing ideas cited above: for the moves with only one possible value, why not try to avoid recursion altogether (note that although the poster suggests this idea, I don't see it actually implemented in any of the code examples, at least not the way I understand it). 
Combining it with the previous ideas, this one can be implemented like this: def solveGreedilyStopEarlierWithLessRecursion(self): single_value_ijs = [] while True: nopts = {} # n options -> (opts, (i,j)) single_found = False for i in range(self.size): for j in range(self.size): if self.grid[i][j]: continue opts_ij = [] for v in range(1, self.size+1): if self.isValid(i, j, v): opts_ij.append(v) n = len(opts_ij) if n == 0: for i, j in single_value_ijs: self.unset(i, j) return False nopts[n] = (opts_ij, (i,j)) if n == 1: single_found = True break if single_found: break if nopts: opts_ij, (i,j) = min(nopts.items())[1] if single_found: self.set(i, j, opts_ij[0]) single_value_ijs.append((i,j)) continue for v in opts_ij: self.set(i, j, v) if self.solveGreedilyStopEarlierWithLessRecursion(): return True self.unset(i, j) for i, j in single_value_ijs: self.unset(i, j) return False return True The single-valued moves are now handled in a while loop (with a properly placed continue statement), instead of creating additional recursion levels. The only gotcha aspect is the additional bookkeeping needed to "unset" all the tried single-valued moves (in case there were many of them, chained in the while loop) at the end of an unsuccessful branch (just before both places where False is returned). Because of course a single-valued move is not guaranteed to be correct: it may be performed in a wrong branch of the exploration path, and thus needed to be undone, when the solver backs up. This technique is interesting, as it yields a ~85% improvement on the problem above, in terms of number of calls. Recursion could of course be totally dispensed with, but I suspect that this would require some important changes to the problem representation, so I will stop here. This ends my study of some brute-force Sudoku solving strategies. I will now take some time to look at some more refined techniques, by some famous people: Peter Norvig's constraint propagation strategy, and Donald Knuth's Dancing Links. ## Tuesday, January 25, 2011 ### A Helper Module for PostgreSQL and Psycopg2 I have created little_pger, a Python "modulet" meant to help with common PostgreSQL/Psycopg2 tasks, by wrapping up a few things handily, and offering a coherent and pythonic interface to it. Say we have a PG table like this: create table document ( document_id serial primary key, title text, type text check (type in ('book', 'article', 'essay'), topics text[] ); and a pair of connection/cursor objects: >>> conn = psycopg2.connect("dbname=...") >>> cur = conn.cursor() You can then insert a new document record like this: >>> insert(cur, 'document', values={'title':'PG is Easy'}) and update it like this: >>> update(cur, 'document', set={'type':'article'}, where={'title':'PG is Easy'}) Note that you are still responsible for managing any transaction externally: >>> conn.commit() With the 'return_id' option (which restricts the default 'returning *' clause to the primary key's value, which is assumed to be named '<table>_id'), the insert/update above could also be done this way: >>> doc_id = insert(cur, 'document', values={'title':'PG is Easy'}, return_id=True) >>> update(cur, 'document', values={'type':'article'}, where={'document_id':doc_id}) Note that the 'set' or 'values' keywords can both be used with 'update'. Using a tuple (but not a list!) 
as a value in the 'where' dict param is translated to the proper SQL 'in' operator:

>>> select(cur, 'document', where={'type':('article', 'book')})

will return all article or book documents, whereas:

>>> select(cur, 'document', what='title', where={'type':('article', 'book')})

will only get their titles. Using a list (but not a tuple!) as a value in either the 'values' or 'where' dict params is translated to the proper SQL array syntax:

>>> update(cur, 'document', set={'topics':['database', 'programming']}, where={'document_id':doc_id})
>>> select(cur, 'document', where={'topics':['database', 'programming']})

The 'filter_values' option is useful if you do not want to worry about the exact values sent to the function. This for instance would fail:

>>> insert(cur, 'document', values={'title':'PG is Easy', 'author':'John Doe'})

because there is no 'author' column in our document table. This however would work:

>>> insert(cur, 'document', values={'title':'PG is Easy', 'author':'John Doe'}, filter_values=True)

because it trims any extra items in 'values' (i.e. corresponding to columns not belonging to the table). Note that since this option requires an extra SQL query, it makes a single call a little less efficient.

You can always append additional projection elements to a select query with the 'what' argument (which can be a string, a list or a dict, depending on your needs):

>>> select(cur, 'document', what={'*':1, 'title is not null':'has_title'})

will be translated as:

select *, (title is not null) as has_title from document

Similarly, by using the 'group_by' argument:

>>> select(cur, 'document', what=['type', 'count(*)'], group_by='type')

will yield:

select type, count(*) from document group by type

A select query can also be called with 'order_by', 'limit' and 'offset' optional arguments. You can also restrict the results to only one row by using the 'rows' argument (default is rows='all'):

>>> select(cur, 'document', where={'type':'article'}, rows='one')

would directly return a document row (and not a list of rows), and would actually throw an assertion exception if there was more than one article in the document table. This module is available for download and as a repository on GitHub.
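To see what those dict parameters amount to on the SQL side, here is a rough sketch of how a 'where' dict can be turned into a parameterized clause with psycopg2-style placeholders. This is my own simplification for illustration, not little_pger's actual code; it only covers the equality and tuple-to-'in' cases mentioned above:

```python
def where_clause(where):
    """Build a parameterized WHERE clause from a dict:
    a tuple value becomes an SQL 'in', anything else a plain equality."""
    conds, params = [], []
    for col, val in where.items():
        if isinstance(val, tuple):            # e.g. {'type': ('article', 'book')}
            conds.append("%s in %%s" % col)   # psycopg2 adapts a tuple to (e1, e2, ...)
        else:
            conds.append("%s = %%s" % col)
        params.append(val)
    return " and ".join(conds), params

sql, params = where_clause({"type": ("article", "book")})
print("select * from document where " + sql, params)
# select * from document where type in %s [('article', 'book')]
```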
# The terminal velocity of a liquid drop of radius 'r' falling through air is v. If two such drops are combined to form a bigger drop, the terminal velocity with which the bigger drop falls through air is ( Ignore any buoyant force due to air)

$(a)\;\sqrt 2\; v \quad (b)\;2\;v \quad (c)\;\sqrt [3] {4} \;v \quad (d)\;\sqrt [3] {2} \;v$

## 1 Answer

$(c)\;\sqrt [3] {4} \;v$
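A quick way to see why, assuming Stokes drag (the standard assumption for this textbook problem), so that the terminal velocity scales as the square of the radius:

$$v \propto r^2 \qquad \left(\text{Stokes' law: } v = \frac{2 r^2 (\rho - \sigma) g}{9 \eta}\right)$$

Conservation of volume when the two drops merge gives $\frac{4}{3}\pi R^3 = 2 \cdot \frac{4}{3}\pi r^3$, so $R = 2^{1/3} r$, and therefore

$$v' = v \left(\frac{R}{r}\right)^2 = 2^{2/3}\, v = \sqrt[3]{4}\; v.$$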
# What is the electron configuration of the $\text{As}^{3-}$ ion?

The electron configuration of the $\text{As}^{3-}$ ion is $[\text{Ar}]\,3d^{10}\,4s^2\,4p^6$.

Arsenic, As, has atomic number 33, which is the number of protons in the nuclei of its atoms. A neutral As atom would also have 33 electrons. The electron configuration of a neutral arsenic atom is $[\text{Ar}]\,3d^{10}\,4s^2\,4p^3$.

An arsenic 3- ion ($\text{As}^{3-}$) has gained three electrons, and its electron configuration is $[\text{Ar}]\,3d^{10}\,4s^2\,4p^6$, which is isoelectronic with the noble gas krypton, Kr.
# Language Shadowing with subed in Emacs

So I'm trying to improve my English speaking skills by shadowing while watching TV episodes. The workflow before was to loop over video clips using mpv:

1. hit l to mark the start of the loop
2. play the video and wait for it to be at the end of the loop
3. hit l again to mark the end

Then mpv will loop over the clip. It basically works, but it's a bit hard and tedious to set the start and end precisely.

Recently, I came up with the idea that I could practice speaking using subtitle files, as they already have the timestamps along with the subtitle text. I can loop over specific parts of a video file easily by taking advantage of those timestamps, instead of manually setting the start time and end time of the playback and then looping over it. And then I thought of a package, subed, which caught my eye when I was lurking around Sacha Chua's blog site. And I found that it was perfect for this idea after playing with it for ~10 minutes.

Here are the steps:

1. Open a subtitle file, say /path/to/foo.srt, and subed will automatically open the accompanying video file having the name foo.mp4/foo.mkv/foo.avi, etc. using mpv.
2. Then, in the subtitle buffer, only these three key bindings will do the job:
   1. C-c C-l toggles looping over the current subtitle. By default, there is an extra second before and after the time span, as specified in the configs subed-loop-seconds-before & subed-loop-seconds-after.
   2. M-n moves the point to the next subtitle, and it automatically seeks the playback to the corresponding timeline.
   3. M-p moves the point to the previous subtitle and seeks the playback to the correct position of the timeline.

Although the package is designed to edit subtitles efficiently, it can also shine in language shadowing and speaking. If you're also learning to speak foreign languages, I believe this workflow can help you.✌
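For comparison, here is a rough sketch of the same idea outside Emacs: a small Python script (my own illustration, not part of subed) that reads one cue from an .srt file and asks mpv to loop over it with its --ab-loop options. The file paths and the cue index are made up for the example, and the one-second padding mimics subed's defaults.

```python
import re
import subprocess

def srt_cues(path):
    """Yield (start, end, text) tuples from a .srt file, timestamps in seconds."""
    ts = re.compile(r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3}) --> (\d{2}):(\d{2}):(\d{2})[,.](\d{3})")
    def secs(h, m, s, ms):
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0
    with open(path, encoding="utf-8") as f:
        for block in f.read().split("\n\n"):
            m = ts.search(block)
            if m:
                yield secs(*m.groups()[:4]), secs(*m.groups()[4:]), "\n".join(block.splitlines()[2:])

cues = list(srt_cues("/path/to/foo.srt"))
start, end, text = cues[9]          # practice, say, the 10th subtitle
print("practicing:", text)
subprocess.run([
    "mpv", "/path/to/foo.mp4",
    "--start=%f" % max(0, start - 1),     # one extra second before, like subed
    "--ab-loop-a=%f" % max(0, start - 1),
    "--ab-loop-b=%f" % (end + 1),         # and one after
])
```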
# Brain Buster (Comparison of ranking)

• Ali, Saad, and Hassan are all wise.
• Akram, Ali, and Hamza are all industrious.
• Akram, Hassan, and Hamza are all honest.
• Ali, Saad, and Hamza are all sportsmen.

Which of them is not wise, but is a sportsman?

1. Ali
2. Akram
3. Hassan
4. Hamza

• This is painfully simple. – generalcrispy Nov 13 '18 at 18:17

If we call the set of all sportsmen $$S = \{\textrm{Ali}, \textrm{Saad}, \textrm{Hamza}\}$$ and the set of all wise people $$W = \{\textrm{Ali}, \textrm{Saad}, \textrm{Hassan}\}$$, we can deduce that $$S \setminus W = \{\textrm{Hamza}\}$$. Therefore, Hamza is the only one who is a sportsman but is not wise.
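The same deduction can be spelled out with Python sets (just an illustration of the set difference above):

```python
wise = {"Ali", "Saad", "Hassan"}
industrious = {"Akram", "Ali", "Hamza"}
honest = {"Akram", "Hassan", "Hamza"}
sportsmen = {"Ali", "Saad", "Hamza"}

# sportsmen who are not wise
print(sportsmen - wise)   # {'Hamza'}
```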
## Relaxation of solitons in nonlinear Schrödinger equations with potential. (English) Zbl 1126.35065

Summary: We study dynamics of solitons in the generalized nonlinear Schrödinger equation with an external potential in all dimensions except for 2. For a certain class of nonlinearities such an equation has solutions which are periodic in time and exponentially decaying in space, centered near different critical points of the potential. We call those solutions which are centered near the minima of the potential and which minimize energy restricted to the $$\mathcal L^{2}$$-unit sphere, trapped solitons or just solitons. We prove, under certain conditions on the potentials and initial conditions, that trapped solitons are asymptotically stable. Moreover, if an initial condition is close to a trapped soliton then the solution looks like a moving soliton relaxing to its equilibrium position. The dynamical law of motion of the soliton (i.e. effective equations of motion for the soliton's center and momentum) is close to Newton's equation but with a dissipative term due to radiation of the energy to infinity.

### MSC:

35Q55 NLS equations (nonlinear Schrödinger equations)
37K45 Stability problems for infinite-dimensional Hamiltonian and Lagrangian systems
81R12 Groups and algebras in quantum theory and relations with integrable systems
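For orientation (my own gloss, not text from the paper or the review): the generalized nonlinear Schrödinger equation with an external potential studied in this line of work is typically written as

$$i\,\partial_t \psi = -\Delta \psi + V(x)\,\psi + f(|\psi|^2)\,\psi, \qquad x \in \mathbb{R}^n,\ n \neq 2,$$

where $V$ is the external potential and $f$ the nonlinearity; a trapped soliton is then a solution of the form $\psi(x,t) = e^{i\mu t}\phi(x)$ with $\phi$ localized near a minimum of $V$.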
# Dispersion fitting tool# Here we show how to fit optical measurement data and use the results to create dispersion material models for Tidy3d. Tidy3D’s dispersion fitting tool peforms an optimization to find a medium defined as a dispersive PoleResidue model that minimizes the RMS error between the model results and the data. This can then be directly used as a material in simulations. [1]: # first import packages import matplotlib.pylab as plt import numpy as np import tidy3d as td The fitting tool accepts three ways of loading data: 1. Numpy arrays directly by specifying wvl_um, n_data, and optionally k_data; 2. Data file with the from_file utility function. Our data file has columns for wavelength (um), real part of refractive index (n), and imaginary part of refractive index (k). k data is optional. Note: from_file uses np.loadtxt under the hood, so additional keyword arguments for parsing the file follow the same format as np.loadtxt. 3. URL linked to a csv/txt file that contains wavelength (micron), n, and optionally k data with the from_url utility function. URL can come from refractiveindex. Below the 2nd way is taken as an example: [2]: from tidy3d.plugins.dispersion import DispersionFitter fname = "misc/nk_data.csv" # note that additional keyword arguments to load_nk_file get passed to np.loadtxt fitter = DispersionFitter.from_file(fname, skiprows=1, delimiter=",") # lets plot the data plt.scatter( fitter.wvl_um, fitter.n_data, label="n", color="crimson", edgecolors="black", linewidth=0.5, ) plt.scatter( fitter.wvl_um, fitter.k_data, label="k", color="blueviolet", edgecolors="black", linewidth=0.5, ) plt.xlabel("wavelength ($\mu m$)") plt.ylabel("value") plt.title("refractive index data") plt.legend() plt.show() ## Fitting the data# The fitting tool fit a dispersion model to the data by minimizing the root mean squared (RMS) error between the model n,k prediciton and the data at the given wavelengths. There are various fitting parameters that can be set, but the most important is the number of “poles” in the model. For each pole, there are 4 degrees of freedom in the model. Adding more poles can produce a closer fit, but each additional pole added will make the fit harder to obtain and will slow down the FDTD. Therefore, it is best to try the fit with few numbers of poles and increase until the results look good. Here, we will first try fitting the data with 1 pole and specify the RMS value that we are happy with (tolerance_rms). Note that the fitting tool performs global optimizations with random starting coefficients, and will repeat the optimization num_tries times, returning either the best result or the first result to satisfy the tolerance specified. [3]: medium, rms_error = fitter.fit(num_poles=1, tolerance_rms=2e-2, num_tries=100) [19:38:17] WARNING: warning: did not find fit with RMS error under log.py:50 tolerance_rms of 2.00e-02 The RMS error stalled at a value that was far above our tolerance, so we might want to try more fits. Let’s first visualize how well the best single pole fit captured our model using the .plot() method [4]: fitter.plot(medium) plt.show() As we can see, there is room for improvement at short wavelengths. Let’s now try a two pole fit. [5]: medium, rms_error = fitter.fit(num_poles=2, tolerance_rms=2e-2, num_tries=50) [6]: fitter.plot(medium) plt.show() This fit looks great and should be sufficient for our simulation. 
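To make the objective concrete, here is a minimal sketch (my own illustration, not Tidy3D's internal code, which may weight n and k differently) of an RMS error between a model's refractive index and the measured data of the kind the fitter is minimizing:

```python
import numpy as np

def rms_error(n_model, k_model, n_data, k_data):
    """RMS of the complex refractive-index mismatch over the measurement wavelengths."""
    diff = (n_model + 1j * k_model) - (n_data + 1j * k_data)
    return np.sqrt(np.mean(np.abs(diff) ** 2))

# a perfect fit gives 0.0
print(rms_error(np.array([1.5, 1.6]), np.array([0.0, 0.1]),
                np.array([1.5, 1.6]), np.array([0.0, 0.1])))
```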
Alternatively, if the simulation is narrowband, you might want to truncate your data to not include wavelengths far outside your measurement wavelength to simplify the dispersive model. This is done by modifying the attribute wvl_range, where you can set the lower wavelength bound wvl_range[0] and the higher wavelength bound wvl_range[1]. This operation is non-destructive, so you can always unset the bounds by setting the value to None. E.g. if we are only interested in the wavelength 3-20 um, we can still use the single-pole model:

[7]:
fitter = fitter.copy(update={"wvl_range": (3, 20)})
medium, rms_error = fitter.fit(num_poles=1, tolerance_rms=2e-2, num_tries=100)

[8]:
fitter.plot(medium)
plt.show()

## Using Fit Results#

With the fit performed, we want to use the Medium in our simulation.

### Method 1: direct export as Medium#

The fit returns a medium, which can be used directly in a simulation

[9]:
b = td.Structure(geometry=td.Box(size=(1, 1, 1)), medium=medium)

### Method 2: print medium definition string#

In many cases, one may want to perform the fit once and then hardcode the result in their tidy3d script. For a quick and easy way to do this, just print() the medium and the output can be copied and pasted into your main script

[10]:
print(medium)

td.PoleResidue(
  eps_inf=1.0,
  poles=(((-2356501547059525+574821084055067.5j), (1.6942059105382786e+16+1.376947608154165e+16j)),),
  frequency_range=(15048764544660.518, 97485556621412.89))

[11]:
# medium = td.PoleResidue(
#     poles=[((-1720022108564405.2, 1111614865738177.4), (1.0199002935090378e+16, -3696384150818460.5)), ((0.0, -3100558969639478.5), (3298054971521434.5, 859192377978951.2))],
#     frequency_range=(7994465562158.582, 299792458580946.8))

### Method 3: save and load file containing poles#

Finally, one can export the Medium directly as a .json file. Here is an example.

[12]:
# save the medium definition to a .json file
fname = "data/my_medium.json"
medium.to_file(fname)

# load the file in your script
medium = td.PoleResidue.from_file(fname)

## Tricks and Tips / Troubleshooting#

Performing dispersion model fits is more of an art than a science and some trial and error may be required to get good fits. A good general strategy is to:

• Start with few poles and increase until the RMS error gets to the desired level.
• Large num_tries values can sometimes find good fits if the RMS seems stalled. It can be a good idea to set a large number of tries and let it run for a while on an especially difficult data set.
• Tailor the parameters to your data. Long wavelengths and large n,k values can affect the RMS error that is considered a ‘good’ fit. So it is a good idea to tweak the tolerance to match your data. One size does not fit all.

Finally, there are some things to be aware of when troubleshooting the dispersion models in your actual simulation:

• If you are unable to find a good fit to your data, it might be worth considering whether you care about certain features in the data. For example, as shown above, if the simulation is narrowband, you might want to truncate your data to not include wavelengths far outside your measurement wavelength to simplify the dispersive model.
• It is common to find divergence in FDTD simulations due to dispersive materials. Besides trying “absorber” PML types and reducing runtime, a good solution can be to try other fits, or to explore our new StableFitter feature which will be explained below.

# Stable fitter#

We recently introduced a version of the DispersionFitter tool that implements our proprietary stability criterion.
We observe consistently stable FDTD simulations when materials are fit using this method and also provide it in the newest versions of Tidy3d. Functionally speaking, it works identically to the previously introduced tool, except that the .fit() method is run on Flexcompute servers and therefore this tool requires signing in to a Tidy3D account. Here is a demonstration.

[13]:
from tidy3d.plugins.dispersion import StableDispersionFitter, AdvancedFitterParam

fname = "misc/nk_data.csv"
fitter_stable = StableDispersionFitter.from_file(fname, skiprows=1, delimiter=",")

[14]:
medium, rms_error = fitter_stable.fit(
    num_poles=2,
    tolerance_rms=2e-2,
    num_tries=50,
)

You can also supply the advanced_param argument for more control of the fitting process; for example, nlopt_maxeval stands for the maximal number of iterations for each inner optimization. A list of other advanced parameters will be explained later. We can visualize our fits the same way.

[15]:
fitter_stable.plot(medium)
plt.show()

Once the fitting is performed, the procedure for using the medium in our simulation is also identical to the previous fitting tool, so we will not go into the details here.

## Tips#

• Our stable fitter is based on a web service, and therefore it can run into timeout errors if the fitter runs for too long. In this case, you are encouraged to decrease the value of num_tries or to relax the value of tolerance_rms to your needs.
• Our fitting tool performs global optimizations with random starting coefficients, and will repeat the optimization num_tries times. Within each inner optimization, the maximal number of iterations is bounded by an advanced parameter nlopt_maxeval whose default value is 5000. Since there is a well-known tradeoff between exploration and exploitation in a typical global optimization process, you can play around with num_tries and nlopt_maxeval. In particular, in scenarios where a timeout error occurs and decreasing num_tries leads to larger RMS error, you can try to decrease nlopt_maxeval. A list of other advanced parameters can be found in our documentation. For example:
• In cases where the permittivity at infinite frequency is other than 1, it can also be optimized by setting an advanced parameter bound_eps_inf so that the permittivity at infinite frequency can take values between [1, bound_eps_inf].
• Sometimes we want to bound the pole frequency in the dispersive model. The lower and upper bound can be set with bound_f_lower and bound_f, respectively.
• The fitting tool performs global optimizations with random starting coefficients. By default, the value of the seed rand_seed=0 is fixed, so that you'll obtain identical results when re-running the fitter. If you want to re-run the fitter several times to obtain the best results, the value of the seed should be changed, or set to None so that the starting coefficients are different each time.
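Putting the tips above together, a simple pattern is to increase the number of poles until the fit reaches the tolerance you care about. Here is a rough sketch using only the fitter calls shown above; the pole counts and thresholds are arbitrary choices for illustration:

```python
best_medium, best_rms = None, float("inf")
for num_poles in (1, 2, 3, 4):
    medium, rms_error = fitter_stable.fit(
        num_poles=num_poles, tolerance_rms=2e-2, num_tries=50
    )
    if rms_error < best_rms:
        best_medium, best_rms = medium, rms_error
    if best_rms < 2e-2:   # good enough, stop adding poles
        break
print(f"best fit: {best_rms:.3e} RMS")
```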
# The most pleasant audio system you can remember, ever ? #### Externet Joined Nov 29, 2005 1,706 Hi. Asking for opinions, experiences, on which where the stereo equipment components that impressed you the most with vivid, pleasant, fidel sound reproduction of your entire life. You can add details of source media, composition, ambient where heard, loudness at which enjoyed, artist, did that experience influence to obtain your current system ? Just that old image added for no reason... #### eetech00 Joined Jun 8, 2013 2,644 Back in the day I used to have a JVC CD4 discrete 4channel 800W sound system. Doobie Brothers sounded awesome! I still have a few CD4 albums. Loved that system! #### Papabravo Joined Feb 24, 2006 17,245 #### DNA Robotics Joined Jun 13, 2014 614 I had a 1965 Pontiac Bonneville that came with an aftermarket reverb. When I had music playing and hit a hard bump, it went Boioioing. I don’t know why, but it made me happy. #### KeithWalker Joined Jul 10, 2017 2,034 The most pleasant audio memory I can think of is when I built by first stereo amplifier in the early 60s. It was an "ultralinear" tube amplifier built from a circuit from an old "Wireless World" magazine. The speakers were just some fairly decent 8" ones that I had built into infinite baffle cabinets with ceramic tweeters connected directly in parallel with the speakers. To try the system out, I connected it to my autochange record player and put on an LP of the London Philharmonic Orchestra playing Tchaikovsky's 1812 Overture. It was the first time I had ever heard music played in stereo. It blew me away! To me it was just like being in front of the live orchestra. I couldn't get over fact that I could close my eyes and tell where each of the instruments was being played. That doesn't help you very much with recommendations of audio components, but to me, it is the most vivid audio memory I possess. Regards, Keith #### sparky 1 Joined Nov 3, 2018 608 A live concert most pleasant wave of green wave of blue #### crutschow Joined Mar 14, 2008 28,170 I now have a pair of Hsu HB-1 MK2 speakers, along with a run-of-the-mill 10" sub next to my desktop computer, which have the best sound of any speakers I've had. Of course part of that is, they have a horn tweeter, which I think helps my old ears hear more of the highs. But to get good sound you need a good amplifier to power them, not some $50 units from Ebay or Amazon. #### Tonyr1084 Joined Sep 24, 2015 6,452 GOSH! Mine was an old Magnavox. Had two HUGE transformers and lots of vacuum tubes. That thing had sound cleaner than pure water. Wouldn't mind going back in time to collect some of those old amps. Some of the crispest cleanest sound I've ever listened to. Couple them with today's speakers and I think you'd have a system that would bring tears to your eyes ever time you listen to "Lucky Man" (ELP) or "Let It Rain". (Clapton) #### Audioguru again Joined Oct 21, 2019 3,833 In 1965 (55 years ago!) I liked the sound from Acoustic Research AR3A speakers but they were too expensive for me. I noticed that their AR4X speakers sounded similar but were smaller and much less expensive so I bought them along with my first solid state receiver by H H Scott. Then I sold my home-made Jensen speakers, Heathkit tubes amplifier and Eico kit tubes FM tuner with MPX adapter to an old geezer who liked old junk. About 30 years ago I went to an audio equipment show and they were playing excellent sounds from some Bose huge speakers. 
Then two pretty young ladies took off the fake huge covers on much smaller real speakers. I got goose bumps on me and my eyes watered because I could not see a sub-woofer anywhere so these little speakers were doing it. Later it was said that they hid sub-woofers there. #### peterdeco Joined Oct 8, 2019 413 In the 80's I had a Fosgate 100W RMS in my car with Jensen coaxial 6x9's. Home I had Van Alstein 500W RMS amp, Cerwin Vega speakers. Then I spent at least 3 days a week at the shooting range without hearing protection. I now have tinnitus in both ears. Any connection? Last edited: #### Audioguru again Joined Oct 21, 2019 3,833 Guns, loud music and old age destroy hearing. I noticed that my hearing was missing high frequencies when I was about 65 years old which is normal according to this graph of hearing loss vs age: #### Attachments • 92.8 KB Views: 12 #### schmitt trigger Joined Jul 12, 2010 473 The best sounding system was when I built a bi-amped speaker, utilizing some amplifier modules from the British company ILP. I heard mostly rock music, but some was very melodical. I also vouch for Tony's suggestion about Emerson Lake and Palmer's "Lucky Man." #### peterdeco Joined Oct 8, 2019 413 Well, that's not fair. Why do men lose hearing more than women? #### KeithWalker Joined Jul 10, 2017 2,034 Well, that's not fair. Why do men lose hearing more than women? BECAUSE WOMEN TALK MORE THAN MEN? #### peterdeco Joined Oct 8, 2019 413 Gotta take the wife to the shooting range. #### SamR Joined Mar 19, 2019 3,607 I spent most of my Saturdays hunting doves in the fall and ducks during the winter for many years. Cases and cases of 12Ga shell without any hearing protection. That and a few very painfully loud concerts. And my hearing has paid the price. I couldn't tell good audio from bad. I hear nothing above 10kHz at best. #### Audioguru again Joined Oct 21, 2019 3,833 Guys shoot more than gals. Guys work at noisy jobs more than gals. #### Deleted member 115935 Joined Dec 31, 1969 0 best sound system is a live performance. Best electronic one ? Well for wow factor, the Pink Floyd concerts of the 80's. For sonic beauty, Quad ELS63s, powered off Quad 303s in a BBC sound studio, direct feed off a crossed pair in the royal albert hall. You could place the orchestra across the complete sound stage, #### Berzerker Joined Jul 29, 2018 621 Bought an pioneer 500 watt from a friend at work for$250 in the early 90's. It came with 5 speakers and felt like it was moving the house when cranked up. Still own it. Ahhhh! the good old days. When a watt was a watt. Brzrkr #### KeepItSimpleStupid Joined Mar 4, 2014 5,090 My amp. A version of the leach Amp. A low TIM amp. Search for the Leach AMP. Unrolled off the frequency response id 0 to 0.8 MHz. Very nice. Slew rate > 100 V/us. 100 W/ch. 99% metal film resistors. Ground plane construction. Will survive switchig the NPN and PNP output transistors. Laoarithmic ramping of volume, slow-start, fast turn-off. Wrong sized power supplies (my doing), but with 40,000 uF of capacitance total and a 120 VAC regulator. +-50V DC rails at 3A each with a custom toroidal xformer, 70V CT x 2. Amp had more bass when I used an 18A constant voltage Transformer. I fixed amplifiers and heard at least one amp choke on my system when the 4bx did it's derivative thing, With a bunch of rack mounted stuff from Technics Professional. The FM tuner goes out to 18 kHz and I have two signal processors. 
The dbx 4bx a dynamic range expander with impact restoration and Carver's TX1-11 Asymetric Charged-Coupled Decoder for FM. The tuner of the series is ST-9030. Hiss goes completely away. I have the parametric EQ and the pre-amp of the series too. freq response from 0 to 100 kHz, MM and MC phono inputs. Like the MC sound. Pink Floyd and Hammer Dulcimer. Compared against a Macintosh tube amp with Voice of the Theater Speakers with horns. Pretty favorable bass was better on the Leach. The horns sounded better with the tube amp. Best sounding headphones were Stax Earspeakers. Never owed a pair, but would like to. I have run of the milll 100W 10" woofers and 30W dome tweeters. I've heard Magneplaner electrostic speakers. They were awesome. I have a direct drive Technics RS-B100 cassette deck that can be nuts with dbx noise reduction. Semi-automatic direct drive turntable Technics SL-1700 with a MC cartridge.
# Please help with these thanks How could the pair of structures be distinguished using IR spectroscopy?...

###### Question: How could the pair of structures be distinguished using IR spectroscopy? List the diagnostic peaks that would be present or absent for each pair. The IR spectrum below is of one of the compounds shown. Circle the structure that corresponds to the spectrum and label the peaks you used to make your choice.
# How to characterize the convex hull/closure operator From Wikipedia Every subset $A$ of a vector space $S$ over the real numbers, or, more generally, some ordered field, is contained within a smallest convex set (called the convex hull of $A$), namely the intersection of all convex sets containing $A$. The convex-hull operator $Conv()$ has the characteristic properties of a hull operator: • extensive $S ⊆ Conv(S)$, • non-decreasing $S ⊆ T$ implies that $Conv(S) ⊆ Conv(T)$, and • idempotent $Conv(Conv(S)) = Conv(S)$. I was wondering what else properties of the convex hull/closure operator has, so that • we can define a hull/closure operator with such properties to be a convex hull/closure operator, and • we can define the class of all convex subsets for a given ground set, by for example claiming a subset is convex if and only if it equals its convex hull? Thanks and regards! - $Conv(T)=T$ for any affine subspace $T$, and $Conv(S\setminus T)= S$ unless $S=T$. –  Olivier Bégassat Jan 2 '13 at 18:01 @OlivierBégassat: Thanks! I was wondering if these two additional properties can define a closure operator to be a convex closure operator? References are also appreciated. –  Tim Jan 2 '13 at 18:05 Also, $$Conv(X)=\bigcup_{F\subset X,~F\text{ finite}}Conv(F)$$ –  Olivier Bégassat Jan 2 '13 at 18:19 @OlivierBégassat: Thanks! Did you come up with the three properties yourself? Do you know if they can characterize a convex closure operation? –  Tim Jan 2 '13 at 18:24 I have no idea to be honest. –  Olivier Bégassat Jan 2 '13 at 19:11 There is a theorem in van de Vel's Theory of Convex Structures that may be of interest. First define a closure system. Suppose that $X$ is a set and $\mathcal{C}$ is a collection of subsets of $X$. Then $\langle X, \mathcal{C} \rangle$ is a closure system if and only if for all $\mathcal{A} \subseteq \mathcal{C}$ we have $\cap \mathcal{A} \in \mathcal{C}$. Sometimes people require $\varnothing \in \mathcal{C}$. I can't remember if van de Vel does. For $A \subseteq X$ define $$\mathsf{cl}(A) = \cap \{ C \in \mathcal{C} \colon A \subseteq C \} .$$ In any event the convex subsets of a real vector space satisfy these properties. Theorem Suppose that $\langle X, \mathcal{C} \rangle$ is a closure system. Then the following statements are equivalent: 1. For all $A \subseteq X$ we have $\mathsf{cl}(A) = \cup \{ \mathsf{cl}(F) \colon F \text{ is a finite subset of } A \}$. 2. For all $\mathcal{D}$ which are collections of subsets of $X$ that are directed by inclusion (see below for a definition of directed by inclusion) we have $\mathsf{cl}(\cup \mathcal{D}) = \cup \{ \mathsf{cl}(D) \colon D \in \mathcal{D} \}$. 3. For all $\mathcal{T}$ which are collections of subsets of $X$ that are totally ordered by inclusion (see below for a definition of totally ordered by inclusion) we have $\mathsf{cl}(\cup \mathcal{T}) = \cup \{ \mathsf{cl}(T) \colon T \in \mathcal{T} \}$. A collection $\mathcal{D}$ of subsets is directed by inclusion if and only if for all $D_{0}, D_{1} \in \mathcal{D}$ there is a $D \in \mathcal{D}$ with $D_{0}, D_{1} \subseteq D$. A collection of subsets is totally ordered by inclusion if and only if for all $T_{0}, T_{1} \in \mathcal{T}$ we have $T_{0} \subseteq T_{1}$ or $T_{1} \subseteq T_{0}$. Edit This is in response to Tim's comment. Suppose that $X$ is a set and $\mathcal{F}$ is the collection of finite subsets of $X$. Suppose also that $f \colon \mathcal{F} \rightarrow \mathcal{P}(X)$ where $\mathcal{P}(X)$ is the collection of all subsets of $X$. 
For each $n \in \mathbb{N}$ define \begin{align} g_{n} \colon \mathcal{P}(X) &\rightarrow \mathcal{P}(X) \\ \text{for all $A \subseteq X$ by the assignment} \\ g_{0} \colon A &\mapsto A \\ g_{n} \colon A & \mapsto g_{n-1}(A) \cup (\cup \{ f(F) \colon F \subseteq g_{n-1}(A) \text{ is finite} \} ) \end{align} Then $A \mapsto \cup \{ g_{n}(A) \colon n \in \mathbb{N} \}$ is a convex hull operator. The function $f$ generalizes the notion of the points between $x, y \in \mathbb{R}^{k}$. If you think of this construction in this manner then the $g_{n}$ functions just accumulate line segments. - +1. Thanks! (1) In the theorem just for a closure system and its closure operator, and there is no convexity structure and convexity closure operator involved? What references is the theorem from? (2) When can a closure operator be a convexity closure operator (in abstract sense)? I guess we will need some properties from a convexity closure operator (in concrete sense, for a vector space over an ordered field)? –  Tim Jan 3 '13 at 2:06 The result is, more or less, theorem 1.3 in van de Vel's book. Click on this versionof the book The result is on page 5. –  Jay Jan 3 '13 at 19:35 In thm 1.3, he says that a closure operator is a convex closure operator iff it is domain finite. Is a closure operator being domain finite equivalent to $\operatorname{cl}(X) = \bigcup\left\{\operatorname{cl}(Y) : Y\subseteq X \text{ and } Y \text{ finite} \right\}$, i.e. finitary property in Wikipedia? So can we say that a closure operator is a convex closure operator iff it has the finitary property? –  Tim Jan 7 '13 at 17:40 (2) Wikipedia says "The convex hull in n-dimensional Euclidean space is another example of a finitary closure operator. It satisfies the anti-exchange property: If x is not contained in the union of A and {y}, but in its closure, then y is not contained in the closure of the union of A and {x}. Finitary closure operators with this property give rise to antimatroids." This is a concrete convexity structure, but for an abstract convexity, does its closure satisfy the anti-exchange property? (...) –  Tim Jan 7 '13 at 17:43 (...) Is a closure operator a convex closure operator iff it has both finatary and anti-exchange properties? –  Tim Jan 7 '13 at 17:45
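For what it's worth, a tiny worked example of the anti-exchange property for the ordinary convex hull (my own illustration, not part of the thread): in $\mathbb{R}$ take $A = \{0\}$, $y = 2$ and $x = 1$. Then $x \notin A \cup \{y\}$ while $x \in Conv(A \cup \{y\}) = [0, 2]$, and indeed $y \notin Conv(A \cup \{x\}) = [0, 1]$, exactly as the property demands.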
### Bash NCBI is pretty damn awesome. But the first few times I wanted to download a massive amount of reference sequences I found myself struggling a bit. If that has happened to you, then hopefully this page helps out. NCBI’s Entrez Direct E-utilities offers one avenue to be able to download data in bulk at the command-line, but it can take a bit of bash dancing. I initially wrote this demonstrating one of the ways to do that dance, and you can still find that under the Entrez section at the bottom of this page because it shows some basic bash tricks that are helpful in other situations. But wonderfully, after sharing the page, @asherichia sent me a link to @kaiblin’s github page, where he and some others have put together two amazing tools for downloading data from NCBI. So now, I’ve just added two simplified examples of downloading genomes and proteins to the top of the page here demonstrating how to use their tools (even though they are very straightforward to use, and their repository README shows a bunch of helpful examples). Both of these tools can be installed easily via pip at the command line, i.e. pip install ncbi-genome-download and pip install ncbi-acc-download (if you’re doing it on a server and hit a permissions error, adding the --user flag to pip usually works). Their script to download genomes, ncbi-genome-download, goes through NCBI’s ftp server, and can be found here. They have quite a few options available to specify what you want that you can view with ncbi-genome-download -h, and there are examples you can look over at the github repository. For a quick example here, I’m going to pull fasta files for all RefSeq Alteromonas reference genomes labeled as “complete” – see here for definitions of RefSeq assembly levels – just because I have a softspot for Alteromonas: ncbi-genome-download bacteria -g Alteromonas -l complete -F fasta -o Alteromonas_refseq_genomes Here I’m specifying the positional argument of bacteria to tell it which group, then the genus with the -g flag, the assembly level with the -l flag, the format of the output with the -F flag, and the output directory with the -o flag. On my personal MacBook Pro this took a mere 40 seconds to download 30 genomes. Pretty sweet! The script they provide to download data by accession number, ncbi-acc-download, can be found here and uses Entrez. Other than accession numbers, which are supplied as a positional argument, you can tell the script whether you want nucleotides or proteins via the -m flag. The nucleotide option returns results in GenBank format, and the protein option returns results in fasta format. Here’s the syntax to pull a single protein sequence: ncbi-acc-download -m protein WP_015663423.1 If we wanted to grab multiple accessions, they can be supplied as a comma-delimited list: ncbi-acc-download -m protein WP_015663423.1,WP_006575543.1,WP_009965426.1 And if you have a ton of these accessions in a single-column file, you can see one way to convert that to a comma-separated list in the formatting for bulk download section below. Thanks again to @asherichia for pointing me towards these two very helpful tools on @kaiblin’s github page! # Entrez I don’t use this toolset for much more than pulling proteins and genomes from time to time, so I don’t have a strong grasp on everything it can do. And now that I know about the helper download tools from @kaiblin’s github page demonstrated above, I will probably use it even less. 
But as mentioned there are some basic bash lines in here that may be helpful in other scenarios, so I figured I’d keep this example up of pulling amino acid sequences en masse. If you want to go further with using Entrez at the command line, make sure to look over the full functionality here. ## The efetch command The efetch command let’s you pull all kinds of data from NCBI. If you run efetch -help, you can look at lots of parameters and types of info you can pull. Here, to get an idea of how the command works, let’s just pull one amino acid sequence for an alkaline phosphatase: efetch -db protein -format fasta -id AEE52072.1 And after a second the sequence should print out to the screen: These are some of the typical flags you need to supply to efetch or other E-utils commands: -db to specify which database; -format to tell it how you want the data; and -id to provide the desired accession numbers or unique IDs. Note that the default behavior just prints the output to the terminal, so to save the output you need to redirect it to a file. The efetch command can also take multiple IDs separated by commas. Here’s an example pulling two sequences and writing the output to a new file: efetch -db protein -format fasta -id AEE52072.1,ADV47642.1 > my_seqs.faa In practice of course we can download one or two from the site though, and we’re only using this because we want a lot. While unfortunately you can’t provide the -id argument of efetch a file of accession numbers, we can easily do a little bash workaround that we’ll see. Additionally, @ctskennerton pointed out to me that you can in fact provide a regular one-column file of accession numbers to the epost command (which basically queues up accessions to then be acted on), and then pipe the output of that into the efetch command. This is pretty sweet as it’s a bit cleaner than the workaround I initially used, but it doesn’t seem to work with a lot of accessions. When I tested things it worked fine for me on ~1,000 protein seqs, but I got “request timed out” errors when trying to run it on ~10,000 sequences. So I’ve kept the initial bash workaround in here and added the epost | efetch way too. If you’re doing this regularly with a manageable number of references to pull, then doing it the cleaner way shouldn’t be a problem. Thanks to @ctskennerton for the tip! The other thing we have to address is that the Entrez site notes that you shouldn’t do more than blocks of 200 at a time due to server limitations. So we’ll also go over how to chop up a large file into a bunch of little ones and run things in a loop with the magic of bash. But first, let’s look at one way to generate a large list of desired accessions. ## Pulling lots of sequences For an example, let’s imagine we want all the amino acid sequences of the phoD-type alkaline phosphatases available in RefSeq for bacteria (because Euks are too hard). While this is focused on amino acid coding sequences, the same principles apply if you wanted to pull different information. The only things that would change would be how you search your accessions and which options you specify for efetch. ## Generating accessions list As we just saw, to use efetch at the command line we first need to generate a list of accession numbers (or gene IDs). This can be done at the command line too with the esearch command, but I don’t know how to use that yet. So far I’ve personally just done this on their web page. 
Here are the steps I just took to get the desired accessions for bacterial phoD-type amino acid sequences:

• went to NCBI
• changed the search database from “All Databases” to “Protein”
• searched for “alkaline phosphatase phoD”
• limited the search to only RefSeq by clicking it under “Source databases” on the left side
• limited the search to only bacteria by clicking that in the top right
• clicked “Send to:”, “File”, changed format to “Accession List”, clicked “Create File”

At the time of my doing this, that was a total of 10,249 accessions, which were written to a file called “sequence.seq”. For the sake of this example we don’t need that many, so I’m going to cut that down to about a 10th and store them in a file called “wanted_accessions.txt”:

head -n 1025 sequence.seq > wanted_accessions.txt

Here we’re going to do things without epost first. Remember from the example above that efetch can take multiple accessions separated by commas. To see how we can format our accessions list properly, first let’s use bash to build up an efetch command that will run on just the first 10 seqs:

head wanted_accessions.txt | tr "\n" "," | sed 's/,$//' > ten_formatted.txt

Here I used the head command to just grab the first 10 accessions, then used the tr command to change all newline characters to commas, and then sed to remove the very last trailing comma (see the Intro to bash page and six glorious commands pages if you’re not yet familiar with these commands). The 10 accessions are now all on one line and separated by commas. Now we simply need to add the rest of the efetch command in front of that. The following code replaces the start of every line (here just one) with the efetch command we need in front of the comma-delimited accessions, and then writes the output to a new file called “ten_accessions.sh”:

sed 's/^/efetch -db protein -format fasta -id /' ten_formatted.txt > ten_accessions.sh

Now all we need to do is call that file as a bash script and redirect the output to a new file:

bash ten_accessions.sh > ten_phoDs.faa

Great. Now that we see how we can format one set of accessions for an efetch command, that just leaves: splitting the large file of accessions into multiple smaller files; building the formatted efetch command for all of them; and then throwing them all into a shell script together. Here I am going to use the split command to split up our large accessions file with 1,025 accessions, “wanted_accessions.txt”, into as many 200-line files as are needed:

split -l 200 wanted_accessions.txt temp_block_

Here the split command made 6 files with the prefix we provided as the last positional argument, and all of them have 200 lines except the last, which has the remaining 25. Now we can just loop through those to generate the properly formatted shell script like we did above for the individual one:

for block in $(ls temp_block_*); do tr "\n" "," < $block | sed 's/,$//' | sed 's/^/efetch -db protein -format fasta -id /'; done > pull_more_phoD_seqs.sh

Each line of the newly created “pull_more_phoD_seqs.sh” file is one of these efetch commands followed by its block of comma-delimited accessions. And after running the script, which took about 15 seconds for these 1,025 sequences, we have our references 🙂

And as mentioned above, while you can’t provide the efetch command with a regular single-column file of accession numbers, you can provide that to the epost command, and then pipe that into efetch.
In that case you wouldn’t need to run the step generating the bash script. Also, as I mentioned above, in my quick tests this worked with the subset of ~1,000 protein sequences, but I got timed-out errors when trying to run it on ~10,000. So it might depend on how much you’re trying to pull, but here’s how that command would look with just one file of accessions, and then if you were to loop through the blocks of 200 like we made above:

epost -input 10_accessions -db protein | efetch -format fasta > 10_accessions.faa

for block in $(ls temp_block_*); do epost -input $block -db protein | efetch -format fasta; done > out.faa
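If you would rather stay in Python than build shell scripts, the same batched download can be done with Biopython's Entrez module. This is my own sketch rather than anything from the tools above; the email address and file names are placeholders, and it assumes Biopython is installed (pip install biopython).

```python
# A minimal sketch (not from the original post) of the same bulk download
# using Biopython's Entrez module instead of the EDirect command-line tools.
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a real address

# one accession per line, as in "wanted_accessions.txt" above
with open("wanted_accessions.txt") as f:
    accessions = [line.strip() for line in f if line.strip()]

batch_size = 200  # same block size the post uses to stay within server limits

with open("pulled_seqs.faa", "w") as out:
    for start in range(0, len(accessions), batch_size):
        batch = accessions[start:start + batch_size]
        handle = Entrez.efetch(db="protein", id=",".join(batch),
                               rettype="fasta", retmode="text")
        out.write(handle.read())
        handle.close()
```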
# Whole Numbers Basics

Whole numbers are one of the standard classifications of numbers in the number system. Whole numbers are numbers without fractions, percentages or decimals. Zero is neither a fraction nor a decimal, so zero is a whole number. The set of whole numbers is denoted by W. A set of whole numbers can be finite or infinite: finite means the numbers in the set can be counted off and the counting ends, while infinite means the counting never ends. Whole numbers have no fractional part; any number of zeros can be written after a whole number's decimal point without changing its value (for example, 7 = 7.0 = 7.00). The term whole number is one you’ll find often in mathematics. Whole numbers can never be negative; they stop decreasing at zero.

## Difference Between Whole Numbers and Natural Numbers

The differences between whole numbers and natural numbers are given below:

• A whole number is a non-negative integer, that is, zero or a positive integer. The set of natural numbers is the set of positive integers beginning at one.
• Natural numbers are whole numbers; however, not all whole numbers are natural numbers, because zero is not a natural number.
• The smallest whole number is 0; there is no greatest whole number.
• The smallest natural number is 1; there is no greatest natural number.

The difference between natural and whole numbers is where they start. A whole number is any non-negative integer, including zero. A natural number is a positive integer, which starts at one.

## Operations on Whole Numbers

There are some basic operations performed on whole numbers. They are:

Whole number addition
When two whole numbers are added we get a whole number. Therefore, whole numbers are closed under addition.
Examples: 4 + 5 = 9, 1 + 2 = 3, 9 + 7 = 16

Whole number subtraction
Subtracting one whole number from another does not always give a whole number. Therefore, whole numbers are not closed under subtraction.
e.g. 9 - 2 = 7, 8 - 11 = -3
From the above, we can notice that in some problems the difference of two whole numbers is not a whole number; the result obtained is an integer.

Whole number multiplication
When two whole numbers are multiplied we get a whole number. Therefore, whole numbers are closed under multiplication.
Examples: 7 * 3 = 21, 5 * 4 = 20, 6 * 6 = 36, 7 * 6 = 42

Whole number division
Dividing a whole number by another does not always give a whole number. Whole numbers are not closed under division.
Examples: $\frac{12}{6}$ = 2, $\frac{5}{15} \approx$ 0.33, $\frac{36}{128}$ = 0.28125
From the above, we see that in some problems the result obtained is a fraction. We can say that the quotient of two whole numbers is not always a whole number.

## Properties of Whole Numbers

Important properties of whole numbers are explained below.

Closure property
The sum of two whole numbers will always be a whole number.
Examples: 56 + 13 = 69, 2 + 8 = 10, 35 + 10 = 45

Commutative property
If a and b are two whole numbers, then a + b = b + a
Examples: 5 + 6 = 6 + 5, 2 + 9 = 9 + 2, 89 + 96 = 96 + 89

Associative property
Consider a, b and c to be whole numbers; then a + (b + c) = (a + b) + c
Examples: 4 + (5 + 10) = (4 + 5) + 10 = 19, 5 + (2 + 3) = (5 + 2) + 3 = 10

Additive identity property
If we add zero to any whole number, the result is the same whole number. Suppose a is a whole number; then a + 0 = 0 + a = a
Examples: 1 + 0 = 0 + 1 = 1, 63 + 0 = 0 + 63 = 63

Multiplicative identity property
If we multiply any whole number by 1, the result is the number itself. Suppose a is a whole number; then a * 1 = 1 * a = a

Distributive property
Let a, b and c be three whole numbers; then a * (b + c) = (a * b) + (a * c)
Examples: 2 * (6 + 3) = (2 * 6) + (2 * 3) = 18, 5 * (6 + 3) = (5 * 6) + (5 * 3) = 45
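To make the closure claims above concrete, here is a tiny script (my own illustration, not part of the original page) that runs the four operations on a few of the pairs used in the examples.

```python
# Whole numbers are closed under addition and multiplication,
# but not under subtraction or division.
pairs = [(9, 2), (8, 11), (12, 6), (5, 15)]

for a, b in pairs:
    print(f"{a} + {b} = {a + b}")   # always a whole number
    print(f"{a} - {b} = {a - b}")   # can be negative, so not always whole
    print(f"{a} * {b} = {a * b}")   # always a whole number
    print(f"{a} / {b} = {a / b}")   # can be a fraction, so not always whole
```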
# A-Level Mechanics - Vectors This is Part 2, of the tutorial on vectors, aimed at A-level Mechanics students. These are the 'exam style questions', I will be covering by the end of this tutorial. 1. At noon, boat B has the position i+2j km and velocity, 3i+2j km/hr. A lighthouse, L, has the position 5i + j. Find vector LB at 2pm. 2. Find LB at 3pm. 3. What is the distance between the lighthouse and the boat at 2pm? 4. Find LB, at time t hours after noon? 5. At what time, will the boat be north of the lighthouse, and what is vector LB at that time? 6. Find the distance between the boat and the lighthouse, at t hours after noon. 7. At what time will the distance between the boat and the lighthouse be 7km? ## Magnitude of a vector Before getting on the hard questions, I'd like to clarify some things first. Say I have a vector, 3i + 4j. What is the magnitude of this vector? The magnitude just means the length of the vector (the length of the arrow). Notice you have a right angle triangle here. The base has a length of 3 units, and the height is 4 units. So the hypotenuse would be 5 units (${\displaystyle \scriptstyle {\sqrt {3^{2}+4^{2}}}}$). Therefore the magnitude of the vector 3i +4j, is 5 units. This statement can be mathematically written as: |3i+4j| = 5. (The vertical lines mean magnitude). ## Displacement vectors and distances A displacement vector, is essentially an arrow that goes from one co-ordinate to another co-ordinate. Example: Lighthouse A and B have co-ordinates (2,3) and (5,7) respectively. What is the vector AB? First I shall plot the points: Answer: Well, AB is a vector going from A to B. From the drawing below you can see that AB = 3i +4j. Question: What is BA? Answer: ${\displaystyle BA=-3i-4j}$ If a question asks for the displacement between A and B, it doesn’t specify direction, and so you could write out ${\displaystyle AB=3i+4j}$, or ${\displaystyle BA=-3i-4j}$. They would both be correct. Now, before I continue, I want you to have an appreciation for the fact that a position vector is essentially a co-ordinate. So for example, if you say that object A has co-ordinates (2,3), it’s the same as saying that object A has a position vector (2i+3j). This can be mathematically written as ${\displaystyle r_{a}=(2i+3j)}$. Knowing this, you can now take advantage of a simple formula for figuring out displacements vectors. This formula is: ${\displaystyle AB=r_{b}-r_{a}}$ So, ${\displaystyle AB=(5i+7J)-(2i+3j)=3i+4j}$ Also ${\displaystyle BA=r_{a}-r_{b}=(2i+3j)-(5i+7J)=-3i-4j}$ Also, don’t confuse position vectors with displacement vectors. Now, what is the distance between points A and B? So you could do this without knowing anything about vectors. Just applying Pythagoras' theorem, you can calculate the distance between A and B to be 5 units (${\displaystyle \scriptstyle {\sqrt {3^{2}+4^{2}}}}$). Or...you could have just figured out the length/magnitude of vectors AB or BA: ${\displaystyle |3i+4j|={\sqrt {3^{2}+4^{2}}}=5}$, or ${\displaystyle |-3i-4j|=\scriptstyle {\sqrt {(-3)^{2}+(-4)^{2}}}=5}$ Another important thing to know, is that when you are asked for something like “find the position of B relative to boat A”, you are basically being asked for vector AB (NOT vector BA), so the answer would be 3i + 4j. Write down the box below on a separate piece of paper, as it will help with the rest of the questions I’ll be giving you on this page. 
B relative to A = ${\displaystyle AB}$ = ${\displaystyle r_{b}-r_{a}}$

## Exam Style Questions

At noon, boat B has position i+2j km and velocity 3i+2j km/hr. A lighthouse, L, has the position 5i + j.

### 1. Find vector LB at 2pm.

Draw it first... In order to answer the question, you need to figure out where the boat will be at 2pm. For that you use the “position equation”, as I have shown in part 1 of this tutorial.

${\displaystyle R_{b}=R_{0}+v\cdot t}$
${\displaystyle =(i+2j)+(3i+2j)t}$
${\displaystyle =(1+3t)i+(2+2t)j}$

At 2pm, t=2,

${\displaystyle R_{b}=(1+3\times 2)i+(2+2\times 2)j}$
${\displaystyle =7i+6j}$

${\displaystyle LB=R_{b}-R_{l}}$
${\displaystyle =(7i+6j)-(5i+j)}$
${\displaystyle =(2i+5j)}$

Looking at the diagram, you can see that this is correct...

### 2. Find LB at 3pm.

From before: ${\displaystyle R_{b}=(1+3t)i+(2+2t)j}$

Thus at 3pm, ${\displaystyle R_{b}=(1+3\times 3)i+(2+2\times 3)j=10i+8j}$

So, ${\displaystyle LB=(10i+8j)-(5i+j)=5i+7j}$.

Notice how the vector LB is NOT constant. It changes with time.

### 3. What is the distance between the lighthouse and the boat at 2pm?

Simply work out the magnitude of vector LB at 2pm:

${\displaystyle |2i+5j|=\scriptstyle {\sqrt {2^{2}+5^{2}}}=\scriptstyle {\sqrt {29}}=5.39km}$

### 4. Find LB at time t hours after noon.

Well, from before we know that at any given time:

${\displaystyle R_{b}=(1+3t)i+(2+2t)j}$ and ${\displaystyle R_{l}=(5i+j)}$

Therefore:

${\displaystyle LB=R_{b}-R_{l}}$
${\displaystyle =[(1+3t)i+(2+2t)j]-(5i+j)}$
${\displaystyle =(-4+3t)i+(1+2t)j}$

This equation tells you what the vector LB is, as a function of time. Test it out.

Question 1 (from before): Find vector LB at 2pm. At 2pm t=2,

${\displaystyle LB=(-4+3\times 2)i+(1+2\times 2)j}$
${\displaystyle =2i+5j}$

(Same answer as you got before)

### 5. At what time will the boat be north of the lighthouse, and what is vector LB at that time?

Draw it out... You can see that when the boat is north of the lighthouse, vector LB has a zero i component. From before, ${\displaystyle LB=(-4+3t)i+(1+2t)j}$. We can see that the i component is zero when -4+3t = 0, which rearranges to give t = 4/3. This is equivalent to 1 hour and 20 minutes. So the answer is 1.20pm. At that time vector LB would be:

${\displaystyle LB=(-4+3\times \scriptstyle {\frac {4}{3}})i+(1+2\times \scriptstyle {\frac {4}{3}})j}$
${\displaystyle =0i+3\scriptstyle {\frac {2}{3}}j}$

Looking at the diagram above, you can see this is about right. Also notice how a similar question was answered in a different way in Part 1 of the tutorial.

### 6. Find the distance between the boat and the lighthouse, at t hours after noon.

Once again, from before, vector LB at any time is LB = (-4+3t)i + (1+2t)j. To find the distance between L and B, find the magnitude/length of vector LB.

${\displaystyle |LB|=|(-4+3t)i+(1+2t)j|}$
${\displaystyle =\scriptstyle {\sqrt {(-4+3t)^{2}+(1+2t)^{2}}}}$
${\displaystyle =\scriptstyle {\sqrt {16+9t^{2}-24t+1+4t^{2}+4t}}}$
${\displaystyle =\scriptstyle {\sqrt {13t^{2}-20t+17}}}$

Distance = ${\displaystyle \scriptstyle {\sqrt {13t^{2}-20t+17}}}$

This equation tells you the distance between the boat and the lighthouse as a function of time. Again, you can test it out.

Question 3 (from before): What is the distance between the lighthouse and the boat at 2pm? At 2pm, t=2,

Distance = ${\displaystyle \scriptstyle {\sqrt {13\cdot 2^{2}-20\cdot 2+17}}}$ = ${\displaystyle \scriptstyle {\sqrt {29}}}$ = 5.39km.

(Same answer as you got before)

### 7. At what time will the distance between the boat and the lighthouse be 7km?
Substitute distance = 7,

${\displaystyle \scriptstyle 7={\sqrt {13t^{2}-20t+17}}}$
${\displaystyle \scriptstyle 49=13t^{2}-20t+17}$
${\displaystyle \scriptstyle 13t^{2}-20t-32=0}$
${\displaystyle t={\frac {-(-20)\pm {\sqrt {(-20)^{2}-4\cdot 13\cdot (-32)}}}{2\cdot 13}},}$
${\displaystyle t={\frac {20\pm 45.43}{2\cdot 13}}}$

t = 2.51 hours, and t = -0.978. Ignore the negative time solution. To convert 2.51 hours to clock time: 2 hours + ${\displaystyle \scriptstyle {\frac {51}{100}}}\times$ 60 minutes ≈ 2 hours 31 minutes. This is equivalent to 2.31pm.

## Final notes

1. That's it! If you have found this article useful, please comment in the discussion section (at the top of the page). It's just a nice ego boost really...
2. Also please comment if there are other questions you want covered, or if there is something in this tutorial you did not understand.
3. I’d strongly advise you to do all the questions I’ve posted here again by yourself, as it will show you whether you understood it all.
4. DRAW!!! It doesn't have to be on square paper. Just a quick sketch (like I've done in part 1 of the tutorial) will do for most questions. When you are able to draw/visualise vectors, things will become a lot easier...

## Something extra...

Someone has left me a question on how to work out angles between vectors. Here it is...

Question: What is the angle between vector -2i+3j and the unit vector 'i'?

Firstly draw it out. To find this angle, you may need to find a simpler angle first: the angle ${\displaystyle \alpha}$ between the vector and the negative i direction.

${\displaystyle \tan {\alpha }=3/2}$
${\displaystyle \alpha =56.3^{\circ }}$
${\displaystyle \theta =180-56.3}$
${\displaystyle \theta =123.7^{\circ }}$

That's it! Also, make sure not to draw it incorrectly or work out the wrong angle (for example, mixing up the components and taking ${\displaystyle \tan {\alpha }=2/3}$ gives a wrong answer).
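As a final cross-check of the numbers above (including the angle example), here is a short Python sketch; it is my own addition and not part of the original tutorial.

```python
# Quick numerical check of the worked answers above.
import math

# Boat position at time t hours after noon: R_b = (1 + 3t, 2 + 2t); lighthouse at (5, 1).
def LB(t):
    return (1 + 3 * t - 5, 2 + 2 * t - 1)     # vector from L to B

print(LB(2))                                   # (2, 5), i.e. 2i + 5j at 2pm
print(math.hypot(*LB(2)))                      # about 5.39 km, distance at 2pm

# Time at which the distance is 7 km: solve 13t^2 - 20t - 32 = 0.
a, b, c = 13, -20, -32
t = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(t)                                       # about 2.52 hours after noon, i.e. roughly 2.31pm

# Angle between -2i + 3j and the unit vector i.
print(math.degrees(math.atan2(3, -2)))         # about 123.7 degrees
```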
# Thread: help with sequence prefixes

1. I think my text defines it as starting from 0. In that case, sigma is {(0,0),(1,1)}. Then it is a prefix of tau according to your example. Am I wrong?

2. I would rather say, sigma, as defined above, is not a sequence.

3. This is a summary of a definition of a sequence from my text: A sequence of n elements taken from set A is a function mapping from {0,1,2,...,n-1} to A. If we call the n elements a_1,a_2,...,a_n, then the sequence is the function that maps 0 to a_1, 1 to a_2, 2 to a_3, ..., and n-1 to a_n. Such a function, seen as a relation, is the set of ordered pairs {(0,a_1),(1,a_2),...,(n-1,a_n)}. E.g., the sequence < H,E,L,L,O >, is the set {(0,H),(1,E),(2,L),(3,L),(4,O)}. Does this change anything that you were saying? With this in mind I still can't come up with a proper counter example, nor can I understand your example properly.

4. Originally Posted by Sneaky
Now I am confused with your previous statement with the example, so that still shows that it's a subset but not a prefix? Or does there have to be a <0,0> in sigma, which then makes the example false?
I truly mean you no disrespect by this comment. But this is an English language forum. As such, we have very clear definitions of terms. A sequence is a function from the positive integers to a field, real or complex. If the sequence $\sigma_n$ is a subsequence of $\tau_n$ then $\sigma_n=\tau_{n_j}$ where $n_1<n_2<n_3<\cdots$. In that westernized context, can you reframe your question? If not, you need to find a forum that is compatible with your language.

5. Does this change anything that you were saying?
No. It would be helpful if you posted this definition from the start. Such ubiquitous things as sequences often have slightly different definitions.

6. OK, I'm just stuck with one thing: if you say it does not change anything you said before, and when you say sigma is {<1,1>}, then according to the definition I posted, shouldn't it be the same as {<0,0>,<1,1>}?

7. Originally Posted by Sneaky
With this in mind I still can't come up with a proper counter example, nor can I understand your example properly.
As I said earlier, if sequences as functions have to be defined on an initial segment of natural numbers, then being a subset is equivalent to being a prefix.

8. So technically with that definition of sequence, there is no counter example to show that sigma is a prefix of tau but not a subset. But if sequences don't have to be defined on an initial segment of natural numbers, then your example is a valid counter example. Is this right?

9. Yes.
Edit: In case sequences don't have to be defined on an initial segment, my example shows that being a subset does not imply being a prefix. For the other direction under the same definition, let sigma = {(1,0)} and tau = {(0,0), (1,1)}. Then sigma (representing the sequence <0>) is a prefix of tau (representing <0,1>), but is not a subset of tau.

10. OK, thanks.
# Suppose that $f''(x)>a$ for all $x\in \mathbb{R}$ and some $a>0$. Proof that $f$ has a absolute minimun. Let $$f: \mathbb{R} \to \mathbb{R}$$ of class $$C^2$$ i.e $$f$$ is differentiable with continuous second derivative. Suppose that $$f''(x)>a$$ for all $$x\in \mathbb{R}$$ and some $$a>0$$. Proof that $$f$$ has a absolute minimun. I have tried to solve this problem without any success, my attempt is: Let $$x>0$$. By fundamental theorem of calculus we have that $$\int_{0}^{x} f''(t)dt=f'(x)-f'(0).$$ Then $$f'(x)=f'(0)+\int_{0}^{x} f''(t)dt\geq f'(0)+ax$$ for all $$x>0$$. On the other hand, note that $$\int_{0}^{x} f'(t)dt=f(x)-f(0)$$ so $$f(x)=f(0)+\int_{0}^{x} f'(t)dt\geq f(0)+\int_{0}^{x} (f'(0)+at)dt$$ I just conclude $$f(x)\geq f(0)+f'(0)x+\frac{1}{2}ax^2 \text{ for all } x\in \mathbb{R}$$ I have only found that $$f$$ is above a quadratic function but I would actually think that the local minimum would be the vertex of the quadratic function. Any suggestion is appreciated. • In your solution the continuity of $f''$ is required. The crucial inequality follows from the MacLaurin formula for $n=2,$ where $f$ is twice differentiable. Feb 23 at 17:07 Your inequality shows that the function $$f$$ tends to $$+\infty$$ both for $$x$$ going to $$-\infty$$ and to $$+\infty$$. A continuous function that has $$+\infty$$ as its limit both for $$x$$ going to $$-\infty$$ and to $$+\infty$$ has a minimum. This follows from the fact that there is a closed interval $$[-A,A]$$ outside of which $$f$$ is larger than $$f(0)$$. Then use the fact that a continuous function on $$[-A,A]$$ has always a minimum. • Why vote down this? I think his answer is right. – ZAF Feb 23 at 16:29 • @coudy that's right! this link compliment you say math.stackexchange.com/questions/250827/… Feb 23 at 16:48 Just observe that $$f'(x)$$ is strictly increasing and hence it either tends to a limit or to $$\infty$$ as $$x\to \infty$$. If it tends to a limit $$L$$ then $$f'(x+1)-f'(x)\to L-L=0$$ By mean value theorem the left hand side equals $$f''(c)$$ and it thus always exceeds a positive number $$a$$. So the above equation can't hold and thus $$f' (x) \to\infty$$ as $$x\to\infty$$. Similarly $$f'(x) \to-\infty$$ as $$x\to-\infty$$. And by intermediate value theorem we see that $$f'$$ vanishes somewhere say at $$c$$. Since the derivative $$f'$$ is strictly increasing, $$f'$$ vanishes only at $$c$$. Then $$f'<0$$ in $$(-\infty, c)$$ and $$f'>0$$ in $$(c, \infty)$$. Then $$f$$ is strictly decreasing in $$(-\infty, c]$$ and strictly increasing in $$[c, \infty)$$. Then $$f$$ attains absolute minimum at $$c$$. There is no need to assume continuity of second derivative.
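To make the first answer fully explicit (this is my own addition, following directly from the inequality derived in the question): one concrete choice of the interval is $$[-A,A]$$ with $$A = \frac{2|f'(0)|}{a}$$. Indeed, the inequality can be rewritten as
$$f(x) \;\ge\; f(0) + x\left(f'(0) + \tfrac{a}{2}x\right),$$
and once $$|x| > \frac{2|f'(0)|}{a}$$ the factor $$f'(0) + \tfrac{a}{2}x$$ has the same sign as $$x$$, so the product is positive and $$f(x) > f(0)$$. The continuous function $$f$$ attains a minimum on the compact interval $$[-A,A]$$ by the extreme value theorem, and since $$0 \in [-A,A]$$ while every point outside the interval has a value strictly larger than $$f(0)$$, that minimum is an absolute minimum of $$f$$ on $$\mathbb{R}$$.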
Variogram {elsa} R Documentation

## Empirical Variogram from Spatial Data

### Description

Compute the sample (empirical) variogram from spatial data. The function returns a binned variogram and a variogram cloud.

### Usage

Variogram(x, width, cutoff, ...)

### Arguments

x: a spatial object (RasterLayer, SpatialPointsDataFrame, or SpatialPolygonsDataFrame)

width: the lag size (width of subsequent distance intervals) into which cell pairs are grouped for semivariance estimates. If missing, the cell size (raster resolution) is assigned.

cutoff: spatial separation distance up to which cell pairs are included in semivariance estimates; as a default, the length of the diagonal of the box spanning the data is divided by three.

...: Additional arguments, including cloud, which specifies whether a variogram cloud should be included in the output (default is FALSE); zcol (when x is a Spatial* object), which specifies the name of the variable in the dataset; longlat (when x is a Spatial* object), which specifies whether the dataset has a geographic coordinate system; and s (only when x is a Raster object), which is useful when the dataset is big: by specifying s, the calculation is based on a sample of size s drawn from the dataset (the default is NULL, meaning all cells contribute to the calculations).

### Details

A variogram is a graph used to explore spatial structure in a single variable. It summarizes the spatial relations in the data, and can be used to understand within what range (distance) the data are spatially autocorrelated.

### Value

Variogram: an object containing the variogram cloud and the variogram within each distance interval.

### Author(s)

Babak Naimi [email protected]

### References

Naimi, B., Hamm, N. A., Groen, T. A., Skidmore, A. K., Toxopeus, A. G., & Alibakhshi, S. (2019). ELSA: Entropy-based local indicator of spatial association. Spatial Statistics, 29, 66-88.

### Examples

library(elsa)
library(raster)  # provides raster() and the plot() method used below

file <- system.file('external/dem_example.grd', package='elsa')
r <- raster(file)
plot(r, main='a continuous raster map')

en <- Variogram(r, width=2000)
plot(en)
How do you implement Direct Sequence Spread Spectrum (DSSS) to synchronize the spreading rate with the symbol rate when there is a non-integer but rational relationship between the two?

• Where do these numbers come from? For BPSK, 8192 bits/second is 8192 symbols/second, and so you have $\frac{6.138*10^6}{8192}=749.2676$ chips per symbol. Seems a bit weird to me without any context – Engineer Dec 9 '19 at 13:43
• Why do you feel they need to be synchronized? Why can’t you simply multiply the data with the spreading pattern? The waveform sampling rate just needs to be high enough to support doing this. – Dan Boschen Dec 9 '19 at 13:45
• @DanBoschen I don't think she is asking about synchronous vs asynchronous but how to implement the spreading for the rates she listed – Engineer Dec 9 '19 at 19:57
• Yes that is what I meant about synchronous: the spreading chips need not be an integer multiple of the data chips, so simply multiply the two waveforms. One sample per chip would be fine in the transmitter and 2 samples per chip in the receiver for acquisition and timing recovery. – Dan Boschen Dec 9 '19 at 20:29
• @Samantaricher I updated my answer with an example reference. On that go to page B-2 that describes it most clearly: "Coherent and Non-coherent Modes: In the system described, the PN code clock is synchronous with the RF carrier but the User data clock is not expected to be coherent with either." – Dan Boschen Dec 10 '19 at 12:37

There is no requirement that Direct Sequence Spread Spectrum (DSSS) have an integer number of chips per symbol, nor for the repetition rate of the code to be synchronous with the data (although this is often done). So in this case you have a spreading sequence with a code of some particular length that is running at 6.138 Mcps, which is multiplied by your data at the lower rate of 8192 symbols/second. This would not change typical approaches in the receiver to demodulate the DSSS signal, where you would perform the same multiplication and integrate over a data symbol duration (correlator), along with all the usual approaches to timing and carrier recovery and signal acquisition, such as three half-chip-spaced correlators for Early-Prompt-Late or, if processing allows, block FFT-based fast acquisition.

One example where this is done is described in this link: https://public.ccsds.org/Pubs/415x1b1.pdf

I assume your challenge is how to do the spreading specifically in your case, knowing that you simply multiply the two waveforms as I show above. One approach to do this is to resample your data to match the chip rate and then multiply sample by sample.

Notice that the relationship between the chip clock and data clock is 6138000/8192, which is exactly

$$749 + 137/512$$

A simple way to do this is to use a 9-bit counter that rolls over at 512, such that you send 749 chips for every data symbol and then add 137 to your counter at the end of each data symbol ($$count[n] = (count[n-1] + 137) \bmod 512$$); if the counter rolls over, then add one more chip to that data symbol. The first 5 data symbols and the counter value at the end of each symbol would proceed as follows:

Symbol 1: counter = 0 + 137, no rollover: 749 chips
Symbol 2: counter = 137 + 137 = 274, no rollover: 749 chips
Symbol 3: counter = 274 + 137 = 411, no rollover: 749 chips
Symbol 4: counter = 411 + 137 = 548, rollover, 548 % 512 = 36: 750 chips
Symbol 5: counter = 36 + 137 = 173, no rollover: 749 chips
– user46622 Dec 10 '19 at 10:52 • @Samantaricher see the update of this answer. – AlexTP Dec 11 '19 at 15:37 • @DanBoschen At the receiver, you have a correlator over the 749 chips and another correlator over the 750 chips? – Engineer Dec 11 '19 at 16:03 • That is essentially what would happen - same correlator but changing when you actually dump (for an integrate and dump correlation approach)- but the timing recovery loops for the chip clock and data clock in the receiver (once in tracking) would naturally do this—- so you let the loops take care of that. – Dan Boschen Dec 11 '19 at 16:40
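To make the accumulator bookkeeping concrete, here is a small Python sketch of the scheme described in the answer above (my own illustration, not code from the answer).

```python
# Chips-per-symbol bookkeeping: 6.138 Mcps over 8192 symbols/s is
# 749 + 137/512 chips per symbol, handled with a mod-512 accumulator.
CHIPS_INT, NUM, DEN = 749, 137, 512   # 6_138_000 / 8192 == 749 + 137/512

def chips_per_symbol(n_symbols):
    """Yield the number of chips to emit for each successive data symbol."""
    acc = 0
    for _ in range(n_symbols):
        acc += NUM
        if acc >= DEN:            # accumulator rolled over: emit one extra chip
            acc -= DEN
            yield CHIPS_INT + 1
        else:
            yield CHIPS_INT

counts = list(chips_per_symbol(512))
print(counts[:5])    # [749, 749, 749, 750, 749], matching the table in the answer
print(sum(counts))   # 512 * (749 + 137/512) = 383625 chips exactly over one cycle
```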
# How many moles of water are there in hydrated copper sulfate?

There are 5 moles of water of crystallization per mole of hydrated copper sulfate, i.e. $CuSO_4 \cdot 5H_2O$, so-called $\text{"blue vitriol"}$. Heating drives off the water:

$\underbrace{CuSO_4 \cdot 5H_2O}_{\text{deep blue}} + \Delta$ $\rightarrow$ $\underbrace{CuSO_4}_{\text{white salt}} + 5H_2O \uparrow$
# What is the inverse of this logarithm equation?

## Homework Statement

What is the inverse of this logarithm equation? y = -log5(-x)

I tried it and I got y = -5^(-x).

Hm... you know how people say that if you want to find the inverse graph of something you just switch the x and y coordinates from the table of values? Well, I also tried that approach and apparently the inverse is y = log5(x), at least that's how it looks on the table of values and on the graph... I dunno why I went with y = -5^(-x).

I don't know if I got it right or not. Someone kind enough to take a look and see if I got it right or not? Or maybe just give me the solution outright so I know if I did it correctly or not? :P

symbolipoint (Homework Helper, Gold Member):
Your result is correct. If you start from your original equation and multiply both sides by negative 1, you can easily switch x and y, and carry out a couple of simple steps to find y as a function of x.

hm... so y = log5(x) is correct?

symbolipoint:
No! Your first result was correct. y = -5^(-x)

ahh ok ty

symbolipoint:
Original equation: $y=-log_{5}(-x)$
$-y=log_{5}(-x)$
Now express in exponential form: $5^{-y}=-x$
Now, switch x and y to create the inverse: $5^{-x}=-y$, and then simplify: $y=-5^{-x}$

Mentallic (Homework Helper):
To check if you're right, when you get to the equation $x=-5^{-y}$, just plug that value of x into your original equation $y=-\log_5(-x)$ and see if you get an equality.
### Rotating Triangle What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle? ### Intersecting Circles Three circles have a maximum of six intersections with each other. What is the maximum number of intersections that a hundred circles could have? ### Square Pegs Which is a better fit, a square peg in a round hole or a round peg in a square hole? # Weekly Problem 13 - 2006 ##### Stage: 3 Challenge Level: All three runners finish at the same time. Let the radius of $R$'s track be $r$ and let the radius of the first semicircle of $P$'s track be $p$; then the radius of the second circle of this track is $r-p$. The total length of $P$'s track is $\pi p + \pi(r-p) = \pi r$, the same length as $R$'s track. By a similar argument, the length of $Q$'s track is also $\pi r$. This problem is taken from the UKMT Mathematical Challenges. View the previous week's solution View the current weekly problem
# Pre-Algebra Practice Questions: Cross-Multiply to Solve Equations

When an algebraic equation contains fractions, you can use cross-multiplication to solve the equation. The following practice questions contain two equal fractions, where you need to cross-multiply to solve them.

## Practice questions

1. Rearrange the equation to solve for x.

2. Solve the equation.

## Answers

1. x = –4

Remove the fraction from the equation by cross-multiplying:

8(x + 5) = –2x

Multiply to remove the parentheses from the left side of the equation:

8x + 40 = –2x

At this point, you can solve for x:

10x = –40, so x = –4

2. x = 5

Remove the fractions from the equation by cross-multiplying:

x(4x – 7) = (2x + 3)(2x – 5)

Remove the parentheses from the left side of the equation by multiplying through by x; remove parentheses from the right side of the equation by FOILing:

4x² – 7x = 4x² – 10x + 6x – 15

Rearrange the equation:

4x² – 7x – 4x² + 10x – 6x = –15

Notice that the two x² terms cancel each other out:

–3x = –15, so x = 5
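As a quick check of the two worked answers, here is a short SymPy sketch (my own addition). The original page shows the starting fractions as images that are not reproduced here, so only the cross-multiplied forms, reconstructed from the steps above, are checked.

```python
# Verify the two practice answers from their cross-multiplied forms.
from sympy import symbols, Eq, solve

x = symbols('x')

# Question 1: cross-multiplied form 8(x + 5) = -2x
print(solve(Eq(8 * (x + 5), -2 * x), x))                           # [-4]

# Question 2: cross-multiplied form x(4x - 7) = (2x + 3)(2x - 5)
print(solve(Eq(x * (4 * x - 7), (2 * x + 3) * (2 * x - 5)), x))    # [5]
```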
Machine Learning - Winder.AI Blog Industrial insight and articles from Winder.AI, focusing on the topic Machine Learning 203: Examples and Decision Trees Mon Jan 1, 2018, in Training, Machine Learning Example: Segmentation via Information Gain There’s a fairly famous dataset called the “mushroom dataset”. It describes whether mushrooms are edible or not, depending on an array of features. The nice thing about this dataset is that the features are all catagorical. So we can go through and segment the data for each value in a feature. This is some example data: poisonous cap-shape cap-surface cap-color bruises? p x s n t e x s y t e b s w t p x y w t e x s g f etc. 301: Data Engineering Mon Jan 1, 2018, in Training, Machine Learning Your job depends on your data The goal of this section is to: Talk about what data is and the context provided by your domain Discover how to massage data to produce the best results Find out how and where we can discover new data ??? If you have inadequate data you will not be able to succeed in any data science task. More generally, I want you to focus on your data. 302: How to Engineer Features Mon Jan 1, 2018, in Training, Machine Learning Engineering features You want to do this because: Reduces the number of features without losing information Better features than the original Make data more suitable for training ??? Another part of the data wrangling challenge is to create better features from current ones. Distribution/Model specific rescaling Most models expect normally distributed data. If you can, transform the data to be normal. Infer the distribution from the histogram (and confirm by fitting distributions) 401: Linear Regression Mon Jan 1, 2018, in Training, Machine Learning Regression and Linear Classifiers Traditional linear regression (a.k.a. Ordinary Least Squares) is the simplest and classic form of regression. Given a linear model in the form of: \begin{align} f(\mathbf{x}) & = w_0 + w_1x_1 + w_2x_2 + \dots \\ & = \mathbf{w} ^T \cdot \mathbf{x} \end{align} Linear regression finds the parameters $$\mathbf{w}$$ that minimises the mean squared error (MSE)… The MSE is the sum of the squared values between the predicted value and the actual value. Mon Jan 1, 2018, in Training, Machine Learning Optimisation When discussing regression we found that these have closed solutions. I.e. solutions that can be solved directly. For many other algorithms there is no closed solution available. In these cases we need to use an optimisation algorithm. The goals of these algorithms is to iteratively step towards the correct result. Gradient descent Given a cost function, the gradient decent algorithm calculates the gradient of the last step and move in the direction of that gradient. 403: Linear Classification Mon Jan 1, 2018, in Training, Machine Learning Classification via a model Decision trees created a one-dimensional decision boundary We could easily imagine using a linear model to define a decision boundary ??? Previously we used fixed decision boundaries to segment the data based upon how informative the segmentation would be. The decision boundary represents a one-dimensional rule that separates the data. We could easily increase the number or complexity of the parameters used to define the boundary. 404: Nonlinear, Linear Classification Mon Jan 1, 2018, in Training, Machine Learning Nonlinear functions Sometimes data cannot be separated by a simple threshold or linear boundary. We can also use nonlinear functions as a decision boundary. ??? 
To represent more complex data, we can introduce nonlinearities. Before we do, bear in mind: More complex interactions between features yield solutions that overfit data; to compensate we will need more data. More complex solutions take a greater amount of computational power Anti-KISS The simplest way of adding a nonlinearities is to add various permutations of the original features. 501: Over and Underfitting Mon Jan 1, 2018, in Training, Machine Learning Generalisation and overfitting “enough rope to hang yourself with” We can create classifiers that have a decision boundary of any shape. Very easy to overfit the data. This section is all about what overfitting is and why it is bad. ??? Speaking generally, we can create classifiers that correspond to any shape. We have so much flexibility that we could end up overfitting the data. This is where chance data, data that is noise, is considered a valid part of the model. 502: Preventing Overfitting with Holdout Mon Jan 1, 2018, in Training, Machine Learning Holdout We have been using: Training data Not representative of production. We want to pretend like we are seeing new data: Hold back some data ??? When we train the model, we do so on some data. This is called training data. Up to now, we have been using the same training data to measure our accuracy. If we create a lookup table, our accuracy will be 100%. 503: Visualising Overfitting in High Dimensional Problems Mon Jan 1, 2018, in Training, Machine Learning Validation curve One simple method of visualising overfitting is with a validation curve, (a.k.a fitting curve). This is a plot of a score (e.g. accuracy) verses some parameter in the model. Let’s compare the make_circles dataset again and vary the SVM->RBF->gamma value. ??? Performance of the SVM->RBF algorithm when altering the parameters of the RBF. We can see that we are underfitting at low values of $$\gamma$$. So we can make the model more complex by allowing the SVM to fit smaller and smaller kernels.
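As a concrete illustration of the validation-curve idea described above, here is a minimal scikit-learn sketch (my own example with assumed parameter values, not the course's original code): it scores an RBF-kernel SVM on make_circles data while sweeping gamma.

```python
# Validation curve: train/validation accuracy of an RBF SVM as gamma varies.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.1, factor=0.5, random_state=0)
gamma_range = np.logspace(-3, 3, 7)

train_scores, valid_scores = validation_curve(
    SVC(kernel="rbf"), X, y,
    param_name="gamma", param_range=gamma_range, cv=5,
)

# Low gamma: both scores poor (underfitting). Very high gamma: training score
# near 1 while the validation score drops (overfitting).
for g, tr, va in zip(gamma_range,
                     train_scores.mean(axis=1), valid_scores.mean(axis=1)):
    print(f"gamma={g:g}  train={tr:.2f}  valid={va:.2f}")
```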
## Essential University Physics: Volume 1 (4th Edition)

We know that $Q=mc\Delta T$. We plug in the known values to obtain:

$Q=(0.330)(4184)(100-10)$
$Q \approx 1.24\times 10^{5}\ J$

Now we can find the time as $t=\frac{Q}{P}$:

$t=\frac{1.24\times 10^{5}}{900}$
$t \approx 138\ s \approx 2.3\ min$
# A tree casts a shadow that is 14 feet long at the same time that a nearby 2-foot-tall pole casts a shadow that is 1.5 feet long Approximately how tall is the tree? Dec 19, 2016 The height of the tree is $18.67$ feet #### Explanation: The pole and the tree both cast shadows. The triangles which form are similar triangles, because the sun is shining from the same angle in the sky. You can write this as a direct proportion. (height : shadow) 2 is to 1.5 as what is to 14? $\frac{2}{1.5} = \frac{x}{14} \text{ } \leftarrow$ cross multiply $x = \frac{2 \times 14}{1.5}$ $x = 18.67$ feet
# Practice on Toph

Participate in exhilarating programming contests, solve unique algorithm and data structure challenges and be a part of an awesome community.

# The Lion Queen

By RHaque · Limits 1s, 512 MB

A lioness is trying to hunt down a buffalo. The lioness spots a buffalo $a$ metres away. As soon as she sees the buffalo, the buffalo spots her as well, and starts running. The lioness immediately starts chasing after her. If the lioness is $b$ times faster than the buffalo, how far will she have to run before she can capture the buffalo?

## Input

The input consists of two integer numbers, denoting the values of $a$ and $b$ respectively, as described above.

Constraints: $2 \leq a,b \leq 10^4$

## Output

It can be proven that the distance which the lioness will have to run is a rational number, that is, it can be written in the form of a fraction $p/q$. Output two space-separated integers $p$ and $q$, so that $p/q$ is the answer to this problem. You must output the fraction in its fully reduced form (e.g. if the answer to the problem is $8/4$, you must output $2$ and $1$ since $2/1$ is the reduced form of $8/4$).

## Sample

Input:
4506 85

Output:
63835 14
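Here is a sketch of one way to solve this (my own take, not an official editorial): with relative speed, the lioness closes the initial gap $a$ at a rate proportional to $b-1$, so she runs $ab/(b-1)$ metres, which just needs to be reduced with a gcd.

```python
# Relative-speed solution, reduced to lowest terms with gcd.
from math import gcd

def solve(a: int, b: int) -> str:
    p, q = a * b, b - 1          # distance run by the lioness is a*b / (b-1)
    g = gcd(p, q)
    return f"{p // g} {q // g}"

print(solve(4506, 85))           # "63835 14", matching the sample output
```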
# deriving likelihood function for hierarchical bayesian model I'm struggling with hierarchical bayesian modeling. I need to derive a full likelihood function for the given hierarchical structure of the model. $$a_{it}|\lambda_i\sim TN(\lambda_i,\beta)$$ $$x_{it}|\lambda_i,\xi_i,a_{it}\sim Pr\{x_{it}\}$$ $$\lambda_{i}|\theta,\tau\sim N(\theta,1/\tau)$$ $$\xi_{i}|\mu,\omega\sim N(\mu,1/\omega)$$ with priors for $$\theta, \tau,\mu,\omega$$ given as well. Here, the parameters to be estimated are $$\theta, \tau,\mu,\omega,\{\lambda_{it}\},\{\xi_{it}\}$$ and the data observation is $$\{\{a_{it}\},\{x_{it}\}\}$$. The question is, what is the full likelihood function for this model. What I think is $$P(\{a_{it}\},\{x_{it}\}|\theta, \tau,\mu,\omega,\{\lambda_{it}\},\{\xi_{it}\})=P(\{x_{it}\}|\lambda_i,\xi_i,a_{it})*P(\lambda_{i}|\theta,\tau)*P(\xi_{i}|\mu,\omega)$$ Is this the right way? Thanks in advance! • Welcome to Cross Validated, Liz. Are you sure it is the likelihood you need to determine? Usually, you are looking for the full posterior density. But if it is indeed the likelihood you are looking for, then what you have written looks pretty close. I notice that $Pr\{x_{it}\}$ is not apparently conditional on $a_{it}$ or $\xi_i$. Do you mean that? It looks like $\xi_i$ falls out of the likelihood altogether! Also don't forget to take the product over the $i$s. Your final expression should be in terms of the RHS of your definitions, so more $N$'s please. May 27, 2019 at 22:10 • Of course, the product over the $i$'s assumes independence of acquired data points, which perhaps you don't have, and in any event, your notation is fine without the product, so I withdraw that comment. May 27, 2019 at 22:24 • @PeterLeopold Thank you so much! yes $Pr\{x_{it}\}$ is a long function of $a_{it}$ and $\xi_i$. I just skipped it for the space. Also data points are assumed to be independent. So I'd just need to add $\Pi_i$ in front of the whole equation? – liz May 27, 2019 at 23:03 For independent observables $$\{x_{it}\}$$ and $$\{a_{it}\}$$, I would write the joint conditional likelihood as \begin{aligned} P(\{x_{it}\},\{a_{it}\}|\{\lambda_i\},\{\xi_i\},\beta) = \prod_i TN(a_{it}|\lambda_i,\beta) Pr(x_{it}|\lambda_i,\xi_i) \end{aligned} 1. Notice that I've only included mention of the parameters whose values are explicitly needed to specify the likelihood: $$\beta, \{\lambda_j\},$$ and $$\{\xi_k\}$$. I've dropped any mention of the hyperparameters. This is because the hyperparameters specify the prior distributions for the parameters, and since we are only writing a likelihood, we don't need the priors. 2. I'm following your indication in the comment that $$Pr(x_i)$$ should be made conditional the list of $$\lambda$$s and $$\xi$$s. 3. I've assumed that the observations are statistically independent of each other, so $$P(x_1,x_2,\dots,n_n)=\prod_i P(x_i)$$. 4. The use of $$i$$ to index the $$x, \xi,$$ and $$\lambda$$ variables implies that there is a 1:1 relationship between them. I have preserved that, but don't immediately know what to make of it. There seems to be two parameters for each data point. That's not good. 5. I don't know what the $$TN$$ function is. Is it just the "normal" normal $$N$$? 6. It is not clear what the subscript $$t$$ means. I've preserved it since it doesn't hurt, but it isn't being averaged over, so I'm not sure what it is doing in the model. 7. 
It doesn't make sense for the independent observables $$x_{it}$$ and $$a_{it}$$ to be contingent on each other, but this is what you are hinting with your expression $$x_{it}|a_{it}, \lambda_i, \xi_i$$. I dropped the $$a_{it}$$ from the conditional.
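If, as the first comment under the question suggests, the full posterior density is ultimately what is needed, it factorizes over the hierarchy exactly as stated in the question. This is a sketch based only on the model as written there; the priors on $$\theta,\tau,\mu,\omega$$ are left abstract as $$\pi(\cdot)$$ because they are not spelled out:
$$P(\theta,\tau,\mu,\omega,\{\lambda_i\},\{\xi_i\}\mid\{a_{it}\},\{x_{it}\}) \;\propto\; \prod_{i,t} TN(a_{it}\mid\lambda_i,\beta)\,Pr(x_{it}\mid\lambda_i,\xi_i,a_{it}) \;\prod_i N(\lambda_i\mid\theta,1/\tau)\,N(\xi_i\mid\mu,1/\omega)\;\pi(\theta)\,\pi(\tau)\,\pi(\mu)\,\pi(\omega)$$
The first product is the likelihood discussed above, the second is the population level, and the trailing factors are the hyperpriors.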
## PREP 2015 Question Authoring - Archived ### Showing Trig Functions in the Answer by Daniele Arcara - Number of replies: 2 I was working on a homework problem, and I ran into a problem when using trig functions. Let's say that I have the function f(x)=sin(2x), and I pick a random point x=a. Then, I ask them to compute the derivative, which would be 2cos(2a). Is there a way to display the answer as 2cos(2a) without manually entering it that way? Here is my current code, which displays the answer as a decimal: DOCUMENT();        # This should be the first executable line in the problem. "PGstandard.pl", "PGML.pl", "PGcourse.pl", ); TEXT(beginproblem()); Context("Numeric"); $a = random(2,9,1);$f = Formula("sin(2x)"); $g =$f->D; $m =$g->eval(x => $a); ####################################################### # # Find the derivative of f(x) = sin(2x) at a random point. # BEGIN_PGML The derivative of [ f(x) = [$f] ] at [ x = [$a] ] is [_______]{"$m"}. END_PGML ####################################################### ENDDOCUMENT();        # This should be the last executable line in the problem. ### Re: Showing Trig Functions in the Answer by Davide Cervone - When you use eval(), the answer will be numeric, and the structure of the formula from which it came will be lost. There is, however, a way to keep a formula that has the original structure. It requires that you turn off automatic reduction of constants, and use substitute() rather than eval(), at least for the first step. Here, Context()->flags->set( reduceConstants => 0, reduceConstantFunctions => 0, ); $m =$g->substitute(x=>$a); would cause$m to be a formula that would display as 2 cos(2 a) where "a" is replaced by the value of $a. But you don't want$m to be a formula, you want it to be a number, but still display as 2cos(2a). The correct answer string is actually a property of the MathObject itself, so you could evaluate it and set that correct answer string to $m->string, but that is painful. Fortunately, Compute() has a side-effect of note only evaluating the expression, but also setting the correct answer to be exactly the string that was passed to it. So Compute("2cos(2*$a)") would use the numeric value as the correct answer (not a formula) while still displaying 2cos(2*a) as the correct answer. So one approach here would be to do Context()->flags->set( reduceConstants => 0, reduceConstantFunctions => 0, ); $m = Compute($g->substitute(x=>$a)->string); This will substitute$a into the original formula, turn the result back into a string, and the reparse that string as a MathObject, setting the correct answer to be the string with the substituted value. Finally, note that you don't want to put quotes around $m in your PGML answer (as that would force the expression to be turned into a string again and re-parsed, which would lose the correct answer that you carefully set with Compute() above. So the crucial part of your problem would be Context("Numeric"); Context()->flags->set( reduceConstants => 0, reduceConstantFunctions => 0, );$a = random(2,9,1); $f = Formula("sin(2x)");$g = $f->D;$m = Compute($g->substitute(x=>$a)->string); ####################################################### # # Find the derivative of f(x) = sin(2x) at a random point. # BEGIN_PGML The derivative of [ f(x) = [$f] ] at [ x = [$a] ] is [_______]{\$m}. END_PGML
Maximum Rooted Connected Expansion Lamprou, Ioannis ORCID: 0000-0001-5337-7336, Martin, Russell ORCID: 0000-0002-7043-503X, Schewe, Sven ORCID: 0000-0002-9093-9518, Sigalas, Ioannis and Zissimopoulos, Vassilis (2018) Maximum Rooted Connected Expansion. Prefetching constitutes a valuable tool toward efficient Web surfing. As a result, estimating the amount of resources that need to be preloaded during a surfer's browsing becomes an important task. In this regard, prefetching can be modeled as a two-player combinatorial game [Fomin et al., Theoretical Computer Science 2014], where a surfer and a marker alternately play on a given graph (representing the Web graph). During its turn, the marker chooses a set of $k$ nodes to mark (prefetch), whereas the surfer, represented as a token resting on graph nodes, moves to a neighboring node (Web resource). The surfer's objective is to reach an unmarked node before all nodes become marked and the marker wins. Intuitively, since the surfer is step-by-step traversing a subset of nodes in the Web graph, a satisfactory prefetching procedure would load in cache all resources lying in the neighborhood of this growing subset. Motivated by the above, we consider the following problem to which we refer to as the Maximum Rooted Connected Expansion (MRCE) problem. Given a graph $G$ and a root node $v_0$, we wish to find a subset of vertices $S$ such that $S$ is connected, $S$ contains $v_0$ and the ratio $|N[S]|/|S|$ is maximized, where $N[S]$ denotes the closed neighborhood of $S$, that is, $N[S]$ contains all nodes in $S$ and all nodes with at least one neighbor in $S$. We prove that the problem is NP-hard even when the input graph $G$ is restricted to be a split graph. On the positive side, we demonstrate a polynomial time approximation scheme for split graphs. Furthermore, we present a $\frac{1}{6}(1-\frac{1}{e})$-approximation algorithm for general graphs based on techniques for the Budgeted Connected Domination problem [Khuller et al., SODA 2014]. Finally, we provide a polynomial-time algorithm for the special case of interval graphs.
# Smoothness of collapsed regions in a capillarity model for soap films created by maggi on 29 Jul 2020 [BibTeX] preprint Inserted: 29 jul 2020 Last Updated: 29 jul 2020 Year: 2020 Abstract: We study generalized minimizers in the soap film capillarity model introduced in arXiv:1807.05200 and arXiv:1907.00551. Collapsed regions of generalized minimizers are shown to be smooth outside of dimensionally small singular sets, which are thus empty in physical dimensions. Since generalized minimizers converge to Plateau's type surfaces in the vanishing volume limit, the fact that collapsed regions cannot exhibit Y-points and T-points (which are possibly present in the limit Plateau's surfaces) gives the first strong indication that singularities of the limit Plateau's surfaces should always be wetted'' by the bulky regions of the approximating generalized minimizers.
# I Example from Bland - Right Artinian but not Left Artinian .. 1. Oct 28, 2016 ### Math Amateur I am reading Paul E. Bland's book, "Rings and Their Modules". I am focused on Section 4.2: Noetherian and Artinian Modules and need some help to fully understand Example 6 on page 109 ... ... In the above example Bland asserts that the matrix ring $\begin{pmatrix} \mathbb{Q} & \mathbb{R} \\ 0 & \mathbb{R} \end{pmatrix}$ is right Artinian but not left Artinian ... My thoughts on how to do this are limited ... but include reasoning from the fact that the entries in the matrix ring are all fields and thus the only ideals are $\{ 0 \}$ and the whole ring(field) ... and so the chains of such ideals should terminate ... but this seems to imply that the matrix ring is both left and right Artinian ... Hope someone can help ... Peter #### Attached Files: • ###### Bland - Example 6 - page 109 - ch 4 - chain conditions.png File size: 42.9 KB Views: 150 2. Oct 28, 2016 ### Staff: Mentor Let me tell you, how I tackled the problem. Firstly, what is right Artinian, and what left Artinian? Artinian means to satisfy the descending chain condition (DCC), i.e. $M \supseteq S_1 \supseteq S_2 \supseteq \ldots \supseteq S_n$ must become stable, that is $S_n=S_{n+1}$ for some $n$. 1. Find such a chain, that does not satisfy the DCC if considered as left modules. 2. Show that all chains satisfy the DCC if considered as right modules. Since $R = \begin{bmatrix}\mathbb{Q} & \mathbb{R} \\ 0 & \mathbb{R}\end{bmatrix}$ our modules are of the form $M=\begin{bmatrix} U & V \\ 0 & W \end{bmatrix}$ with $U\subseteq \mathbb{Q}$ and $V,W \subseteq \mathbb{R}$. Next I multiplied $R\cdot M$ and $M \cdot R$ to find out the difference. We have three "submodules" $U,V,W$ to choose freely in the case of $M$ being a left module, since one counterexample already does the job. Keep it easy! This means, choose the zero module $\{0\}$ where possible, i.e. where there is no difference between right and left, and concentrate on the rest. As in the example of left Noetherian and right Noetherian, multiples of $2^n$ or $2^{-n}$ could be helpful. Now 1. can be solved. Remains 2. Here we have to deal with arbitrary $U,V,W$ and the multiplication $M\cdot R$ from the right hopefully already proves that any chain $M \supseteq S_1 \supseteq S_2 \supseteq \ldots \supseteq S_n$ stabilizes pretty fast, e.g. by showing $S_1 \cdot R = M$. $S_1 \cdot R = M$ means $S_1 \cdot R \supseteq M$ has to be shown, since $S_1 \cdot R \subseteq S_1 \subseteq M$ is clear by the definition of right modules. Edit: I find it easier to reserve the term ideal for two sided ideals and talk of left and right modules instead of left and right ideals, but this is a spleen. You may substitute the word module by ideal in the above. I'm used to say $R$ is Artinian (left, right or both), iff it is as $R$-(left, right or both) module of itself. Last edited: Oct 28, 2016 3. Oct 29, 2016 ### Math Amateur Thanks fresh_42 ... your help is much appreciated... Working through your post and reflecting ... Peter
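Since the thread stops short of writing down an explicit non-terminating chain, here is one sketch along the lines fresh_42 suggests; it is my own addition, so the details should be checked against Bland's conventions. For any $\mathbb{Q}$-subspace $V$ of $\mathbb{R}$ (with $\mathbb{R}$ viewed as a vector space over $\mathbb{Q}$), the set $M_V = \left\{ \begin{pmatrix} 0 & v \\ 0 & 0 \end{pmatrix} : v \in V \right\}$ is a left ideal, because
$$\begin{pmatrix} q & r \\ 0 & s \end{pmatrix}\begin{pmatrix} 0 & v \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & qv \\ 0 & 0 \end{pmatrix}, \qquad qv \in V.$$
Since $\mathbb{R}$ is infinite-dimensional over $\mathbb{Q}$, one can pick $\mathbb{Q}$-linearly independent elements such as $1, \pi, \pi^2, \dots$ and set $V_n = \operatorname{span}_{\mathbb{Q}}\{\pi^k : k \ge n\}$, which gives a strictly descending chain of left ideals $M_{V_0} \supsetneq M_{V_1} \supsetneq M_{V_2} \supsetneq \cdots$, so the descending chain condition fails on the left. Multiplying on the right instead gives
$$\begin{pmatrix} 0 & v \\ 0 & 0 \end{pmatrix}\begin{pmatrix} q & r \\ 0 & s \end{pmatrix} = \begin{pmatrix} 0 & vs \\ 0 & 0 \end{pmatrix},$$
so a set of this shape is a right ideal only if $V$ absorbs multiplication by all of $\mathbb{R}$, forcing $V = 0$ or $V = \mathbb{R}$; this asymmetry is the heart of why the chain condition does hold on the right.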
## What is the relationship between interface width and mesh/grid spacing in phase-field models? (back to top) It is important to understand the difference between interface width and grid spacing in phase-field models. The interface width is a function of model parameters that enter the governing equations, and is not dependent on the grid spacing or other details of the discretization. For example, in the most basic Cahn-Hilliard model, the interface width is a function of the gradient energy coefficient $$\kappa$$\$ and the height of the free energy barrier between equilibrium phases. When you change the grid spacing and keep the system dimensions the same, you change the number of elements in the interfacial region, but not the width of the interface in the coordinate system you have chosen. So if you were to keep the system dimensions the same, simulate an interface between two phases with increasingly finer resolution, and plot the results on top of one another, you would see the same interface shape (width) represented with an increasing number of data points in the interfacial region, but the interface width in your coordinate system would not change. ## How do I know if the mesh spacing I am using in my phase-field simulation is fine enough? (back to top) If you keep the governing equations of the model the same, as you make the mesh finer, as long as the number of elements in the interface is high enough, you should get the same physical results. For the basic Cahn-Hilliard model I referred to earlier, a good rule of thumb would be that you would want at least 4-5 elements in the interface (defined as 0.1 < c < 0.9 if the equilibrium values of c are 0 and 1) if you are using linear Lagrange elements. You may be able to get away with fewer elements than that, and if you are looking only at qualitative differences after a few time steps, you may not notice any changes. But if you want to lower resolution below the 4-5 elements through the interface that I mentioned you probably should plot the system energy as a function of time to verify decreasing resolution is not affecting the system's evolution.
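A quick numerical illustration of the distinction above, assuming a generic tanh-shaped equilibrium profile c(x) = ½[1 + tanh(x/δ)] (the exact relation between δ and the gradient energy coefficient and barrier height depends on the specific model, so δ is simply treated as given here): refining the grid increases the number of points inside the interface but leaves the physical interface width unchanged.

```python
# Count grid points inside the interface (0.1 < c < 0.9) for a fixed tanh
# profile as the grid is refined: the count grows, the width does not.
import numpy as np

delta = 1.0          # interface-width parameter of the assumed tanh profile
L = 20.0             # domain half-length, same physical system in every case

for n_cells in (50, 100, 200, 400):
    x = np.linspace(-L, L, n_cells + 1)
    c = 0.5 * (1.0 + np.tanh(x / delta))
    inside = np.sum((c > 0.1) & (c < 0.9))
    # physical width between c = 0.1 and c = 0.9 is fixed: 2 * delta * atanh(0.8)
    width = 2.0 * delta * np.arctanh(0.8)
    dx = x[1] - x[0]
    print(f"dx = {dx:.3f}: {inside} points in the interface, width = {width:.3f}")
```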
Bott Periodicity Seminar The seminar is going to be on Bott periodicity. After the introductory meeting, a speaker will present a proof of the Bott periodicity theorem every week. Tentative meeting location and times are as follows: • Location: SC 232 • Day/Time: Thursday 5:15pm to 6:15pm The first meeting will be on 7th February 2019. # Schedule #### 7 Feb 2019, Introductory Meeting (Dexter Chua) I will go through the various ways to state the Bott periodicity theorem. Afterwards we will discuss some administrative affairs. Notes: HTML PDF #### 14 Feb 2019, Morse theory (Kai Xu) We will be meeting at 5:30pm instead of the usual time. #### 21 Feb 2019, Simplicial methods (Dexter Chua) I will present the proof of Bruno Harris in his paper Bott Periodicity via Simplicial Spaces. The proof is extremely short if one assumes the simplicial machinery employed, and the talk will focus on explaining the underlying machinery that goes into the proof. #### 4 Apr 2019, Fredholm operators (Maxim Jeffs) We will be meeting at 5:30pm instead of the usual time. The study of holomorphic disks with Lagrangian boundary conditions is an important tool in symplectic geometry. In this talk, I'll explain how to use them to give a proof of orthogonal Bott periodicity, by combining ideas of Atiyah on Fredholm operators with unpublished work of de Silva and Seidel. 𝄆 $\mathbf{Z}\;\; \mathbf{Z}_2\;\mathbf{Z}_2\;0\;\;\mathbf{Z}\;\;0\;\;0\;\;0$ 𝄇
Volume 398 - The European Physical Society Conference on High Energy Physics (EPS-HEP2021) - T06: QCD and Hadronic Physics Measurement of the primary Lund jet plane density in pp collisions at $\sqrt{s} = \rm{13}$ TeV with ALICE L.B. Havener Full text: pdf Pre-published on: January 17, 2022 Abstract Precision measurements of jet substructure are used as a probe of fundamental QCD processes. The primary Lund jet plane density is a two-dimensional visual representation of the radiation off the primary emitter within the jet that can be used to isolate different regions of the QCD phase space. A new measurement of the primary Lund plane density for inclusive charged-particle jets in the transverse momentum range of 20 to 120 GeV/$c$ in pp collisions at $\sqrt{s} =$ 13 TeV with the ALICE detector will be presented. This is the first measurement of the Lund plane density in an intermediate jet $p_{\rm T}$ range where hadronization and underlying-event effects play a dominant role. The projections of the Lund plane density onto the splitting scale $k_{\rm T}$ and splitting angle $\Delta{R}$ axes are shown, highlighting the perturbative/non-perturbative and wide/narrow-angle regions of the splitting phase space. Through a 3D unfolding procedure, the Lund plane density is corrected for detector effects, which allows for quantitative comparisons to MC generators and provides insight into how well generators describe different features of the parton shower and hadronization. DOI: https://doi.org/10.22323/1.398.0364 Open Access Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
# prove whether functions are injective, surjective or bijective Hello, I have a few questions regarding the above mentioned task. One has to show whether or not these functions are injective, surjective or bijective. This seems straightforward for most of these functions: (1) Not injective since some values are hit multiple times. Also not surjective since not every y has a corresponding x. (2) Bijective, since there is a one to one correspondance. (3) Bijective, since it's only from the natural numbers, so every value will be hit exactly once My question now is: How do I properly phrase this? Some of these seem quite intuitive but I'm struggling with the correct notation of this. • $(4)$ is surjective as $\{n\}\in \mathcal P(\Bbb N)\backslash\{\emptyset\}$ for each $n\in \Bbb N$. But $(4)$ is not injective as $\min\{1,2\}=1=\min\{1,3\}$. – Mathlover Sep 22 '20 at 8:35 • You should use the definitions of the stated concepts. I.e. if you want to show that a function is not injective then provide two points $x_1\neq x_2$ such that $f(x_1)=f(x_2)$. If you want to prove the contrary, then show that $f(x_1)=f(x_2)$ implies that $x_1=x_2$. – Maximini Sep 22 '20 at 8:39 • For (4), it is important to know whether $0\in\Bbb N$. This is not universally agreed on one way or the other, so it should be clarified. – Arthur Sep 22 '20 at 8:39 • okay I'll try to do that thank you! 0 is an element of the natural numbers in this course. I was not sure if it's enough if I just take 2 points and calculate them to show that there exists an injection or not – 23408924 Sep 22 '20 at 8:41 • @0-thUser thanks a lot!. I'm never quite sure if this is "enough" for a proof or if i need to provide more details – 23408924 Sep 22 '20 at 8:48 $$\bullet (1)$$ takes $$-1$$ and $$0$$ to the same place, so it isn't injective. I doubt it's surjective, either, because we need a solution to $$y=x^2+x$$ for any $$y$$. Take $$y=-1$$, say. $$x^2+x+1$$ has no real roots, since the discriminant is negative. Complete the square and you get $$y=(x+1/2)^2-1/4$$. Thus you can graph and "see" that nothing less than $$-1/4$$ is hit. It's a parabola (opening upwards), after all. $$\bullet(3)$$ isn't surjective: not every natural is a fourth power $$\bullet (4)$$ is of course not injective, but surjective: two different sets can have the same $$\inf$$; there is a set with any natural as $$\inf$$ • oh you're right. I completely forgot. So (3) can only be injective. – 23408924 Sep 22 '20 at 8:40 • $(2)$ looks good. there is an inverse. – Chris Custer Sep 22 '20 at 8:44 • Another way of putting it in the first one is that a parabola doesn't pass the horizontal line test, so is not injective. And has a max or min, so is not surjective. – Chris Custer Sep 22 '20 at 9:21 • Thanks! yeah I mean this seems intuitive, however i mostly struggle with the notation. I'm not sure if they accept it without enough details though. But i think i should be able to write it down correctly now – 23408924 Sep 22 '20 at 9:23 • Great! The right amount of rigor can be an issue... Reminds me of the time my linear algebra professor drew waving hands on the board (to indicate hand waving argument). – Chris Custer Sep 22 '20 at 9:25 (4) It is not injective, since many subsets may share the same smallest element. But it is surjective, since for any natural number $$n$$ there is a larger one $$n+1$$ such that both lie in the some subset in $$\mathcal{P}(\mathbb N)$$.
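To address the notation question directly, here is one way to write the arguments from the answers above as short, complete proofs. I am assuming, as the answers and comments suggest, that (1) is $f:\mathbb{R}\to\mathbb{R}$, $f(x)=x^2+x$, and that (4) is the map sending a non-empty subset of $\mathbb{N}$ to its smallest element; adjust to the exact statement of your exercise.

```latex
% (1) f : \mathbb{R} \to \mathbb{R}, \quad f(x) = x^2 + x
\textbf{Not injective:} $f(-1) = (-1)^2 + (-1) = 0 = f(0)$ with $-1 \neq 0$,
so there exist $x_1 \neq x_2$ with $f(x_1) = f(x_2)$.

\textbf{Not surjective:} for $y = -1$ the equation $x^2 + x = -1$, i.e.\
$x^2 + x + 1 = 0$, has discriminant $1 - 4 < 0$, so no $x \in \mathbb{R}$
satisfies $f(x) = -1$; hence $-1$ is not attained.

% (4) g : \mathcal{P}(\mathbb{N}) \setminus \{\emptyset\} \to \mathbb{N}, \quad g(A) = \min A
\textbf{Not injective:} $g(\{1,2\}) = 1 = g(\{1,3\})$ although $\{1,2\} \neq \{1,3\}$.

\textbf{Surjective:} for every $n \in \mathbb{N}$ the set $\{n\}$ is a non-empty
subset of $\mathbb{N}$ and $g(\{n\}) = \min\{n\} = n$.
```

The general pattern, as one of the comments says: to disprove injectivity, exhibit two explicit points with equal images; to disprove surjectivity, exhibit one explicit value with no preimage; to prove surjectivity, construct a preimage for an arbitrary element of the codomain.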
# Modal Prestressed Prestressed eigenvalue analysis is governed by:(1) $\left(\overline{K} + \lambda M\right)A = 0$ Where, $\overline{K}$ Prestress stiffness matrix $M$ Mass matrix $A$ Eigenvector $\lambda$ Eigenvalues Prestressed loadcases can be either linear or non-linear. All non-linearity types are supported. Depending on preloading conditions, the resulting effect could be a weakened or stiffened structure. • If the preloading is compressive, it typically has a weakening effect on the structure (for example, column or pillar under compressive preloading). • If the preloading is tensile, it typically has a stiffening effect on the structure (for example, a tensioned cable or string).
# Introduction to Elasticity/Concentrated force on half plane ## Concentrated Force on a Half-Plane Concentrated force on a half plane From the Flamant Solution \begin{align} F_1 + 2\int_{\alpha}^{\beta} \left(\frac{C_1\cos\theta - C_3\sin\theta}{a}\right)a\cos\theta d\theta & = 0 \\ F_2 + 2\int_{\alpha}^{\beta} \left(\frac{C_1\cos\theta - C_3\sin\theta}{a}\right)a\sin\theta d\theta & = 0 \end{align} and $\sigma_{rr} = \frac{2C_1\cos\theta}{r} + \frac{2C_3\sin\theta}{r} ~;~~ \sigma_{r\theta} = \sigma_{\theta\theta} = 0$ If $\alpha = -\pi\,$ and$\beta = 0\,$, we obtain the special case of a concentrated force acting on a half-plane. Then, \begin{align} F_1 + 2\int_{-\pi}^{0} \left(C_1\cos^2\theta - \frac{C_3}{2}\sin(2\theta)\right) d\theta & = 0 \\ F_2 + 2\int_{-\pi}^{0} \left(\frac{C_1}{2}\sin(2\theta) - C_3\sin^2\theta\right) d\theta & = 0 \end{align} or, \begin{align} F_1 + \pi C_1 & = 0 \\ F_2 - \pi C_3 & = 0 \end{align} Therefore, $C_1 = - \frac{F_1}{\pi} ~;~~ C_3 = \frac{F_2}{\pi}$ The stresses are $\sigma_{rr} = -\frac{2F_1\cos\theta}{\pi r} - \frac{2F_2\sin\theta}{\pi r} ~;~~ \sigma_{r\theta} = \sigma_{\theta\theta} = 0$ The stress $\sigma_{rr}\,$ is obviously the superposition of the stresses due to $F_1\,$ and $F_2\,$, applied separately to the half-plane. ### Problem 1: Stresses and displacements due to $F_2\,$ The tensile force $F_2\,$ produces the stress field $\sigma_{rr} =- \frac{2F_2\sin\theta}{\pi r} ~;~~ \sigma_{r\theta} = \sigma_{\theta\theta} = 0$ Stress due to concentrated force $F_2\,$ on a half plane The stress function is $\varphi = \frac{F_2}{\pi} r\theta\cos\theta$ Hence, the displacements from Michell's solution are \begin{align} 2\mu u_r & = \frac{F_2}{2\pi}\left[(\kappa-1)\theta\cos\theta + \sin\theta - (\kappa+1)\ln(r)\sin\theta\right] \\ 2\mu u_{\theta} & = \frac{F_2}{2\pi}\left[-(\kappa-1)\theta\sin\theta - \cos\theta - (\kappa+1)\ln(r)\cos\theta\right] \end{align} At $\theta = 0$, ($x_1 > 0$, $x_2 = 0$), \begin{align} 2\mu u_r = 2\mu u_1 & = 0 \\ 2\mu u_{\theta} = 2\mu u_2 & = \frac{F_2}{2\pi}\left[-1 - (\kappa+1)\ln(r)\right] \end{align} At $\theta = -\pi$, ($x_1 < 0$, $x_2 = 0$), \begin{align} 2\mu u_r = -2\mu u_1 & =\frac{F_2}{2\pi}(\kappa-1)\\ 2\mu u_{\theta} = -2\mu u_2 & = \frac{F_2}{2\pi}\left[1 + (\kappa+1)\ln(r)\right] \end{align} where \begin{align} \kappa = 3 - 4\nu & & \text{plane strain} \\ \kappa = \frac{3 - \nu}{1+\nu} & & \text{plane stress} \end{align} Since we expect the solution to be symmetric about $x = 0\,$, we superpose a rigid body displacement \begin{align} 2\mu u_1 & = \frac{F_2}{4\pi}(\kappa-1)\\ 2\mu u_2 & = \frac{F_2}{2\pi} \end{align} The displacements are \begin{align} u_1 & = \frac{F_2(\kappa-1)\text{sign}(x_1)}{8\mu} \\ u_2 & = - \frac{F_2(\kappa+1)\ln|x_1|}{4\pi\mu} \end{align} where $\text{sign}(x) = \begin{cases} +1 & x > 0 \\ -1 & x < 0 \end{cases}$ and $r = |x|\,$ on $y = 0\,$. ### Problem 2: Stresses and displacements due to $F_1\,$ The tangential force $F_1\,$ produces the stress field $\sigma_{rr} =- \frac{2F_1\cos\theta}{\pi r} ~;~~ \sigma_{r\theta} = \sigma_{\theta\theta} = 0$ Stress due to concentrated force $F_1\,$ on a half plane The displacements are \begin{align} u_1 & = - \frac{F_1(\kappa+1)\ln|x_1|}{4\pi\mu} \\ u_2 & = - \frac{F_1(\kappa-1)\text{sign}(x_1)}{8\mu} \end{align} ### Stresses and displacements due to $F_1 + F_2\,$ Superpose the two solutions. 
The stresses are $\sigma_{rr} = -\frac{2F_1\cos\theta}{\pi r} - \frac{2F_2\sin\theta}{\pi r} ~;~~ \sigma_{r\theta} = \sigma_{\theta\theta} = 0$ The displacements are \begin{align} u_1 & = - \frac{F_1(\kappa+1)\ln|x_1|}{4\pi\mu} + \frac{F_2(\kappa-1)\text{sign}(x_1)}{8\mu} \\ u_2 & = - \frac{F_2(\kappa+1)\ln|x_1|}{4\pi\mu} - \frac{F_1(\kappa-1)\text{sign}(x_1)}{8\mu} \end{align}
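For a quick numerical check of the superposed surface displacements, the small C sketch below evaluates $u_1$ and $u_2$ on the surface $x_2 = 0$ from the formulas above. The function and variable names are mine; the material constants in `main` are illustrative. Note that the logarithm makes the result depend on the length unit chosen for $x_1$, reflecting the arbitrary datum inherent in the Flamant solution.

```c
#include <math.h>
#include <stdio.h>

static double sign(double x) { return x > 0 ? 1.0 : -1.0; }

/* Surface displacements (x2 = 0) of a half-plane loaded at the origin by a
   tangential force F1 and a normal force F2, per the superposed formulas.
   mu = shear modulus, nu = Poisson's ratio. */
void halfplane_surface_disp(double F1, double F2, double mu, double nu,
                            int plane_strain, double x1,
                            double *u1, double *u2) {
  double kappa = plane_strain ? 3.0 - 4.0 * nu : (3.0 - nu) / (1.0 + nu);
  double pi = 4.0 * atan(1.0);
  *u1 = -F1 * (kappa + 1.0) * log(fabs(x1)) / (4.0 * pi * mu)
        + F2 * (kappa - 1.0) * sign(x1) / (8.0 * mu);
  *u2 = -F2 * (kappa + 1.0) * log(fabs(x1)) / (4.0 * pi * mu)
        - F1 * (kappa - 1.0) * sign(x1) / (8.0 * mu);
}

int main(void) {
  double u1, u2;
  /* Illustrative numbers only: steel-like mu and nu, unit loads, x1 = 0.1. */
  halfplane_surface_disp(1.0, 1.0, 80e9, 0.3, 1, 0.1, &u1, &u2);
  printf("u1 = %.3e, u2 = %.3e\n", u1, u2);
  return 0;
}
```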
• verb: • to read text (of a newspaper etc.) and edit it to correct mistakes. • to edit and correct (written or printed material).
## The maximum number of times the same distance can occur among the vertices of a convex $n$-gon is O($n\log n$) Published in: Journal of Combinatorial Theory, Series A, 94, 1, 178-179 Year: 2001 Note: Professor Pach's number: [153]
# How to show independence and uniform distribution of hash codes from k-wise independent hash functions?

### Question

Most definitions of a $$k$$-wise independent family of hash functions I have encountered state that a family $$H$$ of hash functions from $$D$$ to $$R$$ is k-wise independent if for all distinct $$x_1, x_2,\dots, x_k \in D$$ and $$y_1, y_2,\dots, y_k \in R$$, $$\mathbb{P}_{h \in H}(h(x_1) = y_1, h(x_2) = y_2, \dots, h(x_k) = y_k) = \frac{1}{|R|^k}$$ The Wikipedia article on k-wise independent hash functions (which uses the above definition) claims that the definition is equivalent to the following two conditions: (i) For all $$x \in D$$, $$h(x)$$ is uniformly distributed in $$R$$ given that $$h$$ is randomly chosen from $$H$$. (ii) For any fixed distinct keys $$x_1, x_2,\dots, x_k \in D$$, as $$h$$ is randomly drawn from $$H$$, the hash codes $$h(x_1), h(x_2), \dots, h(x_k)$$ are independent random variables. It is not obvious to me how one proves (i) from the above definition without explicitly assuming (ii) in the definition as well (and vice-versa). How is the definition sufficient for proving both (i) and (ii)?

### Answer

Throughout, we assume that $$|D| \geq k$$. Suppose that $$H$$ satisfies, for all distinct $$x_1,\dots,x_k \in D$$ and all $$y_1,\ldots,y_k \in R$$, $$\Pr_{h \in H}[h(x_1)=y_1,\ldots,h(x_k)=y_k] = \frac{1}{|R|^k}.$$ Now let $$x \in D$$ be arbitrary. Since $$|D| \geq k$$, we can find $$x_2,\ldots,x_k \in D$$ such that $$x,x_2,\ldots,x_k$$ are distinct. For each $$y \in R$$, $$\Pr_{h \in H}[h(x)=y] = \sum_{y_2,\ldots,y_k \in R} \Pr_{h \in H}[h(x)=y,h(x_2)=y_2,\ldots,h(x_k)=y_k] = \\ \sum_{y_2,\ldots,y_k \in R} \frac{1}{|R|^k} = \frac{|R|^{k-1}}{|R|^k} = \frac{1}{|R|}.$$ This proves (i). To see (ii), let $$x_1,\ldots,x_k \in D$$ be distinct. Then for all $$y_1,\ldots,y_k$$, $$\Pr_{h \in H}[h(x_1)=y_1,\ldots,h(x_k)=y_k] = \frac{1}{|R|^k} = \prod_{i=1}^k \frac{1}{|R|} = \prod_{i=1}^k \Pr_{h \in H}[h(x_i) = y_i].$$ In the other direction, suppose that (i) and (ii) hold. Then for all distinct $$x_1,\ldots,x_k \in D$$ and $$y_1,\ldots,y_k \in R$$, $$\Pr_{h \in H}[h(x_1)=y_1,\ldots,h(x_k)=y_k] = \prod_{i=1}^k \Pr_{h \in H}[h(x_i) = y_i] = \prod_{i=1}^k \frac{1}{|R|} = \frac{1}{|R|^k} .$$
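For a concrete family that satisfies the definition above, the classic construction evaluates a random polynomial of degree $$k-1$$ over a prime field: with $$D = R = \mathbb{Z}_p$$ and independent uniform coefficients $$a_0,\dots,a_{k-1} \in \mathbb{Z}_p$$, the map $$h(x) = a_{k-1}x^{k-1} + \dots + a_1 x + a_0 \bmod p$$ is k-wise independent. The C sketch below is illustrative only (the names and the choice of prime are mine); note that if you reduce the result further, e.g. modulo a table size m, uniformity only holds approximately, and `rand()` is used purely for brevity.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define K 4                                   /* degree of independence */
static const uint64_t P = 2147483647ULL;      /* prime 2^31 - 1; D = R = Z_p */

typedef struct { uint64_t a[K]; } kwise_hash; /* coefficients a_0 .. a_{K-1} */

/* Draw a random member of the family: each coefficient uniform-ish in Z_p.
   (rand() is only for illustration; use a proper RNG in practice.) */
kwise_hash kwise_sample(void) {
  kwise_hash h;
  for (int i = 0; i < K; i++)
    h.a[i] = (((uint64_t)rand() << 31) ^ (uint64_t)rand()) % P;
  return h;
}

/* h(x) = a_{K-1} x^{K-1} + ... + a_1 x + a_0  (mod p), in Horner form.
   Intermediate products fit in 64 bits because p < 2^31. */
uint64_t kwise_eval(const kwise_hash *h, uint64_t x) {
  x %= P;
  uint64_t v = 0;
  for (int i = K - 1; i >= 0; i--)
    v = (v * x + h->a[i]) % P;
  return v;
}

int main(void) {
  kwise_hash h = kwise_sample();
  for (uint64_t x = 0; x < 4; x++)
    printf("h(%llu) = %llu\n", (unsigned long long)x,
           (unsigned long long)kwise_eval(&h, x));
  return 0;
}
```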
# A cubical box of widths $L_x = L_y = L_z = L$ contains an electron. What multiple of $h^2/8mL^2$, where m is the electron mass, is (a) the energy of the electron's ground state, (b) the energy of its second excited state, and (c) the difference between the energies of its second and third excited states? How many degenerate states have the energy of (d) the first excited state and (e) the fifth excited state?

Source: Fundamentals of Physics, 10th Edition, David Halliday (ISBN 9781118230718), Problem 28P, page 1216.

(a) The multiple of $\frac{h^2}{8mL^2}$ is 3.
(b) The multiple of $\frac{h^2}{8mL^2}$ is 9.
(c) The difference is $E_{1,1,3} - E_{2,2,1} = 2\left(\frac{h^2}{8mL^2}\right)$.
(d) The number of degenerate states is 3.
(e) The number of degenerate states is 6.

## Step 1: The energy of a three-dimensional electron trap

The energy of an electron trapped in a three-dimensional well is given by

$E_{n_x,n_y,n_z} = \frac{h^2}{8m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2}\right)$ ….. (1)

Here, $n_x$, $n_y$, and $n_z$ are the quantum numbers for which the electron's matter wave fits in the well widths $L_x$, $L_y$, and $L_z$ respectively, h is the Planck constant, and m is the mass of the electron.

## Step 2: (a) Find the multiple of $h^2/8mL^2$ for the energy of the electron's ground state

The ground state occurs at $n_x = 1$, $n_y = 1$, and $n_z = 1$. Substituting $L_x = L_y = L_z = L$ and $(n_x, n_y, n_z) = (1, 1, 1)$ in equation (1):

$E_{1,1,1} = \frac{h^2}{8mL^2}\left(1^2 + 1^2 + 1^2\right) = 3\left(\frac{h^2}{8mL^2}\right)$

Therefore, the multiple of $\frac{h^2}{8mL^2}$ is 3.

## Step 3: (b) Find the multiple of $h^2/8mL^2$ for the energy of its second excited state

The second excited state occurs at $n_x = 2$, $n_y = 2$, and $n_z = 1$. Substituting in equation (1):

$E_{2,2,1} = \frac{h^2}{8mL^2}\left(2^2 + 2^2 + 1^2\right) = 9\left(\frac{h^2}{8mL^2}\right)$

Therefore, the multiple of $\frac{h^2}{8mL^2}$ is 9.

## Step 4: (c) Find the multiple of $h^2/8mL^2$ for the difference between the energies of its second and third excited states

The third excited state occurs at $(n_x, n_y, n_z) = (1, 1, 3)$ and its permutations. Substituting in equation (1):

$E_{1,1,3} = \frac{h^2}{8mL^2}\left(1^2 + 1^2 + 3^2\right) = 11\left(\frac{h^2}{8mL^2}\right)$

The difference in energy between the third and second excited states is

$E_{1,1,3} - E_{2,2,1} = 11\left(\frac{h^2}{8mL^2}\right) - 9\left(\frac{h^2}{8mL^2}\right) = 2\left(\frac{h^2}{8mL^2}\right)$

Therefore, the difference between the energies of the second and third excited states is $2\left(\frac{h^2}{8mL^2}\right)$.

## Step 5: (d) The number of degenerate states that have the energy of the first excited state

The first excited state occurs at $(n_x, n_y, n_z) = (2, 1, 1)$, $(1, 2, 1)$, and $(1, 1, 2)$. Therefore, the number of degenerate states is 3.

## Step 6: (e) Find the number of degenerate states that have the energy of the fifth excited state

The fifth excited state occurs at $(n_x, n_y, n_z) = (1, 2, 3), (1, 3, 2), (2, 3, 1), (2, 1, 3), (3, 1, 2), (3, 2, 1)$. Therefore, the number of degenerate states is 6.
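The level ordering and degeneracies used in parts (a)-(e) can be checked by brute force. The following C sketch is my own illustration (not part of the textbook solution): it tabulates the energies in units of $h^2/8mL^2$ for a cubical box and counts how many $(n_x, n_y, n_z)$ triples share each value.

```c
#include <stdio.h>

/* Enumerate E = (nx^2 + ny^2 + nz^2) in units of h^2/8mL^2 for a cubical
   infinite well and the degeneracy of each level.  NMAX bounds the search;
   4 is enough for the first several levels. */
#define NMAX 4
#define EMAX (3 * NMAX * NMAX)

int main(void) {
  int degeneracy[EMAX + 1] = {0};
  for (int nx = 1; nx <= NMAX; nx++)
    for (int ny = 1; ny <= NMAX; ny++)
      for (int nz = 1; nz <= NMAX; nz++)
        degeneracy[nx * nx + ny * ny + nz * nz]++;

  int level = 0;   /* level 0 = ground state */
  for (int e = 3; e <= EMAX && level <= 6; e++)
    if (degeneracy[e] > 0) {
      printf("level %d: E = %2d * h^2/8mL^2, degeneracy %d\n",
             level, e, degeneracy[e]);
      level++;
    }
  return 0;
}
```

The printed sequence 3, 6, 9, 11, 12, 14, ... with degeneracies 1, 3, 3, 3, 1, 6, ... reproduces the answers above: ground state 3, second excited state 9, third excited state 11, three degenerate states at the first excited level, and six at the fifth.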
# Single-processor Computing 1.2 : Modern processors 1.2.1 : The processing cores 1.2.1.1 : Instruction handling 1.2.1.2 : Floating point units 1.2.1.3 : Pipelining 1.2.1.4 : Peak performance 1.2.2 : 8-bit, 16-bit, 32-bit, 64-bit 1.2.3 : Caches: on-chip memory 1.2.4 : Graphics, controllers, special purpose hardware 1.2.5 : Superscalar processing and instruction-level parallelism 1.3 : Memory Hierarchies 1.3.1 : Busses 1.3.2 : Latency and Bandwidth 1.3.3 : Registers 1.3.4 : Caches 1.3.4.1 : A motivating example 1.3.4.2 : Cache tags 1.3.4.3 : Cache levels, speed and size 1.3.4.4 : Types of cache misses 1.3.4.5 : Reuse is the name of the game 1.3.4.6 : Replacement policies 1.3.4.7 : Cache lines 1.3.4.8 : Cache mapping 1.3.4.9 : Direct mapped caches 1.3.4.10 : Associative caches 1.3.4.11 : Cache memory versus regular memory 1.3.5 : Prefetch streams 1.3.6 : Concurrency and memory transfer 1.3.7 : Memory banks 1.3.8 : TLB, pages, and virtual memory 1.3.8.1 : Large pages 1.3.8.2 : TLB 1.3.9 : Multicore architectures 1.3.10 : Cache coherence 1.3.10.1 : Solutions to cache coherence 1.3.10.2 : False sharing 1.3.10.3 : Tag directories 1.3.11 : Computations on multicore chips 1.3.12 : TLB shootdown 1.4 : Node architecture and sockets 1.5 : Locality and data reuse 1.5.1 : Data reuse and arithmetic intensity 1.5.1.1 : Examples 1.5.1.2 : The roofline model 1.5.2 : Locality 1.5.2.1 : Temporal locality 1.5.2.2 : Spatial locality 1.5.2.3 : Examples of locality 1.5.2.4 : Core locality 1.5.3 : Programming strategies for high performance 1.5.4 : Peak performance 1.5.5 : Pipelining 1.5.6 : Cache size 1.5.7 : Cache lines 1.5.8 : TLB 1.5.9 : Cache associativity 1.5.10 : Loop nests 1.5.11 : Loop tiling 1.5.12 : Optimization strategies 1.5.13 : Cache aware and cache oblivious programming 1.5.14 : Case study: Matrix-vector product 1.6 : Further topics 1.6.1 : Power consumption 1.6.2 : Derivation of scaling properties 1.6.3 : Multicore 1.6.4 : Total computer power 1.6.5 : Operating system effects 1.7 : FPGA computing # 1 Single-processor Computing In order to write efficient scientific codes, it is important to understand computer architecture. The difference in speed between two codes that compute the same result can range from a few percent to orders of magnitude, depending only on factors relating to how well the algorithms are coded for the processor architecture. Clearly, it is not enough to have an algorithm and put it on the computer': some knowledge of computer architecture is advisable, sometimes crucial. Some problems can be solved on a single CPU , others need a parallel computer that comprises more than one processor. We will go into detail on parallel computers in the next chapter, but even for parallel processing, it is necessary to understand the invidual CPU . In this chapter, we will focus on what goes on inside a CPU and its memory system. We start with a brief general discussion of how instructions are handled, then we will look into the arithmetic processing in the processor core; last but not least, we will devote much attention to the movement of data between memory and the processor, and inside the processor. This latter point is, maybe unexpectedly, very important, since memory access is typically much slower than executing the processor's instructions, making it the determining factor in a program's performance; the days when flop counting' was the key to predicting a code's performance are long gone. 
This discrepancy is in fact a growing trend, so the issue of dealing with memory traffic has been becoming more important over time, rather than going away. This chapter will give you a basic understanding of the issues involved in CPU design, how it affects performance, and how you can code for optimal performance. For much more detail, see an online book about PC architecture  [Karbo:book] , and the standard work about computer architecture, Hennesey and Patterson  [HennessyPatterson] . # 1.1 The Von Neumann architecture Top > The Von Neumann architecture While computers, and most relevantly for this chapter, their processors, can differ in any number of details, they also have many aspects in common. On a very high level of abstraction, many architectures can be described as von Neumann architectures . This describes a design with an undivided memory that stores both program and data (stored program'), and a processing unit that executes the instructions, operating on the data in fetch, execute, store cycle' % . This setup distinguishes modern processors for the very earliest, and some special purpose contemporary, designs where the program was hard-wired. It also allows programs to modify themselves or generate other programs, since instructions and data are in the same storage. This allows us to have editors and compilers: the computer treats program code as data to operate on % . In this book we will not explicitly discuss compilers, the programs that translate high level languages to machine instructions. However, on occasion we will discuss how a program at high level can be written to ensure efficiency at the low level. In scientific computing, however, we typically do not pay much attention to program code, focusing almost exclusively on data and how it is moved about during program execution. For most practical purposes it is as if program and data are stored separately. The little that is essential about instruction handling can be described as follows. The machine instructions that a processor executes, as opposed to the higher level languages users write in, typically specify the name of an operation, as well as of the locations of the operands and the result. These locations are not expressed as memory locations, but as registers : a small number of named memory locations that are part of the CPU % . As an example, here is a simple C routine void store(double *a, double *b, double *c) { *c = *a + *b; } and its X86 assembler output, obtained by \verb+gcc -O2 -S -o - store.c+: .text .p2align 4,,15 .globl store .type store, @function store: movsd (%rdi), %xmm0 # Load *a to %xmm0 movsd %xmm0, (%rdx) # Store to *c ret The instructions here are: • A load from memory to register; • Writing back the result to memory. Each instruction is processed as follows: • Instruction fetch: the next instruction according to the program counter is loaded into the processor. We will ignore the questions of how and from where this happens. • Instruction decode: the processor inspects the instruction to determine the operation and the operands. • Memory fetch: if necessary, data is brought from memory into a register. • Execution: the operation is executed, reading data from registers and writing it back to a register. • Write-back: for store operations, the register contents is written back to memory. The case of array data is a little more complicated: the element loaded (or stored) is then determined as the base address of the array plus an offset. 
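To make the "base address plus offset" computation concrete, here is a small C fragment and the addressing it implies. This is my own illustration, not an example from the text; the 8-byte size assumes `double`, and the register names in the comment are only indicative of what a compiler might choose.

```c
#include <stdio.h>

int main(void) {
  double a[4] = {1.0, 2.0, 3.0, 4.0};
  int i = 2;
  /* a[i] is compiled as a load from (base address of a) + i * sizeof(double).
     On x86-64 this is typically a single scaled-index load such as
       movsd (%rdi,%rsi,8), %xmm0     # base in %rdi, index in %rsi, scale 8
     with the register assignment depending on context. */
  printf("base address     : %p\n", (void *)a);
  printf("address of a[%d]  : %p\n", i, (void *)&a[i]);
  printf("base + %d*8 bytes : %p\n", i, (void *)((char *)a + i * sizeof(double)));
  printf("value            : %g\n", a[i]);
  return 0;
}
```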
In a way, then, the modern CPU looks to the programmer like a von Neumann machine. There are various ways in which this is not so. For one, while memory looks randomly addressable % , in practice there is a concept of locality : once a data item has been loaded, nearby items are more efficient to load, and reloading the initial item is also faster. Another complication to this story of simple loading of data is that contemporary CPU operate on several instructions simultaneously, which are said to be in flight', meaning that they are in various stages of completion. Of course, together with these simultaneous instructions, their inputs and outputs are also being moved between memory and processor in an overlapping manner. This is the basic idea of the superscalar CPU architecture, and is also referred to as \indexacf{ILP}. Thus, while each instruction can take several clock cycles to complete, a processor can complete one instruction per cycle in favourable circumstances; in some cases more than one instruction can be finished per cycle. The main statistic that is quoted about CPU is their Gigahertz rating, implying that the speed of the processor is the main determining factor of a computer's performance. While speed obviously correlates with performance, the story is more complicated. Some algorithms are cpu-bound , and the speed of the processor is indeed the most important factor; other algorithms are memory-bound , and aspects such as bus speed and cache size, to be discussed later, become important. In scientific computing, this second category is in fact quite prominent, so in this chapter we will devote plenty of attention to the process that moves data from memory to the processor, and we will devote relatively little attention to the actual processor. # 1.2 Modern processors Top > Modern processors Modern processors are quite complicated, and in this section we will give a short tour of what their constituent parts. Figure  is a picture of the die of an Intel Sandy Bridge processor. This chip is about an inch in size and contains close to a billion transistors. ## 1.2.1 The processing cores Top > Modern processors > The processing cores In the Von Neuman model there is a single entity that executes instructions. This has not been the case in increasing measure since the early 2000s. The Sandy Bridge pictured in figure  has eight cores , each of which is an independent unit executing a stream of instructions. In this chapter we will mostly discuss aspects of a single core ; section  will discuss the integration aspects of the multiple cores. ### 1.2.1.1 Instruction handling Top > Modern processors > The processing cores > Instruction handling The Von Neumann model is also unrealistic in that it assumes that all instructions are executed strictly in sequence. Increasingly, over the last twenty years, processor have used out-of-order instruction handling, where instructions can be processed in a different order than the user program specifies. Of course the processor is only allowed to re-order instructions if that leaves the result of the execution intact! In the block diagram (figure  ) you see various units that are concerned with instruction handling: This cleverness actually costs considerable energy, as well as sheer amount of transistors. For this reason, processors such as the first generation Intel Xeon Phi, the Knights Corner , used in-order instruction handling. However, in the next generation, the Knights Landing , this decision was reversed for reasons of performance. 
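As a small illustration of what out-of-order hardware can and cannot overlap, compare the two loops below. This is my own sketch, not an example from the text. In the first loop every iteration is independent, so several operations can be in flight at once; in the second, each iteration needs the result of the previous one, which serializes execution no matter how clever the instruction handling is.

```c
/* Independent operations: iterations can be reordered and overlapped. */
void axpy(int n, double a, const double *x, const double *y, double *z) {
  for (int i = 0; i < n; i++)
    z[i] = a * x[i] + y[i];       /* no iteration depends on another */
}

/* Dependent chain: each iteration consumes the previous result, so the
   hardware must wait for it; out-of-order execution cannot help here. */
double horner(int n, const double *c, double x) {
  double v = 0.0;
  for (int i = n - 1; i >= 0; i--)
    v = v * x + c[i];             /* v feeds back into the next iteration */
  return v;
}
```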
### 1.2.1.2 Floating point units Top > Modern processors > The processing cores > Floating point units In scientific computing we are mostly interested in what a processor does with floating point data. Computing with integers or booleans is typically of less interest. For this reason, cores have considerable sophistication for dealing with numerical data. For instance, while past processors had just a single FPU, these days they will have multiple, capable of executing simultaneously. For instance, often there are separate addition and multiplication units; if the compiler can find addition and multiplication operations that are independent, it can schedule them so as to be executed simultaneously, thereby doubling the performance of the processor. In some cases, a processor will have multiple addition or multiplication units. Another way to increase performance is to have a fused multiply-add (FMA) unit, which can execute the instruction $x\leftarrow ax+b$ in the same amount of time as a separate addition or multiplication. Together with pipelining (see below), this means that a processor has an asymptotic speed of several floating point operations per clock cycle.

The following table lists the floating point capabilities (per core) of several processor architectures, and the DAXPY cycle number for 8 operands:

| Processor | Year | add/mult/fma units (count$\times$width) | daxpy cycles (arith vs load/store) |
|---|---|---|---|
| MIPS R10000 | 1996 | $1\times1+1\times1+0$ | 8/24 |
| Alpha EV5 | 1996 | $1\times1+1\times1+0$ | 8/12 |
| IBM Power5 | 2004 | $0+0+2\times1$ | 4/12 |
| AMD Bulldozer | 2011 | $2\times2+2\times2+0$ | 2/4 |
| Intel Sandy Bridge | 2012 | $1\times4+1\times4+0$ | 2/4 |
| Intel Haswell | 2014 | $0+0+2\times 4$ | 1/2 |

Incidentally, there are few algorithms in which division operations are a limiting factor. Correspondingly, the division operation is not nearly as much optimized in a modern CPU as the additions and multiplications are. Division operations can take 10 or 20 clock cycles, while a CPU can have multiple addition and/or multiplication units that (asymptotically) can produce a result per cycle. ### 1.2.1.3 Pipelining Top > Modern processors > The processing cores > Pipelining The floating point add and multiply units of a processor are pipelined, which has the effect that a stream of independent operations can be performed at an asymptotic speed of one result per clock cycle. The idea behind a pipeline is as follows. Assume that an operation consists of multiple simpler operations, and that for each suboperation there is separate hardware in the processor. For instance, an addition instruction can have the following components: • Decoding the instruction, including finding the locations of the operands. • Copying the operands into registers ('data fetch'). • Aligning the exponents; the addition $.35\times 10^{-1}\,+\, .6\times 10^{-2}$ becomes $.35\times 10^{-1}\,+\, .06\times 10^{-1}$. • Executing the addition of the mantissas, in this case giving $.41$. • Normalizing the result, in this example to $.41\times 10^{-1}$. (Normalization in this example does not do anything. Check for yourself that in $.3\times10^0\,+\,.8\times 10^0$ and $.35\times10^{-3}\,+\,(-.34)\times 10^{-3}$ there is a non-trivial adjustment.) • Storing the result. These parts are often called the 'stages' or 'segments' of the pipeline. If every component is designed to finish in 1 clock cycle, the whole instruction takes 6 cycles. 
However, if each has its own hardware, we can execute two operations in less than 12 cycles: • Execute the decode stage for the first operation; • Do the data fetch for the first operation, and at the same time the decode for the second. • Execute the third stage for the first operation and the second stage of the second operation simultaneously. • Et cetera. You see that the first operation still takes 6 clock cycles, but the second one is finished a mere 1 cycle later. Let us make a formal analysis of the speedup you can get from a pipeline. On a traditional FPU , producing $n$ results takes $t(n)=n\ell\tau$ where $\ell$ is the number of stages, and $\tau$ the clock cycle time. The rate at which results are produced is the reciprocal of $t(n)/n$: $r_{\mathrm{serial}}\equiv(\ell\tau)\inv$. On the other hand, for a pipelined FPU the time is $t(n)=[s+\ell+n-1]\tau$ where $s$ is a setup cost: the first operation still has to go through the same stages as before, but after that one more result will be produced each cycle. We can also write this formula as $$t(n)=[n+n_{1/2}]\tau.$$ Exercise Let us compare the speed of a classical FPU , and a pipelined one. Show that the result rate is now dependent on $n$: give a formula for $r(n)$, and for $r_\infty=\lim_{n\rightarrow\infty}r(n)$. What is the asymptotic improvement in $r$ over the non-pipelined case? Next you can wonder how long it takes to get close to the asymptotic behaviour. Show that for $n=n_{1/2}$ you get $r(n)=r_\infty/2$. This is often used as the definition of $n_{1/2}$. Since a vector processor works on a number of instructions simultaneously, these instructions have to be independent. The operation $\forall_i\colon a_i\leftarrow b_i+c_i$ has independent additions; the operation $\forall_i\colon a_{i+1}\leftarrow a_ib_i+c_i$ feeds the result of one iteration ($a_i$) to the input of the next ($a_{i+1}=\ldots$), so the operations are not independent. A pipelined processor can speed up operations by a factor of $4,5,6$ with respect to earlier CPUs. Such numbers were typical in the 1980s when the first successful vector computers came on the market. These days, CPUs can have 20-stage pipelines. Does that mean they are incredibly fast? This question is a bit complicated. Chip designers continue to increase the clock rate, and the pipeline segments can no longer finish their work in one cycle, so they are further split up. Sometimes there are even segments in which nothing happens: that time is needed to make sure data can travel to a different part of the chip in time. The amount of improvement you can get from a pipelined CPU is limited, so in a quest for ever higher performance several variations on the pipeline design have been tried. For instance, the Cyber 205 had separate addition and multiplication pipelines, and it was possible to feed one pipe into the next without data going back to memory first. Operations like $\forall_i\colon a_i\leftarrow b_i+c\cdot d_i$ were called linked triads' (because of the number of paths to memory, one input operand had to be scalar). Exercise Analyse the speedup and $n_{1/2}$ of linked triads. Another way to increase performance is to have multiple identical pipes. This design was perfected by the NEC SX series. With, for instance, 4 pipes, the operation $\forall_i\colon a_i\leftarrow b_i+c_i$ would be split module 4, so that the first pipe operated on indices $i=4\cdot j$, the second on $i=4\cdot j+1$, et cetera. 
Exercise Analyze the speedup and $n_{1/2}$ of a processor with multiple pipelines that operate in parallel. That is, suppose that there are $p$ independent pipelines, executing the same instruction, that can each handle a stream of operands. (You may wonder why we are mentioning some fairly old computers here: true pipeline supercomputers hardly exist anymore. In the US, the Cray X1 was the last of that line, and in Japan only NEC still makes them. However, the functional units of a CPU these days are pipelined, so the notion is still important.) Exercise The operation for (i) { x[i+1] = a[i]*x[i] + b[i]; } can not be handled by a pipeline because there is a dependency between input of one iteration of the operation and the output of the previous. However, you can transform the loop into one that is mathematically equivalent, and potentially more efficient to compute. Derive an expression that computes x[i+2] from x[i] without involving x[i+1] . This is known as \indexterm{recursive doubling}. Assume you have plenty of temporary storage. You can now perform the calculation by • Doing some preliminary calculations; • computing x[i],x[i+2],x[i+4],... , and from these, • compute the missing terms x[i+1],x[i+3],... . Analyze the efficiency of this scheme by giving formulas for $T_0(n)$ and $T_s(n)$. Can you think of an argument why the preliminary calculations may be of lesser importance in some circumstances? ### 1.2.1.4 Peak performance Top > Modern processors > The processing cores > Peak performance Thanks to pipelining, for modern CPU there is a simple relation between the clock speed and the peak performance . Since each FPU can produce one result per cycle asymptotically, the peak performance is the clock speed times the number of independent FPU . The measure of floating point performance is floating point operations per second', abbreviated flops . Considering the speed of computers these days, you will mostly hear floating point performance being expressed in gigaflops': multiples of $10^9$ flops. ## 1.2.2 8-bit, 16-bit, 32-bit, 64-bit Top > Modern processors > 8-bit, 16-bit, 32-bit, 64-bit Processors are often characterized in terms of how big a chunk of data they can process as a unit. This can relate to • The width of the path between processor and memory: can a 64-bit floating point number be loaded in one cycle, or does it arrive in pieces at the processor. • The way memory is addressed: if addresses are limited to 16 bits, only 64,000 bytes can be identified. Early PCs had a complicated scheme with segments to get around this limitation: an address was specified with a segment number and an offset inside the segment. • The number of bits in a register, in particular the size of the integer registers which manipulate data address; see the previous point. (Floating point register are often larger, for instance 80 bits in the x86 architecture.) This also corresponds to the size of a chunk of data that a processor can operate on simultaneously. • The size of a floating point number. If the arithmetic unit of a CPU is designed to multiply 8-byte numbers efficiently (double precision'; see section  ) then numbers half that size (single precision') can sometimes be processed at higher efficiency, and for larger numbers (quadruple precision') some complicated scheme is needed. For instance, a quad precision number could be emulated by two double precision numbers with a fixed difference between the exponents. These measurements are not necessarily identical. 
For instance, the original Pentium processor had 64-bit data busses, but a 32-bit processor. On the other hand, the Motorola 68000 processor (of the original Apple Macintosh) had a 32-bit CPU , but 16-bit data busses. The first Intel microprocessor, the 4004, was a 4-bit processor in the sense that it processed 4 bit chunks. These days, 64 bit processors are becoming the norm. ## 1.2.3 Caches: on-chip memory Top > Modern processors > Caches: on-chip memory The bulk of computer memory is in chips that are separate from the processor. However, there is usually a small amount (typically a few megabytes) of on-chip memory, called the cache . This will be explained in detail in section  . ## 1.2.4 Graphics, controllers, special purpose hardware Top > Modern processors > Graphics, controllers, special purpose hardware One difference between consumer' and server' type processors is that the consumer chips devote considerable real-estate on the processor chip to graphics. Processors for cell phones and tablets can even have dedicated circuitry for security or mp3 playback. Other parts of the processor are dedicated to communicating with memory or the I/O subsystem . We will not discuss those aspects in this book. ## 1.2.5 Superscalar processing and instruction-level parallelism Top > Modern processors > Superscalar processing and instruction-level parallelism In the von Neumann model processors operate through control flow : instructions follow each other linearly or with branches without regard for what data they involve. As processors became more powerful and capable of executing more than one instruction at a time, it became necessary to switch to the data flow model. Such superscalar processors analyze several instructions to find data dependencies, and execute instructions in parallel that do not depend on each other. This concept is also known as \indexacf{ILP}, and it is facilitated by various mechanisms: • multiple-issue: instructions that are independent can be started at the same time; • pipelining: already mentioned, arithmetic units can deal with multiple operations in various stages of completion; • branch prediction and speculative execution: a compiler can guess' whether a conditional instruction will evaluate to true, and execute those instructions accordingly; • out-of-order execution: instructions can be rearranged if they are not dependent on each other, and if the resulting execution will be more efficient; • prefetching : data can be speculatively requested before any instruction needing it is actually encountered (this is discussed further in section  ). Above, you saw pipelining in the context of floating point operations. Nowadays, the whole CPU is pipelined. Not only floating point operations, but any sort of instruction will be put in the instruction pipeline as soon as possible. Note that this pipeline is no longer limited to identical instructions: the notion of pipeline is now generalized to any stream of partially executed instructions that are simultaneously in flight''. As clock frequency has gone up, the processor pipeline has grown in length to make the segments executable in less time. You have already seen that longer pipelines have a larger $n_{1/2}$, so more independent instructions are needed to make the pipeline run at full efficiency. As the limits to instruction-level parallelism are reached, making pipelines longer (sometimes called deeper' ) no longer pays off. 
This is generally seen as the reason that chip designers have moved to multicore architectures as a way of more efficiently using the transistors on a chip; section  . There is a second problem with these longer pipelines: if the code comes to a branch point (a conditional or the test in a loop), it is not clear what the next instruction to execute is. At that point the pipeline can stall . CPU have taken to speculative execution for instance, by always assuming that the test will turn out true. If the code then takes the other branch (this is called a branch misprediction ), the pipeline has to be flushed and restarted. The resulting delay in the execution stream is called the branch penalty . # 1.3 Memory Hierarchies Top > Memory Hierarchies We will now refine the picture of the Von Neuman architecture, in which data is loaded immediately from memory to the processors, where it is operated on. This picture is unrealistic because of the so-called memory wall   [Wulf:memory-wall] : the memory is too slow to load data into the process at the rate the processor can absorb it. Specifically, a single load can take 1000 cycles, while a processor can perform several operations per cycle. (After this long wait for a load, the next load can come faster, but still too slow for the processor. This matter of wait time versus throughput will be addressed below in section  .) In reality, there will be various memory levels in between the FPU and the main memory: the registers and the caches , together called the memory hierarchy . These try to alleviate the memory wall problem by making recently used data available quicker than it would be from main memory. Of course, this presupposes that the algorithm and its implementation allow for data to be used multiple times. Such questions of data reuse will be discussed in more detail in section  . Both registers and caches are faster than main memory to various degrees; unfortunately, the faster the memory on a certain level, the smaller it will be. These differences in size and access speed lead to interesting programming problems, which we will discuss later in this chapter, and particularly section  . We will now discuss the various components of the memory hierarchy and the theoretical concepts needed to analyze their behaviour. ## 1.3.1 Busses Top > Memory Hierarchies > Busses The wires that move data around in a computer, from memory to cpu or to a disc controller or screen, are called busses . The most important one for us is the \indexac{FSB} which connects the processor to memory. In one popular architecture, this is called the north bridge', as opposed to the south bridge' which connects to external devices, with the exception of the graphics controller. The bus is typically slower than the processor, operating with clock frequencies slightly in excess of 1GHz, which is a fraction of the CPU clock frequency. This is one reason that caches are needed; the fact that a processors can consume many data items per clock tick contributes to this. Apart from the frequency, the bandwidth of a bus is also determined by the number of bits that can be moved per clock cycle. This is typically 64 or 128 in current architectures. We will now discuss this in some more detail. 
## 1.3.2 Latency and Bandwidth Top > Memory Hierarchies > Latency and Bandwidth Above, we mentioned in very general terms that accessing data in registers is almost instantaneous, whereas loading data from memory into the registers, a necessary step before any operation, incurs a substantial delay. We will now make this story slightly more precise. There are two important concepts to describe the movement of data: latency and bandwidth . The assumption here is that requesting an item of data incurs an initial delay; if this item was the first in a stream of data, usually a consecutive range of memory addresses, the remainder of the stream will arrive with no further delay at a regular amount per time period. • [Latency] is the delay between the processor issuing a request for a memory item, and the item actually arriving. We can distinguish between various latencies, such as the transfer from memory to cache, cache to register, or summarize them all into the latency between memory and processor. Latency is measured in (nano) seconds, or clock periods. If a processor executes instructions in the order they are found in the assembly code, then execution will often stall while data is being fetched from memory; this is also called memory stall . For this reason, a low latency is very important. In practice, many processors have out-of-order execution' of instructions, allowing them to perform other operations while waiting for the requested data. Programmers can take this into account, and code in a way that achieves latency hiding ; see also section  . GPU (see section  ) can switch very quickly between threads in order to achieve latency hiding. • [Bandwidth] is the rate at which data arrives at its destination, after the initial latency is overcome. Bandwidth is measured in bytes (kilobytes, megabytes, gigabytes) per second or per clock cycle. The bandwidth between two memory levels is usually the product of the cycle speed of the channel (the bus speed ) and the bus width : the number of bits that can be sent simultaneously in every cycle of the bus clock. The concepts of latency and bandwidth are often combined in a formula for the time that a message takes from start to finish: $$T(n) = \alpha+\beta n$$ where $\alpha$ is the latency and $\beta$ is the inverse of the bandwidth: the time per byte. Typically, the further away from the processor one gets, the longer the latency is, and the lower the bandwidth. These two factors make it important to program in such a way that, if at all possible, the processor uses data from cache or register, rather than from main memory. To illustrate that this is a serious matter, consider a vector addition for (i) a[i] = b[i]+c[i] Each iteration performs one floating point operation, which modern CPU can do in one clock cycle by using pipelines. However, each iteration needs two numbers loaded and one written, for a total of 24 bytes of memory traffic. Typical memory bandwidth figures (see for instance figure  ) are nowhere near 24 (or 32) bytes per cycle. This means that, without caches, algorithm performance can be bounded by memory performance. Of course, caches will not speed up every operations, and in fact will have no effect on the above example. Strategies for programming that lead to significant cache use are discussed in section  . The concepts of latency and bandwidth will also appear in parallel computers, when we talk about sending data from one processor to the next. 
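The 24-bytes-per-iteration estimate above can be checked with a simple timing loop. The sketch below is a rough, unofficial micro-benchmark of my own (not the STREAM benchmark) using POSIX `clock_gettime`; it times the vector addition and reports the effective bandwidth. Results will vary with compiler flags, array size relative to cache, and write-allocate behaviour, which can push the real traffic to 32 bytes per iteration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1L << 24)       /* 16M doubles per array: larger than most caches */
#define REPS 10

int main(void) {
  double *a = malloc(N * sizeof(double));
  double *b = malloc(N * sizeof(double));
  double *c = malloc(N * sizeof(double));
  if (!a || !b || !c) return 1;
  for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

  struct timespec t0, t1;
  clock_gettime(CLOCK_MONOTONIC, &t0);
  for (int r = 0; r < REPS; r++)
    for (long i = 0; i < N; i++)
      a[i] = b[i] + c[i];
  clock_gettime(CLOCK_MONOTONIC, &t1);

  double seconds = (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
  /* 2 loads + 1 store per iteration = 24 bytes (ignoring write-allocate). */
  double bytes = (double)REPS * N * 24.0;
  printf("elapsed %.3f s, effective bandwidth %.2f GB/s (a[0]=%g)\n",
         seconds, bytes / seconds / 1e9, a[0]);
  free(a); free(b); free(c);
  return 0;
}
```

Comparing the measured number against the product of bus clock and bus width for your machine gives a first impression of how memory-bound this kernel is.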
## 1.3.3 Registers Top > Memory Hierarchies > Registers Every processor has a small amount of memory that is internal to the processor: the registers , or together the register file . The registers are what the processor actually operates on: an operation such as a := b + c is actually implemented as • load the value of b from memory into a register, • load the value of c from memory into another register, • compute the sum and write that into yet another register, and • write the sum value back to the memory location of  a . Looking at assembly code (for instance the output of a compiler), you see the explicit load, compute, and store instructions. Compute instructions such as add or multiply only operate on registers. For instance, in assembly language you will see instructions such as addl %eax, %edx which adds the content of one register to another. As you see in this sample instruction, registers are not numbered, as opposed to memory addresses, but have distinct names that are referred to in the assembly instruction. Typically, a processor has 16 or 32 floating point registers; the Intel Itanium was exceptional with 128 floating point registers. Registers have a high bandwidth and low latency because they are part of the processor. You can consider data movement to and from registers as essentially instantaneous. In this chapter you will see stressed that moving data from memory is relatively expensive. Therefore, it would be a simple optimization to leave data in register when possible. For instance, if the above computation is followed by a statement a := b + c d := a + e the computed value of a could be left in register. This optimization is typically performed as a compiler optimization : the compiler will simply not generate the instructions for storing and reloading  a . We say that a stays resident in register . Keeping values in register is often done to avoid recomputing a quantity. For instance, in t1 = sin(alpha) * x + cos(alpha) * y; t2 = -cos(alpha) * x + sin(alpha) * y; the sine and cosine quantity will probably be kept in register. You can help the compiler by explicitly introducing temporary quantities: s = sin(alpha); c = cos(alpha); t1 = s * x + c * y; t2 = -c * x + s * y Of course, there is a limit to how many quantities can be kept in register; trying to keep too many quantities in register is called register spill and lowers the performance of a code. Keeping a variable in register is especially important if that variable appears in an inner loop. In the computation for i=1,length a[i] = b[i] * c the quantity c will probably be kept in register by the compiler, but in for k=1,nvectors for i=1,length a[i,k] = b[i,k] * c[k] it is a good idea to introduce explicitly a temporary variable to hold  c[k] . In C, you can give a hint to the compiler to keep a variable in register by declaring it as a register variable : register double t; ## 1.3.4 Caches Top > Memory Hierarchies > Caches In between the registers, which contain the immediate input and output data for instructions, and the main memory where lots of data can reside for a long time, are various levels of cache memory, that have lower latency and higher bandwidth than main memory and where data are kept for an intermediate amount of time. Data from memory travels through the caches to wind up in registers. 
The advantage to having cache memory is that if a data item is reused shortly after it was first needed, it will still be in cache, and therefore it can be accessed much faster than if it would have to be brought in from memory. On a historical note, the notion of levels of memory hierarchy was already discussed in 1946  [Burks:discussion] , motivated by the slowness of the memory technology at the time. ### 1.3.4.1 A motivating example Top > Memory Hierarchies > Caches > A motivating example As an example, let's suppose a variable x is used twice, and its uses are too far apart that it would stay \indextermsub{resident in}{register}: ... = ... x ..... // instruction using x ......... // several instructions not involving x ... = ... x ..... // instruction using x The assembly code would then be • load x from memory into register; operate on it; • do the intervening instructions; • load x from memory into register; operate on it; With a cache, the assembly code stays the same, but the actual behaviour of the memory system now becomes: • load x from memory into cache, and from cache into register; operate on it; • do the intervening instructions; • request x from memory, but since it is still in the cache, load it from the cache into register; operate on it. Since loading from cache is faster than loading from main memory, the computation will now be faster. Caches are fairly small, so values can not be kept there indefinitely. We will see the implications of this in the following discussion. There is an important difference between cache memory and registers: while data is moved into register by explicit assembly instructions, the move from main memory to cache is entirely done by hardware. Thus cache use and reuse is outside of direct programmer control. Later, especially in sections and   , you will see how it is possible to influence cache use indirectly. ### 1.3.4.2 Cache tags Top > Memory Hierarchies > Caches > Cache tags In the above example, the mechanism was left unspecified by which it is found whether an item is present in cache. For this, there is a tag for each cache location: sufficient information to reconstruct the memory location that the cache item came from. ### 1.3.4.3 Cache levels, speed and size Top > Memory Hierarchies > Caches > Cache levels, speed and size The caches are called level 1' and level 2' (or, for short, L1 and L2) cache; some processors can have an L3 cache. The L1 and L2 caches are part of the die , the processor chip, although for the L2 cache that is a relatively recent development; the L3 cache is off-chip. The L1 cache is small, typically around 16Kbyte. Level 2 (and, when present, level 3) cache is more plentiful, up to several megabytes, but it is also slower. Unlike main memory, which is expandable, caches are fixed in size. If a version of a processor chip exists with a larger cache, it is usually considerably more expensive. Data needed in some operation gets copied into the various caches on its way to the processor. If, some instructions later, a data item is needed again, it is first searched for in the L1 cache; if it is not found there, it is searched for in the L2 cache; if it is not found there, it is loaded from main memory. Finding data in cache is called a cache hit , and not finding it a cache miss . Figure illustrates the basic facts of the cache hierarchy , in this case for the Intel Sandy Bridge chip: the closer caches are to the FPU , the faster, but also the smaller they are. 
• Loading data from registers is so fast that it does not constitute a limitation on algorithm execution speed. On the other hand, there are few registers. Each core has 16 general purpose registers, and 16 SIMD registers.
• The L1 cache is small, but sustains a bandwidth of 32 bytes, that is, 4 double precision numbers, per cycle. This is enough to load two operands each for two operations, but note that the core can actually perform 4 operations per cycle. Thus, to achieve peak speed, certain operands need to stay in register: typically, L1 bandwidth is enough for about half of peak performance.
• The bandwidth of the L2 and L3 cache is nominally the same as that of the L1. However, this bandwidth is partly wasted on coherence issues.
• Main memory access has a latency of more than 100 cycles, and a bandwidth of 4.5 bytes per cycle, which is about $1/7$th of the L1 bandwidth. However, this bandwidth is shared by the multiple cores of a processor chip, so effectively the bandwidth per core is a fraction of this number. Most clusters will also have more than one socket (processor chip) per node, typically 2 or 4, so some bandwidth is spent on maintaining cache coherence (see section  ), again reducing the bandwidth available for each chip.

On level 1, there are separate caches for instructions and data; the L2 and L3 cache contain both data and instructions.

You see that the larger caches are increasingly unable to supply data to the processors fast enough. For this reason it is necessary to code in such a way that data is kept as much as possible in the highest cache level possible. We will discuss this issue in detail in the rest of this chapter.

Exercise The L1 cache is smaller than the L2 cache, and if there is an L3, the L2 is smaller than the L3. Give a practical and a theoretical reason why this is so.

### 1.3.4.4 Types of cache misses

There are three types of cache misses.

As you saw in the example above, the first time you reference data you will always incur a cache miss. This is known as a compulsory cache miss since these are unavoidable. Does that mean that you will always be waiting for a data item the first time you need it? Not necessarily: section  explains how the hardware tries to help you by predicting what data is needed next.

The next type of cache miss is due to the size of your working set: a capacity cache miss is caused by data having been overwritten because the cache can simply not contain all your problem data. (Section  discusses how the processor decides what data to overwrite.) If you want to avoid this type of miss, you need to partition your problem in chunks that are small enough that data can stay in cache for an appreciable time. Of course, this presumes that data items are operated on multiple times, so that there is actually a point in keeping them in cache; this is discussed in section  .

Finally, there are conflict misses, caused by one data item being mapped to the same cache location as another, while both are still needed for the computation, and there would have been better candidates to evict. This is discussed in section  .

In a multicore context there is a further type of cache miss: the invalidation miss. This happens if an item in cache has become invalid because another core changed the value of the corresponding memory address. The core will then have to reload this address.
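The distinction between compulsory and capacity misses can be made concrete with a small sketch; the array a and its size N are assumptions for illustration, not tied to any particular processor.

    /* Minimal sketch: the first sweep incurs compulsory misses, since no
       element has been referenced before. If N*sizeof(double) exceeds the
       cache size, elements loaded early in the first sweep have been evicted
       by the time the second sweep needs them, so the second sweep incurs
       capacity misses; if the array fits in cache, the second sweep hits. */
    double s = 0., t = 0.;
    for (i=0; i<N; i++) s += a[i];   /* compulsory misses */
    for (i=0; i<N; i++) t += a[i];   /* capacity misses only if a[] exceeds the cache */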
### 1.3.4.5 Reuse is the name of the game

The presence of one or more caches is not immediately a guarantee for high performance: this largely depends on the memory access pattern of the code, and how well this exploits the caches. The first time that an item is referenced, it is copied from memory into cache, and through to the processor registers. The latency and bandwidth for this are not mitigated in any way by the presence of a cache. When the same item is referenced a second time, it may be found in cache, at a considerably reduced cost in terms of latency and bandwidth: caches have shorter latency and higher bandwidth than main memory.

We conclude that, first, an algorithm has to have an opportunity for data reuse. If every data item is used only once (as in addition of two vectors), there can be no reuse, and the presence of caches is largely irrelevant. A code will only benefit from the increased bandwidth and reduced latency of a cache if items in cache are referenced more than once; see section  for a detailed discussion. An example would be the matrix-vector multiplication $y=Ax$ where each element of $x$ is used in $n$ operations, where $n$ is the matrix dimension.

Secondly, an algorithm may theoretically have an opportunity for reuse, but it needs to be coded in such a way that the reuse is actually exposed. We will address these points in section  . This second point especially is not trivial.

Some problems are small enough that they fit completely in cache, at least in the L3 cache. This is something to watch out for when benchmarking, since it gives a too rosy picture of processor performance.

### 1.3.4.6 Replacement policies

Data in cache and registers is placed there by the system, outside of programmer control. Likewise, the system decides when to overwrite data in the cache or in registers if it is not referenced in a while, or if other data needs to be placed there. Below, we will go into detail on how caches do this, but as a general principle, an LRU cache replacement policy is used: if a cache is full and new data needs to be placed into it, the data that was least recently used is flushed, meaning that it is overwritten with the new item, and therefore no longer accessible. LRU is by far the most common replacement policy; other possibilities are FIFO (first in first out) or random replacement.

Exercise How does the LRU replacement policy relate to direct-mapped versus associative caches?

Exercise Sketch a simple scenario, and give some (pseudo) code, to argue that LRU is preferable over FIFO as a replacement strategy.

### 1.3.4.7 Cache lines

Data movement between memory and cache, or between caches, is not done in single bytes, or even words. Instead, the smallest unit of data moved is called a cache line, sometimes called a cache block. A typical cache line is 64 or 128 bytes long, which in the context of scientific computing implies 8 or 16 double precision floating point numbers. The cache line size for data moved into L2 cache can be larger than for data moved into L1 cache. It is important to acknowledge the existence of cache lines in coding, since any memory access costs the transfer of several words (see section  for some examples). An efficient program then tries to use the other items on the cache line, since access to them is effectively free.
This phenomenon is visible in code that accesses arrays by stride: elements are read or written at regular intervals. Stride 1 corresponds to sequential access of an array:

    for (i=0; i<N; i++)
      ... = ... x[i] ...

Let us use as illustration a case with 4 words per cacheline. Requesting the first element loads the whole cacheline that contains it into cache. A request for the 2nd, 3rd, and 4th element can then be satisfied from cache, meaning with high bandwidth and low latency.

A larger stride

    for (i=0; i<N; i+=stride)
      ... = ... x[i] ...

implies that in every cache line only certain elements are used. We illustrate that with stride 3: requesting the first element loads a cacheline, and this cacheline also contains the second element. However, the third element is on the next cacheline, so loading this incurs the latency and bandwidth of main memory. The same holds for the fourth element. Loading four elements now needed loading three cache lines instead of one, meaning that two-thirds of the available bandwidth has been wasted. (This second case would also incur three times the latency of the first, if it weren't for a hardware mechanism that notices the regular access patterns, and pre-emptively loads further cachelines; see section  .)

Some applications naturally lead to strides greater than 1, for instance, accessing only the real parts of an array of complex numbers (for some remarks on the practical realization of complex numbers see section  ). Also, methods that use recursive doubling often have a code structure that exhibits non-unit strides:

    for (i=0; i<N/2; i++)
      x[i] = y[2*i];

In this discussion of cachelines, we have implicitly assumed that the beginning of a cacheline is also the beginning of a word, be that an integer or a floating point number. This need not be true: an 8-byte floating point number can be placed straddling the boundary between two cachelines. You can imagine that this is not good for performance. Section  discusses ways to address cacheline boundary alignment in practice.

### 1.3.4.8 Cache mapping

Caches get faster, but also smaller, the closer to the FPU they get, yet even the largest cache is considerably smaller than the main memory size. In section  we have already discussed how the decision is made which elements to keep and which to replace. We will now address the issue of cache mapping, which is the question 'if an item is placed in cache, where does it get placed'. This problem is generally addressed by mapping the (main memory) address of the item to an address in cache, leading to the question 'what if two items get mapped to the same address'.

### 1.3.4.9 Direct mapped caches

The simplest cache mapping strategy is direct mapping. Suppose that memory addresses are 32 bits long, so that they can address 4 Gbyte; suppose further that the cache has 8K words, that is, 64 Kbyte, needing 16 bits to address. Direct mapping then takes from each memory address the last ('least significant') 16 bits, and uses these as the address of the data item in cache; see figure  .

Direct mapping is very efficient because its address calculations can be performed very quickly, leading to low latency, but it has a problem in practical applications. If two items are addressed that are separated by 8K words, they will be mapped to the same cache location, which will make certain calculations inefficient.
Example:

    double a[3][8192];
    for (i=0; i<512; i++)
      a[2][i] = ( a[0][i]+a[1][i] )/2.;

or in Fortran:

    real*8 a(8192,3)
    do i=1,512
      a(i,3) = ( a(i,1)+a(i,2) )/2
    end do

Here, the locations of a[0][i], a[1][i], and a[2][i] (or a(i,1), a(i,2), a(i,3)) are 8K from each other for every i, so the last 16 bits of their addresses will be the same, and hence they will be mapped to the same location in cache; see figure  .

The execution of the loop will now go as follows:

• The data at a[0][0] is brought into cache and register. This engenders a certain amount of latency. Together with this element, a whole cache line is transferred.
• The data at a[1][0] is brought into cache (and register, as we will not remark anymore from now on), together with its whole cache line, at the cost of some latency. Since this cache line is mapped to the same location as the first, the first cache line is overwritten.
• In order to write the output, the cache line containing a[2][0] is brought into cache. This is again mapped to the same location, causing flushing of the cache line just loaded for a[1][0].
• In the next iteration, a[0][1] is needed, which is on the same cache line as a[0][0]. However, this cache line has been flushed, so it needs to be brought in anew from main memory or a deeper cache level. In doing so, it overwrites the cache line that holds a[2][0].
• A similar story holds for a[1][1]: it is on the cache line of a[1][0], which unfortunately has been overwritten in the previous step.

If a cache line holds four words, we see that each four iterations of the loop involve eight transfers of elements of a, where two would have sufficed, if it were not for the cache conflicts.

Exercise In the example of direct mapped caches, mapping from memory to cache was done by using the final 16 bits of a 32 bit memory address as cache address. Show that the problems in this example go away if the mapping is done by using the first ('most significant') 16 bits as the cache address. Why is this not a good solution in general?

Remark So far, we have pretended that caching is based on virtual memory addresses. In reality, caching is based on physical addresses of the data in memory, which depend on the algorithm mapping virtual addresses to memory pages.

### 1.3.4.10 Associative caches

The problem of cache conflicts, outlined in the previous section, would be solved if any data item could go to any cache location. In that case there would be no conflicts, other than the cache filling up, in which case a cache replacement policy (section  ) would flush data to make room for the incoming item. Such a cache is called fully associative, and while it seems optimal, it is also very costly to build, and much slower in use than a direct mapped cache.

For this reason, the most common solution is to have a $k$-way associative cache, where $k$ is at least two. In this case, a data item can go to any of $k$ cache locations. Code would have to have a $k+1$-way conflict before data would be flushed prematurely as in the above example. In that example, a value of $k=2$ would suffice, but in practice higher values are often encountered.

Figure  illustrates the mapping of memory addresses to cache locations for a direct mapped and a 3-way associative cache. Both caches have 12 elements, but these are used differently.
The direct mapped cache (left) will have a conflict between memory addresses 0 and 12, but in the 3-way associative cache these two addresses can be mapped to any of three elements.

As a practical example, the Intel Woodcrest processor has an L1 cache of 32 Kbyte that is 8-way set associative with a 64 byte cache line size, and an L2 cache of 4 Mbyte that is 8-way set associative with a 64 byte cache line size. On the other hand, the AMD Barcelona chip has 2-way associativity for the L1 cache, and 8-way for the L2. A higher associativity ('way-ness') is obviously desirable, but makes a processor slower, since determining whether an address is already in cache becomes more complicated. For this reason, the associativity of the L1 cache, where speed is of the greatest importance, is typically lower than that of the L2.

Exercise Write a small cache simulator in your favourite language. Assume a $k$-way associative cache of 32 entries and an architecture with 16 bit addresses. Run the following experiment for $k=1,2,4,\ldots$:

1. Let $k$ be the associativity of the simulated cache.
2. Write the translation from 16 bit memory addresses to $32/k$ cache addresses.
3. Generate 32 random machine addresses, and simulate storing them in cache.

Since the cache has 32 entries, optimally the 32 addresses can all be stored in cache. The chance of this actually happening is small, and often the data of one address will be evicted from the cache (meaning that it is overwritten) when another address conflicts with it. Record how many addresses, out of 32, are actually stored in the cache at the end of the simulation. Do step  100 times, and plot the results; give median and average value, and the standard deviation. Observe that increasing the associativity improves the number of addresses stored. What is the limit behaviour? (For bonus points, do a formal statistical analysis.)

### 1.3.4.11 Cache memory versus regular memory

So what's so special about cache memory; why don't we use its technology for all of memory? Caches typically consist of SRAM, which is faster than the DRAM used for main memory, but also more expensive, taking 5--6 transistors per bit rather than one, and drawing more power.

### 1.3.4.12 Loads versus stores

In the above description, all data accessed in the program needs to be moved into the cache before the instructions using it can execute. This holds both for data that is read and data that is written. However, data that is written, and that will not be needed again (within some reasonable amount of time), has no reason for staying in the cache, potentially creating conflicts or evicting data that can still be reused. For this reason, compilers often have support for streaming stores: a contiguous stream of data that is purely output will be written straight to memory, without being cached.

## 1.3.5 Prefetch streams

In the traditional von Neumann model (section  ), each instruction contains the location of its operands, so a CPU implementing this model would make a separate request for each new operand. In practice, often subsequent data items are adjacent or regularly spaced in memory. The memory system can try to detect such data patterns by looking at cache miss points, and request a prefetch data stream; see figure  .
In its simplest form, the CPU will detect that consecutive loads come from two consecutive cache lines, and automatically issue a request for the next following cache line. This process can be repeated or extended if the code makes an actual request for that third cache line. Since these cache lines are now brought from memory well before they are needed, prefetch has the possibility of eliminating the latency for all but the first couple of data items.

The concept of cache miss now needs to be revisited a little. From a performance point of view we are only interested in stalls on cache misses, that is, the case where the computation has to wait for the data to be brought in. Data that is not in cache, but can be brought in while other instructions are still being processed, is not a problem. If an 'L1 miss' is understood to be only a 'stall on miss', then the term 'L1 cache refill' is used to describe all cacheline loads, whether the processor is stalling on them or not.

Since prefetch is controlled by the hardware, it is also described as hardware prefetch. Prefetch streams can sometimes be controlled from software, for instance through intrinsics. Introducing prefetch by the programmer is a careful balance of a number of factors [Guttman:prefetchKNC]. Prime among these is the prefetch distance: the number of cycles between the start of the prefetch and when the data is needed. In practice, this is often the number of iterations of a loop: the prefetch instruction requests data for a future iteration.

## 1.3.6 Concurrency and memory transfer

In the discussion about the memory hierarchy we made the point that memory is slower than the processor. As if that is not bad enough, it is not even trivial to exploit all the bandwidth that memory offers. In other words, if you don't program carefully you will get even less performance than you would expect based on the available bandwidth. Let's analyze this.

The memory system typically has a bandwidth of more than one floating point number per cycle, so you need to issue that many requests per cycle to utilize the available bandwidth. This would be true even with zero latency; since there is latency, it takes a while for data to make it from memory and be processed. Consequently, any data requested based on computations on the first data has to be requested with a delay at least equal to the memory latency.

For full utilization of the bandwidth, at all times a volume of data equal to the bandwidth times the latency has to be in flight. Since these data have to be independent, we get a statement of Little's law [Little:law]:

$$\mathrm{Concurrency}=\mathrm{Bandwidth}\times \mathrm{Latency}.$$

This is illustrated in figure  . The problem with maintaining this concurrency is not that a program does not have it; rather, the problem is to get the compiler and runtime system to recognize it. For instance, if a loop traverses a long array, the compiler will not issue a large number of memory requests. The prefetch mechanism (section  ) will issue some memory requests ahead of time, but typically not enough. Thus, in order to use the available bandwidth, multiple streams of data need to be under way simultaneously. Therefore, we can also phrase Little's law as

$$\mathrm{Effective\ throughput}=\mathrm{Expressed\ concurrency} / \mathrm{Latency}.$$

## 1.3.7 Memory banks

Above, we discussed issues relating to bandwidth.
You saw that memory, and to a lesser extent caches, have a bandwidth that is less than what a processor can maximally absorb. The situation is actually even worse than that discussion made it seem. For this reason, memory is often divided into memory banks that are interleaved: with four memory banks, words $0,4,8,\ldots$ are in bank 0, words $1,5,9,\ldots$ are in bank 1, et cetera.

Suppose we now access memory sequentially; then such 4-way interleaved memory can sustain four times the bandwidth of a single memory bank. Unfortunately, accessing by stride 2 will halve the bandwidth, and larger strides are even worse. In practice the number of memory banks will be higher, so that strided memory access with small strides will still have the full advertised bandwidth.

This concept of banks can also apply to caches. For instance, the cache lines in the L1 cache of the AMD Barcelona chip are 16 words long, divided into two interleaved banks of 8 words. This means that sequential access to the elements of a cache line is efficient, but strided access suffers from a deteriorated performance.

## 1.3.8 TLB, pages, and virtual memory

All of a program's data may not be in memory simultaneously. This can happen for a number of reasons:

• The computer serves multiple users, so the memory is not dedicated to any one user;
• The computer is running multiple programs, which together need more than the physically available memory;
• One single program can use more data than the available memory.

For this reason, computers use virtual memory: if more memory is needed than is available, certain blocks of memory are written to disc. In effect, the disc acts as an extension of the real memory. This means that a block of data can be anywhere in memory, and in fact, if it is swapped in and out, it can be in different locations at different times. Swapping does not act on individual memory locations, but rather on memory pages: contiguous blocks of memory, from a few kilobytes to megabytes in size. (In an earlier generation of operating systems, moving memory to disc was a programmer's responsibility. Pages that would replace each other were called overlays.)

For this reason, we need a translation mechanism from the memory addresses that the program uses to the actual addresses in memory, and this translation has to be dynamic. A program has a 'logical data space' (typically starting from address zero) of the addresses used in the compiled code, and this needs to be translated during program execution to actual memory addresses. For this reason, there is a page table that specifies which memory pages contain which logical pages.

### 1.3.8.1 Large pages

In very irregular applications, for instance databases, the page table can get very large as more-or-less random data is brought into memory. However, sometimes these pages show some amount of clustering, meaning that if the page size had been larger, the number of needed pages would be greatly reduced. For this reason, operating systems can have support for large pages, typically of size around 2 Mbyte. (Sometimes 'huge pages' are used; for instance the Intel Knights Landing has Gigabyte pages.)

The benefits of large pages are application-dependent: if the small pages have insufficient clustering, use of large pages may fill up memory prematurely with the unused parts of the large pages.
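How a program requests large pages is system-dependent; as one illustration, here is a Linux-specific sketch using transparent huge pages. This is an assumption about one particular mechanism; other systems use different interfaces such as hugetlbfs, linker options, or environment variables.

    #include <sys/mman.h>

    /* Sketch (Linux): allocate a large anonymous region and advise the kernel
       to back it with transparent huge pages where possible. */
    size_t len = (size_t)1 << 30;                  /* 1 Gbyte; an arbitrary example size */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    madvise(buf, len, MADV_HUGEPAGE);              /* request huge-page backing */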
### 1.3.8.2 TLB

However, address translation by lookup in this table is slow, so CPUs have a TLB. The TLB is a cache of frequently used Page Table Entries: it provides fast address translation for a number of pages. If a program needs a memory location, the TLB is consulted to see whether this location is in fact on a page that is remembered in the TLB. If this is the case, the logical address is translated to a physical one; this is a very fast process. The case where the page is not remembered in the TLB is called a TLB miss, and the page lookup table is then consulted, if necessary bringing the needed page into memory. The TLB is (sometimes fully) associative (section  ), using an LRU policy (section  ).

A typical TLB has between 64 and 512 entries. If a program accesses data sequentially, it will typically alternate between just a few pages, and there will be no TLB misses. On the other hand, a program that accesses many random memory locations can experience a slowdown because of such misses. The set of pages that is in current use is called the 'working set'.

Section  and appendix  discuss some simple code illustrating the behaviour of the TLB.

[There are some complications to this story. For instance, there is usually more than one TLB. The first one is associated with the L2 cache, the second one with the L1. In the AMD Opteron, the L1 TLB has 48 entries, and it is fully (48-way) associative, while the L2 TLB has 512 entries, but is only 4-way associative. This means that there can actually be TLB conflicts. In the discussion above, we have only talked about the L2 TLB. The reason that this can be associated with the L2 cache, rather than with main memory, is that the translation from memory to L2 cache is deterministic.]

Use of large pages also reduces the number of potential TLB misses, since the working set of pages can be reduced.

## 1.3.9 Multicore architectures

In recent years, the limits of performance have been reached for the traditional processor chip design.

• Clock frequency can not be increased further, since it increases energy consumption, heating the chips too much; see section  .
• It is not possible to extract more ILP from codes, either because of compiler limitations, because of the limited amount of intrinsically available parallelism, or because branch prediction makes it impossible (see section  ).

One of the ways of getting a higher utilization out of a single processor chip is then to move from a strategy of further sophistication of the single processor, to a division of the chip into multiple processing 'cores'. The separate cores can work on unrelated tasks, or, by introducing what is in effect data parallelism (section  ), collaborate on a common task at a higher overall efficiency [Olukotun:1996:single-chip]. This solves the above two problems:

• Two cores at a lower frequency can have the same throughput as a single processor at a higher frequency; hence, multiple cores are more energy-efficient.
• Discovered ILP is now replaced by explicit task parallelism, managed by the programmer.

While the first multicore CPUs were simply two processors on the same die, later generations incorporated L3 or L2 caches that were shared between the two processor cores; see figure  . This design makes it efficient for the cores to work jointly on the same problem.
The cores would still have their own L1 cache, and these separate caches lead to a cache coherence problem; see section  below.

We note that the term 'processor' is now ambiguous: it can refer to either the chip, or the processor core on the chip. For this reason, we mostly talk about a socket for the whole chip and a core for the part containing one arithmetic and logic unit and having its own registers. Currently, CPUs with 4 or 6 cores are common, even in laptops, and Intel and AMD are marketing 12-core chips. The core count is likely to go up in the future: Intel has already shown an 80-core prototype that was developed into the 48-core 'Single-chip Cloud Computer', illustrated in figure  . This chip has a structure with 24 dual-core 'tiles' that are connected through a 2D mesh network. Only certain tiles are connected to a memory controller; others can not reach memory other than through the on-chip network.

With this mix of shared and private caches, the programming model for multicore processors is becoming a hybrid between shared and distributed memory:

• [Core] The cores have their own private L1 cache, which is a sort of distributed memory. The above mentioned Intel 80-core prototype has the cores communicating in a distributed memory fashion.
• [Socket] On one socket, there is often a shared L2 cache, which is shared memory for the cores.
• [Node] There can be multiple sockets on a single 'node' or motherboard, accessing the same shared memory.
• [Network] Distributed memory programming (see the next chapter) is needed to let nodes communicate.

Historically, multicore architectures have a precedent in multiprocessor shared memory designs (section  ) such as the Sequent Symmetry and the Alliant FX/8. Conceptually the programming model is the same, but the technology now allows a multiprocessor board to be shrunk to a multicore chip.

## 1.3.10 Cache coherence

With parallel processing, there is the potential for a conflict if more than one processor has a copy of the same data item. The problem of ensuring that all cached data are an accurate copy of main memory is referred to as cache coherence: if one processor alters its copy, the other copy needs to be updated.

In distributed memory architectures, a dataset is usually partitioned disjointly over the processors, so conflicting copies of data can only arise with knowledge of the user, and it is up to the user to deal with the problem. The case of shared memory is more subtle: since processes access the same main memory, it would seem that conflicts are in fact impossible. However, processors typically have some private cache that contains copies of data from memory, so conflicting copies can occur. This situation arises in particular in multicore designs.

Suppose that two cores have a copy of the same data item in their (private) L1 cache, and one modifies its copy. Now the other has cached data that is no longer an accurate copy of its counterpart: the processor will invalidate that copy of the item, and in fact its whole cacheline. When the process needs access to the item again, it needs to reload that cacheline. The alternative is for any core that alters data to send that cacheline to the other cores. This strategy probably has a higher overhead, since other cores are not likely to have a copy of a cacheline.
This process of updating or invalidating cachelines is known as maintaining cache coherence, and it is done on a very low level of the processor, with no programmer involvement needed. (This makes updating memory locations an atomic operation; more about this in section  .) However, it will slow down the computation, and it wastes bandwidth to the core that could otherwise be used for loading or storing operands.

The state of a cache line with respect to a data item in main memory is usually described as one of the following:

• [Scratch:] the cache line does not contain a copy of the item;
• [Valid:] the cache line is a correct copy of data in main memory;
• [Reserved:] the cache line is the only copy of that piece of data;
• [Dirty:] the cache line has been modified, but not yet written back to main memory;
• [Invalid:] the data on the cache line is also present on other processors (it is not reserved), and another process has modified its copy of the data.

A simpler variant of this is the MSI coherence protocol, where a cache line can be in the following states on a given core:

• [Modified:] the cacheline has been modified, and needs to be written to the backing store. This writing can be done when the line is evicted, or it is done immediately, depending on the write-back policy.
• [Shared:] the line is present in at least one cache and is unmodified.
• [Invalid:] the line is not present in the current cache, or it is present but a copy in another cache has been modified.

These states control the movement of cachelines between memory and the caches. For instance, suppose a core does a read to a cacheline that is invalid on that core. It can then load it from memory or get it from another cache, which may be faster. (Finding whether a line exists in state M or S on another cache is called snooping; an alternative is to maintain cache directories; see below.) If the line is Shared, it can now simply be copied; if it is in state M in the other cache, that core first needs to write it back to memory.

Exercise Consider two processors, a data item $x$ in memory, and cachelines $x_1$, $x_2$ in the private caches of the two processors to which $x$ is mapped. Describe the transitions between the states of $x_1$ and $x_2$ under reads and writes of $x$ on the two processors. Also indicate which actions cause memory bandwidth to be used. (This list of transitions is a FSA; see section  .)

Variants of the MSI protocol add an 'Exclusive' or 'Owned' state for increased efficiency.

### 1.3.10.1 Solutions to cache coherence

There are two basic mechanisms for realizing cache coherence: snooping and directory-based schemes.

In the snooping mechanism, any request for data is sent to all caches, and the data is returned if it is present anywhere; otherwise it is retrieved from memory. In a variation on this scheme, a core 'listens in' on all bus traffic, so that it can invalidate or update its own cacheline copies when another core modifies its copy. Invalidating is cheaper than updating since it is a bit operation, while updating involves copying the whole cacheline.

Exercise When would updating pay off? Write a simple cache simulator to evaluate this question.

Since snooping often involves broadcasting information to all cores, it does not scale beyond a small number of cores.
A solution that scales better is using a tag directory: a central directory that contains the information on what data is present in some cache, and which cache that is specifically. For processors with large numbers of cores (such as the Intel Xeon Phi) the directory can be distributed over the cores.

### 1.3.10.2 False sharing

The cache coherence problem can even appear if the cores access different items. For instance, a declaration

    double x,y;

will likely allocate x and y next to each other in memory, so there is a high chance they fall on the same cacheline. Now if one core updates x and the other y, this cacheline will continuously be moved between the cores. This is called false sharing.

The most common case of false sharing happens when threads update consecutive locations of an array. For instance, in the following OpenMP fragment all threads update their own location in an array of partial results:

    local_results = new double[num_threads];
    #pragma omp parallel
    {
      int thread_num = omp_get_thread_num();
      for (int i=my_lo; i<my_hi; i++)
        /* f(i) stands for some per-iteration contribution */
        local_results[thread_num] += f(i);
    }
    global_result = g(local_results);

While there is no actual race condition (as there would be if the threads all updated the global_result variable), this code will have low performance, since the cacheline(s) with the local_results array will continuously be invalidated.

### 1.3.10.3 Tag directories

In multicore processors with distributed, but coherent, caches (such as the Intel Xeon Phi) the tag directories can themselves be distributed. This increases the latency of cache lookup.

## 1.3.11 Computations on multicore chips

There are various ways that a multicore processor can lead to increased performance. First of all, in a desktop situation, multiple cores can actually run multiple programs. More importantly, we can use the parallelism to speed up the execution of a single code. This can be done in two different ways.

The MPI library (section  ) is typically used to communicate between processors that are connected through a network. However, it can also be used in a single multicore processor: the MPI calls then are realized through shared memory copies.

Alternatively, we can use the shared memory and shared caches and program using threaded systems such as OpenMP (section  ). The advantage of this mode is that parallelism can be much more dynamic, since the runtime system can set and change the correspondence between threads and cores during the program run.

We will discuss the scheduling of linear algebra operations on multicore chips in some detail in section  .

## 1.3.12 TLB shootdown

Section  explained how the TLB is used to cache the translation from logical address, and therefore logical page, to physical page. The TLB is part of the memory unit of the socket, so in a multi-socket design, it is possible for a process on one socket to change the page mapping, which makes the mapping on the other socket incorrect.

One solution to this problem is called TLB shoot-down: the process changing the mapping generates an Inter-Processor Interrupt, which causes the other processors to rebuild their TLB.

# 1.4 Node architecture and sockets

In the previous sections we have made our way down through the memory hierarchy, visiting registers and various cache levels, and the extent to which they can be private or shared.
At the bottom level of the memory hierarchy is the memory that all cores share. This can range from a few Gigabyte on a lowly laptop to a few Terabyte in some supercomputer centers. While this memory is shared between all cores, there is some structure to it. This derives from the fact that a cluster node can have more than one socket, that is, processor chip.

The shared memory on the node is typically spread over banks that are directly attached to one particular socket. This is for instance illustrated in figure  , which shows the four-socket node of the Ranger supercomputer (no longer in production) and the two-socket node of the Stampede supercomputer, which contains an Intel Xeon Phi co-processor. In both designs you clearly see the memory chips that are directly connected to the sockets.

This is an example of a NUMA design: for a process running on some core, the memory attached to its socket is slightly faster to access than the memory attached to another socket.

One result of this is the first-touch phenomenon. Dynamically allocated memory is not actually allocated until it is first written to. Consider now the following OpenMP (section  ) code:

    double *array = (double*)malloc(N*sizeof(double));
    for (int i=0; i<N; i++)
      array[i] = 1;
    #pragma omp parallel for
    for (int i=0; i<N; i++)
      .... lots of work on array[i] ...

Because of first-touch, the array is allocated completely in the memory of the socket of the master thread. In the subsequent parallel loop the cores of the other socket will then have slower access to the memory they operate on. The solution here is to also make the initialization loop parallel, even if the amount of work in it may be negligible.

# 1.5 Locality and data reuse

By now it should be clear that there is more to the execution of an algorithm than counting the operations: the data transfer involved is important, and can in fact dominate the cost. Since we have caches and registers, the amount of data transfer can be minimized by programming in such a way that data stays as close to the processor as possible. Partly this is a matter of programming cleverly, but we can also look at the theoretical question: does the algorithm allow for it to begin with.

It turns out that in scientific computing data often interacts mostly with data that is close by in some sense, which leads to data locality; see section  . Often such locality derives from the nature of the application, as in the case of the PDEs you will see in chapter  . In other cases, such as molecular dynamics (chapter  ), there is no such intrinsic locality because all particles interact with all others, and considerable programming cleverness is needed to get high performance.

## 1.5.1 Data reuse and arithmetic intensity

In the previous sections you learned that processor design is somewhat unbalanced: loading data is slower than executing the actual operations. This imbalance is large for main memory and less for the various cache levels. Thus we are motivated to keep data in cache and keep the amount of data reuse as high as possible.

Of course, we need to determine first if the computation allows for data to be reused. For this we define the arithmetic intensity of an algorithm as follows: if $n$ is the number of data items that an algorithm operates on, and $f(n)$ the number of operations it takes, then the arithmetic intensity is $f(n)/n$.
(We can measure data items in either floating point numbers or bytes. The latter possibility makes it easier to relate arithmetic intensity to hardware specifications of a processor.)

Arithmetic intensity is also related to latency hiding: the concept that you can mitigate the negative performance impact of data loading by hiding it behind computational activity going on at the same time. For this to work, you need more computations than data loads to make this hiding effective. And that is the very definition of arithmetic intensity: a high ratio of operations per byte/word/number loaded.

### 1.5.1.1 Examples

Consider for example the vector addition

$$\forall_i\colon x_i\leftarrow x_i+y_i.$$

This involves three memory accesses (two loads and one store) and one operation per iteration, giving an arithmetic intensity of $1/3$. The axpy (for 'a times x plus y') operation

$$\forall_i\colon x_i\leftarrow a\,x_i+ y_i$$

has two operations, but the same number of memory accesses, since the one-time load of $a$ is amortized. It is therefore more efficient than the simple addition, with an intensity of $2/3$.

The inner product calculation

$$\forall_i\colon s\leftarrow s+x_i\cdot y_i$$

is similar in structure to the axpy operation, involving one multiplication and addition per iteration, on two vectors and one scalar. However, now there are only two load operations, since $s$ can be kept in register and only written back to memory at the end of the loop. The reuse here is $1$.

Next, consider the matrix-matrix product:

$$\forall_{i,j}\colon c_{ij} = \sum_k a_{ik}b_{kj}.$$

This involves $3n^2$ data items and $2n^3$ operations, which is of a higher order. The arithmetic intensity is $O(n)$, meaning that every data item will be used $O(n)$ times. This has the implication that, with suitable programming, this operation has the potential of overcoming the bandwidth/clock speed gap by keeping data in fast cache memory.

Exercise The matrix-matrix product, considered as operation, clearly has data reuse by the above definition. Argue that this reuse is not trivially attained by a simple implementation. What determines whether the naive implementation has reuse of data that is in cache?

[In this discussion we were only concerned with the number of operations of a given implementation, not the mathematical operation. For instance, there are ways of performing the matrix-matrix multiplication and Gaussian elimination algorithms in fewer than $O(n^3)$ operations [St:gaussnotoptimal,Pa:combinations]. However, this requires a different implementation, which has its own analysis in terms of memory access and reuse.]

The matrix-matrix product is the heart of the LINPACK benchmark [Dongarra1987LinpackBenchmark]; see section  . Using this as the sole measure of benchmarking a computer may give an optimistic view of its performance: the matrix-matrix product is an operation that has considerable data reuse, so it is relatively insensitive to memory bandwidth and, for parallel computers, properties of the network. Typically, computers will attain 60--90% of their peak performance on the Linpack benchmark. Other benchmarks may give considerably lower figures.
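To illustrate measuring intensity in bytes rather than data items, consider the axpy operation again; the bandwidth figure below is a made-up example, not a property of any specific processor. Per iteration there are 2 operations and $3\times 8$ bytes of traffic, so

$$\mathrm{intensity} = \frac{2\ \mathrm{flops}}{24\ \mathrm{bytes}} = \frac{1}{12}\ \mathrm{flop/byte},$$

and on a hypothetical machine with a memory bandwidth of 24 Gbyte/sec this kernel can therefore not run faster than 2 Gflop/sec, regardless of the clock speed. This way of reasoning returns in the roofline model below.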
### 1.5.1.2 The roofline model

There is an elegant way of talking about how arithmetic intensity, which is a statement about the ideal algorithm, not its implementation, interacts with hardware parameters and the actual implementation to determine performance. This is known as the roofline model [Williams:2009:roofline], and it expresses the basic fact that performance is bounded by two factors, illustrated in the first graph of figure  .

1. The peak performance, indicated by the horizontal line at the top of the graph, is an absolute bound on the performance, achieved only if every aspect of a CPU (pipelines, multiple floating point units) is perfectly used. The calculation of this number is purely based on CPU properties and clock cycle; it is assumed that memory bandwidth is not a limiting factor.
2. The number of operations per second is also limited by the product of the bandwidth, an absolute number, and the arithmetic intensity:
$$\frac{\mathrm{operations}}{\mathrm{second}}=
  \frac{\mathrm{operations}}{\mathrm{data\ item}}\cdot
  \frac{\mathrm{data\ items}}{\mathrm{second}}.$$
This is depicted by the linearly increasing line in the graph.

The roofline model is an elegant way of expressing that various factors lower the ceiling. For instance, if an algorithm fails to use the full SIMD width, this imbalance lowers the attainable peak. The second graph in figure  indicates various factors that lower the ceiling. There are also various factors that lower the available bandwidth, such as imperfect data hiding. This is indicated by a lowering of the sloping roofline in the third graph.

For a given arithmetic intensity, the performance is determined by where its vertical line intersects the roof line. If this is at the horizontal part, the computation is called compute-bound: performance is determined by characteristics of the processor, and bandwidth is not an issue. On the other hand, if that vertical line intersects the sloping part of the roof, the computation is called bandwidth-bound: performance is determined by the memory subsystem, and the full capacity of the processor is not used.

Exercise How would you determine whether a given program kernel is bandwidth or compute bound?

## 1.5.2 Locality

Since using data in cache is cheaper than getting data from main memory, a programmer obviously wants to code in such a way that data in cache is reused. While placing data in cache is not under explicit programmer control, even from assembly language, in most CPUs it is still possible, knowing the behaviour of the caches, to know what data is in cache, and to some extent to control it.

The two crucial concepts here are temporal locality and spatial locality. Temporal locality is the easiest to explain: this describes the use of a data element within a short time of its last use. Since most caches have an LRU replacement policy (section  ), if in between the two references less data has been referenced than the cache size, the element will still be in cache and therefore be quickly accessible. With other replacement policies, such as random replacement, this guarantee can not be made.
### 1.5.2.1 Temporal locality

As an example of temporal locality, consider the repeated use of a long vector:

    for (loop=0; loop<10; loop++) {
      for (i=0; i<N; i++) {
        ... = ... x[i] ...
      }
    }

Each element of x will be used 10 times, but if the vector (plus other data accessed) exceeds the cache size, each element will be flushed before its next use. Therefore, the use of x[i] does not exhibit temporal locality: subsequent uses are spaced too far apart in time for it to remain in cache.

If the structure of the computation allows us to exchange the loops:

    for (i=0; i<N; i++) {
      for (loop=0; loop<10; loop++) {
        ... = ... x[i] ...
      }
    }

the elements of x are now repeatedly reused, and are therefore more likely to remain in the cache. This rearranged code displays better temporal locality in its use of x[i].

### 1.5.2.2 Spatial locality

The concept of spatial locality is slightly more involved. A program is said to exhibit spatial locality if it references memory that is 'close' to memory it already referenced. In the classical von Neumann architecture with only a processor and memory, spatial locality should be irrelevant, since one address in memory can be as quickly retrieved as any other. However, in a modern CPU with caches, the story is different. Above, you have seen two examples of spatial locality:

• Since data is moved in cache lines rather than individual words or bytes, there is a great benefit to coding in such a manner that all elements of the cacheline are used. In the loop

    for (i=0; i<N*s; i+=s) {
      ... x[i] ...
    }

spatial locality is a decreasing function of the stride s. Let S be the cacheline size; then as s ranges from $1\ldots\mathtt{S}$, the number of elements used of each cacheline goes down from S to 1. Relatively speaking, this increases the cost of memory traffic in the loop: if $\mathtt{s}=1$, we load $1/\mathtt{S}$ cachelines per element; if $\mathtt{s}=\mathtt{S}$, we load one cacheline for each element. This effect is demonstrated in section  .
• A second example of spatial locality worth observing involves the TLB (section  ). If a program references elements that are close together, they are likely on the same memory page, and address translation through the TLB will be fast. On the other hand, if a program references many widely disparate elements, it will also be referencing many different pages. The resulting TLB misses are very costly; see also section  .

Exercise Consider the following pseudocode of an algorithm for summing $n$ numbers $x[i]$ where $n$ is a power of 2:

    for s=2,4,8,...,n/2,n:
      for i=0 to n-1 with steps s:
        x[i] = x[i] + x[i+s/2]
    sum = x[0]

Analyze the spatial and temporal locality of this algorithm, and contrast it with the standard algorithm

    sum = 0
    for i=0,1,2,...,n-1
      sum = sum + x[i]

Exercise Consider the following code, and assume that nvectors is small compared to the cache size, and length large.

    for (k=0; k<nvectors; k++)
      for (i=0; i<length; i++)
        a[k,i] = b[i] * c[k]

How do the following concepts relate to the performance of this code:

• Reuse
• Cache size
• Associativity

Would the following code, where the loops are exchanged, perform better or worse, and why?

    for (i=0; i<length; i++)
      for (k=0; k<nvectors; k++)
        a[k,i] = b[i] * c[k]

### 1.5.2.3 Examples of locality

Let us examine locality issues for a realistic example.
The matrix-matrix multiplication $C\leftarrow A\cdot B$ can be computed in several ways. We compare two implementations, assuming that all matrices are stored by rows, and that the cache size is insufficient to store a whole row or column.

First implementation:

    for i=1..n
      for j=1..n
        for k=1..n
          c[i,j] += a[i,k]*b[k,j]

Second implementation:

    for i=1..n
      for k=1..n
        for j=1..n
          c[i,j] += a[i,k]*b[k,j]

These implementations are illustrated in figure  . The first implementation constructs the $(i,j)$ element of $C$ by the inner product of a row of $A$ and a column of $B$; in the second, a row of $C$ is updated by scaling rows of $B$ by elements of $A$.

Our first observation is that both implementations indeed compute $C\leftarrow C+A\cdot B$, and that they both take roughly $2n^3$ operations. However, their memory behaviour, including spatial and temporal locality, is very different.

• [c[i,j]] In the first implementation, c[i,j] is invariant in the inner iteration, which constitutes temporal locality, so it can be kept in register. As a result, each element of $C$ will be loaded and stored only once. In the second implementation, c[i,j] will be loaded and stored in each inner iteration. In particular, this implies that there are now $n^3$ store operations, a factor of $n$ more than in the first implementation.
• [a[i,k]] In both implementations, a[i,k] elements are accessed by rows, so there is good spatial locality, as each loaded cacheline will be used entirely. In the second implementation, a[i,k] is invariant in the inner loop, which constitutes temporal locality; it can be kept in register. As a result, in the second case $A$ will be loaded only once, as opposed to $n$ times in the first case.
• [b[k,j]] The two implementations differ greatly in how they access the matrix $B$. First of all, b[k,j] is never invariant, so it will not be kept in register, and $B$ engenders $n^3$ memory loads in both cases. However, the access patterns differ. In the second case, b[k,j] is accessed by rows, so there is good spatial locality: cachelines will be fully utilized after they are loaded. In the first implementation, b[k,j] is accessed by columns. Because of the row storage of the matrices, a cacheline contains a part of a row, so for each cacheline loaded, only one element is used in the columnwise traversal. This means that the first implementation has more loads for $B$ by a factor of the cacheline length. There may also be TLB effects.

Note that we are not making any absolute predictions on code performance for these implementations, or even a relative comparison of their runtimes. Such predictions are very hard to make. However, the above discussion identifies issues that are relevant for a wide range of classical CPUs.

Exercise There are more algorithms for computing the product $C\leftarrow A\cdot B$. Consider the following:

    for k=1..n:
      for i=1..n:
        for j=1..n:
          c[i,j] += a[i,k]*b[k,j]

Analyze the memory traffic for the matrix $C$, and show that it is worse than the two algorithms given above.

### 1.5.2.4 Core locality

The above concepts of spatial and temporal locality were mostly properties of programs, although hardware properties such as cacheline length and cache size play a role in analyzing the amount of locality. There is a third type of locality that is more intimately tied to hardware: core locality.
A code's execution is said to exhibit core locality if write accesses that are spatially or temporally close are performed on the same core or processing unit. The issue here is that of cache coherence (section  ), where two cores both have a copy of a certain cacheline in their local stores. If they both read from it there is no problem. However, if one of them writes to it, the coherence protocol will copy the cacheline to the other core's local store. This takes up precious memory bandwidth, so it is to be avoided.

Core locality is not just a property of a program, but also to a large extent of how the program is executed in parallel.

## 1.5.3 Programming strategies for high performance

In this section we will look at how different ways of programming can influence the performance of a code. This will only be an introduction to the topic; for further discussion see the book by Goedecker and Hoisie [Goedeker:performance-book].

The full listings of the codes and explanations of the data graphed here can be found in chapter  . All performance results were obtained on the AMD Opteron processors of the Ranger computer [tacc:ranger].

## 1.5.4 Peak performance

For marketing purposes, it may be desirable to define a 'top speed' for a CPU. Since a pipelined floating point unit can yield one result per cycle asymptotically, you would calculate the theoretical peak performance as the product of the clock speed (in ticks per second), the number of floating point units, and the number of cores; see section  . This top speed is unobtainable in practice, and very few codes come even close to it. The Linpack benchmark is one measure of how close you can get to it; the parallel version of this benchmark is reported in the 'top 500'; see section  .

## 1.5.5 Pipelining

In section  you learned that the floating point units in a modern CPU are pipelined, and that pipelines require a number of independent operations to function efficiently. The typical pipelineable operation is a vector addition; an example of an operation that can not be pipelined is the inner product accumulation

    for (i=0; i<N; i++)
      s += a[i]*b[i];

The fact that s gets both read and written in every iteration halts the addition pipeline. One way to fill the floating point pipeline is to apply loop unrolling:

    for (i = 0; i < N/2-1; i ++) {
      sum1 += a[2*i] * b[2*i];
      sum2 += a[2*i+1] * b[2*i+1];
    }

Now there are two independent multiplies in between the accumulations. With a little indexing optimization this becomes:

    for (i = 0; i < N/2-1; i ++) {
      sum1 += *(a + 0) * *(b + 0);
      sum2 += *(a + 1) * *(b + 1);
      a += 2; b += 2;
    }

A first observation about this code is that we are implicitly using associativity and commutativity of addition: while the same quantities are added, they are now in effect added in a different order. As you will see in chapter  , in computer arithmetic this is not guaranteed to give the exact same result.

In a further optimization, we disentangle the addition and multiplication part of each instruction. The hope is that while the accumulation is waiting for the result of the multiplication, the intervening instructions will keep the processor busy, in effect increasing the number of operations per second.
```c
for (i = 0; i < N/2-1; i ++) {
  temp1 = *(a + 0) * *(b + 0);
  temp2 = *(a + 1) * *(b + 1);
  sum1 += temp1; sum2 += temp2;
  a += 2; b += 2;
}
```
Finally, we realize that the furthest we can move the addition away from the multiplication is to put it right in front of the multiplication of the next iteration:
```c
for (i = 0; i < N/2-1; i ++) {
  sum1 += temp1;
  temp1 = *(a + 0) * *(b + 0);
  sum2 += temp2;
  temp2 = *(a + 1) * *(b + 1);
  a += 2; b += 2;
}
s = temp1 + temp2;
```
Of course, we can unroll the operation by more than a factor of two. While we expect an increased performance because of the longer sequence of pipelined operations, large unroll factors need large numbers of registers. Asking for more registers than a CPU has is called register spill, and it will decrease performance. Another thing to keep in mind is that the total number of operations is unlikely to be divisible by the unroll factor. This requires cleanup code after the loop to account for the final iterations. Thus, unrolled code is harder to write than straight code, and people have written tools to perform such source-to-source transformations automatically. Cycle times for unrolling the inner product operation up to six times are given in table  . Note that the timings do not show a monotone behaviour at the unrolling by four. This sort of variation is due to various memory-related factors.

Cycle times for the inner product operation, unrolled up to six times:

Unroll factor | 1    | 2   | 3   | 4   | 5   | 6
Cycle time    | 6794 | 507 | 340 | 359 | 334 | 528

## 1.5.6 Cache size

Above, you learned that data from L1 can be moved with lower latency and higher bandwidth than from L2, and L2 is again faster than L3 or memory. This is easy to demonstrate with code that repeatedly accesses the same data:
```c
for (i=0; i<NRUNS; i++)
  for (j=0; j<size; j++)
    array[j] = 2.3*array[j]+1.2;
```
If the size parameter allows the array to fit in cache, the operation will be relatively fast. As the size of the dataset grows, parts of it will evict other parts from the L1 cache, so the speed of the operation will be determined by the latency and bandwidth of the L2 cache. This can be seen in figure  . The full code is given in section  .

Exercise Argue that with a large enough problem and an LRU replacement policy (section  ) essentially all data in the L1 will be replaced in every iteration of the outer loop. Can you write an example code that will let some of the L1 data stay resident?

Often, it is possible to arrange the operations to keep data in L1 cache. For instance, in our example, we could write
```c
blockstart = 0;
for (b=0; b<size/l1size; b++) {
  for (i=0; i<NRUNS; i++) {
    for (j=0; j<l1size; j++)
      array[blockstart+j] = 2.3*array[blockstart+j]+1.2;
  }
  blockstart += l1size;
}
```
assuming that the L1 size divides the dataset size evenly. This strategy is called cache blocking or blocking for cache reuse .

Exercise To arrive at the blocked code, the loop over j was split into a loop over blocks and an inner loop over the elements of the block; the outer loop over i was then exchanged with the loop over the blocks. In this particular example you could also simply exchange the i and j loops. Why may this not be optimal for performance?

## 1.5.7 Cache lines

Since data is moved from memory to cache in consecutive chunks named cachelines (see section  ), code that does not utilize all data in a cacheline pays a bandwidth penalty.
This is born out by a simple code for (i=0,n=0; i<L1WORDS; i++,n+=stride) array[n] = 2.3*array[n]+1.2; Here, a fixed number of operations is performed, but on elements that are at distance stride . As this stride increases, we expect an increasing runtime, which is born out by the graph in figure  . The graph also shows a decreasing reuse of cachelines, defined as the number of vector elements divided by the number of L1 misses (on stall; see section  ). The full code is given in section  . ## 1.5.8 TLB Top > Programming strategies for high performance > TLB As explained in section  , the TLB maintains a small list of frequently used memory pages and their locations; addressing data that are location on one of these pages is much faster than data that are not. Consequently, one wants to code in such a way that the number of pages accessed is kept low. Consider code for traversing the elements of a two-dimensional array in two different ways. #define INDEX(i,j,m,n) i+j*m array = (double*) malloc(m*n*sizeof(double)); /* traversal #1 */ for (j=0; j<n; j++) for (i=0; i<m; i++) array[INDEX(i,j,m,n)] = array[INDEX(i,j,m,n)]+1; /* traversal #2 */ for (i=0; i<m; i++) for (j=0; j<n; j++) array[INDEX(i,j,m,n)] = array[INDEX(i,j,m,n)]+1; The results (see Appendix  for the source code) are plotted in figures and  . Using $m=1000$ means that, on the AMD Opteron which has pages of $512$ doubles, we need roughly two pages for each column. We run this example, plotting the number TLB misses', that is, the number of times a page is referenced that is not recorded in the TLB. 1. In the first traversal this is indeed what happens. After we touch an element, and the TLB records the page it is on, all other elements on that page are used subsequently, so no further TLB misses occur. Figure  shows that, with increasing $n$, the number of TLB misses per column is roughly two. 2. In the second traversal, we touch a new page for every element of the first row. Elements of the second row will be on these pages, so, as long as the number of columns is less than the number of TLB entries, these pages will still be recorded in the TLB. As the number of columns grows, the number of TLB increases, and ultimately there will be one TLB miss for each element access. Figure  shows that, with a large enough number of columns, the number of TLB misses per column is equal to the number of elements per column. ## 1.5.9 Cache associativity Top > Programming strategies for high performance > Cache associativity There are many algorithms that work by recursive division of a problem, for instance the \indexac{FFT} algorithm. As a result, code for such algorithms will often operate on vectors whose length is a power of two. Unfortunately, this can cause conflicts with certain architectural features of a CPU, many of which involve powers of two. In section  you saw how the operation of adding a small number of vectors $$\forall_j\colon y_j= y_j+\sum_{i=1}^mx_{i,j}$$ is a problem for direct mapped caches or set-associative caches with associativity. As an example we take the AMD Opteron , which has an L1 cache of 64K bytes, and which is two-way set associative. Because of the set associativity, the cache can handle two addresses being mapped to the same cache location, but not three or more. Thus, we let the vectors be of size $n=4096$ doubles, and we measure the effect in cache misses and cycles of letting $m=1,2,\ldots$. 
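A minimal C sketch of the kind of loop being measured here (the array names, the loop order, and the specific value of m are my own assumptions, not the instrumented code used for the measurements):
```c
#define N 4096            /* vector length: 4096 doubles = 32KB per vector */
#define M 8               /* number of x vectors accumulated into y        */

double y[N];
double x[M][N];           /* row i holds the vector x_i                    */

/* y_j <- y_j + sum_i x_{i,j}. Because consecutive rows of x are exactly
   32KB apart, element j of every row maps to the same set of a 64KB
   two-way set-associative L1; once more than two of these lines are
   needed for the same j, they keep evicting each other.                   */
void add_vectors(void) {
  for (int j = 0; j < N; j++) {
    double s = y[j];
    for (int i = 0; i < M; i++)
      s += x[i][j];
    y[j] = s;
  }
}
```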
First of all, we note that we use the vectors sequentially, so, with a cacheline of eight doubles, we should ideally see a cache miss rate of $1/8$ times the number of vectors $m$. Instead, in figure  we see a rate approximately proportional to $m$, meaning that indeed cache lines are evicted immediately. The exception here is the case $m=1$, where the two-way associativity allows the cachelines of two vectors to stay in cache. Compare this to figure  , where we used a slightly longer vector length, so that locations with the same $j$ are no longer mapped to the same cache location. As a result, we see a cache miss rate around $1/8$, and a smaller number of cycles, corresponding to a complete reuse of the cache lines. Two remarks: the cache miss numbers are in fact lower than the theory predicts, since the processor will use prefetch streams. Secondly, in figure  we see a decreasing time with increasing $m$; this is probably due to a progressively more favourable balance between load and store operations. Store operations are more expensive than loads, for various reasons. ## 1.5.10 Loop nests Top > Programming strategies for high performance > Loop nests If your code has nested loops , and the iterations of the outer loop are independent, you have a choice which loop to make outer and which to make inner. Exercise Give an example of a doubly-nested loop where the loops can be exchanged; give an example where this can not be done. If at all possible, use practical examples from this book. If you have such choice, there are many factors that can influence your decision. Programming language: C~versus~Fortran If your loop describes the $(i,j)$ indices of a two-dimensional array, it is often best to let the $i$-index be in the inner loop for Fortran, and the $j$-index inner for C. Exercise Can you come up with at least two reasons why this is possibly better for performance? However, this is not a hard-and-fast rule. It can depend on the size of the loops, and other factors. For instance, in the matrix-vector product, changing the loop ordering changes how the input and output vectors are used. Parallelism model If you want to parallelize your loops with OpenMP , you generally want the outer loop to be larger than the inner. Having a very short outer loop is definitely bad. A short inner loop can also often be \emph{vectorized by the compiler} . On the other hand, if you are targeting a \indexac{GPU}, you want the large loop to be the inner one. The unit of parallel work should not have branches or loops. ## 1.5.11 Loop tiling Top > Programming strategies for high performance > Loop tiling In some cases performance can be increased by breaking up a loop into two nested loops, an outer one for the blocks in the iteration space, and an inner one that goes through the block. This is known as \indextermbusdef{loop}{tiling}: the (short) inner loop is a tile, many consecutive instances of which form the iteration space. For instance for (i=0; i<n; i++) ... becomes bs = ... /* the blocksize */ nblocks = n/bs /* assume that n is a multiple of bs */ for (b=0; b<nblocks; b++) for (i=b*bs,j=0; j<bs; i++,j++) ... For a single loop this may not make any difference, but given the right context it may. For instance, if an array is repeatedly used, but it is too large to fit into cache: for (n=0; n<10; n++) for (i=0; i<100000; i++) ... = ...x[i] ... then loop tiling may lead to a situation where the array is divided into blocks that will fit in cache: bs = ... 
/* the blocksize */ for (b=0; b<100000/bs; b++) for (n=0; n<10; n++) for (i=b*bs; i<(b+1)*bs; i++) ... = ...x[i] ... For this reason, loop tiling is also known as cache blocking . The block size depends on how much data is accessed in the loop body; ideally you would try to make data reused in L1 cache, but it is also possible to block for L2 reuse. Of course, L2 reuse will not give as high a performance as L1 reuse. Exercise Analyze this example. When is x brought into cache, when is it reused, and when is it flushed? What is the required cache size in this example? Rewrite this example, using a constant #define L1SIZE 65536 For a less trivial example, let's look at matrix transposition $A\leftarrow B^t$. Ordinarily you would traverse the input and output matrices: // regular.c for (int i=0; i<N; i++) for (int j=0; j<N; j++) A[i][j] += B[j][i]; Using blocking this becomes: // blocked.c for (int ii=0; ii<N; ii+=blocksize) for (int jj=0; jj<N; jj+=blocksize) for (int i=ii*blocksize; i<MIN(N,(ii+1)*blocksize); i++) for (int j=jj*blocksize; j<MIN(N,(jj+1)*blocksize); j++) A[i][j] += B[j][i]; Unlike in the example above, each element of the input and output is touched only once, so there is no direct reuse. However, there is reuse of cachelines. Figure  shows how one of the matrices is traversed in a different order from its storage order, for instance columnwise while it is stored by rows. This has the effect that each element load transfers a cacheline, of which only one element is immediately used. In the regular traversal, this streams of cachelines quickly overflows the cache, and there is no reuse. In the blocked traversal, however, only a small number of cachelines is traversed before the next element of these lines is needed. Thus there is reuse of cachelines, or spatial locality . The most important example of attaining performance through blocking is the matrix!matrix product!tiling . In section  we looked at the matrix-matrix multiplication, and concluded that little data could be kept in cache. With loop tiling we can improve this situation. For instance, the standard way of writing this product for i=1..n for j=1..n for k=1..n c[i,j] += a[i,k]*b[k,j] can only be optimized to keep c[i,j] in register: for i=1..n for j=1..n s = 0 for k=1..n s += a[i,k]*b[k,j] c[i,j] += s Using loop tiling we can easily keep parts of  a[i,:] in cache, assuming that a is stored by rows: for kk=1..n/bs for i=1..n for j=1..n s = 0 for k=(kk-1)*bs+1..kk*bs s += a[i,k]*b[k,j] c[i,j] += s ## 1.5.12 Optimization strategies Top > Programming strategies for high performance > Optimization strategies Figures and show that there can be wide discrepancy between the performance of naive implementations of an operation (sometimes called the reference implementation'), and optimized implementations. Unfortunately, optimized implementations are not simple to find. For one, since they rely on blocking, their loop nests are double the normal depth: the matrix-matrix multiplication becomes a six-deep loop. Then, the optimal block size is dependent on factors like the target architecture. We make the following observations: • Compilers are not able to extract anywhere close to optimal performance . • There are autotuning projects for automatic generation of implementations that are tuned to the architecture. This approach can be moderately to very successful. Some of the best known of these projects are Atlas  [atlas-parcomp] for Blas kernels, and Spiral  [spiral] for transforms. 
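For concreteness, the k-blocked matrix-matrix product sketched above could be written in plain C roughly as follows. The block size bs, the row-major layout, and the assumption that bs divides n are mine; a tuned kernel of the kind these autotuning projects generate would additionally block for registers and for the other cache levels.
```c
/* C = C + A*B with the k loop blocked, so that the bs elements a[i][kk..kk+bs)
   stay in cache while they are reused for every value of j.
   Sketch only: assumes bs divides n.                                          */
void matmul_tiled(int n, int bs, double A[n][n], double B[n][n], double C[n][n]) {
  for (int kk = 0; kk < n; kk += bs)
    for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++) {
        double s = 0.0;
        for (int k = kk; k < kk + bs; k++)
          s += A[i][k] * B[k][j];
        C[i][j] += s;
      }
}
```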
## 1.5.13 Cache aware and cache oblivious programming Top > Programming strategies for high performance > Cache aware and cache oblivious programming Unlike registers and main memory, both of which can be addressed in (assembly) code, use of caches is implicit. There is no way a programmer can load data explicitly to a certain cache, even in assembly language. However, it is possible to code in a cache aware' manner. Suppose a piece of code repeatedly operates on an amount of data that is less than the cache size. We can assume that the first time the data is accessed, it is brought into cache; the next time it is accessed it will already be in cache. On the other hand, if the amount of data is more than the cache size , it will partly or fully be flushed out of cache in the process of accessing it. We can experimentally demonstrate this phenomenon. With a very accurate counter, the code fragment for (x=0; x<NX; x++) for (i=0; i<N; i++) a[i] = sqrt(a[i]); will take time linear in N up to the point where a fills the cache. An easier way to picture this is to compute a normalized time, essentially a time per execution of the inner loop: t = time(); for (x=0; x<NX; x++) for (i=0; i<N; i++) a[i] = sqrt(a[i]); t = time()-t; t_normalized = t/(N*NX); The normalized time will be constant until the array a fills the cache, then increase and eventually level off again. (See section  for an elaborate discussion.) The explanation is that, as long as a[0]...a[N-1] fit in L1 cache, the inner loop will use data from the L1 cache. Speed of access is then determined by the latency and bandwidth of the L1 cache. As the amount of data grows beyond the L1 cache size, some or all of the data will be flushed from the L1, and performance will be determined by the characteristics of the L2 cache. Letting the amount of data grow even further, performance will again drop to a linear behaviour determined by the bandwidth from main memory. If you know the cache size, it is possible in cases such as above to arrange the algorithm to use the cache optimally. However, the cache size is different per processor, so this makes your code not portable, or at least its high performance is not portable. Also, blocking for multiple levels of cache is complicated. For these reasons, some people advocate \emph{cache oblivious programming}  [Frigo:oblivious] . Cache oblivious programming can be described as a way of programming that automatically uses all levels of the cache hierarchy . This is typically done by using a divide-and-conquer strategy, that is, recursive subdivision of a problem. As a simple example of cache oblivious programming is the \indextermbus{matrix} {transposition} operation $B\leftarrow A^t$. First we observe that each element of either matrix is accessed once, so the only reuse is in the utilization of cache lines. If both matrices are stored by rows and we traverse $B$ by rows, then $A$ is traversed by columns, and for each element accessed one cacheline is loaded. If the number of rows times the number of elements per cacheline is more than the cachesize, lines will be evicted before they can be reused. In a cache oblivious implementation we divide $A$ and $B$ as $2\times2$ block matrices, and recursively compute $B_{11}\leftarrow A_{11}^t$, $B_{12}\leftarrow A_{21}^t$, et cetera; see figure  . At some point in the recursion, blocks $A_{ij}$ will now be small enough that they fit in cache, and the cachelines of $A$ will be fully used. 
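A sketch of such a recursive transposition in C is given below. The cutoff size, the row-major layout, and the choice to halve one dimension at a time (rather than a full 2×2 split) are my own simplifications of the scheme just described.
```c
/* B <- A^t for row-major n x n matrices, by recursive blocking.
   Blocks are split until small enough for a direct loop; by then
   a block fits in cache and its cachelines are fully used.       */
#define CUTOFF 16

static void transpose_rec(int n,              /* leading dimension            */
                          int ri, int ci,     /* top-left corner of the block */
                          int rows, int cols, /* size of the block            */
                          const double *A, double *B) {
  if (rows <= CUTOFF && cols <= CUTOFF) {
    for (int i = 0; i < rows; i++)
      for (int j = 0; j < cols; j++)
        B[(ci + j) * n + (ri + i)] = A[(ri + i) * n + (ci + j)];
  } else if (rows >= cols) {
    int h = rows / 2;
    transpose_rec(n, ri,     ci, h,        cols, A, B);
    transpose_rec(n, ri + h, ci, rows - h, cols, A, B);
  } else {
    int h = cols / 2;
    transpose_rec(n, ri, ci,     rows, h,        A, B);
    transpose_rec(n, ri, ci + h, rows, cols - h, A, B);
  }
}

void transpose(int n, const double *A, double *B) { transpose_rec(n, 0, 0, n, n, A, B); }
```
Note that the recursion never references a cache size; once a block is small enough it fits in whatever cache is present.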
Hence, this algorithm improves on the simple one by a factor equal to the cacheline size. The cache oblivious strategy can often yield improvement, but it is not necessarily optimal. In the matrix-matrix product it improves on the naive algorithm, but it is not as good as an algorithm that is explicitly designed to make optimal use of caches  [GotoGeijn:2008:Anatomy] . See section  for a discussion of such techniques in stencil computations. ## 1.5.14 Case study: Matrix-vector product Top > Programming strategies for high performance > Case study: Matrix-vector product Let us consider in some detail the matrix-vector product $$\forall_{i,j}\colon y_i\leftarrow a_{ij}\cdot x_j$$ This involves $2n^2$ operations on $n^2+2n$ data items, so reuse is $O(1)$: memory accesses and operations are of the same order. However, we note that there is a double loop involved, and the $x,y$ vectors have only a single index, so each element in them is used multiple times. Exploiting this theoretical reuse is not trivial. In /* variant 1 */ for (i) for (j) y[i] = y[i] + a[i][j] * x[j]; the element y[i] seems to be reused. However, the statement as given here would write y[i] to memory in every inner iteration, and we have to write the loop as /* variant 2 */ for (i) { s = 0; for (j) s = s + a[i][j] * x[j]; y[i] = s; } to ensure reuse. This variant uses $2n^2$ loads and $n$ stores. This code fragment only exploits the reuse of  y explicitly. If the cache is too small to hold the whole vector  x plus a column of  a , each element of  x is still repeatedly loaded in every outer iteration. Reversing the loops as /* variant 3 */ for (j) for (i) y[i] = y[i] + a[i][j] * x[j]; exposes the reuse of  x , especially if we write this as /* variant 3 */ for (j) { t = x[j]; for (i) y[i] = y[i] + a[i][j] * t; } but now y is no longer reused. Moreover, we now have $2n^2+n$ loads, comparable to variant 2, but $n^2$ stores, which is of a higher order. It is possible to get reuse both of $x$ and $y$, but this requires more sophisticated programming. The key here is to split the loops into blocks. For instance: for (i=0; i<M; i+=2) { s1 = s2 = 0; for (j) { s1 = s1 + a[i][j] * x[j]; s2 = s2 + a[i+1][j] * x[j]; } y[i] = s1; y[i+1] = s2; } ` This is also called loop {unrolling}, or strip mining . The amount by which you unroll loops is determined by the number of available registers. # 1.6 Further topics Top > Further topics ## 1.6.1 Power consumption Top > Further topics > Power consumption \SetBaseLevel 2 Another important topic in high performance computers is their power consumption. Here we need to distinguish between the power consumption of a single processor chip, and that of a complete cluster. As the number of components on a chip grows, its power consumption would also grow. Fortunately, in a counter acting trend, miniaturization of the chip features has simultaneously been reducing the necessary power. Suppose that the feature size $\lambda$ (think: thickness of wires) is scaled down to $s\lambda$ with $s<1$. In order to keep the electric field in the transistor constant, the length and width of the channel, the oxide thickness, substrate concentration density and the operating voltage are all scaled by the same factor. ## 1.6.2 Derivation of scaling properties Top > Further topics > Derivation of scaling properties The properties of constant field scaling or Dennard scaling   [Bohr:30yearDennard,Dennard:scaling] are an ideal-case description of the properties of a circuit as it is miniaturized. 
One important result is that power density stays constant as chip features get smaller, and the frequency is simultaneously increased. The basic properties derived from circuit theory are that, if we scale feature size down by $s$: $$\begin{array} {|l|c|}\hline \hbox{Feature size}&\sim s\\ \hbox{Voltage}&\sim s\\ \hbox{Current}&\sim s \\ \hbox{Frequency}&\sim s\\ \hline \end{array}$$ Then we can derive that $$\hbox{Power} = V\cdot I \sim s^2,$$ and because the total size of the circuit also goes down with $s^2$, the power density stays the same. Thus, it also becomes possible to put more transistors on a circuit, and essentially not change the cooling problem. This result can be considered the driving force behind Moore's law , which states that the number of transistors in a processor doubles every 18 months. The frequency-dependent part of the power a processor needs comes from charging and discharging the capacitance of the circuit, so $$\begin{array} {|l|l|} \hline \hbox{Charge}&q=CV\\ \hbox{Work}&W=qV=CV^2\\ \hbox{Power}&W/\hbox{time}=WF=CV^2F \\ \hline \end{array}$$ This analysis can be used to justify the introduction of multicore processors. ## 1.6.3 Multicore Top > Further topics > Multicore At the time of this writing (circa 2010), miniaturization of components has almost come to a standstill, because further lowering of the voltage would give prohibitive leakage. Conversely, the frequency can not be scaled up since this would raise the heat production of the chip too far. Figure  gives a dramatic illustration of the heat that a chip would give off, if single-processor trends had continued. One conclusion is that computer design is running into a power wall , where the sophistication of a single core can not be increased any further (so we can for instance no longer increase \indexac{ILP} and pipeline depth ) and the only way to increase performance is to increase the amount of explicitly visible parallelism. This development has led to the current generation of multicore processors; see section  . It is also the reason GPU with their simplified processor design and hence lower energy consumption are attractive; the same holds for FPGA . One solution to the power wall problem is introduction of multicore processors. Recall equation  , and compare a single processor to two processors at half the frequency. That should have the same computing power, right? Since we lowered the frequency, we can lower the voltage if we stay with the same process technology. The total electric power for the two processors (cores) is, ideally, $$\left. \begin{array} {c} C_{\mathrm{multi}} = 2C\\ F_{\mathrm{multi}} = F/2\\ V_{\mathrm{multi}} = V/2\\ \end{array} \right\} \Rightarrow P_{\mathrm{multi}} = P/4.$$ In practice the capacitance will go up by a little over 2, and the voltage can not quite be dropped by 2, so it is more likely that $P_{\mathrm{multi}} \approx 0.4\times P$  [Chandrakasa:transformations] . Of course the integration aspects are a little more complicated in practice  [Bohr:ISSCC2009] ; the important conclusion is that now, in order to lower the power (or, conversely, to allow further increase in performance while keeping the power constant) we now have to start programming in parallel. ## 1.6.4 Total computer power Top > Further topics > Total computer power The total power consumption of a parallel computer is determined by the consumption per processor and the number of processors in the full machine. At present, this is commonly several Megawatts. 
By the above reasoning, the increase in power needed from increasing the number of processors can no longer be offset by more power-effective processors, so power is becoming the overriding consideration as parallel computers move from the petascale (attained in 2008 by the IBM Roadrunner ) to a projected exascale. In the most recent generations of processors, power is becoming an overriding consideration, with influence in unlikely places. For instance, the SIMD design of processors (see section  , in particular section  ) is dictated by the power cost of instruction decoding. \SetBaseLevel 1 ## 1.6.5 Operating system effects Top > Further topics > Operating system effects HPC practitioners typically don't worry much about the \indexacdef{OS}. However, sometimes the presence of the OS can be felt, influencing performance. The reason for this is the \indextermsubdef{periodic}{interrupt}, where the operating system upwards of 100 times per second interrupts the current process to let another process or a system daemon have a time slice . If you are running basically one program, you don't want the overhead and jitter , the unpredictability of process runtimes, this introduces. Therefore, computers have existed that basically dispensed with having an OS to increase performance. The periodic interrupt has further negative effects. For instance, it pollutes the cache and TLB . As a fine-grained effect of jitter, it degrades performance of codes that rely on barriers between threads, such as frequently happens in OpenMP (section  ). In particular in financial applications , where very tight synchronization is important, have adopted a Linux kernel mode where the periodic timer ticks only once a second, rather than hundreds of times. This is called a \indextermsubdef{tickless}{kernel}.
# name 'bpy' is undefined In test.py I have the following line. It was taken from this page: bpy.data.objects Then I start blender from the bash prompt: blender Rigged\ Hand.blend --python test.py The response on the terminal: Traceback (most recent call last): File "/home/sam/proj/Rigged Hand/a/setup.py", line 1, in <module> bpy.data.objects NameError: name 'bpy' is not defined Even though blender has opened my file, I can open the python prompt inside blender and type the same thing; I get the response <bpy_collection[5], BlendDataObjects> as expected. Why can I not use bpy in the script which I invoked from the command line? • You have to "import bpy" first. – Doyousketch2 Apr 16 '17 at 7:55 • @Doyousketch2 Okay, that was unobvious, because you don't have to import bpy from the python console inside Blender. But post it as an answer, and I'll accept it. – Wilson Apr 16 '17 at 13:04
# Typeset large Go boards I have need to typeset some very large Go boards, much larger than normal 19x19 boards, as part of a project I am undertaking in infinite Go, similar to my projects in infinite chess. Can you help me typeset very large Go boards,such as the following comparatively tiny instance: I am aware of the LaTeX package igo, which seems to produce high-quality Go board images, and while I don't have much experience with it, it does seem to be flexible on board size. I can imagine placing several of these boards next to each other, if need be, in order to make large boards. That is what I had had to do to make my various infinite chess positions, and it worked fine. I was not actually able to get the igo package working on my (miktek, Windows) system, however, and so I am very sorry that I cannot provide a minimal working example. Meanwhile, I find the input format used in the igo package to be inconvenient for the typical infinite Go positions, which have many dozens of stones. With the igo format, one evidently types something like \white{b4,c4,d4,e4,f4,g3,g2,c3} \black{b3,b2,c2,d3,e3,f3,f2} \begin{center} \shortstack{\showgoban\\White to kill} \end{center} to get the image: Thus, one specifies the coordinates of each stone separately. It would be much more convenient for me, however, to be able to specify the position something like this: showgoboard{........\\ .WWWWW..\\ .BWBBBW.\\ .BB..BW.\\ ........\\} or something similar, and also with (much) larger board sizes. The main point is that with the design of infinite go positions, one is often copying and pasting parts of a position or shifting everything by one unit or two units up or to the right, and so on, and this would be extremely irritating with the coordinate-style igo notation, but it would be easy with my notation. I want a notation system that allows one easily to copy parts of a position as a block into other parts of the board. Can someone help me out? I would like either a Go board package that can accept something like my notation, or else a translation macro that would convert my notation or something like it into the notation accepted by igo or another Go package. • Just some idea: I'd probably use a single TikZ grid instead of several boards placed next to each other. And since my TeX programming skills are pretty non-existent, I'd probably use some scripting language to translate your desired input into the right coordinates. But I am sure someone will have a better clue about this. – Uwe Ziegenhagen May 14 '18 at 3:51 • @UweZiegenhagen I'd rather use a Go-specific package, since I will also need to label stones and locations, etc., and I would expect that most of them can do that kind of thing very well. I'd rather not end up writing my own Go package. – JDH May 14 '18 at 3:54 • If you are going to be copying and pasting positions, I think it would be beneficial to define macros: \whitetokill takes two arguments and typesets your example at that location, \line{b}{5,6}{50,6} typesets the semi-infinite black line in your drawing, etc. But that's all beyond my TeX abilities. – Teepeemm May 15 '18 at 1:12 • The point is that the positions will be huge, and the design of them will involve dozens of minor adjustments and movings-around of portions. I want to lay them out and work with them in ASCII, where they can be visualized, rather than with an opaque coordinate representation. – JDH May 15 '18 at 1:21 ## First proposition Here is simple solution via TikZ (note: don't use . as empty intersection). 
\documentclass[tikz,margin=2mm]{standalone} \usepackage{etoolbox} \usepackage{xparse} \tikzset{ @go line/.code args={#1#2#3!}{ \edef\myrest{#3} \edef\myletter{#2} \draw (#1,-\mycount) +(0,-.5) -- +(0,.5) +(.5,0) -- +(-.5,0); \ifdefstring{\myletter}{W}{ \path[draw,fill=white] (#1,-\mycount) circle (.4); }{ \ifdefstring{\myletter}{B}{ \path[draw,fill=black] (#1,-\mycount) circle (.4); }{} } \ifdefstring{\myrest}{}{}{ \pgfmathsetmacro\mynext{int(#1+1)} \pgfkeysalso{@go line={\mynext}#3!} } }, go line/.style={@go line={0}#1!}, } \NewDocumentCommand\showgoboard{m}{ \begin{tikzpicture}[x=\mygounit,y=\mygounit] \edef\myconf{#1} \foreach [count=\mycount] \myline in \myconf { \tikzset{go line/.expanded=\myline} } \end{tikzpicture} } \def\mygounit{5mm} \begin{document} \showgoboard{ --------, -WWWWW--, -BWBBBW-, -BB--BW-, -------- } \end{document} ## Second proposition Enhanced version with better use of pgfkeys and possible annotations. The (customisable) parser uses the following convention: • . (a point) is an error! • B as black stone • W as white stone • - (a minus) as empty point • any other symbol as annotation letter \documentclass[tikz,margin=2mm]{standalone} \usepackage{lmodern} \usepackage{etoolbox} \usepackage{xparse} \tikzset{ @go draw intersection/.code 2 args={ \draw[line width=.5\pgflinewidth] (#1,-#2) +(0,-.5) -- +(0,.5) +(.5,0) -- +(-.5,0); }, @go draw stone/.code n args={3}{ \path[draw,fill=#3] (#1,-#2) circle (.4); }, @go draw annot/.code n args={3}{ \pgfmathsetmacro\gofsa{\gounit*.8} \pgfmathsetmacro\gofsb{\gofsa*1.2} \path (#1,-#2) node[fill=white,inner sep=.1em,node font=\fontsize{\gofsa pt}{\gofsb pt}\selectfont,anchor=mid] {#3}; }, @go draw W/.style 2 args={@go draw stone={#1}{#2}{white}}, @go draw B/.style 2 args={@go draw stone={#1}{#2}{black}}, @go draw -/.style 2 args={}, @go line/.code args={#1#2#3!}{ \pgfkeysalso{ @go draw intersection={#1}{\gocount}, @go draw #2/.try={#1}{\gocount} } \ifbool{pgfkeyssuccess}{}{\pgfkeysalso{@go draw annot={#1}{\gocount}{#2}}} \edef\gorest{#3} \ifdefstring{\gorest}{}{}{ \pgfmathsetmacro\gonext{int(#1+1)} \pgfkeysalso{@go line={\gonext}#3!} } }, go line/.style={@go line={0}#1!}, } \NewDocumentCommand\showgoboard{m}{ \begin{tikzpicture}[x=\gounit,y=\gounit,line width=\gounit*0.03] \edef\goconf{#1} \foreach [count=\gocount] \goline in \goconf { \tikzset{go line/.expanded=\goline} } \end{tikzpicture} } \def\gounit{6mm} \begin{document} \showgoboard{ ---------, --BBcd---, -WWWWW-e-, -BWBBBW--, -BB-aBWb-, --------- } \end{document} If you are willing to use LuaLaTex, converting your notation into the igo notation is reasonably simple: \documentclass[10pt,a4paper]{article} \usepackage{luacode} \newcommand{\white}[1]{WHITE:#1\\} \newcommand{\black}[1]{BLACK:#1\\} \newbox\tmp \newenvironment{go}{ \setbox\tmp\vbox\bgroup{} \directlua{startrecording()} }{ \egroup \directlua{stoprecording()} } \begin{luacode*} do local board = {} -- record a part (line) of the go board board = board .. buf .. 
"\n" end function startrecording() -- start recording the go board; called before the go environment starts board = "" end function row_to_char(row) -- converts a row number to a character (0=>a, 1=>b, 2=>c, etc) return string.char(65+row):lower() end function stoprecording() -- called when the go environment ends -- converts the board notation into a string for white & black -- removes the callback to stop recording local row = 0 -- current row number local col = 0 -- current column local white = "" -- the string of white positions local black = "" -- the string of black positions local c_char = "" -- helper variable containing the current character for pos = 1 , board:len() do -- iterate over the board string c_char = board:sub(pos,pos):lower() print("At pos"..row_to_char(row)..col.."("..pos.."), char:"..board:sub(pos,pos)) if c_char == '.' then col = col + 1 elseif c_char == 'w' then col = col + 1 white = white..','..row_to_char(row)..col elseif c_char == 'b' then col = col + 1 black = black..','..row_to_char(row)..col elseif c_char == '\n' then col = 0 row = row + 1 end end -- output the commands \white{white position} \black{black position} tex.print("\\white{"..white:sub(2).."}\\black{"..black:sub(2).."}") end end \end{luacode*} \begin{document} \begin{go} ..W..W.. BB.BB..B ........ WWBBWWBB \end{go} \end{document} • Thanks, Jonathan! I'll give this a try and see if it will work for me. It looks very promising. – JDH May 14 '18 at 11:28
# Proving that a space of compactly supported functions forms a topological vector space

Let $\mathcal{D}_K$ be the space of all infinitely differentiable functions that are compactly supported in $K\subseteq \mathbb{R}$. I have shown for $N \in \mathbb{N}$, $\epsilon > 0$ and $U_{N,\epsilon} := \{f \in \mathcal{D}_K: \max_{0\leq i \leq N} ||f^{(i)}||_\infty < \epsilon\}$, that the sets $f + U_{N,\epsilon}$ with $f\in \mathcal{D}_K$ form a basis of a topology on our space. I would now like to show that with this topology, our space becomes a topological vector space. However, I am having trouble showing the continuity of addition and scalar multiplication. Since I have a basis of my topology, it would seem helpful to use the fact that I only have to show that preimages of basis sets are open. I tried to use this fact by showing that $$\{(g_1,g_2) \in (\mathcal{D}_K \times \mathcal{D}_K) : g_1+g_2 \in h + U_{N,\epsilon}\} \in \mathcal{T} \times \mathcal{T}$$ but this did not really get me anywhere. Any hints would be greatly appreciated!

This has nothing to do with the particular space $\mathcal D_K$. You have a vector space $X$ and a sequence $(p_N)_{N\in\mathbb N}$ of norms with corresponding balls $B_N(x,r)=\{y\in X: p_N(x-y)<r\}$. They define a topology where $A\subseteq X$ is open if, for every $a\in A$ there are $N\in\mathbb N$ and $\varepsilon>0$ such that $B_N(a,\varepsilon)\subseteq A$. Continuity of multiplication with scalars and addition then just follows from $p_N(tx)=|t|p_N(x)$ and the triangle inequality, respectively: For the latter, consider $x,y\in X$ and an open set $A$ containing $x+y$. For some $N$ and $\varepsilon>0$ you then have $B_N(x+y,\varepsilon)\subseteq A$, and this yields $B_N(x,\varepsilon/2)+B_N(y,\varepsilon/2)\subseteq A$.
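To spell out the last inclusion: if $x'\in B_N(x,\varepsilon/2)$ and $y'\in B_N(y,\varepsilon/2)$, then by the triangle inequality
$$p_N\bigl((x'+y')-(x+y)\bigr) \le p_N(x'-x) + p_N(y'-y) < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon,$$
so $x'+y'\in B_N(x+y,\varepsilon)\subseteq A$. Hence the preimage of $A$ under addition contains the open set $B_N(x,\varepsilon/2)\times B_N(y,\varepsilon/2)$ around $(x,y)$, which is exactly the openness you need.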
# Implementing hexagon binning in mathematica Hexagon bin plots are a useful way of visualising large datasets of bivariate data. Here are a few examples: With bin frequency indicated by grey level... ..and by glyph size There are packages for creating this kind of plot in both "R" and Python. Obviously, the idea is similar to DensityHistogram plots. How would one go about generating hexagonal bins in Mathematica? Also, how would one control the size of a plotmarker based on the bin frequency? Update As a starting point I have tried to create a triangular grid of points: vert1 = Table[{x, Sqrt[3] y}, {x, 0, 20}, {y, 0, 10}]; vert2 = Table[{1/2 x, Sqrt[3] /2 y}, {x, 1, 41, 2}, {y, 1, 21, 2}]; verttri = Flatten[Join[vert1, vert2], 1]; overlaying some data.. data = RandomReal[{0, 20}, {500, 2}]; ListPlot[{verttri, data}, AspectRatio -> 1] next step might involve using Nearest: nearbin = Nearest[verttri]; ListPlot[nearbin[#] & /@ data, AspectRatio -> 1] This gives the location of vertices with nearby data points. Unfortunately, I can't see how to count those data points.. - Do you know about SmoothDensityHistogram - and likewise DensityHistogram? – Jens Jul 6 '13 at 5:38 MMA's VoronoiDiagram will be too slow for your needs. – VF1 Jul 6 '13 at 5:49 @Jens, Yes I am aware of these functions. Not quite what I am looking for. Depending on the context I think hexagonal binning gives better representation of the data due to it's regular tessellated symmetry and nearest neighbour optimisation than a square grid. – geordie Jul 6 '13 at 6:01 @geordie OK - so you insist on hexbins. That means work. The bins form a honeycomb lattice, which is not the same as a hexagonal lattice because it's not a 2D Bravias lattice. You have to choose two Bravais lattice basis vectors and two more basis vectors for the unit cell. Then for each data point, project its coordinate on those vectors, find the integer part of the projections and count them. That will lead to the histogram count for each cell. It doesn't involve finding Voronoi diagrams or using Nearest (too slow). Don't have time to execute these steps myself... – Jens Jul 6 '13 at 6:10 @geordie Don't forget to accept answers to your questions! Our unanswered pile is growing! – Dr. belisarius Feb 20 '14 at 2:32 With the set-up you already have, you can do nearbin = Nearest[Table[verttri[[i]] -> i, {i, Length@verttri}]]; counts = BinCounts[nearbin /@ data, {1, Length@verttri + 1, 1}]; which counts the number of data points nearest to each vertex. Then just draw the glyphs directly: With[{maxCount = Max@counts}, Graphics[ Table[Disk[verttri[[i]], 0.5 Sqrt[counts[[i]]/maxCount]], {i, Length@verttri}], Axes -> True]] The square root is so that the area of the glyphs, and the number of black pixels, corresponds to the number of data points in each bin. I used data = RandomVariate[MultinormalDistribution[{10, 10}, 7 IdentityMatrix[2]], 500] to get the following plot: As Jens has commented already, though, this is a unnecessarily slow way of going about it. One ought to be able to directly compute the bin index from the coordinates of a data point without going through Nearest. This way was easy to implement and works fine for a 500-point dataset though. Update: Here's an approach that doesn't require you to set up a background grid in advance. We'll directly find the nearest grid vertex for each data point and then tally them up. To do so, we'll break the hexagonal grid into rectangular tiles of size $1\times\sqrt3$. 
As it turns out, when you're in say the $[0,1]\times[0,\sqrt3]$ tile, your nearest grid vertex can only be one of the five vertices in the tile, $(0,0)$, $(1,0)$, $(1/2,\sqrt3/2)$, $(0,\sqrt3)$, and $(1,\sqrt3)$. We could work out the conditions explicitly, but let's just let Nearest do the work: tileContaining[{x_, y_}] := {Floor[x], Sqrt[3] Floor[y/Sqrt[3]]}; nearestWithinTile = Nearest[{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}, {0, Sqrt[3]}, {1, Sqrt[3]}}]; nearest[point_] := Module[{tile, relative}, tile = tileContaining[point]; relative = point - tile; tile + First@nearestWithinTile[relative]]; The point is that a NearestFunction over just five points ought to be extremely cheap to evaluate—certainly much cheaper than your NearestFunction over the several hundred points in verttri. Then we just have to apply nearest on all the data points and tally the results. tally = Tally[nearest /@ data]; With[{maxTally = Max[Last /@ tally]}, Graphics[ Disk[#[[1]], 1/2 Sqrt[#[[2]]/maxTally]] & /@ tally, Axes -> True, AxesOrigin -> {0, 0}]] - Nice update. This is indeed a lot faster. – geordie Jul 7 '13 at 7:37 As this answer got much more attention than I thought at first, I felt compelled to pack the answer as a function. The following function draws three different hex-histogram representations of your data. Code stolen liberally from the question and from Rahul's answer. f[data_, cs_, ptype_] := Module[{hc, vh, nearbin, counts, trr, maxCount}, hc = Flatten[Table[{i, j}, (*hexagons positions*) {p, 0, 1}, {j, 3 p/2 cs + Min[data[[All, 2]]], Max[data[[All, 2]]], 3 cs}, {i, p Sqrt[3]/2 cs + Min[data[[All, 1]]], Max[data[[All, 1]]], Sqrt[3] cs}], 2]; vh = cs Vertices[Hexagon]; nearbin = Nearest[Table[hc[[i]] -> i, {i, Length@hc}]]; counts = BinCounts[nearbin /@ data, {1, Length@hc + 1, 1}]; trr[v_, tr_] := Translate[Rotate[Polygon[v], Pi/2], tr]; maxCount = Max@counts; Graphics[Table[ Switch[ptype, 1, {Opacity[counts[[n]]/maxCount], trr[vh, hc[[n]]]}, 2, trr[counts[[n]]/maxCount vh, hc[[n]]], 3, trr[Sqrt[counts[[n]]/maxCount] vh, hc[[n]]]], {n, Length@hc}], Axes -> True] ] << Polytopes cs = 1/2;(*cell size*) data = RandomVariate[MultinormalDistribution[{10, 10}, 7 IdentityMatrix[2]], 500]; GraphicsRow@Table[f[data, cs, i], {i, 3}] ### The old answer can be found in the post's edit history - removed - This is great. I have incorporated your answer into a standalone function with inputs honeybin[xdata_, ydata_, cs_, {{xmin_, xmax_}, {ymin_, ymax_}}]:=... and the dimensions of the verttri array set by Max@xdata + padding etc. I can post it as an answer unless you want to do something similar? Also, I couldn't find a way to crop the hexagons within the axes (i also tried Framed->True). any suggestions? should I post what I have? – geordie Jul 7 '13 at 4:15 +1 that's sweet (like honey! :)). You can do it without the Polytopes package: Hexagon[w_] := Polygon[w + N[#] & /@ (Through[{Cos, Sin}[Pi # /3]] & /@ Range[0, 6])]; then ..Rotate[Hexagon[cs] ... – cormullion Jul 7 '13 at 6:54 @geordie Re: Cropping. Try with these two options combined: PlotRange -> { {...},{...} } and PlotRangeClipping -> True – Dr. belisarius Jul 7 '13 at 13:35 @geordie I tried to make a standalone function. See if it fits you – Dr. belisarius Jul 7 '13 at 14:34 You may want to steal from my updated answer, too... :) – Rahul Jul 7 '13 at 16:29 Here is a standalone function incorporating belisarius' standalone framework and Rahul Narain's fast use of NearestFunction. Full credit to these guys! 
Needs["Polytopes"]; honeybin[data_, cs_, ptype_, {{xmin_, xmax_}, {ymin_, ymax_}}] := (*"cs_" is the width of a bin,i.e.the distance between bin centers*) Module[{tileContaining, nearestWithinTile, nearest, tally, vh, hexr, trr}, tileContaining[{x_, y_}] := {Floor[x], Sqrt[3] Floor[y/Sqrt[3]]}; nearestWithinTile = Nearest[{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}, {0, Sqrt[3]}, {1, Sqrt[3]}}]; nearest[point_] := Module[{tile, relative}, tile = tileContaining[point]; relative = point - tile; tile + First@nearestWithinTile[relative]]; vh = cs 1.05 Vertices[Hexagon]/Sqrt[3]; trr[v_, tr_] := Translate[Rotate[Polygon[v], Pi/2], tr]; tally = Tally[cs (nearest /@ (data/cs))]; With[{maxTally = Max[Last /@ tally]}, Graphics[Table[ Switch[ptype, 1, {Blend[{Lighter[Red, 0.99], Darker[Red, 0.6]}, Sqrt[Last@tally[[n]]/maxTally]], trr[vh, First@tally[[n]]]}, 2, trr[Last@tally[[n]]/maxTally vh, First@tally[[n]]], 3, trr[Sqrt[Last@tally[[n]]/maxTally] vh, First@tally[[n]]]], {n, Length@tally}], Frame -> True, PlotRange -> {{xmin, xmax}, {ymin, ymax}}, PlotRangeClipping -> True]]] note there is a fudge factor (1.05) to get rid of whitespace between the cells. With a little Colour... data = RandomVariate[MultinormalDistribution[{10, 10}, 7 IdentityMatrix[2]], 1000]; honeybin[data, 1, 1, {{1, 15}, {3, 20}}] - Nice! By the way, the easiest way to do a different bin size is to divide all the data by cs, bin them, and then multiply the bin positions by cs. – Rahul Jul 8 '13 at 4:55 @RahulNarain I just tried modifying the following: tally = Tally[nearest /@ (data **/ cs**)]; and ...1, {Opacity[Last@tally[[n]]/maxTally], trr[vh, **cs** First@tally[[n]]]},...etc... is this what you meant? it gives very odd results. feel free to edit my post if you wish. – geordie Jul 8 '13 at 5:24 There, I fixed it. I changed the meaning of cs to "the width of a cell" so the original cells in the question correspond to cs = 1. Also, the results were no longer up to date, so I removed them; sorry if you're not happy about that. – Rahul Jul 8 '13 at 16:29 By the way, your function doesn't work for me unless I take << Polytopes; out of the function, but I don't know if that's me doing something wrong or what, so I haven't touched it. – Rahul Jul 8 '13 at 16:30 Geordie: ClearAll["Global"] only clears variables in the Global context, whereas your package definitions live inside its context. It is certainly possible to load a package in a function and use it immediately. With your previous definition, when you evaluate that cell, the symbols Vertices and Hexagon are created in the Global context at the parsing stage, but they're supposed to be in the Polytopes context. You could instead define them with their long forms as PolytopesVertices` so that it is created in that context at parse time. You'll get a shadowing warning otherwise @Rah – R. M. Jul 9 '13 at 0:43
# How do you simplify (root4(5))/(4root4(27))?

Dec 19, 2016

$\frac{\sqrt[4]{15}}{12}$

#### Explanation:

Starting from $\frac{\sqrt[4]{5}}{4 \cdot \sqrt[4]{27}}$, we first want to remove the radical from the denominator. We can do this by multiplying by ${\sqrt[4]{27}}^{3} / {\sqrt[4]{27}}^{3}$, which is equal to $1$.

$\frac{\sqrt[4]{5}}{4 \cdot \sqrt[4]{27}} \cdot {\sqrt[4]{27}}^{3} / {\sqrt[4]{27}}^{3} = \frac{\sqrt[4]{5} \cdot {\sqrt[4]{27}}^{3}}{4 \cdot \sqrt[4]{27} \cdot {\sqrt[4]{27}}^{3}} = \frac{\sqrt[4]{5} \cdot {\sqrt[4]{27}}^{3}}{4 \cdot {\sqrt[4]{27}}^{4}} = \frac{\sqrt[4]{5} \cdot {\sqrt[4]{27}}^{3}}{4 \cdot 27}$

$= \frac{\sqrt[4]{5} \cdot \sqrt[4]{27} \cdot \sqrt[4]{27} \cdot \sqrt[4]{27}}{108} = \frac{\sqrt[4]{5 \cdot {27}^{3}}}{108} = \frac{\sqrt[4]{5 \cdot {3}^{9}}}{108} = \frac{\sqrt[4]{5 \cdot {3}^{4} \cdot {3}^{4} \cdot 3}}{108} = \frac{3 \cdot 3 \sqrt[4]{5 \cdot 3}}{108} = \frac{9 \sqrt[4]{15}}{108} = \frac{\sqrt[4]{15}}{12}$

This is as simplified as it gets: $\frac{\sqrt[4]{15}}{12}$
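An alternative way to verify the result is to pull everything under a single fourth root, using $27 = 3^3$ and $\sqrt[4]{81} = 3$:
$$\frac{\sqrt[4]{5}}{4 \sqrt[4]{27}} = \frac{1}{4}\sqrt[4]{\frac{5}{27}} = \frac{1}{4}\sqrt[4]{\frac{5 \cdot 3}{81}} = \frac{1}{4} \cdot \frac{\sqrt[4]{15}}{3} = \frac{\sqrt[4]{15}}{12}$$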
Find the Price Index Number using Simple Aggregate Method in the following example. Use 1995 as base year in the following problem.

Commodity            | A  | B  | C  | D   | E
Price (in ₹) in 1995 | 42 | 30 | 54 | 70  | 120
Price (in ₹) in 2005 | 60 | 55 | 74 | 110 | 140

Solution

Commodity | Price in 1995 (base year) p0 | Price in 2005 (current year) p1
A         | 42                           | 60
B         | 30                           | 55
C         | 54                           | 74
D         | 70                           | 110
E         | 120                          | 140
Total     | 316                          | 439

From the table, ∑p0 = 316, ∑p1 = 439

Price Index Number (P01) = (∑p1 / ∑p0) × 100 = (439 / 316) × 100 = 138.92

Concept: Construction of Index Numbers
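The same computation as a small C sketch (the prices are taken from the table above; the function name and layout are my own):
```c
#include <stdio.h>

/* Simple Aggregate Method: P01 = (sum of current-year prices / sum of base-year prices) * 100 */
double simple_aggregate_index(const double *base, const double *current, int n) {
    double sum_base = 0.0, sum_current = 0.0;
    for (int i = 0; i < n; i++) {
        sum_base += base[i];
        sum_current += current[i];
    }
    return 100.0 * sum_current / sum_base;
}

int main(void) {
    double p1995[] = {42, 30, 54, 70, 120};   /* base year prices    */
    double p2005[] = {60, 55, 74, 110, 140};  /* current year prices */
    printf("P01 = %.2f\n", simple_aggregate_index(p1995, p2005, 5));  /* prints 138.92 */
    return 0;
}
```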
class allennlp.data.dataset_readers.coreference_resolution.conll.ConllCorefReader(max_span_width: int, token_indexers: Dict[str, allennlp.data.token_indexers.token_indexer.TokenIndexer] = None, lazy: bool = False)[source] Reads a single CoNLL-formatted file. This is the same file format as used in the SrlReader, but is preprocessed to dump all documents into a single file per train, dev and test split. See scripts/compile_coref_data.sh for more details of how to pre-process the Ontonotes 5.0 data into the correct format. Returns a Dataset where the Instances have four fields: text, a TextField containing the full document text, spans, a ListField[SpanField] of inclusive start and end indices for span candidates, and metadata, a MetadataField that stores the instance’s original text. For data with gold cluster labels, we also include the original clusters (a list of list of index pairs) and a SequenceLabelField of cluster ids for every span candidate. Parameters max_span_width: int, required. The maximum width of candidate spans to consider. token_indexersDict[str, TokenIndexer], optional This is used to index the words in the document. See TokenIndexer. Default is {"tokens": SingleIdTokenIndexer()}. text_to_instance(self, sentences:List[List[str]], gold_clusters:Union[List[List[Tuple[int, int]]], NoneType]=None) → allennlp.data.instance.Instance[source] Parameters sentencesList[List[str]], required. A list of lists representing the tokenised words and sentences in the document. gold_clustersOptional[List[List[Tuple[int, int]]]], optional (default = None) A list of all clusters in the document, represented as word spans. Each cluster contains some number of spans, which can be nested and overlap, but will never exactly match between clusters. Returns An Instance containing the following Fields: textTextField The text of the full document. spansListField[SpanField] A ListField containing the spans represented as SpanFields with respect to the document text. span_labelsSequenceLabelField, optional The id of the cluster which each possible span belongs to, or -1 if it does not belong to a cluster. As these labels have variable length (it depends on how many spans we are considering), we represent this a as a SequenceLabelField with respect to the spans ListField. allennlp.data.dataset_readers.coreference_resolution.conll.canonicalize_clusters(clusters:DefaultDict[int, List[Tuple[int, int]]]) → List[List[Tuple[int, int]]][source] The CONLL 2012 data includes 2 annotated spans which are identical, but have different ids. This checks all clusters for spans which are identical, and if it finds any, merges the clusters containing the identical spans. class allennlp.data.dataset_readers.coreference_resolution.winobias.WinobiasReader(max_span_width: int, token_indexers: Dict[str, allennlp.data.token_indexers.token_indexer.TokenIndexer] = None, lazy: bool = False)[source] Winobias is a dataset to analyse the issue of gender bias in co-reference resolution. It contains simple sentences with pro/anti stereotypical gender associations with which to measure the bias of a coreference system trained on another corpus. It is effectively a toy dataset and as such, uses very simplistic language; it has little use outside of evaluating a model for bias. The dataset is formatted with a single sentence per line, with a maximum of 2 non-nested coreference clusters annotated using either square or round brackets. 
For example: [The salesperson] sold (some books) to the librarian because [she] was trying to sell (them). Returns a list of Instances which have four fields: text, a TextField containing the full sentence text, spans, a ListField[SpanField] of inclusive start and end indices for span candidates, and metadata, a MetadataField that stores the instance’s original text. For data with gold cluster labels, we also include the original clusters (a list of list of index pairs) and a SequenceLabelField of cluster ids for every span candidate in the metadata also. Parameters max_span_width: int, required. The maximum width of candidate spans to consider. token_indexersDict[str, TokenIndexer], optional This is used to index the words in the sentence. See TokenIndexer. Default is {"tokens": SingleIdTokenIndexer()}. text_to_instance(self, sentence:List[allennlp.data.tokenizers.token.Token], gold_clusters:Union[List[List[Tuple[int, int]]], NoneType]=None) → allennlp.data.instance.Instance[source] Parameters sentenceList[Token], required. The already tokenised sentence to analyse. gold_clustersOptional[List[List[Tuple[int, int]]]], optional (default = None) A list of all clusters in the sentence, represented as word spans. Each cluster contains some number of spans, which can be nested and overlap, but will never exactly match between clusters. Returns An Instance containing the following Fields: textTextField The text of the full sentence. spansListField[SpanField] A ListField containing the spans represented as SpanFields with respect to the sentence text. span_labelsSequenceLabelField, optional The id of the cluster which each possible span belongs to, or -1 if it does not belong to a cluster. As these labels have variable length (it depends on how many spans we are considering), we represent this a as a SequenceLabelField with respect to the spans ListField.
# $\sum_{n\geq 0}a_n$ converges iff $\displaystyle \sum_{n\geq 0} \frac{a_n}{\sum_{k=0}^n a_k}$ converges The last problem I posted had a wrong statement. I recovered the correct one. Let $(a_n)$ be a sequence of positive real numbers. Prove that $\sum_{n\geq 0}a_n$ converges iff $\displaystyle \sum_{n\geq 0} \frac{a_n}{\sum_{k=0}^n a_k}$ converges The direct statement is easy to prove using comparison test. I'm (again!) stuck with the converse. I tried summation by part, without success. Note that the convergence of $\displaystyle \sum_{n\geq 0} \frac{a_n}{\sum_{k=0}^n a_k}$ implies $\displaystyle \frac{a_n}{\sum_{k=0}^n a_k} \to 0$ I don't know what to do next ... Call $S_n=\sum_{k=0}^n a_k$ and suppose $\sum a_n$ does not converge, hence $\lim S_n = +\infty$. Notice also $(S_n)$ is increasing. Then $\sum_{n=p+1}^q \frac{a_n}{S_n} \geq \sum_{n=p+1}^q \frac{a_n}{S_q} =\frac{S_q-S_p}{S_q}=1-\frac{S_p}{S_q}$. If $\sum \frac{a_n}{S_n}$ converges, letting $q$ tend to infinity gives: $\sum_{n=p+1}^{+\infty} \frac{a_n}{S_n}\geq 1$, which is absurd, as the rest of a convergent series should have limit 0.
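For completeness, here is the direct implication that the question already handles by comparison. Since all $a_n$ are positive, $S_n \ge S_0 = a_0 > 0$ for every $n$, hence
$$0 \le \frac{a_n}{\sum_{k=0}^n a_k} = \frac{a_n}{S_n} \le \frac{a_n}{a_0}.$$
If $\sum_{n\ge 0} a_n$ converges, then so does $\sum_{n\ge 0} a_n/a_0$, and the comparison test gives the convergence of $\sum_{n\ge 0} \frac{a_n}{S_n}$.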
# Injection iff Left Inverse/Proof 1 ## Theorem A mapping $f: S \to T, S \ne \O$ is an injection if and only if: $\exists g: T \to S: g \circ f = I_S$ where $g$ is a mapping. That is, if and only if $f$ has a left inverse. ## Proof Let: $\exists g: T \to S: g \circ f = I_S$ From Identity Mapping is Injection, $I_S$ is injective, so $g \circ f$ is injective. So from Injection if Composite is Injection, $f$ is an injection. Note that the existence of such a $g$ requires that $S \ne \O$. $\Box$ Now, assume $f$ is an injection. We now define a mapping $g: T \to S$ as follows. As $S \ne \O$, we choose $x_0 \in S$. By definition of injection: $f^{-1} {\restriction_{\Img f} } \to S$ is a mapping so it is possible to define: $\map g y = \begin{cases} x_0: & y \in T \setminus \Img f \\ \map {f^{-1} } y: & y \in \Img f \end{cases}$ It does not matter what the elements of $T \setminus \Img f$ are. Using the construction given, the equation $g \circ f = I_S$ holds whatever value (or values) we choose for $g \sqbrk {T \setminus \Img f}$. The remaining elements of $T$ can be mapped arbitrarily, and will not affect the image of any $x \in S$ under the mapping $g \circ f$. So, for all $x \in S$: $\map {g \circ f} x = \map g {\map f x}$ is the unique element of $S$ which $f$ maps to $\map f x$. This unique element is $x$. Thus $g \circ f = I_S$. $\blacksquare$
#### Neutron Inelastic Scattering Processes as Background for Double-Beta Decay Experiments 02 Apr 2007 nucl-ex, hep-ex arxiv.org/abs/0704.0306 Abstract. We investigate several Pb$(n,n'\gamma$) and Ge$(n,n'\gamma$) reactions. We measure $\gamma$-ray production from Pb$(n,n'\gamma$) reactions that can be a significant background for double-beta decay experiments which use lead as a massive inner shield. Particularly worrisome for Ge-based double-beta decay experiments are the 2041-keV and 3062-keV $\gamma$ rays produced via Pb$(n,n'\gamma$). The former is very close to the ^{76}Ge double-beta decay endpoint energy and the latter has a double escape peak energy near the endpoint. Excitation $\gamma$-ray lines from Ge$(n,n'\gamma$) reactions are also observed. We consider the contribution of such backgrounds and their impact on the sensitivity of next-generation searches for neutrinoless double-beta decay using enriched germanium detectors.
# What is lost in terms of approximation, when writing a problem in terms of a Differential Algebraic Equation (DAE) system rather than an ODE system?

It may be that we have a model where the following equation holds for some phenomenon:

$$(1)\quad x + y + z = T$$

Importantly, $T$ is a constant, i.e.:

$$(2) \quad \frac{\mathrm{d}T(t)}{\mathrm{d}t} = 0$$

We may be interested in the evolution of $x$, $y$ and $z$ as governed by their differential equations. If we want to make sure that we maintain condition $(2)$ (not guaranteed for any numerical integration scheme), then we may represent the problem as follows:

$$(3a)\quad \frac{\mathrm{d}x}{\mathrm{d}t} = f(x, y, z, t)$$

$$(3b)\quad \frac{\mathrm{d}y}{\mathrm{d}t} = g(x, y, z, t)$$

$$(3c)\quad \frac{\mathrm{d}z}{\mathrm{d}t} = \frac{\mathrm{d}T(t)}{\mathrm{d}t}- f(x,y,z,t) - g(x, y, z, t) = - f(x,y,z,t) - g(x, y, z, t)$$

So, some program might calculate the evolution of $x$, $y$ and $z$ by numerically integrating $x$ and $y$, while using condition $(2)$ to determine $z$ (generally, and during the calculations for $x$ and $y$).

One takeaway I have had from a numerical analysis course I was a part of recently is that "if you're not violating some conservation rule in the phenomenon being modelled, then you have the exact solution" (and numerical solutions are not exact solutions, usually). Thus, even though I might feel nice about saving $(2)$, I know I am losing something somewhere else -- what is it that I am losing?

An idea I have is that $(3)$ essentially allows a numerical integration scheme to assume that any difference in $T$ is solely made up by $z$, rather than by some combination of $x$, $y$ and $z$? Or, put differently, any error in the numerical integration scheme in violating $(2)$ is chalked up to $z$, so we may not be getting the actual dynamics of $z$, but rather, the dynamics of $z$ along with some error based on how $(2)$ is being violated by the numerical scheme?

How might one more precisely explain what the downside of using a DAE system is in terms of "what is lost in approximation"?

• Doesn't that imply that for a system with no conservation laws every numerical method produces the exact solution, since it would violate no conservation laws? Not every system is described fully by its conservation laws, not every system is integrable. – Kirill Dec 23 '14 at 7:07
• @Kirill I am basically just paraphrasing something as I understand it, but my understanding is limited. Also, I lack the mathematical sophistication needed to understand the link you supplied, or think of a real life phenomenon that doesn't conserve anything -- at the very least, energy is always conserved? – user89 Dec 23 '14 at 9:06
• @user89 In the shallow water equations (SWE), energy is not conserved across shocks. Indeed, energy is an entropy for the SWE. In the real physical system that SWE seek to model, kinetic and potential energy (modeled by SWE) is converted to heat (not modeled by SWE). – Jed Brown Dec 23 '14 at 22:29
• A simple example of a non-closed system with non-conserved energy is a falling particle whose height satisfies $\ddot h = -g$. – Kirill Dec 24 '14 at 7:38
• The "$T$" in your equation (3c) should not be there. – Jan Dec 24 '14 at 11:13

One takeaway I have had from a numerical analysis course I was a part of recently is that "if you're not violating some conservation rule in the phenomenon being modelled, then you have the exact solution" (and numerical solutions are not exact solutions, usually).
Kirill and JedBrown pointed out situations where conservation doesn't hold; open systems in thermodynamics are another situation. Even in those cases, you don't have the exact solution. Here's a stupidly trivial example that can't be solved exactly on a computer:

\begin{align} \dot{x} = 0, \quad x(0) = 1/3 \end{align}

One-third is not exactly representable in floating-point arithmetic because it has no finite expansion in base 2. So generally speaking, you should expect any numerical solution to have some amount of error in it (even if that error is small); in this case, the error is going to be in the machine representation of one-third.

An idea I have is that (3) essentially allows a numerical integration scheme to assume that any difference in $T$ is solely made up by $z$, rather than by some combination of $x$, $y$ and $z$? Or, put differently, any error in the numerical integration scheme in violating (2) is chalked up to $z$, so we may not be getting the actual dynamics of $z$, but rather, the dynamics of $z$ along with some error based on how (2) is being violated by the numerical scheme?

I don't see why this situation would be true in general, and numerical schemes normally have tolerances so that you can control the error in the numerical solution. In many cases, you can control the error on a per-variable basis, so conceivably, you could place tight tolerances on $y$ and $z$, plus a loose tolerance on $x$, and expect that you might have more error in $x$ than in $y$ or $z$.

How might one more precisely explain what the downside of using a DAE system is in terms of "what is lost in approximation"?

As you pointed out, you can express some physical systems in multiple ways. Any ODE can be expressed as a DAE; you can also take advantage of conservation laws to rewrite your system of differential equations. In general, rewriting an ODE as a DAE (in a trivial way, re-expressing $\dot{y} = f(y)$ as $\dot{y} - f(y) = 0$) and using numerical methods is more expensive, but not always. For instance, if, when using sparse direct methods to solve the linear systems for the numerical methods, the Jacobian matrix of the DAE is much sparser than the Jacobian matrix of the right-hand side of the ODE, then solving the DAE may be faster. For a while, this approach was advocated in combustion.

Some ODE methods don't preserve certain types of invariants (e.g., the conservation laws in your example), so the numerical solution might be mostly accurate, but fail to satisfy the invariant in a noticeable way. An example would be something like using a Runge-Kutta method for some Hamiltonian system like orbital dynamics; over short times, your numerical solution will be fine, but over long times, failure to conserve energy will start to introduce significant error in the numerical solution. This shortcoming can be corrected by using symplectic methods (e.g., Verlet), or you could solve the DAE, assuming the DAE is stable.

• "...or you could solve the DAE, assuming the DAE is stable." -- but if I do that, and now save the conservation law, where am I losing out instead? – user89 Dec 25 '14 at 0:50
• To summarize my answer, you usually give up something in execution time; solving the DAE is usually slower. You also might not be able to solve the DAE, depending on its index. – Geoff Oxberry Dec 27 '14 at 3:56
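To make the invariant-drift point in the answer concrete, here is a minimal sketch with a toy example of my own choosing (a harmonic oscillator with $H = (q^2 + p^2)/2$, not the OP's system): explicit Euler, a non-symplectic one-step method, lets the energy drift badly over long times, while symplectic Euler keeps it bounded; Verlet behaves like the latter.

```python
# Compare energy drift of explicit Euler vs. symplectic Euler for
# q' = p, p' = -q, whose exact energy H = (q^2 + p^2)/2 is constant (= 0.5).

def energy(q, p):
    return 0.5 * (q**2 + p**2)

dt, steps = 0.01, 100_000
q_e, p_e = 1.0, 0.0   # explicit Euler state
q_s, p_s = 1.0, 0.0   # symplectic Euler state

for _ in range(steps):
    # explicit Euler: both updates use the old state -> energy grows
    q_e, p_e = q_e + dt * p_e, p_e - dt * q_e
    # symplectic Euler: update p first, then use the new p -> energy stays bounded
    p_s = p_s - dt * q_s
    q_s = q_s + dt * p_s

print("explicit Euler energy drift:  ", energy(q_e, p_e) - 0.5)  # huge (~1e4)
print("symplectic Euler energy drift:", energy(q_s, p_s) - 0.5)  # tiny, O(dt)
```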
# Reactive Programming with F# in London

If you’re a registered member of the F#unctional Londoners user group, then you may already know that I'll be visiting London on June 23 and I’ll be talking about Reactive programming with F#. If you're not a registered member and occasionally visit London, then you should definitely register. The user group is organized by Carolyn Miller and Phil Trelford (whom I met some time ago at Microsoft Research in Cambridge). Among others, previous speakers include Robert Pickering (who also presented some samples based on my F# and Accelerator series). Finally, another reason for joining the group is that it has a great name (as well as a logo)!

## When, what & where?

By the way, I'll also have a free copy of my Real-World Functional Programming book to give away during the talk!

## Reactive Programming with F#

I'm sure you're already convinced to come. Nevertheless, you may still want to know what I'm going to talk about. There are many areas where F# offers clear benefits such as parallel & concurrent programming. I believe that reactive programming is another great area for F#. In reactive programming, we face quite different problems than in other programming styles. We (as the authors of the application) can no longer specify what the application should do. Instead, the application needs to be able to handle many possible combinations of events. This aspect of programming is sometimes called inversion of control.

Reactive programming is important for programming user interfaces, especially today when user interfaces are becoming more interactive and more "fancy". To demonstrate this, I'm working on some nice Silverlight demos for the talk! However, it is also needed for handling other kinds of events, such as the completion of a background task or a message from another application. We'll look at the two essential techniques that F# provides for reactive programming:

• Declarative event combinators - one way of writing reactive applications is to specify the whole event processing declaratively by saying "what" should be done with occurrences of events. This is particularly useful when we need to encode simpler logic with a clear data-flow.
• Imperative style using workflows - for more complicated interactions, we can use asynchronous workflows. This makes the code more explicit, but we get full control over the control-flow of the application. Even though this approach is more "imperative", it can be used for writing nicely composable functional code as well.

I'm looking forward to seeing you at the talk next week!

Published: Tuesday, 15 June 2010, 2:54 AM Tags: functional, random thoughts, universe, f#

# Programming user interfaces using F# workflows

Numerous Manning partners have already published several excerpts from my Real-World Functional Programming book. You can find a list on the book's web page. However, the last excerpt published at DotNetSlackers is particularly interesting. It discusses how to use F# asynchronous workflows to write GUI applications. This is a very powerful programming pattern that is very difficult to do in any other .NET language. We first discussed it with Don Syme during my internship at Microsoft Research and I found it very elegant, so I made some space for it in the book. In fact, the entire Chapter 16 discusses various reactive programming techniques that can be used in F#.
When designing applications that don't react to external events, you have lots of control flow constructs available, such as if-then-else expressions, for loops and while loops in imperative languages, or recursion and higher-order functions in functional languages. Constructs like this make it easy to describe what the application does. The control flow is clearly visible in the source code, so drawing a flowchart to describe it is straightforward. Understanding reactive applications is much more difficult. A typical C# application or GUI control that needs to react to multiple events usually involves mutable state. When an event occurs, it updates the state and may run more code in response to the event, depending on the current state. This architecture makes it quite difficult to understand the potential states of the application and the transitions between them. Using asynchronous workflows, we can write the code in a way that makes the control flow of the application visible even for reactive applications. You can read the complete article here: Programming user interfaces using F# workflows [^]. It is also worth adding that Manning offers 30% discount to DotNetSlackers readers (see the article for details!) Published: Thursday, 18 February 2010, 12:25 AM Tags: functional, random thoughts, universe, universe, f# # Deal of the day: Real-World Functional Programming Some time ago, I received my copies of Real-World Functional Programming. I started working on it back in May 2008 and as many people who had more experience with writing books told me, it took longer than I was expecting! Anyway, I have to say, it was worth it, holding the actual printed book with my name on the cover is just fantastic! The goal of the book is to present functional programming concepts and ideas in a readable form. I wanted to create a book that will teach you how to think functionally without using the usual shock therapy that people usually feel when seeing functional programming for the first time. There are already a couple of reviews that suggest I was quite successful: • Functional Programming for the Real World, by Tomas Petricek and Jon Skeet, has been a very helpful book for moving to F# from C#, as the authors do a fantastic job of helping to explain the differences between OOP and FP. James Black at Amazon.com • This book isn’t just a simple introduction to programming in F#; it’s an introductory text on functional programming covering the many reasons why it is time for this programming paradigm to finally be accepted by mainstream programmers. And it also contains much more... CliveT, Software Engineer at Red Gate Software • ... and there are many other great comments about the book at Manning book page. ## Deal of the day (January 24) Finally, here is one great news if you're interested in getting the book! Real-World Functional Programming is Manning's Deal of the Day this Sunday, January 24. On this day, the print book is available for \$20 from the Manning website, with code dotd0124. Published: Sunday, 24 January 2010, 5:00 PM Tags: random thoughts, functional, universe, writing # Real-World Functional Programming: Completed and printed! If you're following my blog or if you're interested in F# or functional programming in .NET, you probably noticed that I was working on a book Real-World Functional Programming. 
At some point, we called it Functional Programming for the Real-World, but then we changed the title back to a better-sounding version Real-World Functional Programming (subtitle With examples in F# and C#). The book is also the reason for the lower number of blog posts over the last year. Over the last month or so, we were doing the final edits, reviewing the final PDF version (I fixed quite a lot of minor issues, synchronized the book with the Beta 2 F# release and so on). Anyway, a few days ago, I received the following email (as an author, I receive the same emails as those who ordered the book through the Manning Early Access Program, so that I can see what we're sending to our dear readers):

Dear Tomas Petricek, We are pleased to announce that Real-World Functional Programming is now complete! As a MEAP subscriber you can download your copy of the finished ebook right now! (...) This ebook is the final version, identical to the softbound edition, which is currently being printed and will be available on December 24. If you chose the printed book option when you originally subscribed, we'll ship it to you automatically—no action required from you.

## Finally finished!

Yes, that's right. The book is finally completed and as far as I know, it was printed last week! If you already ordered the book, you won't receive it before Christmas, but it should come shortly after. I can't wait to see the book actually printed. The transition from the Word drafts I initially wrote to a final PDF version already felt fantastic and I thought "It looks like a real book!" Among other things, there are now graphical arrows with comments inside listings, which looks really great and makes code listings much easier to read. Now I can look forward to seeing the actual book. Maybe I'm too conservative, but I have to say that I'm really glad that I wrote the book before everything is going to be published just electronically! Here are a couple of links that you may find interesting if you want to look inside the book...

Published: Saturday, 19 December 2009, 9:54 PM Tags: functional, random thoughts, universe, writing

# Functional Programming: Available Chapter Excerpts & Discount

The work on my book Functional Programming for the Real World is slowly getting to the end. I'm currently creating the index for the last couple of chapters and doing final updates based on the feedback from reviews and also from the forum at manning.com (this means that if you have some suggestions, it's the best time to post them - I haven't yet replied to all of them, but I'll certainly do that before the manuscript goes to production).

Published: Sunday, 26 July 2009, 3:41 AM Tags: functional, random thoughts, universe, parallel

# Internship project: Reactive pattern matching

I already mentioned that I was doing my second internship with Don Syme at Microsoft Research in Cambridge. This time, I was in Cambridge for 6 months from October until April, so it has been more than a month since I left, but as you can guess I didn't have time to write anything about the internship until now... There isn't much to say though, because the internship was simply fantastic. Cambridge is a beautiful place (here are some autumn and winter photos), the Microsoft Research lab in Cambridge is full of smart people, so it is a perfect working environment (except that you realize that you're not as clever as you think :-)).
Also, it is just a few meters away from the Computer Laboratory of the Cambridge University, so there are always many interesting talks and seminars. So, big thanks to Don Syme, James Margetson and everyone else who I had a chance to work with during the internship. One of the reasons why I didn't have much time to write about the internship earlier is that I was invited to the Lang.NET Symposium shortly after the end of the internship. I had a chance to talk about my project there as well and there is even a video recording from the talk (the link is below), so you can watch it to find out more about my recent F# work. Published: Sunday, 17 May 2009, 11:00 PM Tags: random thoughts, universe, parallel, asynchronous, joinads # Source code for Real World Functional Programming available! As you can see, there has been quite a bit of silence on this blog for a while. There are two reasons for that - the first is that I'm still working on the book Real World Functional Programming, so all my writing activities are fully dedicated to the book. The second reason is that I'm having a great time doing an internship in the Programming Principles and Tools group at Microsoft Research in Cambridge with the F# team and namely the F# language designer Don Syme. The photo on the left side is the entrance gate to the Trinity College of the Cambridge University taken during the few days when there was a snow. I recently started using Live Gallery, so you can find more photos from Cambridge in my online gallery. Anyway, I just wanted to post a quick update with some information (and downloads!) related to the book... Published: Thursday, 12 February 2009, 2:10 AM Tags: random thoughts, c#, functional, universe, asynchronous # Announcing: Real-world Functional Programming in .NET If you’ve been reading my blog or seen some my articles, you know that I’m a big fan of the F# language and functional programming style. I’m also often trying to present a bit different view of C# and LINQ – for me it is interesting mainly because it brings many functional features to a main-stream language and allows using of many of the functional patterns in a real-world. Elegant way for working with data, which is the most commonly used feature of C# 3.0, is just one example of this functional approach. Talking about real-world applications of functional programming, there is also fantastic news about F#. It was announced last year that F# will become fully supported Visual Studio language and the first CTP version of F# was released this week! I always thought that the topics mentioned in the previous paragraph are really interesting and that functional programming will continue to become more and more important. That’s why I’m really excited by the news that I’d like to announce today – I’m writing a book about functional programming in F# and C#.... Published: Tuesday, 2 September 2008, 8:03 PM Tags: functional, random thoughts, universe, parallel, writing # Thesis: Client-side Scripting using Meta-programming I realized that I haven’t yet posted a link to my Bachelor Thesis, which I partially worked on during my visit in Microsoft Research and which I successfully defended last year. The thesis is about a client/server web framework for F# called F# WebTools, which I already mentioned here and its abstract is following: “Ajax” programming is becoming a de-facto standard for certain types of web applications, but unfortunately developing this kind of application is a difficult task. 
Developers have to deal with problems like a language impedance mismatch, a limited execution runtime in the web browser on the client side, and no integration between the client and server-side parts, which are developed as two independent applications but typically form a single, homogeneous application. In this work we present the first project that deals with all three mentioned problems but which still integrates with existing web technologies such as ASP.NET on the server and JavaScript on the client. We use the F# language for writing both the client and server-side parts of the web application, which makes it possible to develop client-side code in a type-safe programming language using a subset of the F# library, and we provide a way to write both server-side and client-side code as part of a single homogeneous type defining the web page logic. The code is executed heterogeneously, part as JavaScript on the client, and part as native code on the server. Finally we use monadic syntax for the separation of client and server-side code, tracking this separation through the F# type system.

The full text is available here: Client side scripting using meta-programming (PDF, 1.31MB)

Published: Monday, 17 March 2008, 10:07 AM Tags: random thoughts, universe, web, links
## hpfan101 one year ago $\lim_{x \rightarrow \infty} xe^{-x}$ 1. hpfan101 $e ^{-x}$ 2. anonymous it is the same as $\lim_{x\to \infty}\frac{x}{e^x}$ does that help? 3. hpfan101 Well, I got that far. And when I take the limit of that, I would get infinity over infinity. Not sure what to do next. 4. anonymous think what grows faster $$e^x$$ or $$x$$ 5. hpfan101 e^x 6. anonymous of course much much (much) faster 7. hpfan101 So, since the denominator will be a much bigger number, the fraction will be very small. So is the answer of this limit zero? 8. anonymous yes 9. hpfan101 Ok, thank you! :)
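For the record, the growth-comparison answer in the thread agrees with a one-line application of L'Hôpital's rule, since the quotient has the indeterminate form $\infty/\infty$:

\[
\lim_{x \to \infty} x e^{-x}
  = \lim_{x \to \infty} \frac{x}{e^{x}}
  \overset{\text{L'H}}{=} \lim_{x \to \infty} \frac{1}{e^{x}}
  = 0.
\]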
# Eigenvalues of complex special orthogonal matrix I've read that a matrix $P \in SO_3(\mathbb{C})$ must have an eigenvalue 1. How do I prove this? I understand the real case: the eigenvalues of $P \in SO_3(\mathbb{R})$ have complex modulus 1 and solve the characteristic polynomial of the matrix, which has real coefficients. Since the characteristic polynomial is a real cubic, if it has complex roots they are a conjugate pair (multiplying to 1) and the third eigenvalue must be 1 to make all the eigenvalues multiply to $1 = \det P$. Conceptually, this is the fixed axis of rotation. It seems like the analogy should hold to the complex case, but I can't figure it out. The complex matrix $P$ satisfies $P^TP=I$ and $\det(P)=1$. Let $\lambda$ be an eigenvalue of $P$ with eigenvector $x$. Then $\lambda\ne0$ and $Px=\lambda x$ implies $$x=P^TPx = \lambda P^Tx,$$ and $x$ is an eigenvector of $P^T$ to the eigenvalue $\lambda^{-1}$. As $P$ and $P^T$ have the same characteristic polynomial, it follows that $\lambda^{-1}$ is an eigenvalue of $P$ as well. Let $\lambda_1,\lambda_2,\lambda_3$ be the three eigenvalues of $P$. Assume that $1$ is not an eigenvalue of $P$. Assume $\lambda_1^2\ne1$. Then $\lambda_1^{-1}\ne \lambda_1$, and one of $\lambda_2,\lambda_3$ is equal to $\lambda_1^{-1}$. Wlog assume $\lambda_2=\lambda_1^{-1}$. Now $1=\det(P)=\lambda_1\lambda_2\lambda_3 = \lambda_3$. And $\lambda_3=1$. Contradiction. It remains to consider the case $\lambda_1=-1$. If $\lambda_2=-1$ then $\lambda_3=1$ from $\det(P)=1$. Thus we can concentrate on the case $\lambda_2^2\ne1$. It follows $\lambda_2\lambda_3=1$, and from $\det(P)=1$ we get $\lambda_2\lambda_3=-1$, which is impossible. Hence, the matrix $P$ has an eigenvalue $1$.
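A quick numerical spot check (not a proof, and assuming SciPy is available): exponentiating a complex antisymmetric matrix gives an element of $SO_3(\mathbb{C})$, since $P^T P = e^{A^T}e^{A} = e^{-A}e^{A} = I$ and $\det P = e^{\operatorname{tr} A} = 1$, and one of its eigenvalues should come out as $1$ up to rounding error. The construction is my own choice of test case, not the only way to produce such matrices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = M - M.T                      # complex antisymmetric: A^T = -A
P = expm(A)                      # lies in SO_3(C)

print(np.allclose(P.T @ P, np.eye(3)))    # True: P^T P = I
print(np.isclose(np.linalg.det(P), 1.0))  # True: det P = 1
print(np.linalg.eigvals(P))               # one eigenvalue is ~1
```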
# Why do modern TeX variants not support floating point arithmetic? I understand that, at the time when TeX was devised, no single standard for floating point calculations was available. But, these days there is IEEE 754. Why doesn't any TeX variant support it? Granted, there's LuaTeX, but IEEE 754 was popular long before that, so the question is justified. • Backward compatibility. – topskip Dec 5 '11 at 12:43 • Personally, I would already be satisfied with 64bit integer arithmetic. – Martin Scharrer Dec 5 '11 at 12:53 • @Seamus: All plots in my thesis are pgfplots-powered. That uses a lot floating point arithmetic. – Stefan Majewsky Dec 5 '11 at 13:19 • When you consider the thinness and preciseness of lines a printer is capable of, coupled to the absorptive properties of paper diffusing the ink (or blurring of toner prior to laser treatment etc.), not to mention the resolution of the human eye, you should be satisfied with pgfplots and 64bit integers at @MartinScharrer says. ;) – qubyte Dec 5 '11 at 15:12 • Contrary to the preceding comments, I can think of situations in which floating-point numbers would come in handy. See tex.stackexchange.com/questions/112599/…, for instance. Note that Bruno Le Floch and the rest of the LaTeX3 team have plans to make LaTeX3 compliant with IEEE-754-2008. – jub0bs May 31 '13 at 22:48 Answering such a question is difficult as records for 'Why no' are usually less easy to come by than for 'Why'. However, we can reconstruct a reasonable chain of events. Knuth wrote TeX to solve a specific problem: typesetting The Art of Computer Programming. Whilst he did make TeX Turing-complete, his model for creating documents is very much that the TeX input is close to the typesetting: looking at the source for The TeXbook for example it is clear that there is an approach having information 'resolved'. Knuth's use cases are also largely not those that might use things like an FPU. It's not surprising, therefore, that he has not extended TeX in this area. (This is as good a point as any to mention back-compatibility. Adding new primitives always has the potential to break something, as some user will doubtless have defined say `\fpexpr` themselves. However, the bigger risk in an archive-stable produce like TeX90 is that any change might bring in new bugs elsewhere: 'if it ain't broke, don't fix it' being a guiding idea.) Engine developers post-Knuth have all had particular issues they've been interested in. Crucially, these tend to be problems that we can't easily solve outside of the typesetting itself. Floating point work doesn't really fall within that scope: for any 'serious' stuff, one might reasonably expect to pre-process in a specialist tool anyway. If we look at major efforts in engine development, we can see that adding an FPU as a specific aim has been unlikely. Taking the ideas in roughly chronological order • e-TeX adds ideas that are very general in supporting TeX programming (`\protected`, new register ranges, etc.) which build on existing ideas in TeX90. Crucially here, whilst `\numexpr` and so on were added, they provide a few operators (`+`, `-`, `*`, `/`, `(`, `)`), with only parentheses not mapping directly to existing primitives. 
An FPU has to cover a lot more

• e-TeX also adds code that is 'close' to typesetting, for example extending widow/orphan control to multiple lines, adding `\middle`, etc.: all very much remote from needed FPU support
• pdfTeX added direct PDF output, and whilst it also includes various utility additions, most of those are trivial exposures of ideas from support libraries (e.g. elapsed time)
• XeTeX (and previous projects) extends TeX to support Unicode: focussed on character range, and includes ideas for dealing with a range of scripts
• XeTeX also adds support for system fonts: again, no link to FPU work
• LuaTeX does the above and exposes the internals of TeX using Lua: adding an FPU is a consequence of integrating the latter but is not a major driver of these efforts

Overall, therefore, we can see that for people actually doing the engine work there has been no obvious place to add an FPU prior to LuaTeX. Moreover, there has been little push from the user community. One can do a range of approximate operations in macros, for example see the `trig` package, which will enable support for floats for general typesetting. Doing more complex work tends to be best viewed as the job of a specialist tool: typesetting good-looking results is great, but if you want to do more analysis you probably need an interactive approach. Packages such as `pgfplots` do make it easier to use TeX for this type of work (it's my workflow), but out of the range of complex typesetting challenges one might point to, FPU support is pretty niche.

(I think it is also worth noting that implementing a full range of FP functions without some library support would be non-trivial: I speak from experience in `l3fp` work. The attraction of this work is likely to be low to engine development experts: doing it in the macro layer is an interesting intellectual challenge!)

• As comments on the question say, an IEEE 754 FPU can be handy, but that's not the same as vital. – Joseph Wright Jul 27 '17 at 8:43
• may i note that luatex does not guarantee backward compatibility. – barbara beeton Jul 28 '17 at 0:38

It boils down to killing the holy cow of backward compatibility, best explained by Donald Knuth himself in the video on The importance of stability for TeX.

• I think not only that, it is also not a trivial task (seeing the amount of work to add fp calculations to metapost) for no apparent gain. – Khaled Hosny Dec 5 '11 at 13:07
• How does adding new features kill backwards compatibility? The only issue I see is name clashes. – Stefan Majewsky Dec 5 '11 at 13:18
• @KhaledHosny High accuracy was not needed for Metafont, because it produces bitmaps. For Metapost it's different. – egreg Dec 5 '11 at 13:29
• @Stefan true, my thought was that one would replace the other. Is creeping featurism an issue? – uli Dec 5 '11 at 13:32
• @egreg: "no apparent gain" was in reference to TeX not MetaPost. – Khaled Hosny Dec 5 '11 at 15:32
tangent galvanometer

by Soshamim Tags: galvanometer, tangent

P: 6 Hey, I was wondering how would you find the value of the magnetic field inside a square coil? My books talk about finding the value of the mag. field at the center of a circular coil and every time I search google for galvanometers made with square loops, I only find information on circular loops. In case you're wondering what my class is doing, our teacher wants us to make a lab that he can use for his future students, so my partner and I chose to make a lab that determines the relationship between the current and the angle the compass needle makes with the vertical plane of the coil. Thanks in advance. -Syed

HW Helper Sci Advisor P: 3,033

Quote by Soshamim Hey, I was wondering how would you find the value of the magnetic field inside a square coil? My books talk about finding the value of the mag. field at the center of a circular coil and every time I search google for galvanometers made with square loops, I only find information on circular loops. In case you're wondering what my class is doing, our teacher wants us to make a lab that he can use for his future students, so my partner and I chose to make a lab that determines the relationship between the current and the angle the compass needle makes with the vertical plane of the coil. Thanks in advance. -Syed

I replied to your other post about this in the College thread. Here you have provided more information. You don't need to calculate the field produced by the rectangular coil. You need to calculate the torque on the coil because it is carrying an electric current and it is in a magnetic field produced by a permanent magnet. Here is a nice graphic of the workings of such a device http://hyperphysics.phy-astr.gsu.edu...ic/galvan.html To find the relevant mathematics, do a search for a magnetic dipole. The calculation is easiest for a rectangular coil, but can be generalized to any shape. For example http://www.pa.msu.edu/courses/2000sp...s/dipoles.html http://hyperphysics.phy-astr.gsu.edu...ic/magmom.html

HW Helper Sci Advisor P: 6,350

Quote by Soshamim Hey, I was wondering how would you find the value of the magnetic field inside a square coil? My books talk about finding the value of the mag. field at the center of a circular coil and every time I search google for galvanometers made with square loops, I only find information on circular loops. In case you're wondering what my class is doing, our teacher wants us to make a lab that he can use for his future students, so my partner and I chose to make a lab that determines the relationship between the current and the angle the compass needle makes with the vertical plane of the coil.

If this is a tangent galvanometer, the compass needle is in the geometric centre of the loop. It aligns with the vector sum of the coil's and earth's magnetic fields. So all you have to do is relate the magnetic field in the centre of a square coil to the current. You have to use the Biot-Savart law for that. It is not quite 4 times the field a distance L/2 from a long conducting wire since the sides of the loop are not arbitrarily long. But that is probably a fair approximation.

$$B = 4\frac{\mu_0I}{2\pi d} = \frac{4\mu_0I}{\pi L}$$

where $d = L/2$.

To get the exact value, you have to do a Biot-Savart integration over the length of the 4 sides. AM

P: 6 tangent galvanometer Hey AM, yes this is a tangent galvanometer. The equation you gave me, does that take into account the number of turns in the coil?
HW Helper Sci Advisor P: 3,033

Quote by Soshamim Hey AM, yes this is a tangent galvanometer. The equation you gave me, does that take into account the number of turns in the coil?

Sorry, I totally missed the "tangent" business in my earlier reply. AM's equation does not include multiple turns. It assumes one wire carrying a current I. If you have multiple turns, each carrying a current I, then effectively you have N times the current of a single wire, so you would need to multiply by N. I thought you were trying to find the field everywhere within your rectangular coil. Now I see you really only need it at the center. AM's equation is an approximation because it is the field from four wires of infinite length. Your wires are not infinite, and as he said you would need to do a calculation using the law of Biot-Savart to get the correct result for shorter wires. I have found one source that gives the result of that calculation for a wire of finite length at an arbitrary point in the vicinity of the wire. You can use the equation to solve your problem at the center of the coil, and if you want to you can explore the variation in the field as you move a bit away from the center. You will find the equation here: http://www.westbay.ndirect.co.uk/field.htm Click on the link titled "Magnetic Field due to a Current in a Wire". Make sure you use the equation for the short wire, Bsw. Here is another useful link http://www.magson.de/technology/tech41.html It gives the fields anywhere along the axis of a circular or rectangular coil. It also gives the fields for circular or square double coils (Helmholtz coils). As you can see from the pictures here http://physics.kenyon.edu/EarlyAppar...vanometer.html many of these devices use double coils.

HW Helper Sci Advisor P: 6,350

Quote by Soshamim Hey AM, yes this is a tangent galvanometer. The equation you gave me, does that take into account the number of turns in the coil?

As Dan pointed out, this is for a single wire loop. Just multiply by n = the number of turns. Follow the first link that Dan provided to work out the field of each side of the loop. Using the principle of superposition, the total field is the vector sum of the fields of all 4 wire segments. AM

P: 6 sup Andrew and Dan, I just wanted to say thank you for all the help, the links were pretty useful and explanatory. Sorry for not responding earlier though as I've been busy with studying for finals lately
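For reference, carrying out the Biot-Savart integration over the four finite sides gives $B = 2\sqrt{2}\,\mu_0 N I/(\pi L)$ at the centre of a square loop of side $L$ with $N$ turns, so the infinite-wire estimate above is high by a factor of $\sqrt{2}$. A small script (the numbers for $N$, $I$ and $L$ are illustrative only) compares the two:

```python
# Infinite-wire estimate vs. exact Biot-Savart result at the centre of a
# square coil: B_exact = 2*sqrt(2)*mu0*N*I/(pi*L), B_approx = 4*mu0*N*I/(pi*L).

from math import pi, sqrt

mu0 = 4e-7 * pi            # T m / A
N, I, L = 20, 0.5, 0.20    # turns, amperes, metres (example values)

B_infinite_wires = 4 * mu0 * N * I / (pi * L)
B_exact_square   = 2 * sqrt(2) * mu0 * N * I / (pi * L)

print(B_infinite_wires, "T  (four infinite wires at d = L/2)")
print(B_exact_square,   "T  (Biot-Savart, finite sides)")
print(B_infinite_wires / B_exact_square)   # sqrt(2) ~ 1.414
```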
# Role of Kuroshio Current in fish resource variability off southwest Japan ## Abstract Western boundary currents in the subtropics play a pivotal role in transporting warm water from the tropics that contribute to development of highly diverse marine ecosystem in the coastal regions. As one of the western boundary currents in the North Pacific, the Kuroshio Current (hereafter the Kuroshio) exerts great influences on biological resource variability off southwest Japan, but few studies have examined physical processes that attribute the coastal fish resource variability to the basin-scale Kuroshio variability. Using the high-quality fish catch data and high-resolution ocean reanalysis results, this study identifies statistical links of interannual fish resource variability off Sukumo Bay, Shikoku island of Japan, to subsurface ocean temperature variability in the Kuroshio. The subsurface ocean temperature variability off the south of Sukumo Bay exhibits vertically coherent structure with sea-surface height variability, which originates from the westward-propagating oceanic Rossby waves generated through surface wind anomalies in the Northwest Pacific. Although potential sources of the atmospheric variability remain unclarified, the remotely-induced oceanic Rossby waves contribute to fish resource variability off Sukumo Bay. These findings have potential applications to other coastal regions along the western boundary currents in the subtropics where the westward-propagating oceanic Rossby waves may contribute to coastal ocean temperature variability. ## Introduction Western boundary currents in the subtropics contribute to the establishment of rich marine ecosystem along the coastal regions through poleward transport of warm water from the tropics1. The warm water along the western boundary currents also plays a crucial role in the formation of a relatively warm and moist climate in the western side of ocean basin2,3. One of the western boundary currents in the subtropical North Pacific, called the Kuroshio Current (hereafter the Kuroshio), provides favorable environment for highly diverse marine ecosystems off Philippines, Taiwan, and the south of Japan4,5,6. Since these countries are traditionally dependent on fish resources from the Kuroshio as sources of protein foods, understanding physical processes in the Kuroshio and its potential link with fish resource variability is greatly important. The Kuroshio variability has been extensively studied across different temporal and spatial scales using observation data and modelling techniques7,8,9. In particular, off the south of Japan, the Kuroshio frequently undergoes small and large meanderings in a meridional orientation10,11,12,13,14,15, which have great influences on spatial distribution of fish species4,16. Although there are several physical processes proposed to explain the meridional fluctuations of the Kuroshio11,17,18,19,20,21,22, westward-propagating oceanic Rossby waves from the Northwest Pacific associated with ocean density variations are considered to play an important role in triggering small meanderings off Kyushu island in southwest Japan19,21,22,23. This variation may further cause small meanders off the southwest of Shikoku island24 and large meanders off Kii Peninsula13,17 on interannual timescales. 
However, most of the previous studies limit their discussions to the underlying physical processes behind the Kuroshio variability, and potential impacts on the amount of fish resources off the south of Japan have been poorly understood. One of the coastal bays located off southwest Japan, Sukumo Bay, is known to be greatly influenced by the Kuroshio variability25,26. Climatologically, the Kuroshio off Kyushu island first approaches south of Sukumo Bay, then turns eastward along the south coast of Japan. The northward approach of the Kuroshio brings warm water into Sukumo Bay, and the interaction with nutrient-rich cold water from Bungo Channel provides favorable conditions for biological activity off Sukumo Bay. Physical processes and predictability of warm water intrusion in Sukumo Bay are well-examined in the previous studies25,26,27,28,29, but the potential influence on the fish resource variability has yet to be investigated in the context of the links with basin-scale Kuroshio variability. To bridge a gap in our current understanding of the relationship between coastal fish resources and basin-scale ocean current variability, this study aims to (i) establish statistical links of the Kuroshio variability to the fish resources off Sukumo Bay and (ii) identify potential sources of the Kuroshio variability. For these purposes, we utilize high-resolution ocean reanalysis results in the Northwest Pacific with the ability to resolve the Kuroshio variability involving mesoscale eddies. Also, high-quality fish catch data with daily catch efforts off Sukumo Bay are used to estimate monthly fish resources and their variability. The identified relationship would be beneficial for establishing fish resource prediction and management off Sukumo Bay, and have potential applications to other coastal regions along the western boundary currents in the subtropics. ## Results ### Impact of local processes on fish resource variability Fish resources off Sukumo Bay undergo pronounced seasonal to interannual variability. Monthly fish catch per unit effort (CPUE) during 2006–2018 exhibits remarkable year-to-year variations with high values in 2007 and 2015 and low values in 2009 and 2010 (Fig. 1a). The interannual variability of the CPUE off Sukumo Bay is found to be due mostly to the wintertime variability. This can be clearly seen in Fig. 1b showing a strong seasonality of the CPUE with the large standard deviation during November-January. Further analysis of fish species data also shows that during boreal winter, small fishes such as sardine and horse mackerel become the dominant species off Sukumo Bay due probably to the combined effects of the nutrient-rich southward current from Bungo Channel and the northward warm water intrusion of the Kuroshio. Given that most of the annual CPUE is explained by the wintertime catches, this study focuses on identifying physical processes that control interannual variability in the wintertime CPUE off Sukumo Bay. Spatial patterns of November-January mean sea-surface height (SSH) and subsurface ocean temperature at 150 m depth (T150) are presented in Fig. 2a,b. The Kuroshio, the core of which is estimated by the strongest gradient of the SSH, flows northeastward off Shikoku island. 
The spatial distribution of T150 mostly follows a similar pathway of the Kuroshio, although some coastal region (132–132.5°E and 32–32.5°N) on the western flank of the Kuroshio shows relatively warm temperature, representing northward intrusion of warm water from the Kuroshio to Bungo Channel. This region well corresponds to the south of major fishing areas during boreal winter. The wintertime CPUE off Sukumo Bay shows moderately high correlation (0.4–0.5) with the SSH and T150 off the south of Sukumo Bay (Fig. 2c,d). Here we calculated the correlation between the November-January mean SSH/T150 data from 2006 to 2018 (i.e. 13 values) as two-dimensional (i.e. latitude/longitude) oceanic variables from the JCOPE reanalysis and the November-January mean one-dimensional CPUE data after aggregating all the CPUE data reported in different regions off the Sukumo Bay. Assuming that the one-dimensional CPUE data represents the area-averaged fish resource off the Sukumo Bay, we constructed the two-dimensional correlation coefficients between the CPUE and the SSH/T150 over the grid cells of the JCOPE model, so the estimated correlation coefficients have 11 degrees of freedom. It should also be noted that lag autocorrelation of November-January CPUE shows a very weak relation with the CPUE beyond two months (Fig. S1), so the wintertime CPUE variability is not much affected by other seasons’ CPUE variability. Although some region near the southwestern tip of Shikoku island shows relatively high correlation, both the SSH and the T150 off the south of Sukumo Bay (132–132.5°E and 32–32.5°N) show a coherent structure with moderately high correlations along the Kuroshio. In particular, the correlation coefficient with the T150 is statistically significant at 90% confidence level, although that with the SSH is not significant. Even if the outlier year of 2016 associated with extremely high CPUE (Fig. 1a) is removed from the analysis, the correlation values with the SSH and T150 remain relatively high above 0.4 off the south of Sukumo Bay (Fig. S2a,b), although the correlation coefficient with the T150 is statistically significant at 80% due to a decrease in the number of available data. This significant relationship with the T150 suggests that the above-normal subsurface ocean temperature associated with the northward approach of the Kuroshio may play an important role in providing favorable conditions for increase in the wintertime fish resources off Sukumo Bay. Since there is no clear relationship between the wintertime CPUE and the area-averaged Chlorophyll-a off the south of Sukumo Bay (Fig. S3), subsurface ocean temperature variations may directly contribute to changes in spatial distribution of small fishes off Sukumo Bay. ### Remote influences on Kuroshio variability Subsurface ocean temperature variability off Sukumo Bay shows strong association to the overlying SSH variability. This is evident in Fig. 3 showing time series of the T150 and SSH anomalies during boreal winter (November-January). It should be noted that we extend the time series back to 1993 when the satellite SSH data become available and incorporated into the ocean reanalysis results. The T150 anomalies remarkably fluctuate year by year, and exhibit a distinct in-phase relationship with the SSH anomalies. 
To investigate a physical link with the SSH variability and its potential sources, we define positive and negative events as years when the T150 anomalies during boreal winter exceed above one standard deviation and below negative one standard deviation, respectively. This leads to seven positive and six negative events, including higher than normal CPUE years (2007, 2015, and 2016) and lower than normal CPUE years (2013) in Fig. 1a. Since the negative events include the limited number of lower-than-normal CPUE year, we focus on the analysis of physical processes for the positive events. The in-phase relationship with the SSH variability is found for subsurface ocean temperature at different depths. During positive events, the ocean temperature in the upper 700 m shows a warmer-than-average condition compared to that in all the analysis years (Fig. S4a). Since the ocean density becomes lower than that in all the analysis years but does not show much difference below 700 m (Fig. S4b), the thermocline located at around 200–300 m, defined as the maximum vertical gradient of ocean density below the surface mixed-layer, appears to become deeper. The associated downwelling induces above-normal subsurface ocean temperature through moving the isotherm layers downward. A similar but opposite relationship can be seen for the negative events. The strong coherence between the SSH and subsurface ocean temperature involving the thermocline variability is also reported in the observational studies over the Kuroshio recirculation region30. To obtain useful insights into the generation of the SSH anomalies, spatial patterns of the SSH anomalies composited with every three-month lag during the positive events are presented in Fig. 4. The positive SSH anomalies off the south of Sukumo Bay during boreal winter (November-January; NDJ (0)) exhibit strong association with positive SSH anomalies along the Kuroshio pathway. In particular, the positive SSH anomalies off the south of Sukumo Bay appear to be a part of positive SSH anomalies east of Kyushu island. These anomalies seem to originate from the positive SSH anomalies to the southeast at three-month lag (132–136°E and 29–31°N in Fig. 4b) and further east at six-month lag (138–142°E and 29–31°N in Fig. 4c). On the other hand, there is very weak contribution of the SSH anomalies off the southwest Kyushu island at these lags. This lag-composite analysis suggests that most of the positive SSH anomalies off the south of Sukumo Bay may be related to westward migration of the positive SSH anomalies from the Northwest Pacific. To highlight the migration of the SSH anomalies from the Northwest Pacific, we plot two Hovmöller diagrams for north-south SSH anomalies along 132°E and west-east SSH anomalies along 30°N as a function of time lag (Fig. 5). The positive SSH anomalies at zero-month lag (i.e. Dec (0)) off the south of Sukumo Bay are associated with clear northward propagation of the anomalies with two-month lag (left panel in Fig. 5). This indicates that the positive SSH anomalies migrate northward along the east coast of Kyushu island under the influence of the northward Kuroshio (Fig. 4a). The positive SSH anomalies further originate from the westward-propagating SSH anomalies with 12-month lag in the region of 150–160°E at 30°N (right panel in Fig. 5). The westward propagation speed is estimated to be around 10 cm s−1. 
This is higher, by a factor of two, than the theoretical phase speed (around 5 cm s−1) for the first baroclinic Rossby waves at 30°N, probably due to the acceleration of the oceanic Rossby waves interacting with the mean current field31. As such, the oceanic Rossby waves generated in the Northwest Pacific may contribute to the generation of the positive SSH anomalies off Sukumo Bay. To explore the potential sources of the oceanic Rossby waves from the perspective of atmospheric forcing, we calculated composite anomalies with a one-year lag prior to the positive events for sea-level pressure (SLP) in Fig. 6a and surface wind stress curl in Fig. S5, respectively. The SLP anomalies exhibit significant positive values in the region of the Northwest Pacific (150–160°E at 30°N; Fig. 6a). This leads to anomalous anticyclonic wind stress curl that tends to induce downwelling oceanic Rossby waves, although the negative wind stress curl anomalies along 30°N show significant values in the very limited areas of 140–150°E (Fig. S5). The positive SLP anomalies in the Northwest Pacific are associated with negative sea-surface temperature (SST) anomalies to the northeast and positive SST anomalies to the southwest (Fig. 6b). These SST anomalies appear not to force the atmospheric variability, rather to be driven by the atmospheric forcing in such a way that anomalously dry northwesterly wind on the northeastern flank of the positive SLP anomalies in the Northwest Pacific increases evaporation and deepens the wintertime mixed layer, which enhances entrainment of cold water from deeper ocean. A similar but opposite process seems to operate for the positive SST anomalies on the southwestern flank of the positive SLP anomalies. Therefore, the SLP anomalies in the Northwest Pacific may be driven by remote forcing outside the Northwest Pacific, for example, atmospheric teleconnection from the tropical Pacific associated with El Niño-like condition (Fig. 6b), but potential sources of SLP anomalies require further detailed analysis and modelling studies. ## Discussions This study has identified the potential roles of the Kuroshio variability on the wintertime fish resource variability estimated off Sukumo Bay and the remote influence from the atmospheric variability in the Northwest Pacific. Previous studies have attributed fish resource variability to regional variations in the physical and bio-geochemical conditions4,5,6,16, but little attention has been paid to its remote link with basin-scale ocean and atmosphere variability. The Kuroshio undergoes interannual variability under the influence of westward-propagating oceanic Rossby waves from the Northwest Pacific19,21,22,23. Since the Kuroshio variability is suggested to cause interannual modulation of small meanders off the southwest of Shikoku island24, the remotely induced oceanic Rossby waves have the potential to modulate the ocean circulation near the southwest coast of Japan and the amount of fish resource through changes in the subsurface ocean temperature. The identified relationship can be applied to other coastal regions along the western boundary currents in the subtropics where the westward-propagating oceanic Rossby waves may contribute to coastal ocean temperature variability32. However, some issues behind the fish resource variability remain to be addressed. 
First, due to the limited availability of high-quality fish catch data (2006–2018), the established relationship between the fish resource and the ocean variables may be influenced by the outlier year, for example, 2016 with extremely high fish catch (Fig. 1a), but more prolonged datasets in the near future will help to verify the robustness of the statistical relationship. Second, the relationship between fish resource increase and anomalous warm water intrusion due to the northward approach of the Kuroshio may be straightforward, but due to the lack of coastal ocean observations and reanalysis results for bio-geochemical components, it is difficult to examine how the warm water affects the phyto- and zoo-plankton activities and the amount of fish species. Since the area off Sukumo Bay is largely influenced by nutrient-rich southward current from Bungo Channel during boreal winter, anomalous warm water advection from the Kuroshio into the channel may provide favorable temperature conditions for spawning grounds of small fishes such as sardines33 and enhance biomass and food availability for small larvae34. The warm water intrusion into Bungo Channel may be associated with more frequent occurrence of the Kyucho25, a coastal phenomenon with a sudden increase in coastal ocean current, but the Kyucho in Bungo Channel rarely occurs during boreal winter. Along this line, further observational and modelling studies involving the interaction of dynamical and bio-geochemical processes would advance our understanding of the fish resource variability off Sukumo Bay. Previous studies have mainly focused on westward propagation of cyclonic oceanic Rossby waves and eddies along 30°N accompanied with the Kuroshio pathway variations southeast of Kyushu island23,24. On the other hand, the present study has also identified potential roles of westward-propagating anticyclonic eddies in the Kuroshio variations. The oceanic Rossby waves that influence the Kuroshio variability are not solely generated by atmospheric forcing but through internal ocean processes such as eddy-mean current interaction31. Since the subtropical Northwest Pacific at 30°N also receives influence of southward current associated with re-circulation from the Kuroshio extension current system (Fig. S6), the anticyclonic eddies traveling from the north may propagate westward via interaction with the re-circulation current. However, the relative contributions from the internal oceanic processes and the atmospheric forcing remain unclear. This needs to await further ocean modelling studies in which the atmospheric forcing such as wind stress curl is prescribed with or without interannual variations. This study provides further implication for the development of fish resource prediction based on the ocean current information. Since there is a one-year lag relationship between anticyclonic eddies in the Northwest Pacific and anomalous increase in fish resource off Sukumo Bay, monitoring the SSH variability in the Northwest Pacific is imperative for predicting fish resource variability off Sukumo Bay one year ahead. Given that the SSH variability in the Northwest Pacific is driven mostly by the atmospheric variations, seasonal climate prediction over the Northwest Pacific using a global ocean-atmosphere coupled model may help extend the prediction lead time beyond one year. 
The long-term prediction information for fish resource would benefit fishery people to efficiently establish fishing plan as well as sustainably use and manage fish resources. ## Methodology We analyzed daily fish catch data based on mid-size surrounding nets off Sukumo Bay, Shikoku island of Japan from 2006 to 2018. The data covers the period since 2006 when all the local fishery cooperatives near Sukumo Bay were integrated into the Sukumo Bay fishery cooperative. To estimate fish resource variability, we calculated the monthly fish catch per unit effort (CPUE) defined as the fish catch divided by the number of fishing days per person. For oceanic and atmospheric data over the global domain, we used the sea-surface temperature (SST) from the Optimum Interpolation SST version 2 (OISST V2)35 and the basic atmospheric variables from the ERA-Interim reanalysis results36. To estimate biological activity in the upper ocean, we used the monthly Chlorophyll-a data with 4-km horizontal resolution obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite37. Here we analyzed all the above datasets with the same horizontal resolution of 1° × 1° over the domain. To examine coastal ocean variability, we utilized monthly reanalysis results from the Japan Coastal Ocean Predictability Experiment 2 (JCOPE2)38 with a high horizontal resolution of 1/12° in the Northwest Pacific (108–180°E and 10.5–62°N). The JCOPE2 system is based on the Princeton Ocean Model (POM)39 with 46 vertical levels of sigma coordinate. The boundary condition of the JCOPE2 system is provided by relatively low-resolution model with a 1/4° horizontal resolution and 21 vertical levels in the entire Pacific. The low-resolution model was spun-up for 15 years using the monthly mean surface forcing from an initial condition with no motion, annual mean ocean temperature and salinity40. The spin-up results of the low-resolution model over the last five years are used for the lateral boundary condition of the high-resolution model, then both the low and high resolution models are integrated from Oct 1992 using surface wind stress, heat and salt fluxes of the six-hourly NCEP/NCAR reanalysis data41 via bulk formulae42. Besides atmospheric forcing, the ocean model benefited from data assimilation via three-dimensional variational assimilation (3DVAR)43 method using ocean observation data of sea-surface height (SSH) anomaly derived from several satellites, the SST from the Advanced Very High Resolution Radiometer/Multi-Channel SST (AVHRR/MCSST), and subsurface ocean temperature/salinity from the NOAA Global Temperature-Salinity Profile Program (GTSPP). Here we analyzed JCOPE2 reanalysis results during 1993–2018. To calculate monthly anomalies, we subtracted monthly climatology and removed a linear trend using a least squares method. For the analysis of correlation between the fish resource (x) and the oceanic variables (y), we used Pearson product-moment correlation in which the least-squares regression line was calculated and the degree of the line fitting was evaluated using the least-squares method. Here, the correlation coefficient, R, was defined using the following equation: $${\rm{R}}=\frac{\sum x{\text{'}}_{i}\,y{\text{'}}_{i}}{\sqrt{\sum x{\text{'}}_{i}^{2}}\sqrt{\sum y{\text{'}}_{i}^{2}}}$$ (1) where the superscript’ of each variable means monthly detrended anomalies and the subscript i indicates the monthly time series. ## References 1. 1. Tittensor, D. P. et al. 
## References

1. Tittensor, D. P. et al. Global patterns and predictors of marine biodiversity across taxa. Nature 466, 1098–1101 (2010).
2. Minobe, S., Kuwano-Yoshida, A., Komori, N., Xie, S. P. & Small, R. J. Influence of the Gulf Stream on the troposphere. Nature 452, 206–209 (2008).
3. Beal, L. M. et al. On the role of the Agulhas system in ocean circulation and climate. Nature 472, 429–436 (2011).
4. Nakata, K., Hada, A. & Matsukawa, Y. Variations in food abundance for Japanese sardine larvae related to the Kuroshio meander. Fish. Oceanogr. 3, 39–49 (1994).
5. Zenimoto, K. et al. The effects of seasonal and interannual variability of oceanic structure in the western Pacific North Equatorial Current on larval transport of the Japanese eel Anguilla japonica. J. Fish. Biol. 74, 1878–1890 (2009).
6. Hsiao, S. H., Fang, T. H., Shih, C. T. & Hwang, J. S. Effects of the Kuroshio Current on copepod assemblages in Taiwan. Zool. Stud. 50, 475–490 (2011).
7. Mizuno, K. & White, W. B. Annual and interannual variability in the Kuroshio current system. J. Phys. Oceanogr. 13, 1847–1867 (1983).
8. Qiu, B. & Lukas, R. Seasonal and interannual variability of the North Equatorial Current, the Mindanao Current, and the Kuroshio along the Pacific western boundary. J. Geophys. Res. Oce. 101, 12315–12330 (1996).
9. Feng, M., Mitsudera, H. & Yoshikawa, Y. Structure and variability of the Kuroshio Current in Tokara Strait. J. Phys. Oceanogr. 30, 2257–2276 (2000).
10. Nitani, H. Variation of the Kuroshio south of Japan. J. Oceanogr. Soc. Japan 31, 154–173 (1975).
11. White, W. B. & McCreary, J. P. On the formation of the Kuroshio meander and its relationship to the large-scale ocean circulation. Deep-Sea Res. 23, 33–47 (1976).
12. Solomon, H. Occurrence of small “trigger” meanders in the Kuroshio off southern Kyushu. J. Oceanogr. 34, 81–84 (1978).
13. Kawabe, M. Sea level variations along the south coast of Japan and the large meander in the Kuroshio. J. Oceanogr. Soc. Japan 36, 97–104 (1980).
14. Sekine, Y. & Toba, Y. Velocity variation of the Kuroshio during formation of the small meander south of Kyushu. J. Oceanogr. Soc. Japan 37, 87–93 (1981).
15. Kawabe, M. Variations of current path, velocity, and volume transport of the Kuroshio in relation with the large meander. J. Phys. Oceanogr. 25, 3103–3117 (1995).
16. Nakata, H., Funakoshi, S. & Nakamura, M. Alternating dominance of postlarval sardine and anchovy caught by coastal fishery in relation to the Kuroshio meander in the Enshu-nada Sea. Fish. Oceanogr. 9, 248–258 (2000).
17. Sekine, Y. A numerical experiment on the path dynamics of the Kuroshio with reference to the formation of the large meander path south of Japan. Deep-Sea Res. 37, 359–380 (1990).
18. Qiu, B. & Miao, W. Kuroshio path variations south of Japan: Bimodality as a self-sustained internal oscillation. J. Phys. Oceanogr. 30, 2124–2137 (2000).
19. Mitsudera, H., Waseda, T., Yoshikawa, Y. & Taguchi, B. Anticyclonic eddies and Kuroshio meander formation. Geophys. Res. Lett. 28, 2025–2028 (2001).
20. Ebuchi, N. & Hanawa, K. Influence of mesoscale eddies on variations of the Kuroshio path south of Japan. J. Oceanogr. 59, 25–36 (2003).
21. Usui, N., Tsujino, H., Fujii, Y. & Kamachi, M. Generation of a trigger meander for the 2004 Kuroshio large meander. J. Geophys. Res. Oce. 113, C01012, https://doi.org/10.1029/2007JC004266 (2008).
22. Usui, N., Tsujino, H., Nakano, H. & Fujii, Y. Formation process of the Kuroshio large meander in 2004. J. Geophys. Res. Oce. 113, C08047, https://doi.org/10.1029/2007JC004675 (2008).
23. Usui, N., Tsujino, H., Nakano, H. & Matsumoto, S. Long-term variability of the Kuroshio path south of Japan. J. Oceanogr. 69, 647–670 (2013).
24. Kashima, M. et al. Quasiperiodic small meanders of the Kuroshio off Cape Ashizuri and their inter-annual modulation caused by quasiperiodic arrivals of mesoscale eddies. J. Oceanogr. 65, 73–80 (2009).
25. Takeoka, H., Akiyama, H. & Kikuchi, T. The Kyucho in the Bungo Channel, Japan—Periodic intrusion of oceanic warm water. J. Oceanogr. 49, 369–382 (1993).
26. Akiyama, H. & Saitoh, S. I. The Kyucho in Sukumo Bay induced by Kuroshio warm filament intrusion. J. Oceanogr. 49, 667–682 (1993).
27. Isobe, A., Guo, X. & Takeoka, H. Hindcast and predictability of sporadic Kuroshio-water intrusion (kyucho in the Bungo Channel) into the shelf and coastal waters. J. Geophys. Res. Oce. 115, C04023, https://doi.org/10.1029/2009JC005818 (2010).
28. Nagai, T. & Hibiya, T. Numerical simulation of tidally induced eddies in the Bungo Channel: A possible role for sporadic Kuroshio-water intrusion (kyucho). J. Oceanogr. 68, 797–806 (2012).
29. Nagai, T. & Hibiya, T. Effects of tidally induced eddies on sporadic Kuroshio-water intrusion (kyucho). J. Oceanogr. 69, 369–377 (2013).
30. Ebuchi, N. & Hanawa, K. Mesoscale eddies observed by TOLEX-ADCP and TOPEX/POSEIDON altimeter in the Kuroshio recirculation region south of Japan. J. Oceanogr. 56, 43–57 (2000).
31. Dewar, W. K. On “too fast” baroclinic planetary waves in the general circulation. J. Phys. Oceanogr. 28, 1739–1758 (1998).
32. Hill, K. L., Robinson, I. S. & Cipollini, P. Propagation characteristics of extratropical planetary waves observed in the ATSR global sea surface temperature record. J. Geophys. Res. Oce. 105, 21927–21945 (2000).
33. Kimura, R., Watanabe, Y. & Zenitani, H. Nutritional condition of first-feeding larvae of Japanese sardine in the coastal and oceanic waters along the Kuroshio Current. ICES J. Mar. Sci. 57, 240–248 (2000).
34. Nakata, K., Zenitani, H. & Inagake, D. Differences in food availability for Japanese sardine larvae between the frontal region and the waters on the offshore side of Kuroshio. Fish. Oceanogr. 4, 68–79 (1995).
35. Reynolds, R. W., Rayner, N. A., Smith, T. M., Stokes, D. C. & Wang, W. An improved in situ and satellite SST analysis for climate. J. Climate 15, 1609–1625 (2002).
36. Dee, D. P. et al. The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteorol. Soc. 137, 553–597 (2011).
37. Hu, C., Lee, Z. & Franz, B. Chlorophyll a algorithms for oligotrophic oceans: A novel approach based on three-band reflectance difference. J. Geophys. Res. Oce. 117, C01011, https://doi.org/10.1029/2011JC007395 (2012).
38. Miyazawa, Y. et al. Water mass variability in the western North Pacific detected in a 15-year eddy resolving ocean reanalysis. J. Oceanogr. 65, 737–756 (2009).
39. Mellor, G. L., Häkkinen, S. M., Ezer, T. & Patchen, R. C. A generalization of a sigma coordinate ocean model and an intercomparison of model vertical grids. Oce. Forecast. 55–72 (2002).
40. Conkright, M. E. et al. World Ocean Database 2001, Volume 1: Introduction. NOAA Atlas NESDIS 42 (ed. Levitus, S.) 1–167 (U.S. Government Printing Office, 2002).
41. Kalnay, E. et al. The NCEP/NCAR 40-year reanalysis project. Bull. Amer. Meteorol. Soc. 77, 437–472 (1996).
42. Kagimoto, T., Miyazawa, Y., Guo, X. & Kawajiri, H. High resolution Kuroshio forecast system: Description and its applications. High Resolution Numerical Modelling of the Atmosphere and Ocean (eds Ohfuchi, W. & Hamilton, K.) 209–239 (Springer, 2008).
43. Fujii, Y. & Kamachi, M. A reconstruction of observed profiles in the sea east of Japan using vertical coupled temperature-salinity EOF modes. J. Oceanogr. 59, 173–186 (2003).

## Acknowledgements

The JCOPE2 was run on the Earth Simulator at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). All figures were generated using the Grid Analysis and Display System (GrADS) version 2.1.a3 (http://cola.gmu.edu/grads/downloads.php). We appreciate the Sukumo Bay fishery cooperative and the Sukumo Bay fishery guidance office for providing high-quality fish catch data. We are also grateful to Drs. Toshio Yamagata and Toru Miyama, and two anonymous reviewers, for providing insightful comments that improved the current research. The present research is supported by the Ocean Policy Research Institute, the Sasakawa Peace Foundation, and in part by the Environment Research and Technology Development Fund S-15 (Predicting and Assessing Natural Capital and Ecosystem Services; PANCES) of the Ministry of the Environment, Japan.

## Author information

Y.M. conducted the research and wrote the main manuscript. S.V. and Y.M. guided the whole research. All the authors reviewed the manuscript.

Correspondence to Yushi Morioka.

## Ethics declarations

### Competing interests

The authors declare no competing interests.