# Anion Formation

## Gain of electrons results in an ion

A pressed glass open salt dish made in the early 1830s. [Figure1] (Credit: User:Daderot/Wikimedia Commons; Source: http://commons.wikimedia.org/wiki/File:DAR_glass_-_IMG_8708.JPG; License: CC BY-NC 3.0)

How do you make chlorine safe to eat? How do you transform a deadly gas into something you can sprinkle on your eggs and eat for breakfast? Chlorine in its free form is very dangerous if you breathe the fumes or come in contact with the gas. After it reacts with sodium, however, sodium chloride is formed: the sodium atom gives up an electron, and the chlorine atom accepts that electron to form the chloride anion.

### Anions

Anions are the negative ions formed from the gain of one or more electrons. When nonmetal atoms gain electrons, they often do so until their outermost principal energy level achieves an octet. This process is illustrated below for the elements fluorine, oxygen, and nitrogen.

$\begin{aligned} &\text{F} \quad\ \ + \quad \text{e}^- \quad \rightarrow \quad \text{F}^- \\ &1s^22s^22p^5 \qquad\qquad 1s^22s^22p^6 \ (\text{octet}) \\ &\text{O} \quad\ \ + \quad 2\text{e}^- \ \rightarrow \quad \text{O}^{2-} \\ &1s^22s^22p^4 \qquad\qquad 1s^22s^22p^6 \ (\text{octet}) \\ &\text{N} \quad\ \ + \quad 3\text{e}^- \ \rightarrow \quad \text{N}^{3-} \\ &1s^22s^22p^3 \qquad\qquad 1s^22s^22p^6 \ (\text{octet}) \end{aligned}$

All of these anions are isoelectronic with each other and with neon. They are also isoelectronic with the three cations from the previous section. Under typical conditions, three electrons is the maximum that will be gained in the formation of anions.

Outer electron configurations are constant within a group, so this pattern of ion formation repeats itself for Periods 3, 4, and following (see Figure below).

Ion charges. [Figure2] (Credit: CK-12 Foundation - Christopher Auyeung; License: CC BY-NC 3.0)

It is important not to misinterpret the concept of being isoelectronic. A sodium ion is very different from a neon atom because the nuclei of the two contain different numbers of protons. One is an essential ion that is a part of table salt, while the other is an unreactive gas that makes up a very small part of the atmosphere. Likewise, sodium ions are very different from magnesium ions, fluoride ions, and all the other members of this isoelectronic series (N³⁻, O²⁻, F⁻, Ne, Na⁺, Mg²⁺, Al³⁺).

Neon gas (A) and sodium chloride crystals (B). Neon atoms and sodium ions are isoelectronic. Neon is a colorless and unreactive gas that glows a distinctive red-orange color in a gas discharge tube. Sodium ions are most commonly found in crystals of sodium chloride, ordinary table salt. [Figure3] (Credit: (A) Andy Wright (Flickr: rightee); (B) Kevin Dooley; Sources: (A) http://www.flickr.com/photos/rightee/4356950/; (B) http://www.flickr.com/photos/pagedooley/2769134850/; License: CC BY-NC 3.0)

#### Summary

• Anions are negative ions formed by accepting electrons.
• After gaining electrons, the outermost principal energy level is usually an octet.

#### Practice Questions

Use the link below to answer the following questions:

1. What do nonmetals tend to do?
2. What noble gas is Se²⁻ isoelectronic with?
3. What −3 anion is isoelectronic with Ar?
4. What is a polyatomic anion?

#### Review Questions

1. What is an anion?
2. Write the electron configurations for the chlorine atom and the chloride anion.
3. What does isoelectronic mean?
UnkleRhaukus (2 years ago) posted this truth table to complete:

$\begin{array}{ccccc} \phi & \neg \phi & \psi & \phi \Rightarrow \psi & \neg \phi \vee \psi \\ \hline T & ? & T & ? & ? \\ T & ? & F & ? & ? \\ F & ? & T & ? & ? \\ F & ? & F & ? & ? \end{array}$
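A quick mechanical way to fill in the table (and to check that the last two columns agree everywhere, i.e. $\phi \Rightarrow \psi \equiv \neg\phi \vee \psi$) is to enumerate the four rows in code; the sketch below is only an illustration of that enumeration.

```js
// Enumerate the four rows of the truth table and print each column.
const implies = (p, q) => !p || q; // material implication
const toTF = (v) => (v ? "T" : "F");

console.log("phi  ¬phi  psi  phi⇒psi  ¬phi∨psi");
for (const phi of [true, false]) {
  for (const psi of [true, false]) {
    const row = [phi, !phi, psi, implies(phi, psi), !phi || psi];
    console.log(row.map(toTF).join("    "));
  }
}
```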
Uspekhi Mat. Nauk — contents of the issue:

- Regular degeneration and boundary layer for linear differential equations with small parameter (M. I. Vishik, L. A. Lyusternik), p. 3
- The method of finite differences in the theory of partial differential equations (O. A. Ladyzhenskaya), p. 123
- Monte Carlo methods for the iteration of linear operators (J. H. Curtiss), p. 149

Scientific notes and problems:

- Elementary proof of the essentiality of the identical mapping of a simplex (P. S. Aleksandrov, B. A. Pasynkov), p. 175
- Groups allowing arbitrary permutation of the factors of their composition series (S. D. Berman, V. V. Lyubimov), p. 181
- On an elementary method of solving certain boundary problems in the theory of holomorphic functions and certain singular integral equations connected with them (A. V. Bitsadze), p. 185
- On problems of Frommer differentiation (R. È. Vinograd, D. M. Grobman), p. 191
- On the Cauchy problem for hyperbolic linear systems (B. A. Vostretsov), p. 197
- The Dirichlet problem for the equation $\Delta U+\dfrac{4n(n+1)}{(1+x^2+y^2)^2}U=0$ (M. P. Ganin), p. 205
- On the Dirichlet problem for elliptic equations with a small parameter in the highest derivatives (J. Kopáček), p. 211
- On summation of double Fourier series of functions of two variables (I. V. Matveev), p. 221
- On a property of nuclear spaces (D. A. Raikov), p. 231
- On the equations of Kolmogorov (I. D. Cherkasov), p. 237
- Characterization of the dimension of metric spaces by continuous mappings into Euclidean spaces (M. L. Shersnev), p. 245

In the Moscow Mathematical Society:

- Meetings of the Moscow Mathematical Society, p. 249

Mathematical Events in the USSR:

- Research Seminar of the Department of Algebra of Moscow University (A. G. Kurosh, L. A. Skornyakov), p. 261

Letters to the Editor:

- Letter to the Editor (G. E. Shilov), p. 270
- Errata, p. 270
# If an ODE has only periodic solutions and one equilibrium point, is that equilibrium point Lyapunov stable?

Consider the non-linear IVP $$\dot x=f(x),$$ $$x(0)=x_0,$$ where $$f$$ is locally Lipschitz, $$f(0)=0$$ and $$f(x)\ne 0\ \forall x\ne 0.$$ If all solutions to this IVP for various initial conditions are periodic, does that mean that the origin is Lyapunov stable, i.e., $$\forall \varepsilon>0, \ \exists\delta>0, \ |x(0)|<\delta\implies|x(t)|<\varepsilon \ \forall t\ge 0?$$

My professor said yes in class and he "justified" this by drawing periodic orbits that are circular. I know that periodic solutions are bounded, but what if a solution starts very close to the origin, moves far away from the origin, and then returns to the same point? Is there an example of such a system, or is my professor right? I was thinking of a solution like $$x(t)=\begin{cases}\frac{x(0)}{|x(0)|}\sin(t)+x(0)(1-\sin(t))&x\ne 0\\0&x=0\end{cases}.$$

$$x(t)$$ oscillates between $$x(0)$$ and a point with norm 1, so no matter how small $$\delta$$ is, the solution will never satisfy the definition of Lyapunov stability with $$\varepsilon=\frac{1}{2}$$. The origin is an equilibrium point, and the only equilibrium point, and $$x(t)$$ satisfies the initial condition. $$x(t)$$ is differentiable, $$2\pi$$ periodic and satisfies the time-invariance property. I know that this $$x(t)$$ does not work, because $$f$$ is locally Lipschitz and so solutions must depend continuously on the initial condition, which this $$x(t)$$ does not; but I'm wondering if there is a way to alter $$x(t)$$ to make it a solution to an autonomous ODE and still have the same properties.

• I won’t presume to answer here, since I am confident that someone else will think about this more deeply than I have. The simplest case is when $f$ is differentiable at 0 with a nonzero derivative. Near 0, $f(x) \approx f’(0) x$ and we expect Lyapunov stability only if $f’(0)<0$. This isn’t a given, but I can’t think of a way to make every initial condition have a periodic solution that doesn’t satisfy this. Your question (Lipschitz $f$) is more general but I suspect that the same reasoning will apply. – sasquires Feb 7 at 9:52
• Actually, one point of clarification: When you say solutions are periodic, you mean that the initial condition has to be part of the periodic orbit, right? Otherwise (if these trajectories are only eventually periodic) then it changes everything. – sasquires Feb 7 at 10:09
• @sasquires There is no periodic solution to a scalar autonomous ODE, because then you can use separation of variables to get $$0=\int_{x(0)}^{x(T)}\frac{dx}{f(x)}=T$$ since $x(T)=x(0)$. – Andrew Murdza Feb 7 at 10:52
• @sasquires So I need $x$ to be a vector, and in that case it doesn't make sense to talk about the sign of $f'$, since it will be a matrix. And in the case of the ODE $$\dot x_1=x_2,$$ $$\dot x_2=-x_1,$$ the eigenvalues of $f'$ will be imaginary, so linearization will not work even though the solution is periodic. – Andrew Murdza Feb 7 at 10:56
• Sorry, I meant there is no periodic non-equilibrium solution to a scalar first-order autonomous ODE. – Andrew Murdza Feb 7 at 10:58

This is not a satisfactory proof of the OP's question. It's a sketch of a proof of a slightly weaker claim (see assumptions below). However, since this question hasn't gotten attention from an expert, as I hoped for in a comment above, I'll write this down as the best that I can do at the moment. There may be counterexamples to my argument, but I think they would have to be pathological.
One important point: I'm interpreting the question as requiring that every point be on a strictly periodic orbit, in the sense that for every initial condition $$x(0)$$, there will be some $$t > 0$$ such that $$x(t) = x(0)$$. I think that this is the sense in which the OP meant that "all solutions...are periodic." (A physicist like me might informally say that all solutions are periodic if they asymptotically approach one or more periodic attractors, which is a completely different situation.)

I'll make the following assumptions that are stronger than the OP's question. These can probably be relaxed, but I have not spent any time trying to do so.

1. $$f$$ is differentiable (not just Lipschitz).
2. The Jacobian of $$f$$ at $$x=0$$, which I'll call $$J$$, is not identically zero.

Proof: I will try to prove that your professor's picture is essentially correct, i.e., that initial conditions near the origin follow orbits that remain near the origin. Sufficiently close to the origin, $$\dot{x} = f(0) + J x + \dots \approx J x$$ since $$f(0)=0$$. This linearization has the solution $$x(t) = e^{Jt} x(0)$$ as long as $$x(t)$$ is sufficiently close to $$0$$. The eigenvalues of $$J$$ are the local Lyapunov exponents. Note that these can be complex, but since $$J$$ is real, they must come in complex conjugate pairs; linear combinations of these are used to create real solutions for $$x(t)$$. (I will ignore this subtlety below, but it doesn't change anything.) In the next paragraph, I try to show that $$\text{Re}(\lambda) = 0$$ for all eigenvalues $$\lambda$$.

Suppose that there is some eigenvector $$v$$ whose associated eigenvalue $$\lambda$$ has a negative real part, i.e., $$\text{Re}(\lambda) < 0$$. Suppose we let $$x(0) = v$$. Then $$x(t) = v e^{\lambda t}$$, and $$x(t)$$ will asymptotically approach $$0$$, a fixed point. Specifically, $$x(t)$$ will never return to $$x(0)$$, which we assumed it would. Note that the linearization of the original ODE is still valid within this entire neighborhood, so we can't say that somewhere along the path the solution will get swept off in a different direction and complete a periodic orbit without going to $$0$$.

Dealing with the case $$\text{Re}(\lambda) > 0$$ is slightly harder. This is because, after some time, $$x(t)$$ will leave the neighborhood where the linearization is valid, and after that we cannot say what will happen. But here I will employ a trick: make time go backwards. Note that a periodic orbit must remain periodic whether time runs forward or backward. Now consider the original ODE, $$\dot{x} = f(x)$$, under the substitution $$t \to -t$$ (running time backwards). We get the ODE $$\dot{x} = -f(x)$$, which corresponds to taking $$J \to -J$$ and $$\lambda \to -\lambda$$ in the discussion above. So if we have $$\text{Re}(\lambda) > 0$$ when time is running forward, we can run time backward and apply the same argument as we did for $$\text{Re}(\lambda) < 0$$.

Note that because we can choose the initial condition to be exactly on the eigenvector, the above discussion applies even if $$J$$ has both positive and negative eigenvalues, because we are singling out the effect of a single eigenvalue. We have shown that the cases where $$x(t)$$ moves rapidly toward or away from the origin are excluded, so $$x(t)$$ must remain at approximately the same distance from the origin. In particular, the eigenvalues of $$J$$ must be purely imaginary, so $$x(t)$$ will be a sum of oscillating components.
In particular, if $$x(0) = \sum_i c_i v_i$$ for eigenvectors $$v_i$$, then $$x(t) = \sum_i c_i e^{\lambda_i t} v_i$$. Obviously $$|x(t)|$$ can change, but it is bounded by the triangle inequality. The main place where this argument needs more rigor is that one would need to do an $$\epsilon$$-$$\delta$$ analysis (or invoke relevant theorems) to validate that we really can ignore the effect of the higher-order terms in the Taylor series. But I think the basic outline is sound.

For 2-d only: Suppose not. Then there is an $$\epsilon >0$$ such that there is no $$\delta >0$$ with $$|x(0)| < \delta \implies |x(t)|<\epsilon$$. Take such an $$\epsilon > 0$$ and consider a sequence $$x_1, x_2, \cdots$$ approaching the origin. By periodicity, with equilibrium point $$0$$, the orbit with initial point $$x_1$$ must exit the $$\epsilon$$ ball and reenter it at least once. Label the first exit and first reentrance points as $$E_{1,A}, E_{1,B}$$. Since this orbit encloses the origin, we can choose the sequence such that each successive point is contained in the interior of the orbit of the previous one. By existence-uniqueness, the orbits originating from these points cannot cross. In turn, this creates sequences of points $$E_{i,A}, E_{i,B}$$ on the boundary of the $$\epsilon$$ ball. These must converge to limit points $$E_{\infty,A},E_{\infty,B}$$ on the boundary. If $$E_{\infty,A} = E_{\infty,B}$$, this point is an equilibrium, which contradicts the condition $$f(x) \not= 0$$. If not, then consider a point $$S$$ on the arc of the boundary defined by $$E_{\infty,A}, E_{\infty,B}$$. By index theory, the region enclosed by a periodic orbit in the plane must contain an equilibrium point, so the orbit of $$S$$ must enclose $$0$$; but then it would have to cross the orbits defined by our sequence, which contradicts existence-uniqueness.
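As a purely numerical illustration of the picture the professor drew (not a proof, and only for the linear center $$\dot x_1=x_2,\ \dot x_2=-x_1$$ mentioned in the comments), one can integrate the system and check that orbits starting near the origin stay near it; the step size, horizon and initial radii below are arbitrary choices.

```js
// Classical RK4 integration of the linear center x1' = x2, x2' = -x1,
// tracking the maximum distance from the origin along each orbit.
function f(x) {
  return [x[1], -x[0]];
}

function rk4Step(x, h) {
  const add = (a, b, s) => [a[0] + s * b[0], a[1] + s * b[1]];
  const k1 = f(x);
  const k2 = f(add(x, k1, h / 2));
  const k3 = f(add(x, k2, h / 2));
  const k4 = f(add(x, k3, h));
  return [
    x[0] + (h / 6) * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
    x[1] + (h / 6) * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]),
  ];
}

for (const r0 of [0.5, 0.1, 0.01]) {
  let x = [r0, 0];
  let maxNorm = 0;
  const h = 1e-3;
  for (let i = 0; i < 20000; ++i) {           // roughly three periods
    x = rk4Step(x, h);
    maxNorm = Math.max(maxNorm, Math.hypot(x[0], x[1]));
  }
  console.log(`|x(0)| = ${r0}, max |x(t)| ≈ ${maxNorm.toFixed(4)}`);
}
```

For this particular system the orbits are exact circles, so the printed maxima stay equal to the initial radii (up to integration error), consistent with Lyapunov stability of the origin.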
# Property of the spanning tree with minimal leaves

Let $$G$$ be a connected simple graph. For any spanning tree $$T$$ of $$G$$, let $$l(T)$$ be the number of leaves of the graph $$T$$, and consider $$\ell=\min_T l(T)$$. Can I find a spanning tree $$T$$ with $$l(T)=\ell$$ such that the set of leaves $$A$$ of $$T$$ is very close to an independent set? For example, I guess that there exists a vertex $$a\in A$$ such that $$A\setminus\{a\}$$ is an independent set in the graph $$G$$. My idea is that maybe we can adjust the tree $$T$$ step by step to get an extreme tree with some properties. Are there any results or references for this question?

• If there are at least two vertices, $\ell = 2$ iff there's a Hamiltonian path, so finding a spanning tree which achieves $\ell$ is NP-hard. Mar 25, 2022 at 11:01

If I am not mistaken, and if I understand you correctly, it seems to me that you are right. The following statement is true.

Let $$G$$ be a connected graph, let $$T$$ be a spanning tree with the smallest number of leaves, and let $$\ell=l(T)$$. Let $$A$$ be the set of all leaves of the tree $$T$$. If $$|A|=\ell>2$$, then $$A$$ is an independent set of the graph $$G$$.

Here is a brief proof. Let $$x$$ and $$y$$ be two leaves and suppose $$e=xy$$ is an edge of the graph $$G$$. Then the graph $$H=T+e$$ has a cycle; denote this cycle by $$C$$. If all vertices of the cycle $$C$$ have degree $$2$$ in $$H$$, then $$C=H$$ and our graph $$G$$ is Hamiltonian, which contradicts the condition $$\ell>2$$. Hence there exists a vertex $$a$$ of the cycle $$C$$ of degree $$3$$ or more in $$H$$. Let $$e'=ab$$ be an edge of $$C$$. The graph $$T'=H-e'$$ is a spanning tree of the graph $$G$$ with $$l(T')<\ell$$. Contradiction.

• Thank you for your answer! The graph $H$ has $\ell-2$ leaves, and deleting $e'$ increases the number of leaves by at most one in $T'$, a contradiction. – ZZP Mar 26, 2022 at 7:45
• You can also clarify it this way. If $\operatorname{deg}_H(b)>2$, then $l(T')=\ell-2$; if $\operatorname{deg}_H(b)=2$, then $l(T')=\ell-1$. Mar 26, 2022 at 8:39
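The statement above can also be checked by brute force on small graphs, which may help with experimenting toward the $$A\setminus\{a\}$$ version of the question. The sketch below enumerates all spanning trees of one small example graph (the graph is an arbitrary choice for illustration, not taken from the question) and verifies that every minimum-leaf spanning tree has an independent leaf set when $$\ell>2$$.

```js
// Brute-force illustration on the "net" graph: a triangle with a pendant
// vertex attached to each corner (an arbitrary example graph).
const n = 6;
const edges = [
  [0, 1], [1, 2], [2, 0],   // triangle
  [0, 3], [1, 4], [2, 5],   // pendant edges
];

// A subset of n-1 edges is a spanning tree iff it is acyclic (union-find check).
function isSpanningTree(subset) {
  if (subset.length !== n - 1) return false;
  const parent = Array.from({ length: n }, (_, i) => i);
  const find = (v) => (parent[v] === v ? v : (parent[v] = find(parent[v])));
  for (const [u, v] of subset) {
    const ru = find(u), rv = find(v);
    if (ru === rv) return false; // this edge would close a cycle
    parent[ru] = rv;
  }
  return true;
}

function leaves(subset) {
  const deg = new Array(n).fill(0);
  for (const [u, v] of subset) { deg[u]++; deg[v]++; }
  return [...deg.keys()].filter((v) => deg[v] === 1);
}

const key = (u, v) => `${Math.min(u, v)},${Math.max(u, v)}`;
const adjacency = new Set(edges.map(([u, v]) => key(u, v)));
const isIndependent = (vs) =>
  vs.every((u, i) => vs.slice(i + 1).every((v) => !adjacency.has(key(u, v))));

// Enumerate all edge subsets and keep the spanning trees with the fewest leaves.
let best = Infinity;
let minLeafTrees = [];
for (let mask = 0; mask < 1 << edges.length; ++mask) {
  const subset = edges.filter((_, i) => mask & (1 << i));
  if (!isSpanningTree(subset)) continue;
  const L = leaves(subset).length;
  if (L < best) { best = L; minLeafTrees = []; }
  if (L === best) minLeafTrees.push(subset);
}
console.log("minimum number of leaves:", best); // 3 for this graph
console.log("every minimum-leaf tree has an independent leaf set:",
  minLeafTrees.every((t) => isIndependent(leaves(t)))); // true, matching the claim
```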
# A line through $A(-5,-4)$ meets the lines $x+3y+2=0$, $2x+y+4=0$ and $x-y-5=0$ at the points $B$, $C$ and $D$

###### Question: A line through $A(-5,-4)$ meets the lines $x+3y+2=0$, $2x+y+4=0$ and $x-y-5=0$ at the points $B$, $C$ and $D$, respectively. If $\left(\frac{15}{AB}\right)^{2}+\left(\frac{10}{AC}\right)^{2}=\left(\frac{6}{AD}\right)^{2}$, the equation of the line is

(A) $2x+3y+22=0$
(B) $2x-3y+22=0$
(C) $3x+2y+22=0$
(D) $3x-2y+22=0$
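One way to sanity-check the options numerically is to intersect each candidate line with the three given lines, measure the distances from $A$, and test the stated identity, as the sketch below does. (The standard parametric approach, writing $15/AB=\cos\theta+3\sin\theta$, $10/AC=2\cos\theta+\sin\theta$, $6/AD=\cos\theta-\sin\theta$ for a line through $A$ with direction angle $\theta$, reduces the condition to $(2\cos\theta+3\sin\theta)^2=0$, i.e. slope $-2/3$, which points to option (A).)

```js
// Numerical check of the four options (labels as in the question).
const A = [-5, -4];
// Lines written as [a, b, c], meaning a*x + b*y + c = 0.
const given = [
  [1, 3, 2],    // x + 3y + 2 = 0   (meets the candidate line at B)
  [2, 1, 4],    // 2x + y + 4 = 0   (at C)
  [1, -1, -5],  // x - y - 5 = 0    (at D)
];
const options = {
  "(A) 2x+3y+22=0": [2, 3, 22],
  "(B) 2x-3y+22=0": [2, -3, 22],
  "(C) 3x+2y+22=0": [3, 2, 22],
  "(D) 3x-2y+22=0": [3, -2, 22],
};

// Intersection of two lines by Cramer's rule.
function intersect([a1, b1, c1], [a2, b2, c2]) {
  const det = a1 * b2 - a2 * b1;
  return [(b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det];
}
const dist = (p, q) => Math.hypot(p[0] - q[0], p[1] - q[1]);

for (const [name, line] of Object.entries(options)) {
  const [AB, AC, AD] = given.map((g) => dist(A, intersect(line, g)));
  const residual = (15 / AB) ** 2 + (10 / AC) ** 2 - (6 / AD) ** 2;
  console.log(name, "residual:", residual.toFixed(6)); // ~0 only for the correct line
}
```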
# 3D-QSAR, molecular dynamics simulations, and molecular docking studies on pyridoaminotropanes and tetrahydroquinazoline as mTOR inhibitors

Molecular Diversity (Springer Journals), Volume 21 (3), June 2, 2017. DOI: 10.1007/s11030-017-9752-9

Abstract: Cancer is the second major disease after metabolic disorders, and the number of deaths it causes is increasing gradually. Mammalian target of rapamycin (mTOR) is one of the most important targets for the treatment of cancer, specifically breast and lung cancer. In the present research work, Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA) studies were performed on 50 compounds reported as mTOR inhibitors. Three different alignment methods were used, and among them the distill method was found to be the best. In CoMFA, the leave-one-out cross-validated coefficient $(q^{2})$, conventional coefficient $(r^{2})$, and predicted correlation coefficient $(r^{2}_{\mathrm{pred}})$ were found to be 0.664, 0.992, and 0.652, respectively. The CoMSIA study was performed with 25 different combinations of features, such as steric, electrostatic, hydrogen bond donor, hydrogen bond acceptor, and hydrophobic. From these, the combination of steric, electrostatic, and hydrophobic fields (SEH) and the combination of steric, electrostatic, hydrophobic, donor, and acceptor fields (SEHDA) were found to be the best. In CoMSIA (SEHDA), $q^{2}$, $r^{2}$, and $r^{2}_{\mathrm{pred}}$ were found to be 0.646, 0.977, and 0.682, respectively, while for CoMSIA (SEH) the values were 0.739, 0.976, and 0.779, respectively. Contour maps were generated and validated by a molecular dynamics simulation-assisted molecular docking study. The highest-active compound 19, moderately active compound 15, and lowest-active compound 42 were docked on the mTOR protein to validate the results of the molecular docking study. The result of the molecular docking study of the highest-active compound 19 is in line with the outcomes generated by the contour maps. Based on the features obtained through this study, six novel mTOR inhibitors were designed and docked. This study could be useful for designing novel molecules with increased anticancer activity.
It’s been a while since my last blog post, mostly because I didn’t really have the time or the energy to sit down and write up all the stuff that I wanted to write about. Part of it was because I have been pretty busy with the Ignition and TurboFan launch in Chrome 59, which fortunately was a huge success thus far. But also partly because I took some time off with my family. And last but not least I went to JSConf EU and Web Rebels, and at the time of this writing I’m at enterJS, procrastinating on doing the final tweaking for my talk.

Meanwhile I just got back from a very interesting dinner discussion with Brian Terlson, Ada Rose Edwards and Ashley Williams about good optimization patterns for JavaScript that we can give as advice safely, and in particular how hard it is to come up with those. One particular point that I made was that ideal performance often depends on the context in which the code is running, and that’s oftentimes the most difficult part. So I thought it’s probably worth sharing this information with everyone. I’ll start this as a series of blog posts. In this first part I’ll try to highlight the impact that the concrete execution context can have on the performance of your JavaScript code.

Consider the following artificial Point class, which has a method distance that computes the Manhattan distance of two such points. In addition to that, consider the following test driver function, which creates a couple of Point instances, computes the distance between them several million times, and sums up the result (yeah, I know it’s a micro-benchmark, but bear with me for a second). (The original snippets are not reproduced here; a sketch of what they might have looked like follows at the end of this post.)

Now we have a proper benchmark for the Point class and in particular its distance method. Let’s do a couple of runs of the test driver to see what the performance is, using a small HTML page that runs the test five times. If you run this in Chrome 61 (canary), you’ll see the following output in the Chrome Developer Tools Console:

test 1: 595.248046875ms
test 2: 765.451904296875ms
test 3: 930.452880859375ms
test 4: 994.2890625ms
test 5: 3894.27392578125ms

The performance of the individual runs is very inconsistent. You can see that the performance gets worse with each subsequent run. The reason for the performance regression is that the Point class sits inside the test function. If we change the snippet slightly so that the Point class is defined outside of the test function, we get different results:

test 1: 598.794921875ms
test 2: 599.18115234375ms
test 3: 600.410888671875ms
test 4: 608.98388671875ms
test 5: 605.36376953125ms

The performance is pretty much stable now, with the usual noise. Notice that in both cases we used exactly the same code for the Point class and exactly the same code for the test driver logic. The only difference is where exactly we place the Point class in the code. It’s also worth noting that this has nothing to do with the new ES2015 class syntax; using old-style ES5 syntax for the Point class yields the same performance results.

The underlying reason for the performance difference when the Point class lives inside the test function is that the class literal is then executed multiple times (exactly 5 times in my example above), whereas if it lives outside the test function, it’s only executed once. Every time the class definition is executed, a new prototype object is created, which carries all the methods of the class. In addition to that a new constructor function is created, which corresponds to the class, and has the prototype object set as its "prototype" property.
New instances of the class are created using this "prototype" as their prototype object. But since V8 tracks the prototype of an instance as part of the object shape or hidden class (see Setting up prototypes in V8 for some details on this) in order to optimize access to properties on the prototype chain, having different prototypes automatically implies having different object shapes. And as such the generated code gets ever more polymorphic if the class definition is executed multiple times, and eventually V8 gives up on polymorphism after it has seen more than 4 different object shapes, and enters the so-called megamorphic state, which means it kind of gives up on generating highly optimized code.

So the takeaway from this exercise: identical code put into a slightly different place can easily lead to a 6.5x difference in performance! This is especially important since popular benchmarking frameworks and sites like esbench.com tend to execute code in a different context than your application (i.e. they wrap code in functions under the hood that are run multiple times), and thus the results from benchmarking that way can be highly misleading.
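The code snippets did not survive in this copy of the post, so here is a minimal reconstruction consistent with the description above (the class shape, constants and iteration counts are assumptions, not the author's original code):

```js
// Slow variant: the class literal sits inside test(), so it is re-evaluated on
// every call and each run of test() sees a brand-new Point prototype.
function test() {
  class Point {
    constructor(x, y) {
      this.x = x;
      this.y = y;
    }
    distance(other) {
      // Manhattan distance between the two points.
      return Math.abs(this.x - other.x) + Math.abs(this.y - other.y);
    }
  }
  const a = new Point(10, 20);
  const b = new Point(5, 5);
  let sum = 0;
  for (let i = 0; i < 10 * 1000 * 1000; ++i) {
    sum += a.distance(b);
  }
  return sum;
}

// Time five runs, roughly what the HTML page described above would do.
// Hoisting the Point class out of test(), so the class literal runs only once,
// is the change that produces the stable timings in the second listing.
for (let i = 1; i <= 5; ++i) {
  console.time("test " + i);
  test();
  console.timeEnd("test " + i);
}
```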
# Thread: Dividing Polynomials Need Help

1. ## Dividing Polynomials Need Help

Solve these by dividing:

1. http://media-dal.owotw.com/o_alg02_2010/4/q20227.gif
2. (a^(2n) - a^n - 6) ÷ (a^n + 8)
3. (x^3 + y^3) ÷ (x - y)
4. (x^2 - 4) ÷ (x - 1)
5. (x^5 + y^5) ÷ (x + y)
6. (2a^2 + a + 3) ÷ (a - 1)

2. Do you know polynomial long division? Polynomial Long Division

It would be my approach.
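To illustrate the long-division approach suggested in the reply, here is item 4 worked out as a sketch (quotient $x+1$, remainder $-3$):

$$x^2 - 4 = (x-1)\cdot x + (x-4), \qquad x - 4 = (x-1)\cdot 1 + (-3),$$

so

$$\frac{x^2-4}{x-1} = x + 1 - \frac{3}{x-1}.$$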
# Is this a valid proof

I have a homework assignment in linear algebra and my answer seems kind of fuzzy. Is this proof valid?

Let $\{a,b,c\}$ be a linearly independent set in the real vector space $\mathbb{R}^3$. Show that the same set is also linearly independent in the complex vector space $\mathbb{C}^3$.

$a,b,c\in\mathbb{R}^3\Rightarrow a,b,c\in\mathbb{C}^3$

The only transformation which does not require another vector is multiplication by a scalar value: $\forall\lambda\in\mathbb{C}\exists a':a'=\lambda a$

Let's assume that $\exists\lambda\neq0:\exists\beta, \gamma\in\mathbb{C}^3:a'=\beta b+\gamma c$.

$a'=\beta b+\gamma c$

$\lambda a=\beta b+\gamma c$

$\lambda(a_x\vec{i}+a_y\vec{j}+a_z\vec{k})=\beta(b_x\vec{i}+b_y\vec{j}+b_z\vec{k})+\gamma(c_x\vec{i}+c_y\vec{j}+c_z\vec{k})$

For the new set to be linearly independent, there must be a unique solution to the following system:

$(\lambda a_x-\beta b_x-\gamma c_x)\vec{i}=0$
$(\lambda a_y-\beta b_y-\gamma c_y)\vec{j}=0$
$(\lambda a_z-\beta b_z-\gamma c_z)\vec{k}=0$

Since $\vec{i},\vec{j},\vec{k}\neq0$:

$\lambda a_x=\beta b_x+\gamma c_x$
$\lambda a_y=\beta b_y+\gamma c_y$
$\lambda a_z=\beta b_z+\gamma c_z$

Let $\beta' = \frac{\beta}{\lambda}$ and $\gamma'=\frac{\gamma}{\lambda}$. This is allowed since $\lambda\neq0$ from our assumption.

$a_x=\beta'b_x+\gamma'c_x$
$a_y=\beta'b_y+\gamma'c_y$
$a_z=\beta'b_z+\gamma'c_z$

Since $Im(a)=Im(b)=Im(c)=0$,

$\beta'b_x=\beta'Re(b_x)+\beta'Im(b_x)=\beta'Re(b_x)+\beta'\bullet0=\beta'Re(b_x)$

This is also true for the other multiplication operations within the system and shows that the ratio of real and imaginary parts does not change. This shows that $\lambda a=\beta b+\gamma c$ is only true for $\lambda=\beta=\gamma=0$, which proves that $\{a,b,c\}$ is linearly independent in $\mathbb{C^3}$.

• Admittedly I did not read the whole 'proof' carefully but it looks dodgy at best. When you are proving linear independence, make sure you use the basic definition of linear independence rather than its equivalent intuitive definition. (ie if $\sum_i a_iv_i=0$ then for all $i$ $a_i=0$.) – Jack Yoon Nov 9 '15 at 12:07
• This proof is correct, but it is quite long. If you know something about determinants, then the proof could be written in one line. – Crostul Nov 9 '15 at 12:08
• Suppose it were not true. You could write down a linear combination of the (real) vectors which summed to zero using complex coefficients. Examine the real part. – TheMathemagician Nov 9 '15 at 12:17

Your proof is overcomplicated, and it looks like you're assuming that $\beta$ and $\gamma$ have to be real (which might invalidate the proof). Instead, just assume that you have a linear combination $\alpha a + \beta b + \gamma c=0$; since $a$, $b$ and $c$ are real, you can separate the equation into real and imaginary parts:

$$\Re\alpha\, a + \Re\beta\, b + \Re\gamma\, c = 0$$
$$\Im\alpha\, a + \Im\beta\, b + \Im\gamma\, c = 0$$

Because $\{a,b,c\}$ is linearly independent over $\mathbb{R}$, each of these real linear combinations forces its coefficients to vanish, so all real and imaginary parts are zero and hence $\alpha=\beta=\gamma=0$.
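For comparison, the one-line determinant argument mentioned in the comments can be spelled out as a short sketch. Let $M$ be the $3\times 3$ real matrix whose columns are $a$, $b$, $c$; then

$$\alpha a+\beta b+\gamma c=0 \iff M\begin{pmatrix}\alpha\\\beta\\\gamma\end{pmatrix}=0.$$

Since $\{a,b,c\}$ is linearly independent over $\mathbb{R}$, $\det M\neq 0$. The determinant is the same nonzero number whether $M$ is regarded as a real or a complex matrix, so $M$ is invertible over $\mathbb{C}$ as well, and the only complex solution is $\alpha=\beta=\gamma=0$.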
Category: Simulation

# A case for T junctions

It has been established (for example, here) that the standard two-dimensional homogeneous PPP is not an adequate model for vehicular networks, since vehicles are mostly confined to streets. The Poisson line Cox process (PLCP) has naturally emerged as the model of choice. In this process, one-dimensional PPPs are placed on a street system formed by a Poisson line process. This model is somewhat tractable and thus has gained some traction in the community.

With probability 1, each line (or street) intersects with each other line, so intersections are formed, and the communication performance of the typical intersection vehicle can be studied. This is important since vehicles at intersections are more accident-prone than other vehicles.

How about T junctions? Clearly, the PLCP has no T junctions a.s. But while not quite as frequent as (four-way) intersections, they are an important building block of the street systems in every city, and it is reasonable to assume that they inherit some of the dangers of intersections. However, the performance of vehicles at T junctions has barely been modeled and analyzed.

The reason is perhaps not that it is not worthy of study but the lack of a natural model. Let’s say we wanted to construct a Cox model of vehicles that is supported on a street system that has no intersections but only T junctions, with the T junctions themselves forming a stationary point process (in the same way the intersections in the PLCP form a stationary point process). What is the simplest (most natural, most tractable) model?

One model we came up with is inspired by the so-called lilypond model. From each point of a PPP, a line segment grows in a random orientation in both directions. All segments grow at the same speed until one of their endpoints hits another segment. Once all growth has stopped, the lilypond street model is obtained. Here is a realization:

Then PPPs of vehicles can be placed on each line segment to form a lilypond line segment Cox process. Some results for vehicular networks based on this model are available here. The model has the advantage that it has only a single parameter – the density of the underlying PPP of the center points of each line segment. On the other hand, the distribution of the length of the line segments can only be bounded, and the construction naturally creates a dependence between the lengths of nearby segments, which limits the tractability. For instance, in a region with many initial Poisson points, segments will be short on average, while in a region with sparse Poisson points, segments will be long. Also, the construction implies that simulating this process takes significantly more time than simulating a PLCP.

Given the shortcomings of the model, it seems quite probable that there are other, simpler and (even) more natural models for street systems with T junctions. Let’s try and find them!
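For readers who want to experiment with the baseline model, here is a minimal sketch of sampling a PLCP in a disk (an illustration only; the parameterization and parameter names are assumptions, and the expected number of streets crossing the disk is used as the input rather than a line-process intensity):

```js
// Sample a Poisson line Cox process in a disk of radius R centered at the origin.
// Each line is parameterized by its signed distance p from the origin and the
// direction theta of its normal; lines hitting the disk have |p| <= R.
function poisson(mean) {
  // Knuth's method; adequate for the small means used here.
  const L = Math.exp(-mean);
  let k = 0, p = 1;
  do { k++; p *= Math.random(); } while (p > L);
  return k - 1;
}

function samplePLCP(R, meanStreets, lambdaVehicles) {
  const vehicles = [];
  const nLines = poisson(meanStreets);             // number of streets crossing the disk
  for (let i = 0; i < nLines; ++i) {
    const theta = Math.PI * Math.random();         // normal direction of the street
    const p = R * (2 * Math.random() - 1);         // signed distance from the origin
    const foot = [p * Math.cos(theta), p * Math.sin(theta)];
    const dir = [-Math.sin(theta), Math.cos(theta)];
    const half = Math.sqrt(R * R - p * p);         // half-length of the chord in the disk
    const nV = poisson(2 * half * lambdaVehicles); // 1D PPP of vehicles on the chord
    for (let j = 0; j < nV; ++j) {
      const t = half * (2 * Math.random() - 1);
      vehicles.push([foot[0] + t * dir[0], foot[1] + t * dir[1]]);
    }
  }
  return vehicles;
}

console.log(samplePLCP(10, 8, 0.5).length, "vehicles sampled");
```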
# Single-point simulation?

When applying simalysis as illustrated in the previous post, the question arises where to put the boundary between the part of the point process that is simulated and the part that is analyzed. Specifically, we may wonder whether we can reduce the simulated part to only a single point (on average), i.e., choose the number of simulated points to be Poisson with mean 1 in each realization. Let’s find out, using the same Poisson bipolar model as in the previous post (Rayleigh fading, transmitter density 1, link distance 1/4).

Fig. 1 shows the simulated (or, more precisely, simalyzed) result where the 500 realizations only contain one single point on average. This means that about 500/e ≈ 184 realizations have no point at all. We observe good accuracy, especially at small path loss exponents α. Also, the simulated curves are lower bounds to the exact ones. This is due to Jensen's inequality:

$\displaystyle \mathbb{E}\exp(-sI_c) \geq \exp(-s\mathbb{E}(I_c)),\quad s>0.$

The term on the left side is the exact factor in the SIR ccdf due to the interference Ic from points outside distance c. It is larger than the right side, which is the factor used in the simalysis (see the Matlab code). This property holds for all stationary point processes.

But why stop there? One would think that using, say, only 1/4 point on average would be (essentially) pointless. But let's try, just to be sure. Remarkably, even with only 1/4 point per realization on average, the curves for α<2.5 are quite accurate, and the one for α=2.1 is an excellent lower bound! It is certainly much more accurate than a classical simulation with 500,000 points per realization (see the previous post). Such a good match is quite surprising, especially considering that 1/4 point on average means that about 78% of the realizations have 0 points, which means that in about 390 out of the 500 realizations, the simulated factor in the SIR ccdf simply yields 1. Also, in the entire simalysis, only about 125 points are ever produced. It takes no more than about 1/2 s on a standard computer.

We conclude that accurate simulation (simalysis, actually) can be almost point-less.
# Simalysis: Symbiosis of simulation and analysis

Simulations can be quite time-consuming. Are there any techniques that can help make them more efficient and/or accurate? Let us focus on a concrete problem. Say we would like to plot the SIR distribution in a Poisson bipolar network for different path loss exponents α, including some values close to 2. Since we would like to compare the result with the exact one, we focus on the Rayleigh fading case, where the analytical expression is known and simple. The goal is to get accurate curves for α=3, 2.5, 2.25, and 2.1, and we would like to wait no longer than 1 s for the results on a standard desktop or laptop computer.

Let us first discuss why this is a non-trivial problem. It involves averaging w.r.t. the fading and the point process, and we need to make sure that the number of interferers is large enough for good accuracy. But what is "large enough"? A quick calculation using Campbell's theorem (for sums) reveals that if we want to capture 99% of the mean interference power (outside radius 1 to avoid complications due to a potential singularity in the path loss law), then for α=3 the simulation region needs to be 100 times larger than for α=4. This seems manageable, but for α=2.5, 2.25, 2.1, it is $10^6$, $10^{14}$, $10^{38}$ times larger, respectively! Clearly the straightforward approach of producing many realizations of the PPP in a large region does not work in the regime of small α.

So how can we achieve our goal above – high accuracy and high efficiency? The solution is to use an analysis-enhanced simulation technique, which I call simalysis. While we often tend to think of analysis vs. simulation as a dichotomy, in this approach they are used symbiotically. The idea is to exploit analytical results whenever possible to make simulations faster and more accurate. Let me illustrate how simalysis works when applied to the problem above.

For small α, it is impossible to "capture" most of the interference solely by simulation. In fact, most of it stems from the infinitely many distant nodes, each one contributing little, with independent fading. We can thus assume that the variance of the interference from the nodes farther away than a certain distance (relatively large compared with the mean nearest-neighbor distance) is relatively small. Accordingly, replacing it by its mean is a sensible simplification. Here is where the analysis comes in. For any stationary point process of density λ, the mean interference from the nodes outside distance c is

$\displaystyle I(c)=2\pi\lambda\int_c^\infty r^{1-\alpha}{\rm d} r=\frac{2\pi\lambda}{\alpha-2} c^{2-\alpha}.$

This interference term can then be added to the simulated interference, which stems from the points within distance c. Simulating as few as 50 points is enough for very high accuracy. The result is shown in the figure below, using the MH scale so that the entire distribution is revealed (see this post for details on the MH scale). For α near 2, the curves are indistinguishable! This simulation averages over 500 realizations of the PPP and runs in less than 1 s on a laptop. The Matlab code is available here. It uses a second simalytic technique, namely the analytical averaging over the fading: irrespective of the type of point process we want to simulate, as long as the fading is Rayleigh, the averaging over the fading can be performed analytically.

For comparison, the figure below shows the simulation results if 500,000 points (interferers) are simulated, without adding the analytical mean interference term, i.e., using classical simulation. Despite taking 600 times longer, the distributions for α<2.5 are not acceptable.
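The linked Matlab code is not reproduced here, so the sketch below reimplements the idea as an illustration (an assumed reimplementation, not the original code; the cut-off radius, number of realizations and parameter values are arbitrary choices). It combines both simalytic ingredients: interferers are simulated only within radius c, the interference from outside c is replaced by its mean I(c), and the Rayleigh fading is averaged analytically, which turns each simulated interferer at distance d into a factor 1/(1 + θ r^α d^(−α)) of the SIR ccdf.

```js
// Poisson random variate (Knuth's method; fine for moderate means).
function poisson(mean) {
  const L = Math.exp(-mean);
  let k = 0, p = 1;
  do { k++; p *= Math.random(); } while (p > L);
  return k - 1;
}

// Simalytic estimate of P(SIR > theta) in a Poisson bipolar network:
// interferers form a PPP of density lambda, the desired link has length r,
// path loss exponent alpha, Rayleigh fading on all links.
function sirCcdfSimalysis(theta, alpha, lambda, r, c, runs) {
  const s = theta * Math.pow(r, alpha);
  // Mean interference from all nodes farther than c (Campbell's theorem).
  const meanTail = (2 * Math.PI * lambda * Math.pow(c, 2 - alpha)) / (alpha - 2);
  let acc = 0;
  for (let run = 0; run < runs; ++run) {
    let value = Math.exp(-s * meanTail);            // analytical part
    const n = poisson(lambda * Math.PI * c * c);    // simulated interferers inside radius c
    for (let i = 0; i < n; ++i) {
      const d = c * Math.sqrt(Math.random());       // distance of a uniform point in the disk
      value *= 1 / (1 + s * Math.pow(d, -alpha));   // fading averaged analytically
    }
    acc += value;
  }
  return acc / runs;
}

// Exact ccdf for comparison, with delta = 2/alpha and
// Gamma(1 - delta) * Gamma(1 + delta) = pi * delta / sin(pi * delta).
function sirCcdfExact(theta, alpha, lambda, r) {
  const d = 2 / alpha;
  const gammaProduct = (Math.PI * d) / Math.sin(Math.PI * d);
  return Math.exp(-lambda * Math.PI * r * r * Math.pow(theta, d) * gammaProduct);
}

const alpha = 2.5, lambda = 1, r = 0.25;
for (const thetaDb of [-10, 0, 10]) {
  const theta = Math.pow(10, thetaDb / 10);
  console.log(`${thetaDb} dB: simalysis ${sirCcdfSimalysis(theta, alpha, lambda, r, 5, 500).toFixed(3)}`
    + ` vs exact ${sirCcdfExact(theta, alpha, lambda, r).toFixed(3)}`);
}
```

The exact expression used for comparison is the one quoted in the next post below (stated there with path loss exponent 2/δ, i.e. δ = 2/α).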
# Double-proving by simulation?

Let us consider a hypothetical scenario that illustrates an issue I frequently observe.

Author: Here is an important result for a canonical Poisson bipolar network:

Theorem: The complementary cumulative distribution of the SIR in a Poisson bipolar network with Rayleigh fading, transmitter density λ, link distance r, and path loss exponent 2/δ is

$\displaystyle \bar F(\theta)=\exp(-\lambda\pi r^2 \theta^\delta\Gamma(1-\delta)\Gamma(1+\delta)).$

Proof: [Gives proof based on the probability generating functional.]

Reviewer: This is a nice result, but it is not validated by simulation. Please provide simulation results.

We have a proven exact analytical (PEA) result. So why would we need a simulation for "validation"? Where does the lack of trust in proofs come from? I am puzzled by these requests by reviewers. Similar issues arise when authors themselves feel the need to add simulations to the visualization of PEA results. Perhaps some reviewers are not familiar with the analytical tools used, or they think it is easier to have a quick look at a simulated curve rather than to check a proof. Perhaps some authors are not entirely sure their proofs are valid, or they think reviewers are more likely to trust the proofs if simulations are also shown.

The key issue is that such requests by reviewers or simulations by authors take simulation results as the "ground truth", while portraying PEA results as weaker statements that need validation. This of course is not the case. A PEA result expresses a mathematical fact and thus does not need any further "corroboration".

Now, if the simulation results are accurate, the analytical and simulated curves lie exactly on top of each other, and the accompanying text states the obvious: "Look, the curves match!". But what if there isn't an exact match between the analytical and the simulated curve? That would mean the simulation is not accurate, and it certainly would not qualify as "validation". The worst conclusion would be to distrust the PEA result and take the simulation as the true result.

By its nature, a simulation is always restricted to a small cross-section of the parameter space. Even the simple result above has four parameters, which would make it hard to comprehensively simulate the network. Relatedly, I invite the reader to simulate the result for a path loss exponent α=2.1 or δ=0.95. Almost surely the simulated curve will look quite different from the analytical one.

In conclusion, there is absolutely no need for "two-step verification" of PEA results. On the contrary.
Erdős–Ko–Rado theorem

In combinatorics, the Erdős–Ko–Rado theorem of Paul Erdős, Chao Ko, and Richard Rado is a theorem on intersecting set families. The theorem is as follows. Suppose that A is a family of distinct subsets of $\{1,2,\ldots,n\}$ such that each subset is of size r and each pair of subsets has a nonempty intersection, and suppose that n ≥ 2r. Then the number of sets in A is less than or equal to the binomial coefficient $\binom{n-1}{r-1}$.

The result is part of the theory of hypergraphs. A family of sets may also be called a hypergraph, and when all the sets (which are called "hyperedges" in this context) have the same size r, it is called an r-uniform hypergraph. The theorem thus gives an upper bound for the number of pairwise non-disjoint hyperedges in an r-uniform hypergraph with n vertices and n ≥ 2r. The theorem may also be formulated in terms of graph theory: the independence number of the Kneser graph $KG_{n,r}$ for n ≥ 2r is $\alpha(KG_{n,r})=\binom{n-1}{r-1}$.

According to Erdős (1987) the theorem was proved in 1938, but was not published until 1961, in an apparently more general form: the subsets in question were only required to be of size at most r, with the additional requirement that no subset be contained in any other. A version of the theorem also holds for signed sets (Bollobás & Leader 1997).

Proof

The original proof of 1961 used induction on n. In 1972, Gyula O. H. Katona gave the following short proof using double counting. Suppose we have some such family of subsets A. Arrange the elements of {1, 2, ..., n} in any cyclic order, and consider the sets from A that form intervals of length r within this cyclic order. For example, if n = 8 and r = 3, we could arrange the numbers {1, 2, ..., 8} into the cyclic order (3,1,5,4,2,7,6,8), which has eight intervals: (3,1,5), (1,5,4), (5,4,2), (4,2,7), (2,7,6), (7,6,8), (6,8,3), and (8,3,1).

However, it is not possible for all of the intervals of the cyclic order to belong to A, because some pairs of them are disjoint. Katona's key observation is that at most r of the intervals for a single cyclic order may belong to A. To see this, note that if $(a_1, a_2, \ldots, a_r)$ is one of these intervals in A, then every other interval of the same cyclic order that belongs to A separates $a_i$ and $a_{i+1}$ for some i (that is, it contains precisely one of these two elements). The two intervals that separate a given pair are disjoint, so at most one of them can belong to A. Thus, besides $(a_1,\ldots,a_r)$ itself, A contains at most one interval for each of the r − 1 pairs $(a_i, a_{i+1})$, for a total of at most r intervals.

Based on this idea, we may count the number of pairs (S, C), where S is a set in A and C is a cyclic order for which S is an interval, in two ways. First, for each set S one may generate C by choosing one of r! permutations of S and (n − r)! permutations of the remaining elements, showing that the number of pairs is |A| r!(n − r)!. And second, there are (n − 1)! cyclic orders, each of which has at most r intervals of A, so the number of pairs is at most r(n − 1)!. Combining these two counts gives the inequality

$|A|\, r!\,(n-r)! \le r\,(n-1)!$

and dividing both sides by r!(n − r)!
gives the result

$|A| \le \frac{r\,(n-1)!}{r!\,(n-r)!} = \binom{n-1}{r-1}.$

Two constructions for an intersecting family of r-sets: fix one element and choose the remaining elements in all possible ways, or (when n = 2r) exclude one element and choose all subsets of the remaining elements. Here n = 4 and r = 2.

Families of maximum size

There are two different and straightforward constructions for an intersecting family of r-element sets achieving the Erdős–Ko–Rado bound on cardinality. First, choose any fixed element x, and let A consist of all r-subsets of $\{1,2,\ldots,n\}$ that include x. For instance, if n = 4, r = 2, and x = 1, this produces the family of three 2-sets {1,2}, {1,3}, {1,4}. Any two sets in this family intersect, because they both include x. Second, when n = 2r and with x as above, let A consist of all r-subsets of $\{1,2,\ldots,n\}$ that do not include x. For the same parameters as above, this produces the family {2,3}, {2,4}, {3,4}. Any two sets in this family have a total of 2r = n elements among them, chosen from the n − 1 elements that are unequal to x, so by the pigeonhole principle they must have an element in common.

When n > 2r, families of the first type (variously known as sunflowers, stars, dictatorships, centred families, or principal families) are the unique maximum families. Friedgut (2008) proved that in this case a family of almost maximum size has an element that is common to almost all of its sets. This property is known as stability.

The seven points and seven lines (one drawn as a circle) of the Fano plane form a maximal intersecting family.

Maximal intersecting families

An intersecting family of r-element sets may be maximal, in that no further set can be added without destroying the intersection property, but not of maximum size. An example with n = 7 and r = 3 is the set of 7 lines of the Fano plane, much less than the Erdős–Ko–Rado bound of 15.

Intersecting families of subspaces

There is a q-analog of the Erdős–Ko–Rado theorem for intersecting families of subspaces over finite fields (Frankl & Wilson 1986): if $S$ is an intersecting family of $k$-dimensional subspaces of an $n$-dimensional vector space over a finite field of order $q$, and $n \ge 2k$, then

$\vert S\vert \le \binom{n-1}{k-1}_{q}.$

Relation to graphs in association schemes

The Erdős–Ko–Rado theorem gives a bound on the maximum size of an independent set in Kneser graphs contained in Johnson schemes. Similarly, the analog of the Erdős–Ko–Rado theorem for intersecting families of subspaces over finite fields gives a bound on the maximum size of an independent set in q-Kneser graphs contained in Grassmann schemes.

References

Bollobás, B.; Leader, I. (1997), "An Erdős–Ko–Rado theorem for signed sets", Computers and Mathematics with Applications, 34 (11): 9–13, doi:10.1016/S0898-1221(97)00215-0, MR 1486880.
Erdős, P. (1987), "My joint work with Richard Rado", in Whitehead, C. (ed.), Surveys in Combinatorics, 1987: Invited Papers for the Eleventh British Combinatorial Conference, London Mathematical Society Lecture Note Series, vol. 123, Cambridge University Press, pp. 53–80, ISBN 978-0-521-34805-8.
Erdős, P.; Ko, C.; Rado, R. (1961), "Intersection theorems for systems of finite sets", Quarterly Journal of Mathematics, Second Series, 12: 313–320, doi:10.1093/qmath/12.1.313.
Frankl, P.; Wilson, R. M.
(1986), "The Erdős-Ko-Rado theorem for vector spaces", Journal of Combinatorial Theory, Series A, 43 (2): 228–236, doi:10.1016/0097-3165(86)90063-4. Friedgut, Ehud (2008), "On the measure of intersecting families, uniqueness and stability" (PDF), Combinatorica, 28 (5): 503–528, doi:10.1007/s00493-008-2318-9, S2CID 7225916 Katona, G. O. H. (1972), "A simple proof of the Erdös-Chao Ko-Rado theorem", Journal of Combinatorial Theory, Series B, 13 (2): 183–184, doi:10.1016/0095-8956(72)90054-8. Godsil, Christopher; Karen, Meagher (2015), Erdős–Ko–Rado Theorems: Algebraic Approaches, Cambridge Studies in Advanced Mathematics, Cambridge University Press, ISBN 9781107128446. Categories: Set familiesTheorems in discrete mathematicsFactorial and binomial topicsPaul Erdős Si quieres conocer otros artículos parecidos a Erdős–Ko–Rado theorem puedes visitar la categoría Factorial and binomial topics. Subir Utilizamos cookies propias y de terceros para mejorar la experiencia de usuario Más información
## Bublava and Klingenthal

Since it's hot in the northern hemisphere, here are some vacation images from last winter, shot in the mountains at the border between Germany and the Czech Republic. You can see parts of the towns of Klingenthal and Bublava. I also found some old games there from East German times.

### One Response to “Bublava and Klingenthal”

1. fisima Says: Ah those Czech snow crystals!
If you have MathML on your WordPress site, using the MathJax system to show it, then you need to know that MathJax is shutting down the CDN as of April 30, 2017. If, like me, you use the MathJax-LaTeX plugin, the solution is easy. Go to the Plugins – Settings – MathJax-LaTeX page. Uncheck the “Use MathJax CDN Service?” checkbox, and add https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js to the “Custom MathJax location?” text field. You can, of course, also download the MathJax scripts and install them locally, but I prefer to use a CDN. Save the changes, and you’re all set! Unfortunately there isn’t an equivalent of the MathJax ‘latest’ for the scripts, so every now and then you’ll need to update the location, but other than that there should be no differences.

One of my client websites suddenly started giving an error: Error establishing a database connection. When I went to the /wp-admin URL, the error was still there. This particular website is on shared hosting, so I logged into cPanel and checked that the database was still there. Then I checked the database and found some issues with some of the tables.

[site.wp_links] error: Table upgrade required. Please do "REPAIR TABLE wp_links" or dump/reload to fix it!
[site.wp_options] error: Table upgrade required. Please do "REPAIR TABLE wp_options" or dump/reload to fix it!
[site.wp_postmeta] status: OK
[site.wp_posts] status: OK
[site.wp_term_relationships] status: OK
[site.wp_term_taxonomy] error: Table upgrade required. Please do "REPAIR TABLE wp_term_taxonomy" or dump/reload to fix it!
[site.wp_terms] status: OK
[site.wp_usermeta] error: Table upgrade required. Please do "REPAIR TABLE wp_usermeta" or dump/reload to fix it!
[site.wp_users] error: Table upgrade required. Please do "REPAIR TABLE wp_users" or dump/reload to fix it!

Running those SQL queries on the appropriate database in phpMyAdmin fixed the problem. I don’t know whether the hosting company upgraded the database, or something happened with the automatic WordPress upgrade system, or if something else caused the problem.

[Update] There were a bunch of other errors that cropped up afterwards with the White Screen of Death; I had to call the hosting company to sort out the server-side errors causing those. It’s possible those errors were the original cause of the database problems, whatever they were.

WordPress was designed for public websites, not private ones, so password protection can be a little clunky. Fortunately there are plugins to help, but (as always) there are trade-offs to be made. When all you want to do is add a password to stop search engines indexing and outsiders reading the content, but you also want to make it as easy as possible for people to use, there’s the Password Protected plugin. As it says, it doesn’t protect the images or other uploaded content. If you also want to protect the media, you will need to give people an account on the WordPress site (with username and password). Then you can use the htaccess edits detailed at http://www.idowebdesign.ca/wordpress/password-protect-wordpress-attachments/. This works, but in many cases you just don’t want to give lots of people accounts on the system, or make groups of people share an account. So it’s a trade-off – how important is password-protecting the images versus the administration overhead of user accounts with the associated username/password ease-of-use issues?
If you do want to use usernames and passwords, perhaps giving a group of people a shared account, I’d recommend also using one of the plugins that helps with finer-grained access control, such as Members, to stop people being able to change things you don’t want them changing (such as passwords for the shared account).

I've been working at Design Science for a couple of months now, as Senior Product Manager concentrating on the MathFlow products. So I figured I should enable MathML support on my blog. It's not hard, but like everything in tech there are a few niggly details. Many of those issues are caused by WordPress's over-eager helpfulness, which has to be reined in on a regular basis if you're doing anything at all out of the ordinary. Like editing your posts directly in HTML rather than using some pseudo-WYSIWYG editor.

Theoretically, showing MathML in a browser is easy, at least for the sort of equations that most people put in blog posts, even though not all browsers support MathML directly. You just use the MathJax JavaScript library. On WordPress there is even a plugin that adds the right script element, the MathJax-Latex plugin. You can make every page load MathJax, or use the [mathjax] shortcode to tell it when to load.

The wrinkle comes with WordPress' tendency to "correct" the markup. When you add the MathML, WordPress sprinkles it with <br/> tags. MathJax chokes on those and shows nothing. Since the tags don't show up in the editor view, you need some way of stopping WordPress from adding them. The best way I've found is with the Raw HTML plugin. But there's a wrinkle with that too. For some reason, if you use the shortcode version of the begin and end markers ([raw]), the editor decides that the XML characters between those markers have to be turned into character entities, so for example the < characters are turned into &lt;. To stop that, you need to a) check all the checkboxes in the Raw HTML settings on the post, and b) use the comment version (<!--raw--> and <!--/raw-->) to mark the beginning and end of the section instead of the shortcode version.

Once it's done, it's easy to add equations to your pages, so it's worth the extra few minutes to set it all up. A couple of examples taken from the MathJax samples page:

Curl of a Vector Field

$\vec{\nabla}\times\vec{F} = \left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z}\right)\mathbf{i} + \left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)\mathbf{j} + \left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)\mathbf{k}$

Standard Deviation

$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^N (x_i-\mu)^2}$

and one from my thesis from way back when $fλ=n!∏i

Given the current state of OpenSolaris (precarious, judging by various posts I’ve seen over the last few months) I decided to move the basement development and blog hosting machine back to Debian. I mostly use it for a couple of small WordPress blogs, and trying out various things (the odd Django project, Ruby on Rails, etc), so Debian is eminently suitable for that.

Step one: move the WordPress blogs on to an interim hosting solution, namely the same host where I currently host this blog. My package allows infinite add-on domains, so that works. To start with, I made sure I had no broken links on the blogs in their old home — I didn’t want to try to hunt down errors in the new blogs that already existed on the old ones. The whole process worked fairly well (install new WordPress system on new host, export the old blog, import to the new one) except for a couple of wrinkles, which I’m detailing here for next time I need to do this.

1. When setting up the new blog, before you’ve switched the DNS, don’t put the final URL in the settings dialog.
This just means you can’t log in to the temporary site, and you have to go into phpMyAdmin and fix the URL back to the temporary version. Get the site set up properly first, then switch the blog URL and the DNS settings.

2. The image attachments probably won’t work. If you import the posts and check the “import file attachment” box, some of them will attach properly, but not all, and you’ll have to manually upload a certain proportion of your images using SFTP or something similar. If you don’t check that box, none of the images will be attached to the right posts and you’ll have to manually upload all of them. If you’ve used standard markup to show photos, that works anyway, but if you’ve used the gallery shortcode, you’ll have to manually attach the images to the post. The best plugin I’ve found to help with this is the Add From Server plugin, where you can attach the images after you’ve uploaded them all. It’s still a lot of work if you have a lot of images.

Apart from that, step one went well. Now I have to make sure I have all the other useful files saved somewhere, and get on with the OS install.

I was upgrading the WordPress site for someone and had a few moments of panic when, after upgrading, all I could see were blank pages. Visions of having to go through the pain of reinstalling the database from the backup, and uploading all the files from the backup, were dancing through my head, which would have turned a quick upgrade into a long marathon. The upgrade here was from 2.6.something to 3.0.1, and I hadn’t bothered doing all the intermediate upgrades, so that made the prospect even worse.

Poking around the various support pages encouraged me to try a couple of different things first. The fact that all the pages were blank, both on the admin site and the publicly-visible site, made the problem seem worse than it ended up being. And the solutions turned out to be relatively simple.

Step 1: get the admin site going. I’d made all the plugins inactive, but following the advice on the WP FAQ troubleshooting page, I renamed the plugins directory to plugins.hold and created a new empty plugins directory. This worked, and I could see the admin site. It turns out that one particular plugin created havoc even when it wasn’t activated. I could then reinstall all the needed plugins cleanly from the automatic install, one at a time, testing to make sure each one worked.

Step 2: go to the Appearance page and turn on the default theme (one thing I’d forgotten to do before upgrading). It turns out that the old theme wasn’t compatible with 3.0.1, and showed only blank pages.

Now the site works again, albeit not looking quite the same as it did due to the theme, but that problem is tractable and doesn’t create anywhere near the same “oh, no” problem that the others did.
## Etymology of the O-notation for algebras of holomorphic functions

The notation $O(X)$ seems to be a quite standard notation for the algebra of all holomorphic functions on some connected domain in $\mathbb{C}^n$ (or a complex manifold). I would like to know where this notation came from. John P. D'Angelo's book simply says "See [GR, p.2] for a discussion of the etymology of this standard notation.", where [GR] is Grauert and Remmert's "Coherent Analytic Sheaves". However, I have no access to that book.

- You mean O, not $\Omega$, right? And "holomorphic" functions, not homomorphic ones. Anyway, back in the 19th century Dedekind used the fraktur letter O (well, more often o, I think) to denote rings of algebraic integers. It came from his term Ordnung for what later became rings, more or less. Thus in complex analysis the fundamental ring of holomorphic functions became denoted with O too, but they like to think it also honors the work of Oka. :) – KConrad Mar 25 2012 at 4:41
- Thanks, fixed. I hope I could mark this as the answer. Is this also why the structure sheaf of a manifold shares this notation? – ssquidd Mar 25 2012 at 6:42
- I've heard that it comes from the Italian "(Funzione) Olomorfa" but have never seen it written anywhere. – Georges Elencwajg Mar 25 2012 at 10:23
What is Acceptance Testing?

Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications and is ready for delivery. It is the final level of software testing and is carried out as a black-box activity: the testers are not concerned with the internal structure of the code, but feed the system inputs and check that it responds with the correct results. Acceptance tests are also known as functional tests, customer tests, or story tests.

User Acceptance Testing (UAT), also known as end-user testing or beta testing, is the last phase of the software testing process. In UAT, actual users of the software test it to make sure it can handle the required tasks in real-world scenarios. The business customers are the primary owners of these tests: they specify the input to the system and check whether the system responds with the correct result. The goal of UAT is to ensure that the software meets both the business requirements and the usability expectations of the end user. Once the entry criteria for UAT are satisfied, a typical UAT process consists of: analysis of the business requirements, creation of a UAT test plan, identification of test scenarios, creation and execution of UAT test cases, and final sign-off. This type of testing usually happens at the client location, which is why it is often referred to as beta testing.

Operational Acceptance Testing (OAT), also known as production acceptance testing, is one of the UAT testing types. It validates whether the system meets the requirements for operation, i.e., whether there is a proper workflow for running the software in production, and it is typically performed by system administration before the system is released.

Two further variants are commonly distinguished. Internal acceptance testing (also known as alpha testing) is performed by members of the organization that developed the software who are not directly involved in the project, for example members of product management, sales, and/or customer support; it may be conducted in virtual environments at the developer's site. External acceptance testing (beta testing) is performed by people who are not employees of the organization, i.e., real end users, and is always conducted in real environments. Beta testing adds value to the software development life cycle because it gives the "real" customer an opportunity to provide input on the design, functionality, and usability of the product before it is released to its intended market.

Acceptance Test Driven Development (ATDD) involves team members with different perspectives and is also referred to as Story Test Driven Development (SDD), Specification by Example, or Behavior Driven Development (BDD). Acceptance tests are written by the product owner and should be brief statements that explain the intended behavior and result, for example: "The user clicks on this button and the text turns red." Such a test results in either a pass or a fail. An acceptance plan (also known as an acceptance test plan) is the schedule of tasks that will be undertaken at the end of the project to gain the customer's final approval; it is in effect an agreement between you and the customer, and it is more than just a task list.

Several related terms come up in this context, and an example test is given below. Functional testing is a quality assurance process and a type of black-box testing that bases its test cases on the specifications of the software component under test; functions are tested by feeding them input and examining the output, and the internal program structure is rarely considered (unlike white-box, or structural, testing). Test types emphasize particular quality aspects, also known as technical or non-functional aspects; a test type is a characteristic that focuses on a specific test objective and can be executed at any test level. Unit testing checks each unit of the source code, i.e., whether the individual modules of the application work properly, and is done by the developer in the developer's environment. Integration is the process of combining components or systems into larger assemblies. Smoke testing, also known as build verification testing or confidence testing, verifies that the important features of a build are working and that there are no showstoppers; it is a checkpoint of major functionality, while sanity testing is sometimes called tester acceptance testing. Confirmation testing, also known as re-testing, is performed after a defect has been fixed, to confirm that the change actually resolves it. Automation testing re-runs, quickly and repeatedly, test scenarios that were previously performed manually. Finally, off-the-shelf software (commercial off-the-shelf software, COTS) is a software product that is developed for the general market, i.e., for a large number of customers, and that is delivered to many customers in identical form.
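To make the black-box flavor of acceptance tests concrete, here is a minimal, hypothetical example in the pytest style; the checkout function and its business rules are invented purely for illustration and stand in for a real system under test.

```python
# A hypothetical acceptance-style test: it exercises the system only through its
# user-facing behavior (black box), never through its internal structure.

def checkout(cart, balance):
    """Toy system under test: returns (success, remaining_balance)."""
    total = sum(cart.values())
    if total <= 0 or total > balance:
        return False, balance
    return True, balance - total

def test_user_can_pay_when_funds_suffice():
    ok, remaining = checkout({"book": 12.0, "pen": 3.0}, balance=20.0)
    assert ok and remaining == 5.0

def test_user_cannot_overspend():
    ok, remaining = checkout({"laptop": 900.0}, balance=100.0)
    assert not ok and remaining == 100.0
```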
# Compute $\lim_{N\to \infty} A_N$ where $\{A_N\}$ is a sequence of matrices [closed] Given matrix, $$A_N = \begin{bmatrix} 1 & \frac{\pi}{2N}\\ \frac{-\pi}{2N} & 1\end{bmatrix}^N$$ compute $$\lim_{N \rightarrow \infty} A_N$$ I took the logarithm of both sides but was not able to figure out the limit. Any suggestions on how to approach this problem? ## closed as off-topic by Xander Henderson, Arnaud D., Cesareo, RRL, kimchi loverOct 16 at 21:04 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is missing context or other details: Please provide additional context, which ideally explains why the question is relevant to you and our community. Some forms of context include: background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc." – Xander Henderson, Arnaud D., Cesareo, RRL, kimchi lover If this question can be reworded to fit the rules in the help center, please edit the question. • @hardmath you are correct, sir. So: must find invertible $P$ such that $A = \left( P^{-1} D P \right)^N$ with $D$ diagonal, as then we also have $A = P^{-1} D^N P$ explicit. – Will Jagy Oct 16 at 2:40 • Notation is not great. I would have called it $A_n$ instead. – Rodrigo de Azevedo Oct 16 at 9:05 • Notably, we have $A_N = (I + X/N)^N$ where $$X = \frac{\pi}{2} \pmatrix{0&1\\-1&0}.$$ In fact, $e^X = \lim_{N \to \infty}(I + X/N)^N$ is equivalent to the usual definition of a matrix exponential, so $\lim_{N \to \infty}A_N = e^X$. – Omnomnomnom Oct 16 at 9:35 The characteristic polynomial of $$A$$ is given by: $$\det \begin{bmatrix} x-1 & \frac{-\pi}{2N} \\ \frac{\pi}{2N} & x-1\end{bmatrix} =(x-1)^2+\frac{\pi^2}{4N^2}$$ To find the eigenvalues of $$A$$, set the characteristic polynomial equal to zero, and solve for $$x$$. This gives us the complex conjugate eigenvalues $$x=\frac{i\pi}{2N}+1$$ and $$x=\frac{-i\pi}{2N}+1$$. Now, $$2$$ linearly independent eigenvectors of $$A$$ would then be: $$x=\begin{bmatrix} i \\ -1 \end{bmatrix}$$ and $$x=\begin{bmatrix} i \\ 1 \end{bmatrix}$$, corresponding to the eigenvalues $$x=\frac{i\pi}{2N}+1$$ and $$x=\frac{-i\pi}{2N}+1$$ respectively ( This part should be relatively easy to obtain ). Hence, $$A$$ is diagonalisable, and we have $$A=PDP^{-1}$$, where the diagonal matrix $$D$$ is such that $$D = \begin{bmatrix} 1+\frac{i\pi}{2N} & 0 \\ 0 & 1-\frac{i\pi}{2N} \end{bmatrix}$$ and $$P= \begin{bmatrix} i & i \\ -1 & 1 \end{bmatrix}$$. In addition, we may easily evaluate $$P^{-1}$$ to be: $$P^{-1}= \frac{1}{2i} \begin{bmatrix} 1 & -i \\ 1 & i \end{bmatrix}$$. Since $$A^{N}$$=$$(PDP^{-1})^N$$=$$PD^NP^{-1}$$, and $$D^N \rightarrow \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}$$ as $$N \rightarrow \infty$$ , we have that $$A^N=P\begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}P^{-1}$$ as $$N \rightarrow \infty$$ . Furthermore, this is simply equal to: $$\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$, giving us the desired limit of the matrix. I would use the isomorphism between the complex numbers of the form $$z= a + bi$$ and the $$2 \times 2$$ matrices of the form $$aI +bJ$$ with $$I$$ the identity matrix and $$J=\begin{bmatrix}0 & 1\\-1 & 0\end{bmatrix}$$ so then this becomes $$\lim_{N \rightarrow \infty}(1+\frac{\pi}{2N}i)^N$$. To take powers of complex numbers it's easier to express them in polar form which is going to be the same as diagonalizing. 
It's not hard to see that as $$N$$ approaches infinity the radius tends to $$1$$ and the angle is $$\theta = \frac{\pi}{2N}$$, so, using that $$r$$ is arbitrarily close to $$1$$, we get $$(e^{i\pi/2N})^N=e^{i\pi/2}=i$$, and we conclude that the limit is the matrix $$J$$.

Take out eigenvalues and eigenvectors: $$A_N = PD^NP^{-1},$$ where $$P$$ has the eigenvectors as columns and $$D$$ is a diagonal matrix containing the eigenvalues. Now take the limit of $$D^N$$, then multiply all the matrices.

Define $$B= \begin{bmatrix} 1 & \frac{\pi}{2N} \\ \frac{-\pi}{2N} & 1\\ \end{bmatrix}$$ therefore $$B^2=2B-\left(1+{\pi^2\over 4N^2}\right)I.$$ By assuming $$B^k=a_kB+b_kI$$ where $$a_0=0,\;b_0=1,\;a_1=1,\;b_1=0$$ we obtain $$B^{k+1}=BB^{k}=B(a_kB+b_kI)=a_kB^2+b_kB=(2a_k+b_k)B-a_k\left(1+{\pi^2\over 4N^2}\right)I$$ from which we obtain $$a_{k+1}=2a_k+b_k\\b_{k+1}=-a_k\left(1+{\pi^2\over 4N^2}\right)$$ which leads to $$a_{k+2}=2a_{k+1}-a_k\left(1+{\pi^2\over 4N^2}\right)$$ whose characteristic roots are $$1\pm{i\pi\over 2N}$$, so we immediately obtain $$a_k=C\left(1+{i\pi\over 2N}\right)^k+D\left(1-{i\pi\over 2N}\right)^k\\b_k=E\left(1+{i\pi\over 2N}\right)^k+F\left(1-{i\pi\over 2N}\right)^k$$ for some constants $$C,D,E,F$$. By applying the initial conditions we finally have $$C=-D={N\over i\pi} \\E={1\over 2}+{iN\over \pi}\\F={1\over 2}-{iN\over \pi}$$ and by substituting $$k=N$$ we obtain $$A_N=(C\cdot B+E\cdot I)\left(1+{i\pi\over 2N}\right)^N+(D\cdot B+F\cdot I)\left(1-{i\pi\over 2N}\right)^N\\=\begin{bmatrix}{1\over 2}&-{i\over 2}\\{i\over 2}&{1\over 2}\end{bmatrix}\left(1+{i\pi\over 2N}\right)^N+\begin{bmatrix}{1\over 2}&{i\over 2}\\-{i\over 2}&{1\over 2}\end{bmatrix}\left(1-{i\pi\over 2N}\right)^N$$ hence, since $$\left(1\pm{i\pi\over 2N}\right)^N\to e^{\pm i\pi/2}=\pm i$$, $$\lim_{N\to \infty}A_N=\begin{bmatrix}\cos {\pi\over 2}&\sin {\pi\over 2}\\-\sin {\pi\over 2}&\cos {\pi\over 2}\end{bmatrix}=\begin{bmatrix}0&1\\-1&0\end{bmatrix}$$ Additional Remark An interesting property is that $$\det\left(\lim_{N\to \infty}A_N\right)=1$$ and $$\lim_{N\to \infty}A_N$$ is a rotation of the Euclidean plane by the angle $$\pi\over 2$$, in agreement with the other answers.
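As a quick numerical sanity check of the answers above (a throwaway script, not part of the original thread), one can raise the matrix to the N-th power for increasing N and watch it approach the claimed limit:

```python
# Numerically verify that [[1, pi/2N], [-pi/2N, 1]]^N -> [[0, 1], [-1, 0]] as N grows.
import numpy as np

for N in (10, 100, 10_000):
    A = np.array([[1.0, np.pi / (2 * N)],
                  [-np.pi / (2 * N), 1.0]])
    print(N, np.linalg.matrix_power(A, N).round(4))
# The powers approach [[0, 1], [-1, 0]], i.e. exp(X) with X = (pi/2)*[[0, 1], [-1, 0]],
# a rotation of the plane by pi/2, as derived in the answers.
```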
Pseudospectra.jl

Pseudospectra is a Julia package for computing pseudospectra of non-symmetric matrices, and plotting them along with eigenvalues ("spectral portraits"). Some related computations and plots are also provided.

Introduction

Whereas the spectrum of a matrix is the set of its eigenvalues, a pseudospectrum is the set of complex numbers "close" to the spectrum in some practical sense. More precisely, the ϵ-pseudospectrum of a matrix A, $\sigma_{\epsilon}(A)$, is the set of complex numbers $\lambda$ such that

• $\lambda$ is an eigenvalue of some matrix $A+E$, where the perturbation $E$ is small: $\|E\| < \epsilon$, or
• the resolvent at $\lambda$ has a large norm: $\|(A-\lambda I)^{-1}\| > 1/\epsilon$

(the definitions are equivalent). Specifically, this package is currently limited to the unweighted 2-norm. Among other things, pseudospectra:

• elucidate transient behavior hidden to eigen-analysis, and
• indicate the utility of eigenvalues extracted via iterative methods like eigs.

This package facilitates computation, display, and investigation of the pseudospectra of matrices and some other representations of linear operators.

Spectral portraits

It is customary to display pseudospectra as contour plots of the logarithm of the inverse of the resolvent norm, $\epsilon = 1/\|(A-zI)^{-1}\|$, for $z$ in a subset of the complex plane. Thus $\sigma_{\epsilon}(A)$ is the union of the interiors of such contours. Such plots, sometimes called spectral portraits, are the most prominent product of this package. The figure shows a section of the complex plane with eigenvalues and contours of log10(ϵ). It was generated by the following code:

using Plots, Pseudospectra, LinearAlgebra
n = 150
B = diagm(1 => fill(2im, n-1), 2 => fill(-1, n-2), 3 => fill(2, n-3),
          -2 => fill(-4, n-2), -3 => fill(-2im, n-3))
spectralportrait(B)

Credit

Pseudospectra.jl is largely a translation of the acclaimed MATLAB-based EigTool (homepage here).

References

• The Pseudospectra Gateway.
• L. N. Trefethen and M. Embree, Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators, Princeton, 2005.
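Independently of the package, the definition above can be illustrated with a few lines of plain numpy: evaluate the smallest singular value of A − zI (which equals 1/‖(A − zI)⁻¹‖ in the 2-norm) on a grid, and contour its base-10 logarithm. This is only a naive sketch of the quantity that spectralportrait displays; the matrix below is an arbitrary nonnormal example, not the one from the README.

```python
# Naive spectral-portrait data: eps(z) = smallest singular value of (A - zI)
# over a grid in the complex plane; log10 contours delimit the pseudospectra.
import numpy as np

rng = np.random.default_rng(0)
n = 40
A = np.triu(rng.standard_normal((n, n)))       # an upper-triangular (nonnormal) test matrix
xs = np.linspace(-3, 3, 61)
ys = np.linspace(-3, 3, 61)
eps_grid = np.empty((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        z = x + 1j * y
        eps_grid[i, j] = np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]
# Plotting np.log10(eps_grid) with e.g. matplotlib's contour gives a rough portrait;
# eps_grid tends to zero at the eigenvalues of A.
```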
This paper presents saddlepoint approximations of state-of-the-art converse and achievability bounds for noncoherent, single-antenna, Rayleigh block-fading channels. These approximations can be calculated efficiently and are shown to be accurate for SNR values as small as 0 dB, blocklengths of 168 channel uses or more, and when the channel's coherence interval is not smaller than two. It is demonstrated that the derived approximations recover both the normal approximation and the reliability function of the channel.

## I Introduction

The study of the maximum coding rate achievable for a given blocklength and error probability has recently regained attention in the research community due to the increased interest in short-packet communication in wireless communications systems. Indeed, some of the new services in next-generation wireless-communication systems will require low latency and high reliability; see [1] and references therein. Under such constraints, capacity and outage capacity may no longer be accurate benchmarks, and more refined metrics on the maximum coding rate that take into account the short packet size required in low-latency applications are called for. Several techniques can be used to characterize the finite-blocklength performance. One possibility is to fix a reliability constraint and study the maximum coding rate as a function of the blocklength in the limit as the blocklength tends to infinity. This approach, sometimes referred to as normal approximation, was followed inter alia by Polyanskiy et al.
[2] who showed, for various channels with positive capacity $C$, that the maximum coding rate $R^*(n,\epsilon)$ at which data can be transmitted using an error-correcting code of a fixed length $n$ with a block-error probability not larger than $\epsilon$ can be tightly approximated by

$R^*(n,\epsilon)=C-\sqrt{\frac{V}{n}}\,Q^{-1}(\epsilon)+O\!\left(\frac{\log n}{n}\right) \qquad (1)$

where $V$ denotes the channel dispersion, $Q^{-1}(\cdot)$ denotes the inverse Gaussian $Q$-function, and $O(\log n/n)$ comprises terms that decay no slower than $(\log n)/n$. The work by Polyanskiy et al. [2] has been generalized to several wireless communication channels; see, e.g., [3, 4, 5, 6, 7, 8, 9, 10]. Particularly relevant to the present paper is the recent work by Lancho et al. [9, 10], who derived a high-SNR normal approximation for noncoherent single-antenna Rayleigh block-fading channels, which is the channel model considered in this work.

An alternative analysis of the short-packet performance follows from fixing the coding rate and studying the exponential decay of the error probability as the blocklength grows. The resulting error exponent is usually referred to as the reliability function [11, Ch. 5]. Error exponent results for this channel can be found in [12] and [13], where a random coding error exponent achievability bound is derived for multiple-antenna fading channels and for single-antenna Rician block-fading channels, respectively.

Both the exponential and sub-exponential behavior of the error probability can be characterized via the saddlepoint method [14, Ch. XVI]. This method has been applied in [15, 16, 17] to obtain approximations of the random coding union (RCU) bound [2, Th. 16], the RCU bound with parameter $s$ (RCUs) [18, Th. 1], and the meta-converse (MC) bound [2, Th. 31] for some memoryless channels.

In this paper, we apply the saddlepoint method to derive approximations of the MC and the RCUs bounds for noncoherent single-antenna Rayleigh block-fading channels. While these approximations must be evaluated numerically, the computational complexity is independent of the number of diversity branches $L$. This is in stark contrast to the nonasymptotic MC and RCUs bounds, whose evaluation has a computational complexity that grows linearly in $L$. Numerical evidence suggests that the saddlepoint approximations, although developed under the assumption of large $L$, are accurate even for moderate values of $L$ if the SNR is greater than or equal to 0 dB. Furthermore, the proposed approximations are shown to recover the normal approximation and the reliability function of the channel, thus providing a unifying tool for the two regimes, which are usually considered separately in the literature.

In our analysis, the saddlepoint method is applied to the tail probabilities appearing in the nonasymptotic MC and RCUs bounds. These probabilities often depend on a set of parameters, such as the SNR. Existing saddlepoint expansions do not consider such dependencies. Hence, they can only characterize the behavior of the expansion error as a function of $n$, but not in terms of the remaining parameters. In contrast, we derive in Section II a saddlepoint expansion for random variables whose distribution depends on a parameter $\theta$, carefully analyze the error terms, and demonstrate that they are uniform in $\theta$. We then apply the expansion to the Rayleigh block-fading channel introduced in Section III. As shown in Sections IV–VII, this results in accurate performance approximations in which the error terms depend only on the blocklength and are uniform in the remaining parameters.
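As a generic illustration of the saddlepoint idea used in the expansions derived in the next section (this toy example is not from the paper itself), the leading term of such an expansion can be coded in a few lines and checked against a case with a known tail: centered exponential summands, for which the exact tail probability is a Gamma tail. The distribution, parameter values, and function names below are purely illustrative.

```python
# Leading-order saddlepoint approximation of P[sum_k X_k >= gamma] for i.i.d.
# zero-mean X_k = E_k - 1 with E_k ~ Exp(1), checked against the exact Gamma tail.
# The leading term is exp(n*(psi(tau) - tau*psi'(tau))) * exp(n*tau^2*psi''(tau)/2)
# * Q(tau*sqrt(n*psi''(tau))), with tau chosen so that psi'(tau) = gamma/n.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm, gamma

def saddlepoint_tail(n, gam):
    psi   = lambda z: -np.log(1 - z) - z        # CGF of X_k = Exp(1) - 1, valid for z < 1
    dpsi  = lambda z: 1 / (1 - z) - 1
    ddpsi = lambda z: 1 / (1 - z) ** 2
    tau = brentq(lambda z: dpsi(z) - gam / n, 1e-12, 1 - 1e-9)   # solve psi'(tau) = gamma/n
    f = np.exp(n * tau**2 * ddpsi(tau) / 2) * norm.sf(tau * np.sqrt(n * ddpsi(tau)))
    return np.exp(n * (psi(tau) - tau * dpsi(tau))) * f

n, gam = 20, 10.0
print("saddlepoint:", saddlepoint_tail(n, gam))
print("exact      :", gamma.sf(n + gam, n))     # P[Gamma(n,1) >= n + gamma]
```

The two printed values should be close even for this modest n; the omitted correction terms are of order 1/sqrt(n), which is what the refined expansions of Section II quantify.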
#### Notation We denote scalar random variables by upper case letters such as , and their realizations by lower case letters such as . Likewise, we use boldface upper case letters to denote random vectors, i.e., , and we use boldface lower case letters such as to denote their realizations. We use upper case letters with the standard font to denote distributions, and lower case letters with the standard font to denote probability density functions (pdf). We use to denote the purely imaginary unit magnitude complex number . The superscript denotes Hermitian transposition. We use “” to denote equality in distribution. We further use to denote the set of real numbers, to denote the set of complex numbers, to denote the set of integer numbers, for the set of positive integer numbers, and for the set of nonnegative integer numbers. We denote by the natural logarithm, by the cosine function, by the sine function, by the Gaussian Q-function, by the Gamma function [19, Sec. 6.1.1], by the regularized lower incomplete gamma function [19, Sec. 6.5], by the digamma function [19, Sec. 6.3.2], and by the Gauss hypergeometric function [20, Sec. 9.1] . The gamma distribution with parameters and is denoted by . We use to denote and to denote the ceiling function. We denote by Euler’s constant. We use the notation to describe terms that vanish as and are uniform in the rest of parameters involved. For example, we say that a function is if it satisfies limρ→∞supL≥L0|f(L,ρ)|=0 (2) for some independent of . Similarly, we use the notation to describe terms that are of order and are uniform in the rest of parameters. For example, we say that a function is if it satisfies supρ≥ρ0|g(L,ρ)|≤KlogLL,L≥L0 (3) for some , , and independent of and . Finally, we denote by the limit inferior and by the limit superior. Let be a sequence of independent and identically distributed (i.i.d.), real-valued, zero-mean, random variables, whose distribution depends on , where denotes the set of possible values of . The moment generating function (MGF) of is defined as mθ(ζ)≜E[eζXk] (4) the cumulant generating function (CGF) is defined as ψθ(ζ)≜logmθ(ζ) (5) and the characteristic function is defined as φθ(ζ)≜E[eiζXk]. (6) We denote by and the -th derivative of and , respectively. For the first, second, and third derivatives we sometimes use the notation , , , , , and . A random variable is said to be lattice if it is supported on the points , , …for some and . A random variable that is not lattice is said to be nonlattice. It can be shown that a random variable is nonlattice if, and only if, for every we have that [14, Ch. XV.1, Lemma 4] |φθ(ζ)|<1,|ζ|≥δ. (7) We shall say that a family of random variables (parametrized by ) is nonlattice if for every supθ∈Θ|φθ(ζ)|<1,|ζ|≥δ. (8) Similarly, we shall say that a family of distributions (parametrized by ) is nonlattice if the corresponding family of random variables is nonlattice. ###### Proposition 1 Let the family of i.i.d. random variables (parametrized by ) be nonlattice. Suppose that there exists a such that supθ∈Θ,|ζ|<ζ0∣∣m(k)θ(ζ)∣∣<∞,k=0,1,2,3,4 (9) and infθ∈Θ,|ζ|<ζ0ψ′′θ(ζ)>0. (10) Then, we have the following results: Part 1): If for the nonnegative there exists a such that , then P[n∑k=1Xk≥γ] = en[ψθ(τ)−τψ′θ(τ)][fθ(τ,τ)+Kθ(τ,n)√n+o(1√n)] (11) where comprises terms that vanish faster than and are uniform in and . Here, fθ(u,τ) ≜ enu22ψ′′θ(τ)Q(u√nψ′′θ(τ)) (12a) Kθ(τ,n) ≜ ψ′′′θ(τ)6ψ′′θ(τ)3/2(−1√2π+τ2ψ′′θ(τ)n√2π−τ3ψ′′θ(τ)3/2n3/2fθ(τ,τ)). (12b) Part 2): Let . 
If for the nonnegative there exists a such that , then P[n∑k=1Xk≥γ+logU] =en[ψθ(τ)−τψ′θ(τ)][fθ(τ,τ)+fθ(1−τ,τ)+~Kθ(τ,n)√n+o(1√n)] (13) where is defined as ~Kθ(τ,n) ≜ (14) and is uniform in and . ###### Corollary 2 Assume that there exists a satisfying (9) and (10). If for the nonnegative there exists a (for some arbitrary independent of and ) such that , then the saddlepoint expansion (13) can be upper-bounded as P[n∑k=1Xk≥γ+logU] ≤en[ψθ(τ)−τψ′θ(τ)][fθ(τ,τ)+fθ(1−τ,τ)+^Kθ(τ)√n+o(1√n)] (15) where is independent of , and is defined as ^Kθ(τ)≜1√2πψ′′′θ(τ)6ψ′′θ(τ)3/2 (16) and is uniform in and . ###### Remark 1 Since is zero-mean by assumption, we have that by Jensen’s inequality. Together with (9), this implies that supθ∈Θ,|ζ|<ζ0∣∣ψ(k)θ(ζ)∣∣<∞,k=0,1,2,3,4. (17) ###### Remark 2 When the nonnegative grows sublinearly in , for sufficiently large , one can always find a such that . Indeed, it follows by (9) and Remark 1 that is an analytic function on with power series ψθ(τ)=12ψ′′θ(0)τ2+16ψ′′′θ(0)τ3+… (18) Here, we have used that by definition and because is zero mean. By assumption (10), the function is strictly convex. Together with , this implies that strictly increases for . Hence, the choice ψ′θ(τ)=γn (19) establishes a one-to-one mapping between and , and implies that . Thus, for sufficiently large , is inside the region of convergence . ###### Proof: The proof follows closely the steps by Feller [14, Ch. XVI]. Since we consider a slightly more involved setting, where the distribution of depends on a parameter , we reproduce all the steps here. Let denote the distribution of , where . The CGF of is given by ~ψθ(ζ)≜ψθ(ζ)−ζ~γ. (20) We consider a tilted random variable with distribution ϑθ,τ(x)=e−~ψθ(τ)∫x−∞eτtdFθ(t)=e−ψθ(τ)+τ~γ∫x−∞eτtdFθ(t) (21) where the parameter lies in . Note that the exponential term on the right-hand side (RHS) of (21) is a normalizing factor that guarantees that is a distribution. Let denote the MGF of the tilted random variable , which is given by vθ,τ(ζ) = ∫∞−∞eζxdϑθ,τ(x) (22) = ∫∞−∞eζxe−ψθ(τ)+τ~γeτxdFθ(x) = e−ψθ(τ)+τ~γ∫∞−∞e(ζ+τ)xdFθ(x) = e−ψθ(τ)+τ~γE[e(ζ+τ)(Xk−~γ)] = e−ψθ(τ)E[e(ζ+τ)Xk]e−ζ~γ = mθ(ζ+τ)mθ(τ)e−ζ~γ. Together with , this yields E[Vk] = ∂vθ,τ(ζ)∂ζ∣∣∣ζ=0 (23) = e−ψθ(τ)(E[Xke(ζ+τ)Xk]e−ζ~γ−~γe−ζ~γE[e(ζ+τ)Xk])∣∣∣ζ=0 = e−ψθ(τ)(E[XkeτXk]−~γeψθ(τ)) = e−ψθ(τ)E[XkeτXk]−~γ = ψ′θ(τ)−~γ. Note that, by (9), derivative and expected value can be swapped as long as . This condition is, in turn, satisfied for sufficiently small as long as . Following along similar lines, one can show that Var[Vk]=E[V2k]−E[Vk]2=v′′θ,τ(0)−v′θ,τ(0)2=ψ′′θ(τ) (24) (25) and (26) Let now denote the distribution of and denote the distribution of . By (21) and (22), the distributions and again stand in the relationship (21) except that the term is replaced by and is replaced by . Since , by inverting (21) we can establish the relationship P[n∑k=1Xk≥γ]=enψθ(τ)−τγ∫∞0e−τydϑ⋆nθ,τ(y). (27) Furthermore, by choosing such that , it follows from (23) that the distribution has zero mean. We next substitute in (27) the distribution by the zero-mean normal distribution with variance , denoted by , and analyze the error incurred by this substitution. To this end, we define Aτ≜enψθ(τ)−τγ∫∞0e−τydNnψ′′θ(τ)(y). 
(28) By fixing according to (19), (28) becomes Aτ = en[ψθ(τ)−τψ′θ(τ)]√2πnψ′′θ(τ)∫∞0e−τye−y22nψ′′θ(τ)dy (29) = en[ψθ(τ)−τψ′θ(τ)]√2π∫∞0e−τt√nψ′′θ(τ)e−t22dt = en[ψθ(τ)−τψ′θ(τ)+τ22ψ′′θ(τ)]√2π∫∞0e−12(t+τ√nψ′′θ(τ))2dt = en[ψθ(τ)−τψ′θ(τ)+τ22ψ′′θ(τ)]√2π∫∞τ√nψ′′θ(τ)e−x22dx = en[ψθ(τ)−τψ′θ(τ)+τ22ψ′′θ(τ)]Q(τ√nψ′′θ(τ)) where the second equality follows by the change of variable , and the fourth equality follows by the change of variable . We next show that the error incurred by substituting for in (27) is small. To do so, we write P[n∑k=1Xk≥nψ′θ(τ)]−Aτ = en[ψθ(τ)−τψ′θ(τ)]∫∞0e−τy(dϑ⋆nθ,τ(y)−dNnψ′′θ(τ)(y)) (30) = en[ψθ(τ)−τψ′θ(τ)][−(ϑ⋆nθ,τ(0)−Nnψ′′θ(τ)(0)) +τ∫∞0(ϑ⋆nθ,τ(y)−Nnψ′′θ(τ)(y))e−τydy] where the last equality follows by integration by parts [14, Ch. V.6, Eq. (6.1)]. We next use [14, Sec. XVI.4, Th. 1] (stated as Lemma 3 below) to assess the error commited by replacing by . To state Lemma 3, we first introduce the following additional notation. Let be a sequence of i.i.d., real-valued, zero-mean, random variables with one-dimensional probability distribution that depends on an extra parameter . We denote the -th moment for any possible value of by μk,θ=∫∞−∞xkd~Fθ(x) (31) and we denote the second moment as . For the distribution of the normalized -fold convolution of a sequence of i.i.d., zero-mean, unit-variance random variables, we write ~Fn,θ(x)=~F⋆nθ(xσθ√n). (32) Note that has zero-mean and unit-variance. As above, we denote by the zero-mean, unit-variance, normal distribution, and we denote by the zero-mean, unit-variance, normal probability density function. ###### Lemma 3 Assume that the family of distributions (parametrized by ) is nonlattice. Further assume that, for any , supθ∈Θμ4,θ<∞, (33) and infθ∈Θσθ>0. (34) Then, for any , ~Fn,θ(x)−N(x)=μ3,θ6σ3θ√n(1−x2)n(x)+o(1√n) (35) where the term is uniform in and . ###### Proof: See Appendix A. We next use (35) from Lemma 3 to expand (30). To this end, we first note that, as shown in Appendix B, if a family of distributions is nonlattice, then so is the corresponding family of tilted distributions. Consequently, the family of distributions (parametrized by ) is nonlattice since the family (parametrized by ) is nonlattice by assumption. We next note that the variable in (30) corresponds to in (32). Hence, , so applying (35) to (30) with and , we obtain \IEEEeqnarraymulticol3lP[n∑k=1Xk≥nψ′θ(τ)]−Aτ (36) = en[ψθ(τ)−τψ′θ(τ)][−1√2πψ′′′θ(τ)6ψ′′θ(τ)3/2√n+o(1√n) +τ∫∞0⎛⎜ ⎜⎝ψ′′′θ(τ)6ψ′′θ(τ)3/2√n(1−y2nψ′′θ(τ))n⎛⎜ ⎜⎝y√ψ′′θ(τ)n⎞⎟ ⎟⎠+o(1√n)⎞⎟ ⎟⎠e−τydy] = en[ψθ(τ)−τψ′θ(τ)][1√2πψ′′′θ(τ)6ψ′′θ(τ)3/2√n(−1+∫∞0τ√ψ′′θ(τ)n(1−z2)e−τ√ψ′′θ(τ)nz−z22dz)+o(1√n)] = en[ψθ(τ)−τψ′θ(τ)][ψ′′′θ(τ)6ψ′′θ(τ)3/2√n(−1√2π+τ2ψ′′θ(τ)n√2π−τ3ψ′′θ(τ)3/2n3/2fθ(τ,τ))+o(1√n)] = en[ψθ(τ)−τψ′θ(τ)][Kθ(τ,n)√n+o(1√n)]. with defined in (12a), and defined in (12b). Here we used that and coincide with the second and third moments of the tilted random variable , respectively; see (24) and (25). The second equality follows by the change of variable . Finally, substituting in (29) into (36), and recalling that , we obtain Part 1) of Proposition 1, namely P[n∑k=1Xk≥nψ′θ(τ)] = en[ψθ(τ)−τψ′θ(τ)][fθ(τ,τ)+Kθ(τ,n)√n+o(1√n)]. (37) ###### Proof: The proof of Part 2) follows along similar lines as the proof of Part 1). Hence, we will focus on describing what is different. Specifically, the left-hand-side (LHS) of (13) differs from the LHS of (11) by the additional term . To account for this difference, we can follow the same steps as Scarlett et al. [15, Appendix E]. 
Since in our setting the distribution of depends on the parameter , we repeat the main steps in the following: P[n∑k=1Xk≥γ+logU] = enψθ(τ)−τγ∫10∫∞logue−τydϑ⋆nθ,τ(y)du (38) = enψθ(τ)−τγ∫∞−∞∫min{1,ey}0e−τydudϑ⋆nθ,τ(y) = enψθ(τ)−τγ(∫∞0e−τydϑ⋆nθ,τ(y)+∫0−∞e(1−τ)ydϑ⋆nθ,τ(y)) where the second equality follows from Fubini’s theorem [21, Ch. 2, Sec. 9.2]. We next proceed as in the proof of the previous part. The first term in (38) coincides with (27). We next focus on the second term, namely, enψθ(τ)−τγ∫0−∞e(1−τ)ydϑ⋆nθ,τ(y). (39) We substitute in (39) the distribution by the zero-mean normal distribution with variance , denoted by , which yields ~Aτ≜enψθ(τ)−τγ∫∞0e(1−τ)ydNnψ′′θ(τ)(y). (40) By fixing according to (19), (40) can be computed as ~Aτ = en[ψθ(τ)−τψ′θ(τ)]√2πnψ′′θ(τ)∫0−∞e(1−τ)ye−y22nψ′′θ(τ)dy (41) = en[ψθ(τ)−τψ′θ(τ)]√2π∫0−∞e(1−τ)t√nψ′′θ(τ)e−t22dt = en[ψθ(τ)−τψ′θ(τ)+(1−τ)22ψ′′θ(τ)]√2π∫0−∞e−12(t−(1−τ)√nψ′′θ(τ))2dt = en[ψθ(τ)−τψ′θ(τ)+(1−τ)22ψ′′θ(τ)]√2π∫−(1−τ)√nψ′′θ(τ)−∞e−x22dx = en[ψθ(τ)−τψ′θ(τ)+(1−τ)22ψ′′θ(τ)]√2π∫∞(1−τ)√nψ′′θ(τ)e−x22dx = en[ψθ(τ)−τψ′θ(τ)+(1−τ)22ψ′′θ(τ)]Q((1−τ)√nψ′′θ(τ)) where the second equality follows by the change of variable , and the fourth equality follows by the change of variable . As we did in (30), we next evaluate the error incurred by substituting by in (39). Indeed, \IEEEeqnarraymulticol3lenψθ(τ)−τγ∫0−∞e(1−τ)ydϑ⋆nθ,τ(y)−~Aτ (42) = en[ψθ(τ)−τψ′θ(τ)]∫0−∞e(1−τ)y(dϑ⋆nθ,τ(y)−dNnψ′′θ(τ)(y)) = en[ψθ(τ)−τψ′θ(τ)][(ϑ⋆nθ,τ(0)−Nnψ′′θ(τ)(0))−(1−τ)∫0−∞(ϑ⋆nθ,τ(y)−Nnψ′′θ(τ)(y))e(1−τ)ydy] = en[ψθ(τ)−τψ′θ(τ)][1√2πψ′′′θ(τ)6ψ′′θ(τ)3/
## sudo

##### Sun 20 August 2017

In the default system installation you should have the following line in /etc/sudoers:

pi ALL=(ALL) NOPASSWD: ALL

It tells sudo to allow user pi to run all commands as the root user without even providing a password. You can change the last ALL and specify a comma-delimited list of commands (with their full paths) that are allowed to run. In your case you should change this line to:

pi ALL=(ALL) NOPASSWD: /usr/bin/apt-get, /sbin/shutdown

Note that there is one more line in sudoers affecting the pi user:

%sudo ALL=(ALL:ALL) ALL

This line lets all users in the sudo group (the % character in front of the name means it's a group name instead of a user name) run ALL commands, provided they know their OWN password. If you leave this line in, user pi will be able to run all other commands but will be asked for his password. If you want to prevent this, you can either remove this line or remove user pi from the sudo group.

After making changes to the /etc/sudoers file you may want to check that it really does what you want by calling the sudo -l -U pi command.
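For example, on a stock Raspbian/Debian setup the whole round trip could look like the following (a rough sketch; visudo and deluser are the usual tools for this, adjust to your own system):

sudo visudo            # edit /etc/sudoers with a syntax check before saving
sudo deluser pi sudo   # optional: drop pi from the sudo group instead of editing the %sudo line
sudo -l -U pi          # list the sudo rules that now apply to user pi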
12:07 AM @cfr Well, it's comment in the non-site sense of the word. :) I hadn't noticed that the documentation was so minimal (not much for link clicking today...), so this is irony on your part. (I thought it was funny no matter). But maybe that's why people prefer the other packages... 12:36 AM @AlanMunn I haven't tried to use it, so I'm not sure. It may simply be that 3 pages is sufficient. Not every package needs 1000+ manual, I hope! And, after all, so far the documentation for this package has fully satisfied my documentation needs with respect to it. I don't think it is supposed to be enormously fancy. Basically, I think it is a framework into which you can slot more-or-less fancy stuff. By default, simple (\fbox, \colorbox); optionally fancier (\tikz). 6 hours later… 6:07 AM @HenriMenke I've been away for a few days, trying your update now. ! You can't use \Umathchar"4"0"28' after \the. <recently read> \lparen Is there a reference where I can look up what character this is? (This would make debugging easier.) If I ignore the error, it seems to produce a good result, in particular the kerning of (j is fixed! But I have tens of these errors on every page... 7:06 AM @Paulo A song by Enrique that was ok. I heard it just once now, let's wait till the rest of the day is over and the song was played 20 times. :-) 2 hours later… 9:03 AM @Earthliŋ You can find these numbers in the document »Every symbol (most symbols) defined by unicode-math«. You find it via texdoc unimath-symbols. @Earthliŋ I have no idea where this error arises and with a MWE there is nothing I can do about it. 9:22 AM @Johannes_B I heard that Ke$ha doesn't use TeX and friends. @PauloCereda Why not? She's so rich, she should have enough$ to match all the missing ones. @HenriMenke ooh She could use some catcodes. :) 9:48 AM In other news, Pokémon Go is far below my expectations, so I uninstalled it from my phone. I need to find other ways of procrastination. :) @PauloCereda The Olympic Games? @egreg Oh no, I don't want to get too much involved. :) I am rooting against them. :) I love Red Dwarf! :) I toast, therefore I am :D 10:07 AM @JosephWright ooh :) @Joseph: I blame Andrew Stacey for introducing me to this awesome series. :) 1 hour later… 11:17 AM 11:33 AM English language question: proper nouns (like Lebesgue in the Lebesgue integral), hyphernation not allowed? 12:10 PM @daleif -- hyphenation discouraged, but allowed if there is no other reasonable alternative. but that's not a great example. the only possibility is "Le-besgue"; the hyphenation would best follow the pattern of the original language, unless it's really too "foreign" to english patterns. (i can't think of an example of that just now, but you can look at the last couple of pages of the tugboat hyphenation exception list, tb0hyf.pdf on ctan, for examples of names.) 12:43 PM @barbarabeeton Thanks, I found something similar in one of my style manuals. It is not usually something that I go after when editing. In this case a PhD student asked, she's finishing her thesis. 1:11 PM Joseph Wright has added an event to this room's schedule. 1:49 PM @daleif ooh finishing thesis... @DavidCarlisle ^^ 2:37 PM @PauloCereda One needs to start writing it in order to arrive at the finishing stage. @cfr: I added a Python command to generate the page interval. :) 3:29 PM @yo': Tom! <3 Hey guys! Could you help me with a question on LaTeX table formatting? 3:53 PM 4:05 PM @PauloCereda You should answer and claim the points! @HenriMenke An MWE always makes things worse ;). 
@cfr I actually jus provided the range. In fact, I believe the OP wants a specific page numbering booklet, similar to something I use (I generate booklets in a way I can simply fold the pages, so the page interval is different). @SamWhited: Hi, long time no see! Welcome back! <3 4:31 PM 2 The following code % !TEX program = XeLaTeX \documentclass{article} \usepackage{unicode-math,mathtools} \begin{document} $\overline{A}\overline{A}\mathrlap{A}$ \end{document} gives Going deeper Through some experiments, I found that mathtools and \mathclap are not to blame. % !TE... Why is XeTeX so broken? @HenriMenke I blame @DavidCarlisle. :) @HenriMenke :( \input luaotfload.sty \font\lm="Latin Modern Math:script=math" at 10pt \textfont0=\lm \fam0 $A$ \bye Also, why is my math font not loaded? @PauloCereda Classic! @HenriMenke I expect a witty reply regarding my thesis.. :) @PauloCereda Didn't you go to SP to see our prime minister? :) @HenriMenke A doesn't use \fam0 and at the beginning of a formula, \fam is implicitly set to –1. 4:38 PM @egreg Unfortunately, this changed nothing. \input luaotfload.sty \font\lm="STIX Math:script=math" at 10pt \textfont1=\lm $A$ \bye @HenriMenke ^^^^ The mathcode of A makes TeX choose family 1. @egreg But now A is upright instead of math italic. @HenriMenke Of course, what did you expect without changing the math code? @JosephWright: I flagged a number of comments, as the conversation really wasn't all that constructive. @Werner Noted 4:44 PM @HenriMenke You need to change the math code \input luaotfload.sty \font\lm="STIX Math:script=math" at 10pt \textfont0=\lm \UmathcodeA="0 "0 "1D434 $A$ \bye @JosephWright If he was here, I would have continued the discussion, but the comment thread of that question is not the right place for it. @egreg Is there something tabulated? Or is it buried deep inside the unicode-math package? @HenriMenke Check unicode-math-table.tex MY NEW DUCKS ARRIVED @egreg ooh @PauloCereda Yeah :-) 4:54 PM @PauloCereda QUACKKKKK! @JosephWright There's a "British pack". :) @egreg Sure thing! @PauloCereda NEW ducks? @PauloCereda I do this for A5 booklets, but then the calculations are done by pdfpages automatically. 5:00 PM @cfr ooh cool @Werner YES 3 2 @egreg, @JosephWright, @cfr, @Werner ^^ :) @Johannes_B, @AlanMunn, @barbarabeeton, @UlrikeFischer ^^ @PauloCereda Hmmm, so someone is making a fortune (from you...) by painting rubber duckies. No wonder your credit card support here on TeX.SE seems to fall short. @Werner Exactly! :) 5:19 PM @DavidCarlisle I've rolled back the edit I made to your answer about gb4e since the solution doesn't work for linguex after all. (It works in the simple example but not in slightly more complex ones.) tex.stackexchange.com/q/209228/2693 6:00 PM @PauloCereda Why do all the ducks have a U on their chest? @AlanMunn I... I don't know! @AlanMunn o.O @PauloCereda <3 @AlanMunn <3 1 hour later… 7:26 PM Oops, and back again; hi (and thanks!) @PauloCereda @SamWhited Hi! <3 2 hours later… 9:02 PM @PauloCereda Words fail me ... oh, hey, I don't know if anyone here is a big emacs/orgmode fan, but I just created a room just for that on the emacs site (my main experience is using it to export talks into beamer) so here's the linky: chat.stackexchange.com/rooms/43541/emacs-orgmode @cfr in a good way? :) @AaronHall emacs booooo :) emacs yay!! what text editor do you use? Dad is watching TV and he said, "Get your smartphone and capture one of those Pokémanz so I can scare the cat..." Rubber ducky he's the one... 
9:16 PM I think dad hasn't got the gist of the game yet. @AaronHall Think of editor wars. :) Have you seen spacemacs? spacemacs.org > The best editor is neither Emacs nor Vim, it's Emacs and Vim! @AaronHall Yes, I do. :) I don't use emacs, but here it is. Yay! @PauloCereda You'd like my friend Amanda! @JosephWright What did she do? :) 9:20 PM @PauloCereda Currently catch all the pokemanz @PauloCereda She's just trained as a teacher and is 'free' until term starts @JosephWright ooh @JosephWright how nice! @Joseph: ask Amanda which team she picked. :) @PauloCereda I'm told a 'parasect' is good to catch @JosephWright ooh it's the evolved version of a Paras. :) I had a Zubat, which is a wacky bat. :) Left is Paras, right is Parasect. Jul 20 at 20:27, by David Carlisle @PauloCereda don't remember, it was sat on my keyboard at work so i threw a ball thing and put the phone back in my pocket.... Of course at school if you don't own a phone and have caught (or claimed to have caught) thousands of the things, you are a social outcast @JosephWright: @David is into Pokemanz too. :) @PauloCereda I'm sure @PauloCereda well I think orgmode is Emacs's killer app - people are really liking it. I wouldn't be using Emacs if it weren't for orgmode. 9:28 PM @AaronHall Little I know about orgmode, but people say it's nice. :) Think about it: plus vim needs a good operating system and language interface. and I'm using orgmode for export to LaTeX! o.O 10:19 PM @JavierGarcía-Salcedo Hi Javier, did you manage to get biber and biblatex working? 1 hour later… 11:20 PM @PauloCereda Paulo! :)
# Exercises and problems in Informatics, November 2002

## Please read The Conditions of the Problem Solving Competition.

I. 34. Binomial coefficients can be used to represent natural numbers in the so-called binomial base. For a fixed m (2 ≤ m ≤ 50) every natural number n (0 ≤ n ≤ 10000) can uniquely be represented as $\displaystyle n={a_1\choose1}+{a_2\choose2}+\dots+{a_m\choose m}$, where 0 ≤ a1 < a2 < ... < am. Your program (I34.pas, ...) should read the numbers n and m, then display the corresponding sequence a1, a2, ..., am.

Example. Let n = 41, then a1 = 1, a2 = 2, a3 = 4, a4 = 7, because $\displaystyle 41={1\choose1}+{2\choose2}+{4\choose3}+{7\choose4}=1+1+4+35.$ (10 points)

I. 35. We put an ant close beside the base of a cylinder-jacket with radius R and height H. In every minute the ant creeps upwards M centimetres. The cylinder is rotated around its axis (which is just the Z-axis) anticlockwise, completing T turns per minute. The ant starts from the point (R,0,0), and we are watching it at an angle of ALPHA degrees relative to the Y-axis, see the figure.

Figure 1. Figure 2.

Write your program (I35.pas, ...) which reads the values of R (1 ≤ R ≤ 50), H (1 ≤ H ≤ 200), M (1 ≤ M ≤ H), T (1 ≤ T ≤ 100) and ALPHA (0 ≤ ALPHA < 90), then displays the axonometric projection to the plane Y=0 of the path of the ant, using a continuous line on the visible side of the cylinder and a dotted line on the back side.

Example. Figure 2 shows the path of the ant with R=50, H=200, M=1, T=40, ALPHA=30. (10 points)

I. 36. According to the trinomial theorem $\displaystyle {(x+y+z)}^n=\sum_{\textstyle{0\le a,b,c\le n\atop a+b+c=n}} {a+b+c\choose a,b,c}x^ay^bz^c.$ The trinomial coefficients can be computed, for example, by the formula $\displaystyle {a+b+c\choose a,b,c}=\frac{(a+b+c)!}{a!b!c!}.$ However, these factorials can be very large, thus their direct computation is not always feasible. Nevertheless, writing trinomial coefficients as a product of binomial coefficients can settle this problem. Prepare your sheet (I36.xls) which, if n (n = a+b+c, n ≤ 20) is entered into a given cell, displays a table of trinomial coefficients, similar to the one below.

| a\b | 0  | 1  | 2  | 3  | 4 | 5 |
|-----|----|----|----|----|---|---|
| 0   | 1  | 5  | 10 | 10 | 5 | 1 |
| 1   | 5  | 20 | 30 | 20 | 5 | 0 |
| 2   | 10 | 30 | 30 | 10 | 0 | 0 |
| 3   | 10 | 20 | 10 | 0  | 0 | 0 |
| 4   | 5  | 5  | 0  | 0  | 0 | 0 |
| 5   | 1  | 0  | 0  | 0  | 0 | 0 |

The example shows the coefficients when n = 5. (10 points)

### Deadline: 13 December 2002
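Problem I. 34 can be solved greedily: choose a_m as the largest value with C(a_m, m) ≤ n, subtract, and repeat with m-1; the chosen values then automatically form a strictly increasing sequence when read from a_1 upwards. The competition asks for a Pascal source such as I34.pas, so the short Python sketch below (the function name and the simple search loop are only illustrative) just demonstrates the idea:

```python
from math import comb

def binomial_base(n, m):
    # greedy decomposition n = C(a1,1) + ... + C(am,m) with 0 <= a1 < a2 < ... < am
    a = [0] * (m + 1)
    upper = 10**9                      # effectively "no bound" for the first step
    for k in range(m, 0, -1):
        v = k - 1                      # comb(k-1, k) == 0, so this choice is always allowed
        while v + 1 < upper and comb(v + 1, k) <= n:
            v += 1
        a[k] = v
        n -= comb(v, k)
        upper = v                      # keep later (smaller-index) values strictly below this one
    return a[1:]

print(binomial_base(41, 4))            # [1, 2, 4, 7], matching the example above
```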
LED Blink is not working but code can program with PIC18F4550 and MikroC I tried to code LED on-off program for test PIC18F4550 microcontroller. The code can program to IC using Pikkit 2 but a program is not working. I code by using MikroC Pro for Pic 6.6.1. I am using 4Mhz crystal oscillator. Why program is not working? is it IC not working? Code: void main() { TRISA.B0=0; while(1){ PORTA.B0=1; Delay_ms(2000); PORTA.B0=0; Delay_ms(2000); } } This is my circuit diagram: • I only have small experience with PIC16 but the loading capacitance is way too high and have you set up corresponding PORT register? – Long Pham Sep 23 '18 at 7:22 • And also, was it a crystal oscillator or crystal resonator? – Long Pham Sep 23 '18 at 7:26 • Be sure to put a capacitor on your supply voltage! (Close to pin 31/32) – Mike Sep 24 '18 at 6:48 TRIS are the registers that control the direction (input or output) of a pin. You use only this register, so all you are doing is toggling between input and output. The first line is correct, but in the loop you must use a PORT or (better) LAT register to toggle the pin between high and low. • Sorry for that that is typing mistake – Ind Sep 23 '18 at 7:26 • When I use PIC16F887 microcontroller it is working by changing IC modal. – Ind Sep 23 '18 at 7:29 • What is IC modal?? – Wouter van Ooijen Sep 23 '18 at 7:36 • I mean the same code and change IC modal to PIC16F887 IC. I use PIC16F887 microcontroller it is working fine. Why this code not working for PIC18F4550 IC. – Ind Sep 23 '18 at 7:38 • I think this PIC18F4550 IC is not working. Why it can program? – Ind Sep 23 '18 at 7:41 simulate this circuit – Schematic created using CircuitLab Figure 1. LED orientations that work. If the diode in your schematic is the LED in question then it is missing the "light" arrows from the symbol and it is in backwards so it will never light. simulate this circuit Figure 2. This RC addition causes the $$\ \overline {\text {MCLR}} \$$ line to follow the power supply with a 10k x 100n = 1 ms time delay. Also the $$\ \overline {\text {MCLR}} \$$ may need to be pulled high a little later after the chip has powered up. A small RC time delay is normally used for this. Check the datasheet examples for recommended values. The delay allows the internal voltages to settle and holding the line low resets everything to known initial conditions ensuring proper initialisation. • When I am using PIC16F887A ic it does not have any problem same circuit only changed IC – Ind Sep 23 '18 at 9:19 • OK. You might be lucky with the MLCR on the other chip so check the datasheet for this one. The LED cannot work the way you have drawn it. – Transistor Sep 23 '18 at 9:22 • I removed capacitors of oscilators then it is working why is that? – Ind Sep 23 '18 at 10:41 • Capacitors were too large. There is sufficient stray capacitance for the oscillator to work. – Leon Heller Sep 23 '18 at 11:44
# Jeanine's Shoes

Probability Level 3

Jeanine has two pairs of red shoes, two pairs of green shoes, and two pairs of blue shoes. If Jeanine goes to her closet and randomly selects two shoes, what is the probability that Jeanine will have a pair of shoes that are of the same color and that are wearable?
How do you write the chemical equation for the acetate buffer equilibrium [same for both buffers]?

Dec 27, 2016

See below.

Explanation:

An acid buffer is a solution that contains roughly the same concentrations of a weak acid and its conjugate base. Thus, an acetate buffer contains roughly equal concentrations of acetic acid and acetate ion. They are in a chemical equilibrium with each other. The equation is:

CH₃COOH (acetic acid) + H₂O ⇌ CH₃COO⁻ (acetate ion) + H₃O⁺
# zbMATH — the first resource for mathematics Approximation properties of lowest-order hexahedral Raviart–Thomas finite elements. (English. Abridged French version) Zbl 1071.65148 Summary: Basic interpolation results are settled for lowest-order hexahedral Raviart-Thomas finite elements. Convergence in H(div) is proved for regular families of asymptotically parallelepiped meshes. The need of the asymptotically parallelepiped assumption is demonstrated with a numerical example. ##### MSC: 65N30 Finite element, Rayleigh-Ritz and Galerkin methods for boundary value problems involving PDEs 65N12 Stability and convergence of numerical methods for boundary value problems involving PDEs 65N50 Mesh generation, refinement, and adaptive methods for boundary value problems involving PDEs 35J25 Boundary value problems for second-order elliptic equations Full Text: ##### References: [1] Arnold, D.N.; Boffi, D.; Falk, R.S., Approximation by quadrilateral finite elements, Math. comp., 71, 909-922, (2002) · Zbl 0993.65125 [2] Arnold, D.N.; Boffi, D.; Falk, R.S., Quadrilateral $$\operatorname{H}(\operatorname{div})$$ finite elements, SIAM J. numer. anal., 42, 2429-2451, (2005) · Zbl 1086.65105 [3] Arnold, D.N.; Boffi, D.; Falk, R.S.; Gastaldi, L., Finite element approximation on quadrilateral meshes, Comm. numer. methods engrg., 17, 805-812, (2001) · Zbl 0999.76073 [4] Babuška, I.; Osborn, J., Eigenvalue problems, (), 641-787 · Zbl 0875.65087 [5] Bermúdez, A.; Durán, R.; Muschietti, M.A.; Rodríguez, R.; Solomin, J., Finite element vibration analysis of fluid – solid systems without spurious modes, SIAM J. numer. anal., 32, 1280-1295, (1995) · Zbl 0833.73050 [6] A. Bermúdez, P. Gamallo, M.R. Nogueiras, R. Rodríguez, Approximation of a structural acoustic vibration problem by hexahedral finite elements, submitted for publication [7] Bermúdez, A.; Gamallo, P.; Rodríguez, R., A hexahedral face element for elastoacoustic vibration problems, J. acoust. soc. amer., 109, 422-425, (2001) [8] Bermúdez, A.; Gamallo, P.; Rodríguez, R., An hexahedral face element method for the displacement formulation of structural acoustics problems, J. comput. acoust., 9, 911-918, (2001) [9] Bermúdez, A.; Hervella-Nieto, L.; Rodríguez, R., Finite element computation of three dimensional elastoacoustic vibrations, J. sound vib., 219, 277-304, (1999) · Zbl 1235.74267 [10] Bermúdez, A.; Rodríguez, R., Finite element computation of the vibration modes of a fluid – solid system, Comput. methods appl. mech. engrg., 119, 355-370, (1994) · Zbl 0851.73053 [11] Brezzi, F.; Fortin, M., Mixed and hybrid finite element methods, (1991), Springer New York · Zbl 0788.73002 [12] Girault, V.; Raviart, P.A., Finite element methods for navier – stokes equations. theory and algorithms, (1986), Springer-Verlag Berlin [13] Nédélec, J.C., Mixed finite elements in $$\mathbb{R}^3$$, Numer. math., 35, 315-341, (1980) · Zbl 0419.65069 [14] Raviart, P.A.; Thomas, J.M., A mixed finite element method for second order elliptic problems, (), 292-315 [15] J.M. Thomas, Sur l’Analyse Numérique des Méthodes d’Éléments Finis Hybrides et Mixtes, Thèse de Doctorat d’Etat, Université Pierre et Marie Curie, Paris, 1977 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. 
# Thread: Law of total probability - need some clarification 1. ## Law of total probability - need some clarification Two machines, A and B, operate in some factory. Machine A produces 10% of the factory's products. Machine B produces 90% of the factory's products. 1% of machine A's products are flawed. 5% of machine B's products are flawed. One product randomly selected. what is the probability that it's flawed? So, this is what I did: let $C=\left \{ product\hspace{4} x\hspace{4} is\hspace{4} flawed \right \}$ let $A=\left \{ x\hspace{4} produced\hspace{4} in\hspace{4} A \right \}$ let $B=\left \{ x\hspace{4} produced\hspace{4} in\hspace{4} B \right \}$ Now, according to the law of total probability, we'll get that: $P(C)=P(A)\cdot P(C|A)+P(B)\cdot P(C|B)=10\%\cdot (10\%\cdot 1\%)+90\%\cdot (90\%\cdot 5\%)=4.05\%$ but the answer is: $P(C)=P(A)\cdot P(C|A)+P(B)\cdot P(C|B)=(10\%\cdot 1\%)+(90\%\cdot 5\%)=4.6\%$ Why were $P(A)$ and $P(B)$ omitted? 2. ## Re: Law of total probability - need some clarification Originally Posted by Stormey Two machines, A and B, operate in some factory. Machine A produces 10% of the factory's products. Machine B produces 90% of the factory's products. 1% of machine A's products are flawed. 5% of machine B's products are flawed. One product randomly selected. what is the probability that it's flawed? So, this is what I did: let $C=\left \{ product\hspace{4} x\hspace{4} is\hspace{4} flawed \right \}$ let $A=\left \{ x\hspace{4} produced\hspace{4} in\hspace{4} A \right \}$ let $B=\left \{ x\hspace{4} produced\hspace{4} in\hspace{4} B \right \}$ $P(C)=P(A)\cdot P(C|A)+P(B)\cdot P(C|B)=(10\%\cdot 1\%)+(90\%\cdot 5\%)=4.6\%$ $P(C)=P(C\cap A)+P(C\cap B)$ $P(C\cap A)=P(C|A)P(A)=(0.1)(0.01)$, That is, probability it is flawed given A made it times probability that A made it. 3. ## Re: Law of total probability - need some clarification Hi Plato, thanks for the help. Originally Posted by Plato That is, probability it is flawed given A made it times probability that A made it. that is exactly my question... I'll try to rephrase. the probability tree looks like this: "probability it is flawed given A made it" (meaning P(C|A) ), if we follow the tree from its root along its branches, is $0.1\times 0.01$ "probability that A made it" (meaning P(A) ), again, according the tree it's $0.1$ so "probability it is flawed given A made it times probability that A made it" should be $(0.1\times 0.01)\times 0.1$ 4. ## Re: Law of total probability - need some clarification Originally Posted by Stormey Hi Plato, thanks for the help. that is exactly my question... I'll try to rephrase. the probability tree looks like this: "probability it is flawed given A made it" (meaning P(C|A) ), if we follow the tree from its root along its branches, is $0.1\times 0.01$ "probability that A made it" (meaning P(A) ), again, according the tree it's $0.1$ so "probability it is flawed given A made it times probability that A made it" should be $(0.1\times 0.01)\times 0.1$ I really don’t understand what you are asking. While in general I am no fan of probability trees, your's makes the answer quite clear. In order for a part to be flawed is must come from either machine A or machine B. \begin{align*} \mathcal{P}(C)&=\mathcal{P}(C\cap A)+ \mathcal{P}(C\cap B)\\ &= \mathcal{P}(A)\mathcal{P}(C|A)+ \mathcal{P}(B)\mathcal{P}(C|B)\\&=(0.1)(0.01)+(0.9 )(0.05)\end{align*} 5. ## Re: Law of total probability - need some clarification Originally Posted by Plato I really don’t understand what you are asking. 
While in general I am no fan of probability trees, your's makes the answer quite clear. In order for a part to be flawed is must come from either machine A or machine B. \begin{align*} \mathcal{P}(C)&=\mathcal{P}(C\cap A)+ \mathcal{P}(C\cap B)\\ &= \mathcal{P}(A)\mathcal{P}(C|A)+ \mathcal{P}(B)\mathcal{P}(C|B)\\&=(0.1)(0.01)+(0.9 )(0.05)\end{align*} OK, I get it now, I misunderstood the law of total probability, and it confused me. Thanks. 6. ## Re: Law of total probability - need some clarification Here's how I would do it: Assume the machines produce 1000 products Machine A produces 10% of the factory's products. so machine A produces 100 of the products. Machine B produces 90% of the factory's products. And machine B produces 900. 1% of machine A's products are flawed. 1% of 100 is 1. Machine A produces 1 flawed product. 5% of machine B's products are flawed. 5% of 900 is 45. Machine B produces 45 flawed products. One product randomly selected. what is the probability that it's flawed? Of the 1000 products 45+ 1= 46 are flawed. The probability any one is flawed is $\frac{46}{1000}= 0.046$.
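For anyone who wants to verify the same arithmetic mechanically, here is a tiny Python check (not part of the original posts):

```python
# Law of total probability: P(C) = P(A)P(C|A) + P(B)P(C|B)
p_flawed = 0.10 * 0.01 + 0.90 * 0.05
print(p_flawed)   # ~0.046, i.e. 4.6%, or 46 flawed parts out of 1000
```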
# Beamer - Frame numbering in alphabetic format I have redefined the footline of appendix frames in beamer. Now the frames after appendix show framenumber from one. Following is the code used: \documentclass[•]{beamer} \makeatletter \makeatother \begin{document} \begin{frame} The frame before appendix\\ Frame number = 1 \end{frame} \appendix \begin{frame} First frame of appendix\\ Actual frame number=2\\ What is now printed = 1 \end{frame} \end{document} My question is, how I can print the frame numbers of the footline in alphabetic mode ? I tried this: \setbeamertemplate{footline}{\alph{\number\numexpr\insertframenumber-\insertpresentationendframe}} but the code did not compile. Does anybody know how to fix this ? Please note that I don't want to redefine the frame numbering (as it will cause issues with my custom theme), but to print a different frame number (which is frame number - number of presentation end frame). I hope that I interpret your question correctly. \documentclass[¥]{beamer} \newcounter{fakepage} \makeatletter \makeatother \begin{document} \begin{frame} The frame before appendix\\ Frame number = 1 \end{frame} \appendix
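One reason the attempt in the question does not compile is that \alph expects the name of a LaTeX counter, not a number, so it cannot be fed a \numexpr expression directly. Below is a rough, untested sketch (not from the original answer) of one way around this. It reuses the fakepage counter introduced above and assumes it has been set to the last pre-appendix frame number, e.g. with \setcounter{fakepage}{\value{framenumber}} right before \appendix; the kernel helper \@alph turns a plain number into a letter.

```latex
\makeatletter
\setbeamertemplate{footline}{%
  \hfill
  \ifnum\insertframenumber>\value{fakepage}%
    % print (frame number - fakepage) as a, b, c, ...
    \@alph{\numexpr\insertframenumber-\value{fakepage}\relax}%
  \else
    \insertframenumber
  \fi
  \hspace*{2ex}\vskip2pt}
\makeatother
```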
Volume 395 - 37th International Cosmic Ray Conference (ICRC2021) - CRI - Cosmic Ray Indirect

Observation of Variations in Cosmic Ray Shower Rates During Thunderstorms and Implications for Large-Scale Electric Field Changes

R. Abbasi* for the Telescope Array Collaboration

Full text: pdf
Pre-published on: July 08, 2021

Abstract

This work presents the first observation by the Telescope Array Surface Detector (TASD) of the effect of thunderstorms on the development of cosmic ray showers. Observing variations in cosmic ray showers with the TASD allows us to study the electric field inside thunderstorms on a large scale, without the limitations of the narrow exposure in time and space of balloon and aircraft detectors. In this work, variations in the cosmic ray shower intensity ($\Delta N/N$) observed with the TASD were studied and found to be on average at the (1-2)% level. These variations were found to be both negative and positive in polarity, and they were correlated with lightning as well as with thunderstorms. The footprint of these variations on the ground ranged from 4 to 24 km in diameter, and they lasted for tens of minutes. The dependence of $\Delta N/N$ on the electric field inside thunderstorms is derived in this work from CORSIKA simulations.

DOI: https://doi.org/10.22323/1.395.0297
# Background

In bdrc: Bayesian Discharge Rating Curves
# BDA3 Chapter 2 Exercise 9

Here's my solution to exercise 9, chapter 2, of Gelman's Bayesian Data Analysis (BDA), 3rd edition. There are solutions to some of the exercises on the book's webpage.

$$\DeclareMathOperator{\dbinomial}{binomial} \DeclareMathOperator{\dbern}{Bernoulli} \DeclareMathOperator{\dnorm}{normal} \DeclareMathOperator{\dgamma}{gamma} \DeclareMathOperator{\invlogit}{invlogit} \DeclareMathOperator{\logit}{logit} \DeclareMathOperator{\dbeta}{beta}$$

The data show 650 people in support of the death penalty and 350 against. We explore the effect of different priors on the posterior.

First let's find the prior with a mean of 0.6 and standard deviation 0.3. The mean of the $$\dbeta(\alpha, \beta)$$ distribution is

$\frac{3}{5} = \frac{\alpha}{\alpha + \beta}$

which implies that $$\alpha = 1.5 \beta$$. The variance is

\begin{align} \frac{9}{100} &= \frac{\alpha \beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)} \\ &= \frac{3}{2} \frac{\beta^2}{\frac{25}{4}\beta^2 \frac{5\beta + 2}{2}} \\ &= \frac{3}{2}\cdot\frac{4}{25}\cdot\frac{2}{5\beta + 2} \\ &= \frac{12}{25(5\beta + 2)} \\ &\Leftrightarrow \\ 5\beta + 2 &= \frac{12}{25}\cdot\frac{100}{9} \\ &= \frac{16}{3}, \end{align}

which implies that $$\beta = \frac{2}{3}$$. Thus $$\alpha = 1$$. Let's check we've done the maths correctly.

```r
α <- 1
β <- 2 / 3

list(
  'mean_diff' = 3 / 5 - α / (α + β),
  'variance_diff' = 9 / 100 - α * β / ((α + β)^2 * (α + β + 1))
)
```

```
## $mean_diff
## [1] -1.110223e-16
##
## $variance_diff
## [1] -1.387779e-17
```

The value 1e-16 is computer-speak for 0. Since $$\beta < 1 \le \alpha$$, the density is increasing on (0, 1), so we see one maximum at 1.

```r
tibble(x = seq(0, 1, 0.001), y = dbeta(x, α, β)) %>%
  ggplot() +
  aes(x, y) +
  geom_area(fill = 'skyblue') +
  labs(
    x = 'x',
    y = 'beta(x | α, β)',
    title = 'Beta prior with mean 0.6 and standard deviation 0.3',
    subtitle = str_glue('α = {α}, β = {signif(β, 3)}')
  )
```

The beta prior is conjugate to the binomial likelihood, so the posterior is $$\dbeta(\alpha + 650, \beta + 350)$$. Let's plot the posterior with priors of different strength. We can increase the strength of the prior whilst keeping the mean constant by multiplying $$\alpha$$ and $$\beta$$ by the same constant c. We will use $$c \in \{ 1, 10, 100, 1000\}$$. In the plot below, we have restricted the x-axis to focus on the differences in the shape of the posteriors.

```r
support <- 650
against <- 350

expand.grid(magnitude = 0:3, x = seq(0, 1, 0.001)) %>%
  as_tibble() %>%
  mutate(
    c = 10^magnitude,
    a_prior = α * c,
    b_prior = β * c,
    y = dbeta(x, support + a_prior, against + b_prior),
    prior_magnitude = factor(as.character(10^magnitude))
  ) %>%
  ggplot() +
  aes(x, y, colour = prior_magnitude) +
  geom_line() +
  scale_x_continuous(limits = c(0.55, 0.75)) +
  labs(
    x = 'x',
    y = 'beta(x | support + a, against + b)',
    title = 'Beta posterior with different priors',
    subtitle = str_glue(paste(
      'a = {α} * 10^magnitude, b = {β} * 10^magnitude',
      'support = 650, against = 350',
      sep = '\n'
    )),
    colour = 'Magnitude of the prior'
  )
```

Magnitudes 1 and 10 give very similar results close to the maximum likelihood estimate of 65%. The higher magnitudes pull the mean towards the prior mean of 60%.
Search # Rational number In mathematics, a rational number (or informally fraction) is a ratio of two integers, usually written as the vulgar fraction a/b, where b is not zero. Each rational number can be written in infinitely many forms, for example 3 / 6 = 2 / 4 = 1 / 2. The simplest form is when a and b have no common divisors, and every rational number has a simplest form of this type. The decimal expansion of a rational number is eventually periodic (in the case of a finite expansion the zeroes which implicitly follow it form the periodic part). The same is true for any other integral base above 1. Conversely, if the expansion of a number for one base is periodic, it is periodic for all bases and the number is rational. A real number that is not rational is called an irrational number. In mathematics, the term "rational something" means that the underlying field considered is the field $\mathbb{Q}$ of rational numbers. For example, rational polynomials or rational prime ideals. The set of all rational numbers is denoted by Q, or in blackboard bold $\mathbb{Q}$. Using the set-builder notation $\mathbb{Q}$ is defined as such: $\mathbb{Q} = \left\{\frac{m}{n} : m \in \mathbb{Z}, n \in \mathbb{Z}, n \ne 0 \right\}$ Contents ## Arithmetic Addition and multiplication of rational numbers are as follows: $\frac{a}{b} + \frac{c}{d} = \frac{ad+bc}{bd}$ $\frac{a}{b} \cdot \frac{c}{d} = \frac{ac}{bd}$ Two rational numbers $\frac{a}{b}$ and $\frac{c}{d}$ are equal iff ad = bc Additive and multiplicative inverses exist in the rational numbers. $- \left( \frac{a}{b} \right) = \frac{-a}{b}$ $\left(\frac{a}{b}\right)^{-1} = \frac{b}{a} \mbox{ if } a \neq 0$ ## History ### Egyptian fractions Any positive rational number can be expressed as a sum of distinct reciprocals of positive integers. For instance, $\frac{5}{7} = \frac{1}{2} + \frac{1}{6} + \frac{1}{21}$ For any positive rational number, there are infinitely many different such representations. These representations are called Egyptian fractions, because the ancient Egyptians used them. The hieroglyph used for this is the letter that looks like a mouth, which is transliterated R, so the above fraction would be written as R2R6R21, or, using the hieroglyphs and writing left to right: Aa13 D21:Z1*Z1*Z1*Z1*Z1*Z1 D21:V20*V20*Z1 ½ is one of exactly three exceptions: it is written as shown in the first hieroglyph above. The two other exceptions were the two only non-unit fractions for which there were symbols: D22 $= \frac{2}{3}$ D23 $= \frac{3}{4}$ ## Formal construction Mathematically we may define them as an ordered pair of integers $\left(a, b\right)$, with b not equal to zero. We can define addition and multiplication of these pairs with the following rules: $\left(a, b\right) + \left(c, d\right) = \left(ad + bc, bd\right)$ $\left(a, b\right) \times \left(c, d\right) = \left(ac, bd\right)$ To conform to our expectation that 2 / 4 = 1 / 2, we define an equivalence relation $\sim$ upon these pairs with the following rule: $\left(a, b\right) \sim \left(c, d\right) \mbox{ iff } ad = bc$ This equivalence relation is compatible with the addition and multiplication defined above, and we may define Q to be the quotient set of ~, i.e. we identify two pairs (a, b) and (c, d) if they are equivalent in the above sense. (This construction can be carried out in any integral domain, see quotient field.) 
We can also define a total order on Q by writing $\left(a, b\right) \le \left(c, d\right) \mbox{ iff } ad \le bc$ ## Properties The set $\mathbb{Q}$, together with the addition and multiplication operations shown above, forms a field, the quotient field of the integers $\mathbb{Z}$. The rationals are the smallest field with characteristic 0: every other field of characteristic 0 contains a copy of $\mathbb{Q}$. The algebraic closure of $\mathbb{Q}$, i.e. the field of roots of rational polynomials, is the algebraic numbers. The set of all rational numbers is countable. Since the set of all real numbers is uncountable, we say that almost all real numbers are irrational, in the sense of Lebesgue measure. The rationals are a densely ordered set: between any two rationals, there sits another one, in fact infinitely many other ones. ## Real numbers The rationals are a dense subset of the real numbers: every real number has rational numbers arbitrarily close to it. A related property is that rational numbers are the only numbers with finite expressions of continued fraction. By virtue of their order, the rationals carry an order topology. The rational numbers are a (dense) subset of the real numbers, and as such they also carry a subspace topology. The rational numbers form a metric space by using the metric $d\left(x, y\right) = |x - y|$, and this yields a third topology on $\mathbb{Q}$. Fortunately, all three topologies coincide and turn the rationals into a topological field. The rational numbers are an important example of a space which is not locally compact. The space is also totally disconnected. The rational numbers do not form a complete metric space; the real numbers are the completion of $\mathbb{Q}$. In addition to the absolute value metric mentioned above, there are other metrics which turn $\mathbb{Q}$ into a topological field: let p be a prime number and for any non-zero integer a let | a | p = p - n, where pn is the highest power of p dividing a; in addition write | 0 | p = 0. For any rational number $\frac{a}{b}$, we set $\left|\frac{a}{b}\right|_p = \frac{|a|_p}{|b|_p}$. Then $d_p\left(x, y\right) = |x - y|_p$ defines a metric on $\mathbb{Q}$. The metric space $\left(\mathbb{Q}, d_p\right)$ is not complete, and its completion is the p-adic number field $\mathbb{Q}_p$. Last updated: 10-17-2005 18:20:15
cdfWeibull

Purpose

Computes the cumulative distribution function for the Weibull distribution.

Format

p = cdfWeibull(x, shape, scale)

Parameters

• x (NxK matrix, Nx1 vector or scalar) – Values at which to evaluate the cumulative distribution function for the Weibull distribution. x ≥ 0.
• shape (NxK matrix, Nx1 vector or scalar) – Shape parameter, ExE conformable with x. shape > 0.
• scale (NxK matrix, Nx1 vector or scalar) – Scale parameter, ExE conformable with x. scale > 0.

Returns

p (NxK matrix, Nx1 vector or scalar) – Each element in p is the cumulative distribution function of the Weibull distribution evaluated at the corresponding element in x.

Examples

Calculate the cdf for the Weibull distribution with different shape parameters.

// Values
x = seqa(0, 0.01, 301);

// Shape parameter
shape = 0.5~1~1.5~5;

// Scale parameter
scale = 1;

p = cdfWeibull(x, shape, scale);
plotxy(x, p);

After running the above code, a plot of the four Weibull cdf curves (one per shape parameter) is produced.

Remarks

The Weibull cumulative distribution function is defined as:

$F(x; k, \lambda) = 1 - e^{-(x/\lambda)^k}$
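As a quick spot check of the Remarks formula against the library function (scalar inputs with illustrative values; the variable names below are only for the example), both print statements should output the same number, about 0.42:

// Spot check of the closed form against cdfWeibull
x_chk = 2;
shape_chk = 1.5;
scale_chk = 3;

print cdfWeibull(x_chk, shape_chk, scale_chk);
print 1 - exp(-(x_chk/scale_chk)^shape_chk);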
### adamant's blog By adamant, history, 2 days ago, Hi everyone! You probably know that the primitive root modulo $m$ exists if and only if one of the following is true: • $m=2$ or $m=4$; • $m = p^k$ is a power of an odd prime number $p$; • $m = 2p^k$ is twice a power of an odd prime number $p$. Today I'd like to write about an interesting rationale about it through $p$-adic numbers. Hopefully, this will allow us to develop a deeper understanding of the multiplicative group modulo $p^k$. ### Tl;dr. For a prime number $p>2$ and $r \equiv 0 \pmod p$ one can uniquely define $\exp r = \sum\limits_{k=0}^\infty \frac{r^k}{k!} \pmod{p^n}.$ In this notion, if $g$ is a primitive root of remainders modulo $p$ lifted to have order $p-1$ modulo $p^n$ as well, then $g \exp p$ is a primitive root of remainders modulo $p^n$. Finally, for $p=2$ and $n>2$ the multiplicative group is generated by two numbers, namely $-1$ and $\exp 4$. Read more » • +137 By adamant, history, 5 weeks ago, Hi everyone! Today I want to describe an efficient solution of the following problem: Composition of Formal Power Series. Given $A(x)$ and $B(x)$ of degree $n-1$ such that $B(0) = 0$, compute $A(B(x)) \pmod{x^n}$. The condition $B(0)=0$ doesn't decrease the generality of the problem, as $A(B(x)) = P(Q(x))$, where $P(x) = A(x+B(0))$ and $Q(x) = B(x) - B(0)$. Hence you could replace $A(x)$ and $B(x)$ with $P(x)$ and $Q(x)$ when the condition is not satisfied. Solutions that I'm going to describe were published in Joris van der Hoeven's article about operations on formal power series. The article also describes a lot of other common algorithms on polynomials. It is worth noting that Joris van der Hoeven and David Harvey are the inventors of the breakthrough $O(n \log n)$ integer multiplication algorithm in the multitape Turing machine model. Read more » • +113 By adamant, history, 6 weeks ago, Hi everyone! Today I'd like to write a bit on the amazing things you can get out of a power series by putting roots of unity into its arguments. It will be another short story without any particular application in competitive programming (at least I don't know of them yet, but there could be). But I find the facts below amazing, hence I wanted to share them. You're expected to know some basic stuff about the discrete Fourier transform and a bit of linear algebra to understand the article. Read more » • +72 By adamant, history, 6 weeks ago, Hi everyone! Today I'd like to finally talk about an algorithm to solve the following tasks in $O(n \log^2 n)$: • Compute the greatest common divisor of two polynomials $P(x)$ and $Q(x)$; • Given $f(x)$ and $h(x)$ find the multiplicative inverse of $f(x)$ modulo $h(x)$; • Given $F_0,F_1, \dots, F_m$, recover the minimum linear recurrence $F_n = a_1 F_{n-1} + \dots + a_d F_{n-d}$; • Given $P(x)$ and $Q(x)$, find $A(x)$ and $B(x)$ such that $P(x) A(x) + Q(x) B(x) = \gcd(P, Q)$; • Given $P(x)=(x-\lambda_1)\dots(x-\lambda_n)$ and $Q(x)=(x-\mu_1)\dots(x-\mu_m)$ compute their resultant. More specifically, this allows to solve in $O(n \log^2 n)$ the following problems: Library Checker — Find Linear Recurrence. You're given $F_0, \dots, F_{m}$. Find $a_1, \dots, a_d$ with minimum $d$ such that $F_n = \sum\limits_{k=1}^d a_k F_{n-k}.$ Library Checker — Inv of Polynomials. You're given $f(x)$ and $h(x)$. Compute $f^{-1}(x)$ modulo $h(x)$. All tasks here are connected with the extended Euclidean algorithm and the procedure we're going to talk about is a way to compute it quickly. 
I recommend reading article on recovering minimum linear recurrence first, as it introduces some useful results and concepts. It is also highly recommended to familiarize yourself with the concept of continued fractions. Read more » • +179 By adamant, history, 7 weeks ago, Hi everyone! The task of finding the minimum linear recurrence for the given starting sequence is typically solved with the Berlekamp-Massey algorithm. In this article I would like to highlight another possible approach, with the use of the extended Euclidean algorithm. Great thanks to nor for the proofreading and all the useful comments to make the article more accessible and rigorous. ### Tl'dr. The procedure below is essentially a formalization of the extended Euclidean algorithm done on $F(x)$ and $x^{m+1}$. If you need to find the minimum linear recurrence for a given sequence $F_0, F_1, \dots, F_m$, do the following: Let $F(x) = F_m + F_{m-1} x + \dots + F_0 x^m$ be the generating function of the reversed $F$. Compute the sequence of remainders $r_{-2}, r_{-1}, r_0, \dots, r_k$ such that $r_{-2} = F(x)$, $r_{-1}=x^{m+1}$ and $r_{k} = r_{k-2} \mod r_{k-1}.$ Let $a_k(x)$ be a polynomial such that $r_k = r_{k-2} - a_k r_{k-1}$. Compute the auxiliary sequence $q_{-2}, q_{-1}, q_0, \dots, q_k$ such that $q_{-2} = 1$, $q_{-1} = 0$ and $q_{k} = q_{k-2} + a_k q_{k-1}.$ Pick $k$ to be the first index such that $\deg r_k < \deg q_k$. Let $q_k(x) = a_0 x^d - \sum\limits_{k=1}^d a_k x^{d-k}$, then it also holds that $F_n = \sum\limits_{k=1}^d \frac{a_k}{a_0}F_{n-k}$ for any $n \geq d$ and $d$ is the minimum possible. Thus, $q_k(x)$ divided by $a_0$ is the characteristic polynomial of the minimum linear for $F$. More generally, one can say for such $k$ that $F(x) \equiv \frac{(-1)^{k}r_k(x)}{q_k(x)} \pmod{x^{m+1}}.$ Read more » • +175 By adamant, history, 7 weeks ago, Hi everyone! Today I'd like to write about Fibonacci numbers. Ever heard of them? Fibonacci sequence is defined as $F_n = F_{n-1} + F_{n-2}$. It got me interested, what would the recurrence be like if it looked like $F_n = \alpha F_{n-p} + \beta F_{n-q}$ for $p \neq q$? Timus — Fibonacci Sequence. The sequence $F$ satisfies the condition $F_n = F_{n-1} + F_{n-2}$. You're given $F_i$ and $F_j$, compute $F_n$. Using $L(x^n) = F_n$ functional, we can say that we essentially need to solve the following system of equations: $1 \equiv \alpha x^{-p} + \beta x^{-q} \pmod{x^2-x-1}.$ To get the actual solution from it, we should first understand what exactly is the remainder of $x^n$ modulo $x^2-x-1$. The remainder of $P(x)$ modulo $(x-a)(x-b)$ is generally determined by $P(a)$ and $P(b)$: $P(x) \equiv r \mod(x-a)(x-b) \iff \begin{cases}P(a) = r,\\ P(b)=r.\end{cases}$ Therefore, our equation above is, equivalent to the following: $\begin{cases} \alpha a^{-p} + \beta a^{-q} = 1,\\ \alpha b^{-p} + \beta b^{-q} = 1. \end{cases}$ The determinant of this system of equations is $a^{-p}b^{-q} - a^{-q}b^{-p}$. Solving the system, we get the solution $\begin{matrix} \alpha = \dfrac{b^{-q}-a^{-q}}{a^{-p}b^{-q} - a^{-q}b^{-p}}, & \beta = \dfrac{a^{-p}-b^{-p}}{a^{-p}b^{-q} - a^{-q}b^{-p}}. \end{matrix}$ Multiplying numerators and denominators by $a^q b^q$ for $\alpha$ and $a^p b^p$ for $\beta$, we get a nicer form: $\boxed{\begin{matrix} \alpha = \dfrac{a^q-b^q}{a^{q-p} - b^{q-p}}, & \beta = \dfrac{a^p-b^p}{a^{p-q} - b^{p-q}}. \end{matrix}}$ This is a solution for a second degree recurrence with the characteristic polynomial $(x-a)(x-b)$. 
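A quick numeric sanity check of the boxed formulas for the Timus problem's polynomial $x^2 - x - 1$ (a small Python sketch with $p$, $q$ and $n$ picked arbitrarily; not part of the original derivation):

```python
# roots of x^2 - x - 1 and the Fibonacci sequence it annihilates
a = (1 + 5 ** 0.5) / 2
b = (1 - 5 ** 0.5) / 2

def fib(n):
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x

p, q, n = 3, 7, 20
alpha = (a**q - b**q) / (a**(q - p) - b**(q - p))
beta = (a**p - b**p) / (a**(p - q) - b**(p - q))
assert abs(alpha * fib(n - p) + beta * fib(n - q) - fib(n)) < 1e-6
```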
Note that for Fibonacci numbers in particular, due to Binet's formula, it holds that $F_n = \frac{a^n-b^n}{a-b}.$ Substituting it back into $\alpha$ and $\beta$, we get $\boxed{F_n = \frac{F_q}{F_{q-p}} F_{n-p} + \frac{F_p}{F_{p-q}} F_{n-q}}$ which is a neat symmetric formula. P. S. you can also derive it from Fibonacci matrix representation, but this way is much more fun, right? UPD: I further simplified the explanation, should be much easier to follow it now. Note that the generic solution only covers the case of $(x-a)(x-b)$ when $a \neq b$. When the characteristic polynomial is $(x-a)^2$, the remainder of $P(x)$ modulo $(x-a)^2$ is determined by $P(a)$ and $P'(a)$: $P(x) \equiv r \mod{(x-a)^2} \iff \begin{cases}P(a)=r,\\P'(a)=0.\end{cases}$ Therefore, we have a system of equations $\begin{cases} \alpha a^{-p} &+& \beta a^{-q} &=& 1,\\ \alpha p a^{-p-1} &+& \beta q a^{-q-1} &=& 0. \end{cases}$ For this system, the determinant is $\frac{q-p}{a^{p+q+1}}$ and the solution is $\boxed{\begin{matrix} \alpha = \dfrac{qa^p}{q-p},&\beta = \dfrac{pa^q}{p-q} \end{matrix}}$ Another interesting way to get this solution is via L'Hôpital's rule: $\lim\limits_{x \to 0}\frac{a^q-(a+x)^q}{a^{q-b}-(a+x)^{q-p}} = \lim\limits_{x \to 0}\frac{q(a+x)^{q-1}}{(q-p)(a+x)^{q-p-1}} = \frac{qa^p}{q-p}.$ Let's consider the more generic case of the characteristic polynomial $(x-\lambda_1)(x-\lambda_2)\dots (x-\lambda_k)$. 102129D - Basis Change. The sequence $F$ satisfies $F_n=\sum\limits_{i=1}^k a_i F_{n-i}$. Find $c_1,\dots,c_n$ such that $F_n = \sum\limits_{i=1}^k c_i F_{n-b_i}$. We need to find $\alpha_1, \dots, \alpha_k$ such that $F_n = \alpha_1 F_{n-c_1} + \dots + \alpha_k F_{n-c_k}$. It boils down to the system of equations $\begin{cases} \alpha_1 \lambda_1^{-c_1}+\dots+\alpha_k \lambda_1^{-c_1} = 1,\\ \alpha_1 \lambda_2^{-c_2}+\dots+\alpha_k \lambda_2^{-c_k} = 1,\\ \dots\\ \alpha_1 \lambda_k^{-c_k}+\dots+\alpha_k \lambda_k^{-c_k} = 1.\\ \end{cases}$ This system of equations has a following matrix: $A=\begin{bmatrix} \lambda_1^{-c_1} & \lambda_1^{-c_2} & \dots & \lambda_1^{-c_k} \\ \lambda_2^{-c_1} & \lambda_2^{-c_2} & \dots & \lambda_2^{-c_k} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_k^{-c_1} & \lambda_k^{-c_2} & \dots & \lambda_k^{-c_k} \end{bmatrix}$ Matrices of this kind are called alternant matrices. Let's denote its determinant as $D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_k)$, then the solution is $\alpha_i = \dfrac{D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_{i-1}, \color{red}{0}, c_{i+1}, \dots, c_k)}{D_{\lambda_1, \dots, \lambda_k}(c_1, \dots, c_{i-1}, \color{blue}{c_i}, c_{i+1}, \dots, c_k)}.$ Unfortunately, on practice in makes more sense to find $\alpha_i$ with the Gaussian elimination rather than with these direct formulas. Read more » • +172 By adamant, history, 2 months ago, Hi everyone! It's been quite some time since I wrote two previous articles in the cycle: Part 1: Introduction Part 2: Properties and interpretation Part 3: In competitive programming This time I finally decided to publish something on how one can actually use continued fractions in competitive programming problems. Few months ago, I joined CP-Algorithms as a collaborator. 
The website also underwent a major design update recently, so I decided it would be great to use this opportunity and publish my new article there, so here it is: CP-Algorithms — Continued fractions It took me quite a while to write and I made sure to not only describe common competitive programming challenges related to continued fractions, but also to describe the whole concept from scratch. That being said, article is supposed to be self-contained. Main covered topics: 1. Notion of continued fractions, convergents, semiconvergents, complete quotients. 2. Recurrence to compute convergents, notion of continuant. 3. Connection of continued fractions with the Stern-Brocot tree and the Calkin-Wilf tree. 4. Convergence rate with continued fractions. 5. Linear fractional transformations, quadratic irrationalities. 6. Geometric interpretation of continued fractions and convergents. I really hope that I managed to simplify the general story-telling compared to previous 2 articles. Here are the major problems that are dealt with in the article: • Given $a_1, \dots, a_n$, quickly compute $[a_l; a_{l+1}, \dots, a_r]$ in queries. • Which number of $A=[a_0; a_1, \dots, a_n]$ and $B=[b_0; b_1, \dots, b_m]$ is smaller? How to emulate $A-\varepsilon$ and $A+\varepsilon$? • Given $A=[a_0; a_1, \dots, a_n]$ and $B=[b_0; b_1, \dots, b_m]$, compute the continued fraction representations of $A+B$ and $A \cdot B$. • Given $\frac{0}{1} \leq \frac{p_0}{q_0} < \frac{p_1}{q_1} \leq \frac{1}{0}$, find $\frac{p}{q}$ such that $(q,p)$ is lexicographically smallest and $\frac{p_0}{q_0} < \frac{p}{q} < \frac{p_1}{q_1}$. • Given $x$ and $k$, $x$ is not a perfect square. Let $\sqrt x = [a_0; a_1, \dots]$, find $\frac{p_k}{q_k}=[a_0; a_1, \dots, a_k]$ for $0 \leq k \leq 10^9$. • Given $r$ and $m$, find the minimum value of $q r \pmod m$ on $1 \leq q \leq n$. • Given $r$ and $m$, find $\frac{p}{q}$ such that $p, q \leq \sqrt{m}$ and $p q^{-1} \equiv r \pmod m$. • Given $p$, $q$ and $b$, construct the convex hull of lattice points below the line $y = \frac{px+b}{q}$ on $0 \leq x \leq n$. • Given $A$, $B$ and $C$, find the maximum value of $Ax+By$ on $x, y \geq 0$ and $Ax + By \leq C$. • Given $p$, $q$ and $b$, compute the following sum: $\sum\limits_{x=1}^n \lfloor \frac{px+b}{q} \rfloor.$ So far, here is the list of problems that are explained in the article: And an additional list of practice problems where continued fractions could be useful: There are likely much more problems where continued fractions are used, please mention them in the comments if you know any! Finally, since CP-Algorithms is supposed to be a wiki-like project (that is, to grow and get better as time goes by), please feel free to comment on any issues that you might find while reading the article, ask questions or suggest any improvements. You can do so in the comments below or in the issues section of the CP-Algorithms GitHub repo. You can also suggest changes via pull request functionality. Read more » • +159 By adamant, history, 3 months ago, Hi everyone! There are already dozens of blogs on linear recurrences, why not make another one? In this article, my main goal is to highlight the possible approaches to solving linear recurrence relations, their applications and implications. I will try to derive the results with different approaches independently from each other, but will also highlight similarities between them after they're derived. ### Definitions Def. 1. 
An order $d$ homogeneous linear recurrence with constant coefficients (or linear recurrence) is an equation of the form $F_n = \sum\limits_{k=1}^d a_k F_{n-k}.$ Def. 2. In the equation above, the coefficients $a_1, \dots, a_d \in R$ are called the recurrence parameters, Def. 3. and a sequence $F_0, F_1, \dots \in R$ is called an order $d$ linear recurrence sequence. The most common task with linear recurrences is, given initial coefficients $F_0, F_1, \dots, F_{d-1}$, to find the value of $F_n$. Example 1. A famous Fibonacci sequence $F_n = F_{n-1} + F_{n-2}$ is an order 2 linear recurrence sequence. Example 2. Let $F_n = n^2$. One can prove that $F_n = 3 F_{n-1} - 3 F_{n-2} + F_{n-3}$. Example 3. Moreover, for $F_n = P(n)$, where $P(n)$ is a degree $d$ polynomial, it holds that $F_n = \sum\limits_{k=1}^{d+1} (-1)^{k+1}\binom{d+1}{k} F_{n-k}.$ If this fact is not obvious to you, do not worry as it will be explained further below. Finally, before proceeding to next sections, we'll need one more definition. Def. 4. A polynomial $A(x) = x^d - \sum\limits_{k=1}^d a_k x^{d-k}$ is called the characteristic polynomial of the linear recurrence defined by $a_1, \dots, a_d$. Example 4. For Fibonacci sequence, the characteristic polynomial is $A(x) = x^2-x-1$. Read more » • +266 By adamant, history, 3 months ago, Hi everyone! Today I'd like to write about some polynomials which are invariant under the rotation and relabeling in euclidean spaces. Model problems work with points in the 3D space, however both ideas, to some extent, might be generalized for higher number of dimensions. They might be useful to solve some geometric problems under the right conditions. I used some ideas around them in two problems that I set earlier. #### Congruence check in random points You're given two set of lines in 3D space. The second set of lines was obtained from the first one by the rotation and relabeling. You're guaranteed that the first set of lines was generated uniformly at random on the sphere, find the corresponding label permutation. Actual problem: 102354F - Cosmic Crossroads. ##### Solution Let $P_4(x, y, z) = \sum\limits_{l=1}^n \left((x-x_l)^2+(y-y_l)^2 + (z-z_l)^2\right)^2$. It is a fourth degree polynomial, which geometric meaning is the sum of distances from $(x, y, z)$ to all points in the set, each distance raised to power $4$. Distance is preserved under rotation, hence this expression is invariant under rotation transform. On the other hand it may be rewritten as $P_4(x, y, z) = \sum\limits_{i=0}^4 \sum\limits_{j=0}^4 \sum\limits_{k=0}^4 A_{ijk} x^i y^j z^k,$ where $A_{ijk}$ is obtained as the sum over all points $(x_l,y_l,z_l)$ from the initial set. To find the permutation, it is enough to calculate $P_4$ for all points in both sets and them match points with the same index after they were sorted by the corresponding $P_4$ value. It is tempting to try the same trick with $P_2(x, y, z)$, but it is the same for all the points in the set for this specific problem: \begin{align} P_2(x, y, z) =& \sum\limits_{l=1}^n [(x-x_l)^2+(y-y_l)^2+(z-z_l)^2]\\ =& n \cdot (x^2+y^2+z^2) - 2x \sum\limits_{l=1}^n x_l - 2y \sum\limits_{l=1}^n y_l - 2z \sum\limits_{l=1}^n z_l + \sum\limits_{l=1}^n [x_l^2+y_l^2+z_l^2] \\ =& n \left[\left(x-\bar x\right)^2 + \left(y-\bar y\right)^2 + \left(z-\bar z\right)^2\right] - n(\bar x^2+\bar y^2+\bar z^2) + \sum\limits_{l=1}^n (x_l^2 + y_l^2 + z_l^2), \end{align} where $\bar x$, $\bar y$ and $\bar z$ are the mean values of $x_l$, $y_l$ and $z_l$ correspondingly. 
As you can see, non-constant part here is simply the squared distance from $(x, y, z)$ to the center of mass of the points in the set. Thus, $P_2(x, y, z)$ is the same for all points having the same distance from the center of mass, so it is of no use in 102354F - Cosmic Crossroads, as all the points have this distance equal to $1$ in the input. Burunduk1 taught me this trick after the Petrozavodsk camp round which featured the model problem. #### Sum of squared distances to the axis passing through the origin You're given a set of points $r_k=(x_k, y_k, z_k)$. A torque needed to rotate the system of points around the axis $r=(x, y, z)$ is proportional to the sum of squared distances to the axis across all points. You need to find the minimum amount of points that have to be added to the set, so that the torque needed to rotate it around any axis passing through the origin is exactly the same. Actual problem: Hackerrank — The Axis of Awesome ##### Solution The squared distance from the point $r_k$ to the axis $r$ is expressed as $\dfrac{|r_k \times r|^2}{r \cdot r} = \dfrac{(y_k z - z_k y)^2+(x_k z - z_k x)^2+(x_k y - y_k x)^2}{x^2+y^2+z^2}.$ The numerator here is a quadratic form, hence can be rewritten as $|r_k \times r|^2 = \begin{pmatrix}x & y & z\end{pmatrix} \begin{pmatrix} y_k^2 + z_k^2 & -x_k y_k & -x_k z_k \\ -x_k y_k & x_k^2 + z_k^2 & -y_k z_k \\ -x_k z_k & -y_k z_k & x_k^2 + y_k^2 \end{pmatrix} \begin{pmatrix}x \\ y \\ z\end{pmatrix}.$ Correspondingly, the sum of squared distances for $k=1..n$ is defined by the quadratic form $I = \sum\limits_{k=1}^n\begin{pmatrix} y_k^2 + z_k^2 & -x_k y_k & -x_k z_k \\ -x_k y_k & x_k^2 + z_k^2 & -y_k z_k \\ -x_k z_k & -y_k z_k & x_k^2 + y_k^2 \end{pmatrix},$ known in analytic mechanics as the inertia tensor. As any other tensor, its coordinate form is invariant under rotation. Inertia tensor is a positive semidefinite quadratic form, hence there is an orthonormal basis in which it is diagonal: $I = \begin{pmatrix}I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3\end{pmatrix}.$ Here $I_1$, $I_2$ and $I_3$ are the eigenvalues of $I$, also called the principal moments of inertia (corresponding eigenvectors are called the principal axes of inertia). From this representation we deduce that the condition from the statement is held if and only if $I_1 = I_2 = I_3$. Adding a single point on a principal axis would only increase principal moments on the other axes. For example, adding $(x, 0, 0)$ would increase $I_2$ and $I_3$ by $x^2$. Knowing this, one can prove that the answer to the problem is exactly $3-m$ where $m$ is the multiplicity of the smallest eigenvalue of $I$. ##### Applying it to the first problem Now, another interesting observation about inertia tensor is that both principal inertia moments and principal inertia axes would be preserved under rotation. It means that in the first problem, another possible way to find the corresponding rotation and the permutation of points is to find principal inertia axes for both sets of points and then find a rotation that matches corresponding principal inertia axes in the first and the second sets of points. Unfortunately, this method still requires that principal inertia moments are all distinct (which generally holds for random sets of points), otherwise there would be an infinite amount of eigendecompositions of $I$. Read more » • +107 By adamant, history, 4 months ago, Hi everyone! Today I'd like to write another blog about polynomials. 
Consider the following problem: You're given $P(x) = a_0 + a_1 x + \dots + a_{n-1} x^{n-1}$, you need to compute $P(x+a) = b_0 + b_1 x + \dots + b_{n-1} x^{n-1}$. There is a well-known solution to this, which involves some direct manipulation with coefficients. However, I usually prefer approach that is more similar to synthetic geometry when instead of low-level coordinate work, we work on a higher level of abstraction. Of course, we can't get rid of direct coefficient manipulation completely, as we still need to do e.g. polynomial multiplications. But we can restrict direct manipulation with coefficients to some minimal number of black-boxed operations and then strive to only use these operations in our work. With this goal in mind, we will develop an appropriate framework for it. Thanks to clyring for inspiring me to write about it with this comment. You can check it for another nice application of calculating $g(D) f(x)$ for a specific series $g(D)$ over the differentiation operator: While this article mostly works with $e^{aD} f(x)$ to find $f(x+a)$, there you have to calculate $\left(\frac{D}{1-e^{-D}}\right)p(x)$ to find a polynomial $f(x)$ such that $f(x) = p(0)+p(1)+\dots+p(x)$ for a given polynomial $p(x)$. #### Key results Let $[\cdot]$ and $\{ \cdot \}$ be a linear operators in the space of formal power series such that $[x^k] = \frac{x^k}{k!}$ and $\{x^k\} = k! x^k$. The transforms $[\cdot]$ and $\{\cdot \}$ are called the Borel transform and the Laplace transform correspondingly. As we also work with negative coefficients here, we define $\frac{1}{k!}=0$ for $k < 0$, hence $[x^k]=0$ for such $k$. In this notion, $f(x+a) = e^{aD} f(x) = [e^{ax^{-1}}\{f(x)\}],$ where $D=\frac{d}{d x}$ is the differentiation operator. Thus, $\{f(x+a)\}$ is a part with non-negative coefficients of the cross-correlation of $e^{ax}$ and $\{f(x)\}$. More generally, for arbitrary formal power series $g(D)$, it holds that $g(D) f(x) = [g(x^{-1})\{f(x)\}],$ that is $\{g(D) f(x)\}$ is exactly the non-negative part of the cross-correlation of $g(x)$ and $\{f(x)\}$. Detailed explanation is below. Read more » • +136 By adamant, history, 4 months ago, Hi everyone! Today I want to write about the inversions in permutations. The blog is mostly inspired by the problem С from Day 3 of 2022 winter Petrozavodsk programming camp. I will also try to shed some light on the relation between inversions and $q$-analogs. #### Key results Let $F(x)=a_0+a_1x+a_2x^2+\dots$, then $F(e^x)$ is the exponential generating function of $b_i = \sum\limits_{k=0}^\infty a_k k^i.$ In other words, it is a moment-generating function of the parameter by which $F(x)$ enumerates objects of class $F$. Motivational example: The generating function of permutations of size $n$, enumerated by the number of inversions is $F_n(x) = \prod\limits_{k=1}^n \frac{1-x^k}{1-x}.$ The moment-generating function for the number of inversions in a permutation of size $n$ is $G_n(x) = \prod\limits_{k=1}^n \frac{1-e^{kx}}{1-e^x}.$ Read more » • +156 By adamant, history, 5 months ago, Hi everyone! I'm currently trying to write an article about $\lambda$-optimization in dynamic programming, commonly known as "aliens trick". While writing it, I stumbled upon a fact which, I believe, is a somewhat common knowledge, but is rarely written out and proved explicitly. This fact is that we sometimes can repeatedly use ternary search when we need to optimize a multi-dimensional function. 
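To make this concrete, here is a rough sketch of the nested ternary search idea (my own illustration in Python, not code from the blog). The key assumption is that $f(x, y)$ is jointly convex, so that $g(x) = \min_y f(x, y)$ is again convex and the outer one-dimensional ternary search is justified:

def ternary_min(f, lo, hi, iters=100):
    # Minimize a unimodal function f on [lo, hi] by ternary search.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def minimize_2d(f, xlo, xhi, ylo, yhi):
    # Outer search over x on g(x) = min_y f(x, y); inner search over y.
    def g(x):
        y_best = ternary_min(lambda y: f(x, y), ylo, yhi)
        return f(x, y_best)
    x_best = ternary_min(g, xlo, xhi)
    y_best = ternary_min(lambda y: f(x_best, y), ylo, yhi)
    return x_best, y_best

# Example: a jointly convex quadratic; the minimum is at roughly (2.67, -3.33).
print(minimize_2d(lambda x, y: (x - 1) ** 2 + (y + 2) ** 2 + x * y, -10, 10, -10, 10))

As the counter-example mentioned right below warns, this reasoning does not transfer blindly to integer arguments, where ternary search needs extra care.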
Thanks to • mango_lassi for a useful discussion on this and for counter-example on integer ternary search! • Neodym for useful comments and remarks about the article structure. Read more » • +218 By adamant, history, 5 months ago, Hi everyone! Today I'd like to write yet another blog about polynomials. Specifically, I will cover the relationship between polynomial interpolation and Chinese remainder theorem, and I will also highlight how it is useful when one needs an explicit meaningful solution for partial fraction decomposition. Read more » • +171 By adamant, history, 5 months ago, Hi everyone! This time I'd like to write about what's widely known as "Aliens trick" (as it got popularized after 2016 IOI problem called Aliens). There are already some articles about it here and there, and I'd like to summarize them, while also adding insights into the connection between this trick and generic Lagrange multipliers and Lagrangian duality which often occurs in e.g. linear programming problems. Familiarity with a previous blog about ternary search or, at the very least, definitions and propositions from it is expected. Great thanks to mango_lassi and 300iq for useful discussions and some key insights on this. Note that although explanation here might be quite verbose and hard to comprehend at first, the algorithm itself is stunningly simple. Another point that I'd like to highlight for those already familiar with "Aliens trick" is that typical solutions using it require binary search on lambdas to reach specified constraint by minimizing its value for specific $\lambda$. However, this part is actually unnecessary and you don't even have to calculate the value of constraint function at all within your search. It further simplifies the algorithm and extends applicability of aliens trick to the cases when it is hard to minimize constraint function while simultaneously minimizing target function for the given $\lambda$. #### Tldr. Problem. Let $f : X \to \mathbb R$ and $g : X \to \mathbb R^c$. You need to solve the constrained optimization problem $\begin{gather}f(x) \to \min,\\ g(x) = 0.\end{gather}$ Auxiliary function. Let $t(\lambda) = \inf_x [f(x) - \lambda \cdot g(x)]$. Finding $t(\lambda)$ is unconstrained problem and is usually much simpler. Equivalently, $t(\lambda) = \inf_y [h(y) - \lambda \cdot y]$ where $h(y)$ is the minimum possible $f(x)$ subject to $g(x)=y$. As a point-wise minimum of linear functions, $t(\lambda)$ is concave, therefore its maximum can be found with ternary search. Key observation. By definition, $t(\lambda) \leq h(0)$ for any $\lambda$, thus $\max_\lambda t(\lambda)$ provides a lower bound for $h(0)$. When $h(y)$ is convex, inequality turns into equation, that is $\max_\lambda t(\lambda) = h(0) = f(x^*)$ where $x^*$ is the solution to the minimization problem. Solution. Assume that $t(\lambda)$ is computable for any $\lambda$ and $h(y)$ is convex. Then find $\max_\lambda t(\lambda)$ with the ternary search on $t(\lambda)$ over possible values of $\lambda$. This maximum is equal to the minimum $f(x)$ subject to $g(x)=0$. If $g(x)$ and $f(x)$ are integer functions, $\lambda_i$ corresponds to $h(y_i) - h(y_i-1)$ and can be found among integers. Boring and somewhat rigorous explanation is below, problem examples are belower. Read more » • +286 By adamant, history, 11 months ago, Hi everyone! Some time ago Monogon wrote an article about Edmonds blossom algorithm to find the maximum matching in an arbitrary graph. 
Since the problem has a very nice algebraic approach, I wanted to share it as well. I'll start with something very short and elaborate later on. tl;dr. The Tutte matrix of the graph $G=(V, E)$ is $$T(x_{12}, \dots, x_{(n-1)n}) = \begin{pmatrix} 0 & x_{12} e_{12} & x_{13} e_{13} & \dots & x_{1n} e_{1n} \newline -x_{12} e_{12} & 0 & x_{23} e_{23} & \dots & x_{2n} e_{2n} \newline -x_{13} e_{13} & -x_{23} e_{23} & 0 & \dots & x_{3n} e_{3n} \newline \vdots & \vdots & \vdots & \ddots & \vdots \newline -x_{1n} e_{1n} & -x_{2n} e_{2n} & -x_{3n} e_{3n} & \dots & 0 \end{pmatrix}$$ Here $e_{ij}=1$ if $(i,j)\in E$ and $e_{ij}=0$ otherwise, $x_{ij}$ are formal variables. Key facts: 1. Graph has a perfect matching if and only if $\det T \neq 0$ when considered as polynomial of $x_{ij}$. 2. Rank of $T$ is the number of vertices in the maximum matching. 3. Maximal linearly independent subset of rows corresponds to the subset of vertices on which it is a perfect matching. 4. If graph has a perfect matching, $(T^{-1})_{ij} \neq 0$ iff there exists a perfect matching which includes the edge $(i,j)$. 5. After such $(i,j)$ is found, to fix it in the matching one can eliminate $i$-th and $j$-th rows and columns of $T^{-1}$ and find next edge. Randomization comes when we substitute $x_{ij}$ with random values. It can be proven that conditions above still hold with high probability. This provides us with $O(n^3)$ algorithm to find maximum matching in general graph. For details, dive below. Read more » • +223 By adamant, history, 11 months ago, Hi everyone! Recently aryanc403 brought up a topic of subset convolution and some operations related to it. This inspired me to write this blog entry as existing explanations on how it works seemed unintuitive for me. I believe that having viable interpretations of how things work is of extreme importance as it greatly simplifies understanding and allows us to reproduce some results without lust learning them by heart. Also this approach allows us to easily and intuitively generalize subset convolution to sum over $i \cup j = k$ and $|i \cap j|=l$, while in competitive programming we usually only do it for $|i \cap j|=0$. Enjoy the reading! Read more » • +254 By adamant, history, 11 months ago, Hi everyone! Five days have passed since my previous post which was generally well received, so I'll continue doing posts like this for time being. Abstract algebra is among my favorite subjects. One particular thing I find impressive is the associativity of binary operations. One of harder problems from my contests revolves around this property as you need to construct an operation with a given number of non-associative triples. This time I want to talk about how one can check this property for both groups and arbitrary magmas. First part of my post is about Light's associativity test and how it can be used to deterministically check whether an operation defines a group in $O(n^2 \log n)$. Second part of the post is about Rajagopalan and Schulman probabilistic identity testing which allows to test associativity in $O(n^2 \log n \log \delta^{-1})$ where $\delta$ is error tolerance. Finally, the third part of my post is dedicated to the proof of Rajagopalan–Schulman method and bears some insights into identity verification in general and higher-dimensional linear algebra. For convenience, these three parts are separated by horizontal rule. Read more » • +158 By adamant, history, 11 months ago, Hi everyone! Long time no see. 3 years ago I announced a Telegram channel. 
Unfortunately, for the last ~1.5 years I had a total lack of inspiration for new blog posts. Well, now I have a glimpse of it once again, so I want to continue writing about interesting stuff. Here's some example: Read more » • +286 By adamant, history, 2 years ago, Hi everyone! Let's continue with learning continued fractions. We began with studying the case of finite continued fractions and now it's time to work a bit with an infinite case. It turns out that while rational numbers have unique representation as a finite continued fraction, any irrational number has unique representation as an infinite continued fraction. Part 1: Introduction Part 2: Properties and interpretation Read more » • +138 By adamant, 2 years ago, Hi everyone! After writing this article I've decided to write another one being comprehensive introduction into continued fractions for competitive programmers. I'm not really familiar with the topic, so I hope writing this entry will be sufficient way to familiarize myself with it :) Part 1: Introduction Part 2: Properties and interpretation Read more » • +201 By adamant, 2 years ago, Hi everyone! It's been a while since I posted anything. Today I'd like to talk about problem I from Oleksandr Kulkov Contest 2. Well, on some similar problem. Problem goes as follows: There is a rational number $x=\frac{p}{q}$, and you know that $1 \leq p, q \leq C$. You want to recover $p$ and $q$ but you only know number $r$ such that $r \equiv pq^{-1} \pmod{m}$ where $m > C^2$. In original problem $m$ was not fixed, instead you were allowed to query remainders $r_1,\dots,r_k$ of $x$ modulo several numbers $m_1,\dots,m_k$, which implied Chinese remainder theorem. Read more » • +182 By adamant, history, 2 years ago, Hi everyone! This summer I gave another contest in summer Petrozavodsk programming camp and (although a bit lately) I want to share it with codeforces community by adding it to codeforces gym: 2018-2019 Summer Petrozavodsk Camp, Oleksandr Kulkov Contest 2. To make it more fun I scheduled it on Sunday, 5 january, 12:00 (UTC+3). Feel free to participate during scheduled time or, well, whenever you're up to. Good luck and have fun :) Problems might be discussed here afterwards, I even may write some editorials for particular problems (per request, as I don't have them prepared beforehand this time). UPD: 17h reminder before the start of the contest UPD2: It wasn't an easy task to do, but I managed to add ghost participants to the contest! Enjoy! Read more » • +138 By adamant, history, 3 years ago, Hi there! During preparation of Oleksandr Kulkov Contest 1 I started writing some template for polynomial algebra (because 3 problems in contest in one or another way required some polynomial operations). And with great pleasure I'd like to report that it resulted in this article on cp-algorithms.com (English translation for e-maxx) and this mini-library containing all mentioned operations and algorithms (except for Half-GCD algorithm). I won't say the code is super-optimized, but at least it's public, provides some baseline and is open for contribution if anyone would like to enhance it! Article also provides some algorithms I didn't mention before. Namely: • Interpolation: Now the described algorithm is and not as it was before. • Resultant: Given polynomials A(x) and B(x) compute product of Ai) across all μi being roots of B(x). • Half-GCD: How to compute GCD and resultants in (just key ideas). Feel free to read the article to know more and/or use provided code :) tl;dr. 
article on operations with polynomials and implementation of mentioned algorithms. Read more » • +185 By adamant, history, 3 years ago, Okay, so compare these two submissions: 51053654 and 51053605 The only difference is that first one made via GNU C++17 and the second one via MS C++ 2017. Code is same, but first gets RE 16 and second one gets AC. WTF, GNU C++?? Read more » • +102 By adamant, history, 3 years ago, Hi everyone! I gave a contest in winter Petrozavodsk programming camp and now I'd like to share it with codeforces by making it a contest in codeforces gym: 2018-2019 Winter Petrozavodsk Camp, Oleksandr Kulkov Contest 1. It was my first experience giving a contest to the camp and I'm pretty much excited about it! In the camp only 7 out of 11 problems were solved, so there should be something in the contest for everyone. To make the contest more interesting I suggest you to participate in it as live contest on Saturday, 9 March, 12:00 (UTC+3), which may be changed in case of overlap with some other contest or if it's inconvenient for many participants. After this I suggest to gather here and discuss problems (if anyone's going to participate, of course). I will also post editorial which may (or may not) contain some neat stuff. Uhm, good luck and have fun :) P.S. It appears you already may see problems if you have coach mode enabled, I'd ask you to not do this unless you're not going to participate in contest! UPD: Gentle reminder that it's less than 24 hours before the contest and it's hopefully not going to be rescheduled this time. UPD 2: Thanks everyone for participating in the contest! Here are editorials: link UPD 3: Google drive link got broken, so I uploaded the editorial to the contest directly. Read more » • +173
Elliptical belief propagation Generalized least generalized squares We can generalize Gaussian belief propagation to use general elliptical laws by using Mahalanobis distance without presuming the Gaussian distribution , making it into a kind of elliptical belief propagation. Robust If we use a robust Huber loss instead of a Gaussian log-likelihood, then the resulting algorithm is usually referred to as a robust factor or as dynamic covariance scaling . The nice thing here is that we can imagine the transition from quadratic to linear losses gives us an estimate of which observations are outliers. Student-$$t$$ Surely this is around? Certainly there is a special case in the t-process. It is mentioned, I think, in Lan et al. (2006) and possibly Proudler et al. (2007) although the latter seems to be something more ad hoc. Surely? TBD. Generic There seem to be generic update rules which could be used to construct a generic elliptical belief propagation algorithm. References Agarwal, Pratik, Gian Diego Tipaldi, Luciano Spinello, Cyrill Stachniss, and Wolfram Burgard. 2013. In 2013 IEEE International Conference on Robotics and Automation, 62–69. Aste, Tomaso. 2021. arXiv. Bånkestad, Maria, Jens Sjölund, Jalil Taghia, and Thomas Schön. 2020. arXiv. Davison, Andrew J., and Joseph Ortiz. 2019. arXiv:1910.14139 [Cs], October. Donoho, David L., and Andrea Montanari. 2013. arXiv:1310.7320 [Cs, Math, Stat], October. Karlgaard, Christopher D., and Hanspeter Schaub. 2011. Journal of Guidance, Control, and Dynamics 34 (2): 388–402. Lan, Xiangyang, Stefan Roth, Daniel Huttenlocher, and Michael J. Black. 2006. In Computer Vision – ECCV 2006, edited by Aleš Leonardis, Horst Bischof, and Axel Pinz, 3952:269–82. Berlin, Heidelberg: Springer Berlin Heidelberg. Ortiz, Joseph, Talfan Evans, and Andrew J. Davison. 2021. arXiv:2107.02308 [Cs], July. Proudler, I., S. Roberts, S. Reece, and I. Rezek. 2007. In 2007 15th International Conference on Digital Signal Processing, 355–58. No comments yet. Why not leave one? GitHub-flavored Markdown & a sane subset of HTML is supported.
## Elementary Algebra

Evaluate $\frac{1}{4}x - \frac{2}{5}y$ for $x = \frac{2}{3}$, $y =-\frac{5}{7}$

$= \frac{1}{4}(\frac{2}{3}) - \frac{2}{5}(-\frac{5}{7})$
$= \frac{1(2)}{4(3)} - \frac{2(-5)}{5(7)}$
$= \frac{2}{12} - \frac{-10}{35}$
$= \frac{2}{12} + \frac{10}{35}$
$= \frac{1}{6} + \frac{2}{7}$
$= \frac{1(7)}{42} + \frac{2(6)}{42}$
$= \frac{7}{42} + \frac{12}{42}$
$= \frac{19}{42}$

1) First, we plug in the given values, and then we simplify the fractions.
2) Find a common denominator in Step #6: multiply the first fraction by $7$ and the second fraction by $6$ to obtain a common denominator of $42$.
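A quick way to double-check arithmetic like this is to redo it with exact rational arithmetic. The snippet below is just such a sanity check (my own, using Python's standard fractions module; it is not part of the original solution):

from fractions import Fraction

x = Fraction(2, 3)
y = Fraction(-5, 7)
value = Fraction(1, 4) * x - Fraction(2, 5) * y
print(value)  # prints 19/42, matching the worked answer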
Next: The effect of dielectric Up: Relativity and electromagnetism Previous: The angular distribution of Synchrotron radiation (i.e., radiation emitted by a charged particle constrained to follow a circular orbit by a magnetic field) is of particular importance in astrophysics, since much of the observed radio frequency emission from supernova remnants and active galactic nuclei is thought to be of this type. Consider a charged particle moving in a circle of radius with constant angular velocity . Suppose that the orbit lies in the - plane. The radius vector pointing from the centre of the orbit to the retarded position of the charge is defined (395) where is the angle subtended between this vector and the -axis. The retarded velocity and acceleration of the charge take the form (396) (397) where and . The observation point is chosen such that the radius vector , pointing from the retarded position of the charge to the observation point, is parallel to the - plane. Thus, we can write (398) where is the angle subtended between this vector and the -axis. As usual, we define as the angle subtended between the retarded radius vector and the retarded direction of motion of the charge . It follows that (399) It is easily seen that (400) A little vector algebra shows that (401) giving (402) Making use of Eq. (2.320), we obtain (403) It is convenient to write this result in terms of the angles and , instead of and . After a little algebra we obtain (404) Let us consider the radiation pattern emitted in the plane of the orbit; i.e., , with . It is easily seen that (405) In the non-relativistic limit the radiation pattern has a dependence. Thus, the pattern is like that of dipole radiation where the axis is aligned along the instantaneous direction of acceleration. As the charge becomes more relativistic the radiation lobe in the forward direction (i.e., ) becomes more more focused and more intense. Likewise, the radiation lobe in the backward direction (i.e., ) becomes more diffuse. The radiation pattern has zero intensity at the angles (406) These angles demark the boundaries between the two radiation lobes. In the non-relativistic limit , so the two lobes are of equal angular extents. In the highly relativistic limit , so the forward lobe becomes highly concentrated about the forward direction (). In the latter limit Eq. (2.336) reduces to (407) Thus, the radiation emitted by a highly relativistic charge is focused into an intense beam of angular extent pointing in the instantaneous direction of motion. The maximum intensity of the beam scales like . Integration of Eq. (2.335) over all solid angle (using ) yields (not very easily!) (408) which agrees with Eq. (2.309) provided that . This expression can also be written (409) where meters is the classical electron radius, is the rest mass of the charge, and . If the circular motion takes place in an orbit of radius perpendicular to a magnetic field , then satisfies . Thus, the radiated power is (410) and the radiated energy per revolution is (411) Let us consider the frequency distribution of the emitted radiation in the highly relativistic limit. Suppose, for the sake of simplicity, that the observation point lies in the plane of the orbit (i.e., ). Since the radiation emitted by the charge is beamed very strongly in the charge's instantaneous direction of motion, a fixed observer is only going to see radiation (at some later time) when this direction points almost directly towards the point of observation. 
This occurs once every rotation period when , assuming that . Note that the point of observation is located many orbit radii away from the centre of the orbit along the positive -axis. Thus, our observer sees short periodic pulses of radiation from the charge. The repetition frequency of the pulses (in radians per second) is . Let us calculate the duration of each pulse. Since the radiation emitted by the charge is focused into a narrow beam of angular extent , our observer only sees radiation from the charge when . Thus, the observed pulse is emitted during a time interval . However, the pulse is received in a somewhat shorter time interval (412) because the charge is slightly closer to the point of observation at the end of the pulse than at the beginning. The above equation reduces to (413) since and . The width of the pulse in frequency space obeys . Hence, (414) In other words, the emitted frequency spectrum contains harmonics of frequency up to times that of the fundamental, . More involved calculations8 show that in the ultra-relativistic limit the power radiated in the th harmonic (whose frequency is ) is given by (415) for , and (416) for . Note that the spectrum cuts off approximately at the harmonic order , as predicted earlier. It can also be demonstrated9 that seven times as much energy is radiated with a polarization parallel to the orbital plane than with a perpendicular polarization. A power spectrum at low frequencies coupled with a high degree of polarization are the hallmarks of synchrotron radiation. In fact, these two features are used in astrophysics to identify synchrotron radiation from supernova remnants, active galactic nuclei, etc. Next: The effect of dielectric Up: Relativity and electromagnetism Previous: The angular distribution of Richard Fitzpatrick 2002-05-18
## Tuesday, October 27, 2009 ### Syntax highlighting in Microsoft Office Word To paste highlighted code syntax in Word, just use Notepad++. 2. Once your code's syntax is highlighted, go to the plugins menu, choose the NppExport plugin and click on 'Export to RTF': 3. Then your source file will be exported to .rtf format and when you open that in Microsoft Office Word, you will see that your syntax is highlighted. Then obviously copy and paste that source. ## Saturday, October 24, 2009 ### How do you pronounce 'Euler' ? These last two weeks at school, we've been doing stuff like Euler's Number, Euler Graphs and Euler Circuits, which all came from Leonhard Paul Euler, who was a pioneering Swiss mathematician. When I was doing my assignment on Euler Graphs, I noticed that whenever I typed in 'a Euler Circuit' in Microsoft Word, I was always getting that green squiggly line below the 'a' and Office suggested that I change it to 'an'...and well it got me thinking, since I pronounce Euler as 'youler'. Normally, an English speaking person would pronounce the word like I do, 'youler', but apparently, it's pronounced 'oiler'; and that's why Microsoft Word was always complaining about it; because 'an "oiler" circuit' makes sense, but 'a "oiler" circuit' doesn't. So, bottom line...the word Euler is pronounced as OILER ## Friday, October 23, 2009 ### Drawing diagonal connector lines in Visio I am now in need of creating some graphs and trees and have decided to use Microsoft's Visio for this purpose. But the problem is that, by default, Visio's lines are set as Right-Angle Connectors and thus I couldn't create diagonal lines. Anyways, to create diagonal connectors, just connect the line and then right click on it and change the routing to 'Straight Line' : ## Wednesday, October 21, 2009 ### Passing multi-dimensional arrays to functions in C Here is how to initialize and pass dynamically allocated two-dimensional arrays (pointer-to-pointer style) to functions in ANSI C:

#include <stdio.h>
#include <stdlib.h>

/* Prints an m x n array that was allocated as an array of row pointers. */
void printArray(int **array, int m, int n)
{
    int i, j;
    for (i = 0; i < m; i++) {
        for (j = 0; j < n; j++) {
            printf("%d\n", array[i][j]);
        }
    }
}

int main()
{
    int i, j, k = 0, m = 5, n = 20;

    /* Allocate m row pointers, then n ints per row. */
    int **a = malloc(m * sizeof(*a));
    for (i = 0; i < m; i++) {
        a[i] = malloc(n * sizeof(*(a[i])));
    }

    /* Fill the array with the values 1..m*n. */
    for (i = 0; i < m; i++) {
        for (j = 0; j < n; j++) {
            k++;
            a[i][j] = k;
        }
    }

    printArray(a, m, n);

    /* Free the allocated memory of the array. */
    for (i = 0; i < m; ++i) {
        free(a[i]);
    }
    free(a);

    return 0;
}

## Sunday, October 18, 2009 ### LaTeX Online Equation Editor Here is a neat website I found : http://www.codecogs.com/components/equationeditor/equationeditor.php It allows you to type in a LaTeX expression and it generates a .gif file for you. You can also use the following url to generate LaTeX notations on the fly: http://latex.codecogs.com/format.latex?equation where format is the output format and equation is the actual expression. These are the available formats: • gif • png • pdf • swf • emf • svg Here is an example with the following expression (format: gif) : A=4r\int_0^r \sqrt {1-\frac{x^2}{r^2}}\, dx = 4r\int_0^{\pi/2} r\sqrt{1-\sin^2 \theta} \cos \theta\, d\theta generates this: $A=4r\int_0^r \sqrt {1-\frac{x^2}{r^2}}\, dx = 4r\int_0^{\pi/2} r\sqrt{1-\sin^2 \theta} \cos \theta\, d\theta$ ### Syntax highlighting in Blogger To implement code syntax highlighting, I used instructions from this post by Matthew V Ball. Here are the instructions: 1. Go to http://syntaxhighlighter.googlecode.com/svn/trunk/Styles/SyntaxHighlighter.css, then perform a "select all" and "copy". 
The css information is now in the clipboard. 2. Paste the css information at the end of the css section of your blogger html template (i.e., after ). 3. [Updated March 25, 2009 to include closing script tags] Before the  tag, paste the following: Feel free to remove lines for languages you'll never use (for example, Delphi) -- it will save some loading time. 4. [Updated to add final /script] Before the  tag, insert the following:

<script language='javascript'>
dp.SyntaxHighlighter.BloggerMode();
dp.SyntaxHighlighter.HighlightAll('code');
</script>

5. Use the "Preview" button to make sure your website is correct, then click "Save Template". 6. When composing a blog entry that contains source code, click the "Edit Html" tab and put your source code (with html-escaped characters) between these tags:

<pre name="code" class="cpp">
...Your html-escaped code goes here...
</pre>

Substitute "cpp" with whatever language you're using (full list). (Choices: cpp, c, c++, c#, c-sharp, csharp, css, delphi, pascal, java, js, jscript, javascript, php, py, python, rb, ruby, rails, ror, sql, vb, vb.net, xml, html, xhtml, xslt) For performing the HTML escaping, you can get a good list of tools by searching for 'html escaper' or a similar term. Here's the one I used while writing this post.
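If you would rather not depend on an online escaper, the same escaping can be done locally. Here is a small illustrative script (my own sketch, not from the original post) that uses Python's standard html module to produce the html-escaped form expected between the pre tags:

import html

code = 'for (int i = 0; i < n; i++) { printf("%d\\n", a[i]); }'
print(html.escape(code))  # <, >, & and " are replaced by their HTML entities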
Document Type : Research Paper Authors 1 Department of Mathematics, University of Uyo, Uyo, Nigeria. 2 Department of Mathematics, Michael Okpara University of Agriculture, Umudike, Nigeria. Abstract In this paper, we introduce a three-step implicit iteration scheme with errors for finite families of nonexpansive and uniformly $L$-Lipschitzian asymptotically generalized $\Phi$-hemicontractive mappings in real Banach spaces. Our new implicit iterative scheme properly includes several well known iterative schemes in the literature as its special cases. The results presented in this paper extend, generalize and improve well known results in the existing literature. Keywords ###### ##### References [1] Y.I. Alber, C.E. Chidume and H. Zegeye, Regularization of nonlinear ill-posed equations with accretive    operators, Fixed Point Theory and Appl., 1 (2005), pp. 11-33. [2] R.P. Agarwal, Y.J. Cho, J. Li and N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl., 272 (2002), pp. 435-447. [3] A. Bnouhachem, M.A. Noor and T.M. Rassias, Three-steps iterative algorithms for mixed variational inequalities, Appl. Math. Comput., 183 (2006), pp. 436-446. [4] S.S. Chang, Some results for asymptotically pseudocontractive mappings and asymptotically nonexpansive mappings, Proc. Amer. Math. Soc., 129 (2001), pp. 845-853. [5] C.E. Chidume, A.U. Bello, M.E. Okpala and P. Ndambomve, Strong convergence theorem for fixed points    of nearly nniformly L-Lipschitzian Asymptotically    Generalized $Phi$-hemicontractive mappings, Int. J. Math. Anal., 9 (2015), pp. 2555-2569. [6] C.E. Chidume and C.O. Chidume, Convergence theorems for fixed points of uniformly continuous    generalized $Phi$-hemi-contractive mappings, J. Math. Anal. Appl., 303 (2005), pp. 545-554. [7] C.E. Chidume and C.O. Chidume, Convergence theorem for zeros of generalized Lipschitz generalized phi-quasi-accretive operators, Proc. Amer. Math. Soc., 134 (2006), pp. 243-251. [8]    R.C. Chen, Y.S. Song and H. Zhou, Convergence theorems for implicit iteration process for a finite family continuous pseudocontractive mappings, J. Math. Anal. Appl., 314 (2006), pp. 701-706. [9] Y.J. Cho, H.Y. Zhou and G. Guo, Weak and strong convergence theorems for three-step iterations with errors for asymptotically nonexpansive mappings, Comput. Math. Appl., 47 (2004), pp. 707-717. [10] P. Chuadchawna, A. Farajzadeh and A. Kaewcharoen, On convergence theorems for two generalized nonexpansive multivalued mappings in hyperbolic spaces, Thai J. Math., 17 (2019), pp. 445-461. [11] P. Chuadchawna, A. Farajzadeh and A. Kaewcharoen, Fixed-point approximations of generalized nonexpansive mappings via generalized M-iteration process in hyperbolic spaces, Int. J. Math. Math. Sci., (2020), pp. 1-8, article ID 6435043. [12] P. Chuadchawna, A. Farajzadeh and A. Kaewcharoen, Convergence theorems for total asymptotically nonexpansive single-valued and quasi nonexpansive multi-valued mappings in hyperbolic spaces, J. Appl. Anal., 27 (2021), pp. 129-142. [13] G. Das and J.P. Debata, Fixed points of Quasi-nonexpansive mappings, Indian J. Pure. Appl. Math., 17 (1986), pp. 1263-1269. [14] L.C. Deng, P. Cubiotti and J.C. Yao, Approximation of common fixed points of families of nonexpansive mappings, Tai. J. Math., 12 (2008), pp. 487-500. [15] L.C. Deng, P. Cubiotti and J.C. 
Yao, An implicit iteration scheme for monotone variational inequalities and fixed point problems, Nonlinear Anal., 69 (2008), pp. 2445-2457. [16] L.C. Deng, S. Schaible and J.C. Yao, Implicit iteration scheme with perturbed mapping for equilibrium problems and fixed point problems of finitely many nonexpansive mappings, J. Optim. Theory Appl., 139 (2008), pp. 403-418. [17] R. Glowinski and P. Le-Tallec, Augmented Lagrangian and operator-splitting methods in nonlinear mechanics, SIAM, Philadelphia, 1989. [18] S. Haubruge, V.H. Nguyen and J.J. Strodiot, Convergence analysis and applications of the Glowinski-Le-Tallec splitting method for finding a zero of the sum of two maximal monotone operators, J. Optim. Theory Appl., 97 (1998), pp. 645-673. [19] F.Gu, Convergence theorems for $phi$-pseudocontractive type mappings in normed linear spaces, Northeast Math. J., 17 (2001), pp. 340-346. [20] F. Gu, Strong convergence of an implicit iteration process for a finite family of uniformly $L$-Lipschitzian mapping in Banach spaces, J. Ineq. and Appl., doi:10.1155/2010/801961. [21] S. Ishikawa, Fixed points by a new iteration method, Proceeding of the America Mathematical society, 4 (1974), pp. 157-150. [22] S.H. Khan and W. Takahashi, Approximating common fixed points of two asymptotically nonexpansive mappings, Sci. Math. Jpn., 53 (2001), pp. 143-148. [23] S.H. Khan, I. Yildirim and M. Ozdemir, Some results for finite families of fniformly $L$-Lipschitzian mappings in Banach paces, Thai J. Math., 9 (2011), pp. 319-331. [24] J.K. Kim, D.R. Sahu and Y.M. Nam, Convergence theorems for fixed points of nearly uniformly $L$-Lipschitzian asymptotically generalized $Phi$-hemicontractive mappings, Nonlinear Anal., 71 (2009), pp. 2833-2838. [25] G. Lv, A. Rafiq and Z. Xue, Implicit iteration scheme for two phi-hemicontractive operators in arbitrary Banach spaces, Journal of Ineq. and Appl., 2013, 2013:521. [26] E.U. Ofoedu, Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in a real Banach space, J. Math. Anal. Appl., 321 (2006), pp. 722-728. [27] M.A. Noor, T.M. Kassias and Z. Huang, Three-step iterations for nonlinear accretive operator equations, J. Math. Anal. Appl., 274 (2001), pp. 59-68. [28] M.O. Osilike, Iterative solution of nonlinear equations of the $phi$-strongly    accretive type, J. Math. Anal. Appl., 200 (1996), pp. 259-271. [29] M.O. Osilike and B.G. Akuchu, Common fixed points of finite family of asymptotically pseudocontractive mappings, Fixed Point Theory and Appl., 2004 (2004), pp. 81-88. [30] M.O. Osilike, S.C. Aniagbosor and B.G. Akuchu, Fixed points of asymptotically demicontractive mappings in arbitrary Banach spaces, Panam. Math. J., 12 (2002), pp. 77-88. [31] W.R. Mann, Mean value methods in iteration, Proc. Amer. Math. Sci., 4 (2003), pp. 506-510. [32] A. Rafiq, On convergence of implicit iteration scheme for two hemicontractive mappins, Sci. Int. (Lahore), 24 (2012), pp. 431-434. [33] A. Rafiq and M. Imdad, Implicit Mann Iteration Scheme for hemicontractive mapping, J. Indian Math. Soc., 81 (2014), pp. 147-153. [34] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, Bull. Austr. Math. Soc., 43 (1991), pp. 153-159. [35] J. Schu, Iterative construction of fixed point of asymptotically nonexpansive mappings, J. Math. Anal. Appl., 158 (1991), pp. 407-413. [36] H.F. Senter and W.G. Dotson, Approximating fixed points of nonexpansive mappings, Proc. Amer. Math. Soc., 44 (1974), pp. 375-380. [37] N. 
Shahzad and A. Udomene, Approximating common fixed points of two asymptotically quasinonexpansive mappings in Banach spaces, Fixed Point Theory Appl., (2006), pp. 1-10, article ID 18909. [38] Z.H. Sun, Strong convergence of an implicit iteration process for a finite family of asymptotically quasi-nonexpansive mappings, J. Math. Anal. Appl., 286 (2003), pp. 351-358. [39] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl., 311 (2005), pp. 506-517. [40] W. Takahashi, Iterative methods for approximation of fixed points and their applications, J. Oper. Res. Soc. Jpn., 43 (2000), pp. 87-108. [41] W. Takahashi and T. Tamura, Limit theorems of operators by convex combinations of nonexpansive retractions in Banach spaces, J. Approx. Theory, 91 (1997), pp. 386-397. [42] B.S. Thakur, Weak and strong convergence of composite implicit iteration process, Appl. Math. Comput., 190 (2007), pp. 965-973. [43] B.S. Thakur, Strong Convergence for Asymptotically generalized $Phi$-hemicontractive mappings, ROMAI J., 8 (2012), pp. 165-171. [44] H.K. Xu and R.G. Ori, An implicit iteration process for nonexpansive mapping, Num. Fun. Anal. Optim., 22 (2001), pp. 767-773. [45] Y. Yao, Convergence of three-step iterations for asymptotically nonexpansive mappings, Appl. Math. Comput., 187 (2007), pp. 883-892. [46] L. P. Yang, Convergence theorem of an implicit iteration process for asymptotically pseudocontractive mappings, Bull. of the Iran. Math. Soc., 38 (2012), pp. 699-713. [47] L.P. Yang and G. Hu, Convergence of implicit iteration process with random errors, Acta Math. Sinica (Chin. Ser.), 51 (2008), pp. 11-22. [48] L.C. Zeng, On the approximation of fixed points for asymptotically nonexpansive mappings in Banach spaces, Acta Math. Sci., 23 (2003), pp. 31-37. [49] L.C. Zeng, On the iterative approximation for asymptotically pseudocontractive mappings in uniformly smooth Banach spaces, Chinese Math. Ann., 26 (2005), pp. 283-290.
• CN:11-2187/TH • ISSN:0577-6686 • Mechanical Dynamics ### Application of Improved Hybrid Interface Substructural Component Modal Synthesis Method in Dynamic Characteristics Analysis of Mistuned Bladed Disk Assemblies BAI Bin1, BAI Guangchen1, FEI Chengwei1, ZHAO Heyang1, TONG Xiaochen2 1. School of Energy and Power Engineering, Beihang University; 2. Patent Examination Cooperation Center of the Patent Office, State Intellectual Property Office • Online: 2015-05-05 Published: 2015-05-05 • Funding: Supported by the National Natural Science Foundation of China (51375032, 51335003) Abstract: In order to improve the computational efficiency of dynamic characteristics analysis of an aeroengine while preserving the computational accuracy, an improved hybrid interface substructural component modal synthesis method (HISCMSM) is proposed, motivated by the large amount of calculation required by the traditional HISCMSM when the substructures are assembled. The comprehensive model is further reduced by this method. The assembled structural parametric model of blades and disks is built by this method to establish each substructural finite element model (FEM), and the natural frequencies and modal shapes are obtained. Compared with the overall-structure FEM, the improved method shortens the computational time by 23.86%-35.74% with a modal deviation of 0%-0.49%, while the traditional method shortens it by 14.63%-29.20% with a modal deviation of 0%-0.57%. So the improved method is much more efficient than the traditional method, especially for the high-order modal solutions, where the advantage is much clearer. Meanwhile, the modal displacement, modal stress and modal strain energy of the tuned and mistuned bladed disk assemblies are investigated; the symmetry of the tuned bladed disk assemblies is damaged and a localization phenomenon is observed which reinforces as the mistuned amplitude increases. Gradually, the vibration is localized in a few blades and the phenomenon becomes more and more obvious, which lays the foundation for further research on the dynamic response and the probability analysis of mistuned bladed disk assemblies.
# Probability Around the Quantum Gravity Part III.1: Planar Pure Gravity Abstract : In this paper we study stochastic dynamics which leaves the quantum gravity equilibrium distribution invariant. We start a theoretical study of this dynamics (earlier it was only used for Monte-Carlo simulation). The main new results concern the existence and properties of local correlation functions in the thermodynamic limit. The study of dynamics constitutes the third part of a series of papers in which a more general class of processes was studied (but it is self-contained); those processes have some universal significance in probability and cover most concrete processes, with many examples in computer science and biology. At the same time the paper can serve as an introduction to quantum gravity for a probabilist: we give a rigorous exposition of quantum gravity in the planar pure gravity case. Mostly we use combinatorial techniques, instead of the random matrix models that are more popular in physics; the central point is the famous $\alpha =-\frac{7}{2}$ exponent. Document type : Reports https://hal.inria.fr/inria-00073194 Submitted on : Wednesday, May 24, 2006 - 12:09:45 PM Last modification on : Thursday, February 3, 2022 - 11:18:58 AM ### Identifiers • HAL Id : inria-00073194, version 1 ### Citation Vadim A. Malyshev. Probability Around the Quantum Gravity Part III.1: Planar Pure Gravity. [Research Report] RR-3493, INRIA. 1998. ⟨inria-00073194⟩
A Carnot engine working between 300 K and 600 K has a work output of 800 joule per cycle. The amount of heat energy supplied from the source to the engine in each cycle is
A)  800 J    B)  1600 J    C)  3200 J    D)  6400 J
The efficiency of a Carnot engine is given by $\eta = 1-\frac{T_2}{T_1}$. Given $T_1 = 600\,K$, $T_2 = 300\,K$,
$\therefore$ $\eta = 1-\frac{300}{600}$ $\Rightarrow$ $\eta = 1-\frac{1}{2}$ $\Rightarrow$ $\eta = \frac{1}{2} = 50\%$
Also, $\eta = \frac{W}{Q} = \frac{800}{Q} = \frac{1}{2}$
Thus, $Q = 800 \times 2 = 1600\,J$, so the correct option is B) 1600 J.
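For readers who like to verify such plug-in results numerically, a two-line check (my own, not part of the original solution) reproduces the answer:

eta = 1 - 300 / 600    # Carnot efficiency between T2 = 300 K and T1 = 600 K
print(eta, 800 / eta)  # 0.5 1600.0, i.e. Q = 1600 J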
#### Authors Deeksha Adil, Richard Peng, Sushant Sachdeva #### Abstract Linear regression in Lp-norm is a canonical optimization problem that arises in several applications, including sparse recovery, semi-supervised learning, and signal processing. Generic convex optimization algorithms for solving Lp-regression are slow in practice. Iteratively Reweighted Least Squares (IRLS) is an easy to implement family of algorithms for solving these problems that has been studied for over 50 years. However, these algorithms often diverge for p > 3, and since the work of Osborne (1985), it has been an open problem whether there is an IRLS algorithm that converges for p > 3. We propose p-IRLS, the first IRLS algorithm that provably converges geometrically for any p \in [2,\infty). Our algorithm is simple to implement and is guaranteed to find a high accuracy solution in a sub-linear number of iterations. Our experiments demonstrate that it performs even better than our theoretical bounds, beats the standard Matlab/CVX implementation for solving these problems by 10–50x, and is the fastest among available implementations in the high-accuracy regime.
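To make the IRLS idea concrete, here is a minimal sketch of the classical iteratively reweighted least squares scheme for $\ell_p$ regression (my own NumPy illustration of the textbook update the abstract alludes to, not the authors' p-IRLS algorithm; no convergence guarantee is claimed for large p):

import numpy as np

def irls_lp(A, b, p=4, iters=50, eps=1e-8):
    # Classical IRLS: repeatedly solve a weighted least-squares problem
    # with weights |r_i|^(p-2) built from the current residuals.
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the l2 solution
    for _ in range(iters):
        r = A @ x - b
        w = np.abs(r) ** (p - 2) + eps        # eps guards against zero residuals
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

A = np.random.randn(200, 5)
b = np.random.randn(200)
x = irls_lp(A, b, p=4)
print(np.sum(np.abs(A @ x - b) ** 4))  # the l4 objective after 50 sweeps

The point of the paper is precisely that this naive reweighting can diverge for larger p, whereas their p-IRLS variant provably converges geometrically.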
Students can Download Tamil Nadu 12th Maths Model Question Paper 5 English Medium Pdf, Tamil Nadu 12th Maths Model Question Papers helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations. ## TN State Board 12th Maths Model Question Paper 5 English Medium Instructions: 1.  The question paper comprises of four parts. 2.  You are to attempt all the parts. An internal choice of questions is provided wherever applicable. 3. questions of Part I, II. III and IV are to be attempted separately 4. Question numbers 1 to 20 in Part I are objective type questions of one -mark each. These are to be answered by choosing the most suitable answer from the given four alternatives and writing the option code and the corresponding answer 5. Question numbers 21 to 30 in Part II are two-marks questions. These are to be answered in about one or two sentences. 6. Question numbers 31 to 40 in Parr III are three-marks questions, These are to be answered in about three to five short sentences. 7. Question numbers 41 to 47 in Part IV are five-marks questions. These are to be answered) in detail. Draw diagrams wherever necessary. Time: 3 Hours Maximum Marks: 90 Part – I I. Choose the correct answer. Answer all the questions.  [20 × 1 = 20] Question 1. If A = $$\left[\begin{array}{ll} 3 & 5 \\ 1 & 2 \end{array}\right]$$ B = adj A and C = 3A, then |Adjb| / |C| =  ………………. (a) 1/3 (b) 1/9 (c) 1/4 (d) 1 (b) 1/9 Question 2. If the inverse of the matrix $$\left[\begin{array}{cc} 1 & 2 \\ 3 & -5 \end{array}\right]$$ is $$\frac{1}{11}\left[\begin{array}{ll} a & b \\ c & d \end{array}\right]$$ , then the ascending order of a, b, c, d, is (a) a, b, c, d (b) d, b, c, a (c) c, a, b, d (d) b, a, c, d (b) d, b, c, a Question 3. The least value of n satisfying $$\left(\frac{\sqrt{3}}{2}+\frac{i}{2}\right)^{n}$$ = 1 is (a) 30 (b) 24 (c) 12 (d) 18 (c) 12 Question 4. The principal argument of $$\frac{3}{-1+i}$$ is (c) $$\frac{-3 \pi}{4}$$ Question 5. The polynomial equation x3 + 2x + 3 = 0 has (a) one negative and two imaginary (b) one positive and two imaginary roots (c) three real roots (d) no solution (a) one negative and two imaginary Question 6. The domain of the function defined by $$f(x)=\sin ^{-1}(\sqrt{x-1})$$ is (a) [1, 2] (b) [-1,1] (c) [0, 1] (d) [-1, 0] (a) [1, 2] Question 7. If x+y = A:isa normal to the parabola y2 = 12x, then the value of k is (a) 3 (b) -1 (c) 1 (d) 9 (d) 9 Question 8. The circle passing through (1, -2) and touching the x-axis (3,0), again passing through the point is (a)(-5,2) (b) (2,-5) (c) (5, -2) (d) (-2,5) (c) (5, -2) Question 9. The volume of the parallelepiped with its edges represented by the vectors $$\hat{i}+\hat{j}, \hat{i}+2 \hat{j}, \hat{i}+\hat{j}+\pi \hat{k}$$ is (a) $$\frac{\pi}{2}$$ (b) $$\frac{\pi}{3}$$ (c) π (d) $$\frac{\pi}{4}$$ Question 10. If the line $$\frac{x-2}{3}=\frac{y-1}{-5}=\frac{z+2}{2}$$ lies in the plane x + 3y – αz + β = 0 then (α, β) is ……………… (a) (-5, 5) (b) (-6, 7) (c) (5, -5) (d) (6, -7) (b) (-6, 7) Question 11. The function sin4 + cos4x is increasing in the interval (c) $$\left[\frac{\pi}{4}, \frac{\pi}{2}\right]$$ Question 12. The curve y = ax4 + bx2 with ab> 0 (a) has no horizontal tangent (b) is concave up (c) is concave down (d) has no points of inflection (d) has no points of inflection Question 13. If u = (x-y)2, then $$\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}$$ is ……………. (a) 1 (b) -1 (c) 0 (d) 2 (c) 0 Question 14. The value of $$\int_{0}^{\pi} \frac{d x}{1+5 \cos x}$$ is ……………. 
(a) $$\frac{\pi}{2}$$ (b) π (c) $$3\frac{\pi}{2}$$ (d) 2π (a) $$\frac{\pi}{2}$$ Question 15. The volume of solid of revolution of the region bounded by y2 = x(a – x) about x – axis is ……. (d) $$\frac{\pi a^{3}}{6}$$ Question 16. If m, n are the order and degree of the differential equation $$\left[\frac{d^{4} y}{d x^{4}}+\frac{d^{2} y}{d x^{2}}\right]^{\frac{1}{2}}=a \frac{d^{2} y}{d x^{2}}$$ respectively, then the value of 4m – n is (a) 15 (b) 12 (c) 14 (d) 13 (a) 15 Question 17. The solution of differential equation $$\frac{d y}{d x}=\frac{y}{x}+\frac{\varphi\left(\frac{y}{x}\right)}{\varphi^{\prime}\left(\frac{y}{x}\right)}$$ is (b) $$\varphi\left(\frac{y}{x}\right)=k x$$ Question 18. A random variable X has the following distribution. then the value of c is (a) 0.1 (b) 0.2 (c) 0.3 (d) 0.4 (a) 0.1 Question 19. If P(X = 0) = 1 – P {X = 1} and E[X] = 3 Var (X), then P{X = 0} is (a) $$\frac{2}{3}$$ (b) $$\frac{2}{5}$$ (c) $$\frac{1}{3}$$ (d) $$\frac{1}{5}$$ (c) $$\frac{1}{3}$$ Question 20. Which one is the contrapositive of the statement (p v q) → r? (a) $$\neg r \rightarrow(\neg p \wedge \neg q)$$ Part – II II. Answer any seven questions. Question No. 30 is compulsory. [7 x 2 = 14] Question 21. Solve the following system of linear equations by Cramer’s rule: 2x – y = 3,x + 2y = -1. 2x – y = 3, x + 2y 2 -1 = -1 Question 22. If z1 z2 and z3 are complex numbers such that |z1| = |z2| = |z3| = |z1+ z2 + z3| = 1 find the value of $$\left|\frac{1}{z_{1}}+\frac{1}{z_{2}}+\frac{1}{z_{3}}\right|$$ Question 23. Find the value of $$\sin \left(\frac{\pi}{3}+\cos ^{-1}\left(\frac{-1}{2}\right)\right)$$ Question 24. Find the equation of the parabola with vertex (-1,-2), axis parallel to x-axis and passing through (3, 6). Since axis is parallel to y-axis the required equation of the parabola is (x + 1)2 = 4a (y + 2) Since this passes through (3,6) (3 + 1)2 = 4a (6 + 2) a = 1/2 Then the equation of parabola is (x + 1)2 = 2 (y + 2) which on simplifying yields, x2 + 2x – 2y – 3 = 0. Question 25. If â, b̂, ĉ are three unit vectors such that b̂ and ĉ are non-parallel and â, x (b̂ x ĉ) = $$\frac{1}{2} \hat{b}$$, find the angle between â and ĉ. Since b and c are non collinear vectors. So, equating corresponding coefficients on both sides. Question 26. If the mass m(x) (in kilograms) of a thin rod of length x (in meters) is given by, m(x) = $$m(x)=\sqrt{3 x}$$ then what is the rate of change of mass with respect to the length when it is x = 27 metres? Question 27. Evaluate $$\int_{0}^{\infty} e^{-a x} x^{n} d x$$ where a > 0. Making the substitution t = ax, we get dt = adx and x = 0 ⇒ t = 0 and x = ∞ ⇒ t = ∞. Hence, we get Question 28. Show that y = ax + b/x, x ≠ 0 is a solution of the differential equation x2y” + xy’ – y = 0. Given solution : y = ax + $$\frac{b}{x}$$  …………..(1) y = ax + $$\frac{b}{x}$$ xy = ax2+b Differentiate with respect to ‘x’ xy’ + y.1 = a(2x) = 2ax …(2) Differentiate again with respect to ‘x’ xy” + y’ . 1 + y’ = 2a ⇒ xy” + 2y’ = 2a …(3) Substitute (3) in (2) xy’ + y = (xy” + 2y’) x xy’+ y = x2y” + 2xy’ ⇒ x2y” + xy’- y = 0 Hence proved. Question 29. Find the mean of a random variable X, whose probability density function is $$f(x)=\left\{\begin{array}{l} \lambda e^{-\lambda x} \text { for } x \geq 0 \\ 0 \text { otherwise } \end{array}\right.$$ Question 30. Let * be a binary operation on set Q of rational numbers defined as a * b = $$\frac{a b}{8}$$ Write the identity for * if any. Let S = {Q} a * b = $$\frac{a b}{8}$$ ∀ a∈S, e∈S such that a * e = a $$\frac{a e}{8}=a$$ ⇒ ae = 8a e = 8 ∈ S . 
Part – III
III. Answer any seven questions. Question No. 40 is compulsory. [7 x 3 = 21]
Question 31. Find the inverse of $$\left[\begin{array}{cc} 2 & -1 \\ 5 & -2 \end{array}\right]$$ by the Gauss-Jordan method.
Question 32. If ω ≠ 1 is a cube root of unity, show that the roots of the equation (z − 1)³ + 8 = 0 are −1, 1 − 2ω, 1 − 2ω².
Question 33. Find all real numbers satisfying 4^x − 3(2^(x+2)) + 2⁵ = 0.
4^x − 3(2^(x+2)) + 2⁵ = 0 ⇒ (2^x)² − 3(2^x · 2²) + 32 = 0 ⇒ (2^x)² − 12 · 2^x + 32 = 0
Let y = 2^x: y² − 12y + 32 = 0 ⇒ (y − 4)(y − 8) = 0 ⇒ y = 4, 8
Case (i): 2^x = 4 = 2² ⇒ x = 2
Case (ii): 2^x = 8 = 2³ ⇒ x = 3
∴ The roots are 2 and 3.
Question 34. Find the centre, foci, and eccentricity of the hyperbola 12x² − 4y² − 24x + 32y − 127 = 0.
12[x² − 2x] − 4[y² − 8y] = 127
12[(x − 1)² − 1] − 4[(y − 4)² − 16] = 127
12(x − 1)² − 4(y − 4)² = 127 + 12 − 64 = 75
Let X = x − 1 and Y = y − 4.
(i) Centre: C(0, 0) in the XY system, i.e. x − 1 = 0 and y − 4 = 0, so the centre is (1, 4).
(ii) Eccentricity: a² = 75/12 = 25/4 and b² = 75/4, so c² = a² + b² = 25, ae = c = 5 and e = c/a = 5 ÷ (5/2) = 2.
(iii) Foci: F(±ae, 0), i.e. F(±5, 0) in the XY system.
X = 5 ⇒ x = 6; X = −5 ⇒ x = −4; Y = 0 ⇒ y = 4.
Foci: (6, 4) and (−4, 4).
Question 35. Find the image of the point whose position vector is $$\hat{i}+2 \hat{j}+3 \hat{k}$$ in the plane $$\vec{r} \cdot(\hat{i}+2 \hat{j}+4 \hat{k})=38$$
Question 36. Evaluate $$\lim _{x \rightarrow 0^{+}} x \log x$$
This is an indeterminate form of the type (0 × ∞). To evaluate this limit, we first simplify and bring it to the form (∞/∞) and apply l’Hôpital’s rule.
Question 37. Find a linear approximation for the function given below at the indicated point: f(x) = x³ − 5x + 12, x₀ = 2.
f(x) = x³ − 5x + 12, f′(x) = 3x² − 5
f(x₀) = f(2) = 8 − 10 + 12 = 10
f′(x₀) = f′(2) = 3(2)² − 5 = 12 − 5 = 7
The required linear approximation is L(x) = f(x₀) + f′(x₀)(x − x₀) = 10 + 7(x − 2) = 7x − 4.
Question 38. By using the properties of definite integrals, evaluate $$\int_{0}^{3}|x-1| d x$$
Question 39. Solve: $$\frac{d y}{d x}$$ + 2y cot x = 3x² cosec²x.
This is in the form $$\frac{d y}{d x}$$ + Py = Q with P = 2 cot x and Q = 3x² cosec²x.
∫P dx = ∫2 cot x dx = 2 log sin x = log (sin x)²
Integrating factor: e^(∫P dx) = e^(log (sin x)²) = sin²x
Question 40. A fair coin is tossed a fixed number of times. If the probability of getting seven heads is equal to that of getting nine heads, find the probability of getting exactly two heads.
p = $$\frac{1}{2}$$, q = $$\frac{1}{2}$$
In a binomial distribution, P(X = x) = nCx p^x q^(n−x), x = 0, 1, 2, …, n.
By the given data, P(X = 7) = P(X = 9).
Part – IV
IV. Answer all the questions. [7 x 5 = 35]
Question 41. (a) By using the Gaussian elimination method, balance the chemical reaction equation C₂H₆ + O₂ → H₂O + CO₂.
We are searching for positive integers x₁, x₂, x₃ and x₄ such that
x₁ C₂H₆ + x₂ O₂ = x₃ H₂O + x₄ CO₂ …(1)
The number of carbon atoms on the LHS of (1) should be equal to the number of carbon atoms on the RHS of (1). So we get a linear homogeneous equation
2x₁ = x₄ ⇒ 2x₁ − x₄ = 0 ………(2)
Similarly, considering oxygen atoms we get 2x₂ = x₃ + 2x₄ ⇒ 2x₂ − x₃ − 2x₄ = 0 ……… (3), and considering hydrogen atoms (6x₁ = 2x₃, combined with (2)) we get −2x₃ + 3x₄ = 0 …(4)
Equations (2), (3) and (4) constitute a homogeneous system of linear equations in four unknowns. The augmented matrix is (A, B).
Now ρ(A, B) = ρ(A) = 3 < number of unknowns, so the system is consistent and has an infinite number of solutions.
Writing the equations in echelon form we get
2x₁ − x₄ = 0 …(5)
2x₂ − x₃ − 2x₄ = 0 …..(6)
−2x₃ + 3x₄ = 0 …..(7)
Taking x₄ = t (t ≠ 0) in (7) we get 2x₃ − 3t = 0, so 2x₃ = 3t and $$x_{3}=\frac{3}{2} t$$
Taking x₄ = t in (5) we get 2x₁ − t = 0, so 2x₁ = t and $$x_{1}=\frac{t}{2}$$
From (6), 2x₂ = x₃ + 2x₄ = $$\frac{3t}{2}$$ + 2t = $$\frac{7t}{2}$$, so $$x_{2}=\frac{7t}{4}$$
Since x₁, x₂, x₃ and x₄ are positive integers, let us choose t = 4. Then we get x₁ = 2, x₂ = 7, x₃ = 6, and x₄ = 4.
So the balanced equation is 2C₂H₆ + 7O₂ → 6H₂O + 4CO₂.
(b) If z = x + iy and arg $$\left(\frac{z-i}{z+2}\right)=\frac{\pi}{4}$$, then show that x² + y² + 3x − 3y + 2 = 0.
Question 42. (a) Solve the equation: 3x⁴ − 16x³ + 26x² − 16x + 3 = 0.
It is an even-degree reciprocal equation. Dividing (1) by x² gives 3(x² + 1/x²) − 16(x + 1/x) + 26 = 0. Put y = x + $$\frac{1}{x}$$, so that x² + 1/x² = y² − 2:
3(y² − 2) − 16y + 26 = 0
3y² − 6 − 16y + 26 = 0
3y² − 16y + 20 = 0
(3y − 10)(y − 2) = 0
Case 3y − 10 = 0: y = 10/3, so x + $$\frac{1}{x}$$ = 10/3, i.e. 3(x² + 1) = 10x, 3x² − 10x + 3 = 0, (3x − 1)(x − 3) = 0, giving x = $$\frac{1}{3}$$ and x = 3.
Case y − 2 = 0: y = 2, so x + $$\frac{1}{x}$$ = 2, i.e. x² − 2x + 1 = 0, (x − 1)² = 0, giving x = 1, 1.
The roots are 1, 1, $$\frac{1}{3}$$, 3.
OR
(b) Solve: $$\tan ^{-1}\left(\frac{x-1}{x-2}\right)+\tan ^{-1}\left(\frac{x+1}{x+2}\right)=\frac{\pi}{4}$$
Question 43. (a) A rod of length 1.2 m moves with its ends always touching the coordinate axes. The locus of a point P on the rod, which is 0.3 m from the end in contact with the x-axis, is an ellipse. Find the eccentricity.
From the diagram, ΔOAB is a right-angled triangle, and the marked angles are corresponding angles, so they are equal.
(b) Find the non-parametric and Cartesian equations of the plane passing through the point (4, 2, 4) and perpendicular to the planes 2x + 5y + 4z + 1 = 0 and 4x + 7y + 6z + 2 = 0.
The required plane passes through the point (4, 2, 4) and is parallel to the vectors
Question 44. (a) A steel plant is capable of producing x tonnes per day of a low-grade steel and y tonnes per day of a high-grade steel, where y = $$\frac{40-5 x}{10-x}$$. If the fixed market price of low-grade steel is half that of high-grade steel, what should the optimal productions of low-grade and high-grade steel be in order to have maximum receipts?
Let the price of low-grade steel be ₹p per tonne. Then the price of high-grade steel is ₹2p per tonne. The total receipt per day is given by R = px + 2py = px + 2p $$\frac{40-5 x}{10-x}$$. Hence the problem is to maximise R. Differentiating R with respect to x and setting dR/dx = 0 gives (10 − x)² = 20, so x = 10 − 2√5; the second derivative is negative there, and hence R will be maximum. If x = 10 − 2√5 then y = 5 − √5.
Therefore the steel plant must produce 10 − 2√5 tonnes per day of low-grade steel and 5 − √5 tonnes per day of high-grade steel.
[OR]
(b) Let z(x, y) = xe^y + ye^x, x = ts², y = st², s, t ∈ R. Find $$\frac{\partial z}{\partial s}$$ and $$\frac{\partial z}{\partial t}$$
Question 45. (a) Find the area of the region bounded between the parabola x² = y and the curve y = |x|.
Both curves are symmetrical about the y-axis. The curve $$y=|x| \text { is } y=\left\{\begin{array}{l} x \text { if } x \geq 0 \\ -x \text { if } x \leq 0 \end{array}\right.$$
It intersects the parabola x² = y at (1, 1) and (−1, 1). The area of the region bounded by the curves is sketched in the figure. It lies in the first quadrant as well as in the second quadrant. By symmetry, the required area is twice the area in the first quadrant. In the first quadrant, the upper curve is y = x, 0 ≤ x ≤ 1 and the lower curve is y = x², 0 ≤ x ≤ 1. Hence, the required area is 2∫₀¹ (x − x²) dx = 2(1/2 − 1/3) = 1/3.
(b) Water at temperature 100°C cools in 10 minutes to 80°C in a room at a temperature of 25°C.
Find (i) the temperature of the water after 20 minutes, (ii) the time when the temperature is 40°C. [$$\log_e \frac{11}{15}$$ = −0.3101; $$\log_e 5$$ = 1.6094]
Let T be the temperature of the water at time t.
Question 46. (a) Suppose a discrete random variable can only take the values 0, 1, and 2. The probability mass function is defined by
$$f(x)=\left\{\begin{array}{cc} \frac{x^{2}+1}{k}, & \text { for } x=0,1,2 \\ 0 & \text { otherwise } \end{array}\right.$$
Find (i) the value of k, (ii) the cumulative distribution function, (iii) P(X ≥ 1).
(b) Using a truth table, check whether the statements $$\neg(p \vee q) \vee(\neg p \wedge q)$$ and $$\neg p$$ are logically equivalent.
Question 47. (a) Prove by the vector method that sin(α + β) = sin α cos β + cos α sin β.
Take two points A and B on the unit circle with centre at the origin O, so $$|\overrightarrow{\mathrm{OA}}|=|\overrightarrow{\mathrm{OB}}|=1$$
• Let $$\vec{i}$$ and $$\vec{j}$$ be the unit vectors along the x and y directions respectively.
• The coordinates of A and B are (cos α, sin α) and (cos β, −sin β) respectively.
From (1) & (2), we get sin (α + β) = sin α cos β + cos α sin β.
[OR]
(b) Find the equations of the tangent and normal to the curve y² − 4x + 2y + 5 = 0 at the point where it cuts the x-axis.
On the x-axis y = 0, so −4x + 5 = 0, ∴ x = $$\frac{5}{4}$$. The required point is ($$\frac{5}{4}$$, 0).
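The cooling problem in Question 45(b) can also be checked numerically. The sketch below is my own verification, assuming the usual Newton's-law-of-cooling model T(t) = 25 + 75·e^(kt); it reproduces the two requested answers, about 65.3°C after 20 minutes and about 52 minutes to reach 40°C.

```python
# Numerical check of Question 45(b): water cooling from 100 C to 80 C in 10 minutes
# in a 25 C room, assuming T(t) = T_room + (T0 - T_room) * exp(k * t).
import math

T_room, T0 = 25.0, 100.0
k = math.log(11 / 15) / 10        # from T(10) = 80: exp(10k) = (80 - 25) / (100 - 25) = 11/15

T20 = T_room + (T0 - T_room) * math.exp(20 * k)
print(round(T20, 2))              # (i) temperature after 20 minutes, about 65.33

t40 = math.log((40 - T_room) / (T0 - T_room)) / k
print(round(t40, 1))              # (ii) time to reach 40 C, about 51.9 minutes
```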
Bul. Acad. Ştiinţe Repub. Mold. Mat., contents of this issue:
Basic cohomology attached to a basic function of foliated manifolds (Cristian Ida, Sabin Mercheşan), p. 3
Isohedral tilings on Riemann surfaces of genus 2 (Elizaveta Zamorzaeva), p. 17
Method of construction of topologies on any finite set (V. I. Arnautov), p. 29
Generators of the algebras of invariants for differential system with homogeneous nonlinearities of odd degree (N. N. Gherstega, M. N. Popa, V. V. Pricop), p. 43
On partial inverse operations in the lattice of submodules (A. I. Kashu), p. 59
The generalized Lagrangian mechanical systems (Radu Miron), p. 74
Cubic systems with seven invariant straight lines of configuration $(3,3,1)$ (Alexandru Şubă, Vadim Repeşco, Vitalie Puţuntică), p. 81
The Ricci-flat spaces related to the Navier–Stokes equations (Valery Dryuma), p. 99
Can you raise a number to an irrational exponent? The way that I was taught it in 8th grade algebra, a number raised to a fractional exponent, i.e. $a^{x/y}$, is equivalent to the denominator-th root of the number raised to the numerator, i.e. $\sqrt[y]{a^x}$. So what happens when you raise a number to an irrational number? Obviously it is not so simple to break it down like above. Does an irrational exponent still have a well-formed meaning? The only example that comes to mind is Euler’s identity, but that seems like a pretty exceptional case. What about in general? What do we mean when we write $a^b$, say for $a>0$?

The question is a very good one. The answer, unfortunately, is fairly complicated, and the full details are quite lengthy. We have a clear understanding of what we mean by $a^2$, or $a^5$. And from fairly early on, we learn to define $a^n$, where $n$ is negative, as $\frac{1}{a^{-n}}$. After a while, we develop an understanding of what we mean by something like $a^{3/4}$. For (we are led to believe) there is a unique positive number $s$ such that $s^4=a$, and then we can define $a^{3/4}$ to be $s^3$. This idea can be used to define $a^{p/q}$, where $p$ and $q$ are integers. After a while, we can show, more or less rigorously, that the laws of exponents that worked for integer powers also work for expressions of the form $x^{p/q}$, where $p$ and $q$ are integers.

However, what do we mean, for example, by $3^{\sqrt{2}}$? Certainly it is not $3$ multiplied by itself $\sqrt{2}$ times! There are several ways to resolve the question. One way is to note that $\sqrt{2}=1.41421356\dots$ and consider the sequence $3^{1.4}$, $3^{1.41}$, $3^{1.414}$, $3^{1.4142}$, and so on. All these powers make sense, because the exponents $1.4$, $1.41$, $1.414$, and so on, can be expressed as fractions. But, intuitively, these numbers are getting closer and closer to something, and we define $3^{\sqrt{2}}$ to be that something. We can do a partial informal verification of the “getting closer and closer” part in this case, by using a calculator.

More formally, let $b$ be a real number, and let $b_1, b_2, b_3, \dots$ be an infinite sequence of rational numbers such that the sequence $(b_n)$ has limit $b$. It can be shown that the sequence $(a^{b_n})$ has a limit, which is independent of the particular sequence $(b_n)$ that we have chosen, as long as the sequence has $b$ as a limit. Then we can define $a^b$ as the limit of the sequence $(a^{b_n})$. With quite a lot of effort, we can then show that the familiar laws of exponentiation hold.

The above approach, though intuitively very natural, is unwieldy. So in practice, we usually take another approach. The standard way is to first define the function $\ln x$. Then we define the exponential function $\exp(x)$, also known as $e^x$, as the inverse function of $\ln x$. Or else, depending on taste, we first define the function $\exp(x)$, and then its inverse $\ln x$. There is a fair variety of (provably equivalent) definitions. For example, we could define $\ln x$ by $$\ln x=\int_1^x \frac{dt}{t} \qquad (x>0).$$ It is not terribly difficult to show that $\ln$ as defined above satisfies the usual basic “laws of logarithms,” and that it is an increasing function, so has an inverse, that we call $\exp$.
Finally, after this background work, we define $a^b$ (for $a>0$) by $$a^b=\exp(b\ln a).$$ We can then easily verify that in the cases where we already “know” what $a^b$ should be, namely rational $b$, the above definition agrees with our intuition, and that the usual “laws of exponents” hold for this more general notion of power.

Warning: In the entire post, it is assumed that $a$ is a positive real number, and that all exponents are real numbers. Complex exponentials are a lot more–complex.
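As a quick illustration of the limit definition discussed above, here is a small Python snippet (mine, not part of the original answer). It compares the rational-exponent approximations $3^{1.4}, 3^{1.41}, 3^{1.414}, \dots$ with $\exp(\sqrt{2}\ln 3)$ and shows the gap shrinking.

```python
# Watch 3 raised to rational truncations of sqrt(2) approach exp(sqrt(2) * ln 3).
import math

target = math.exp(math.sqrt(2) * math.log(3))          # a^b defined as exp(b ln a)

for digits in range(1, 9):
    b_n = math.floor(math.sqrt(2) * 10**digits) / 10**digits   # rational truncation of sqrt(2)
    approx = 3.0 ** b_n
    print(f"3^{b_n:<10} = {approx:.10f}   error = {abs(approx - target):.2e}")

print("exp(sqrt(2) * ln 3) =", target)                  # about 4.7288
```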
## Books by Independent Authors ### Chapter VI. Compact and Locally Compact Groups #### Abstract This chapter investigates several ways that groups play a role in real analysis. For the most part the groups in question have a locally compact Hausdorff topology. Section 1 introduces topological groups, their quotient spaces, and continuous group actions. Topological groups are groups that are topological spaces in such a way that multiplication and inversion are continuous. Their quotient spaces by subgroups are of interest when they are Hausdorff, and this is the case when the subgroups are closed. Many examples are given, and elementary properties are established for topological groups and their quotients by closed subgroups. Sections 2–4 investigate translation-invariant regular Borel measures on locally compact groups and invariant measures on their quotient spaces. Section 2 deals with existence and uniqueness in the group case. A left Haar measure on a locally compact group $G$ is a nonzero regular Borel measure invariant under left translations, and right Haar measures are defined similarly. The theorem is that left and right Haar measures exist on $G$, and each kind is unique up to a scalar factor. Section 3 addresses the relationship between left Haar measures and right Haar measures, which do not necessarily coincide. The relationship is captured by the modular function, which is a certain continuous homomorphism of the group into the multiplicative group of positive reals. The modular function plays a role in constructing Haar measures for complicated groups out of Haar measures for subgroups. Of special interest are “unimodular” locally compact groups $G$, i.e., those for which the left Haar measures coincide with the right Haar measures. Every compact group, and of course every locally compact abelian group, is unimodular. Section 4 concerns translation-invariant measures on quotient spaces $G/H$. For the setting in which $G$ is a locally compact group and $H$ is a closed subgroup, the theorem is that $G/H$ has a nonzero regular Borel measure invariant under the action of $G$ if and only if the restriction to $H$ of the modular function of $G$ coincides with the modular function of $H$. In this case the $G$ invariant measure is unique up to a scalar factor. Section 5 introduces convolution on unimodular locally compact groups $G$. Familiar results valid for the additive group of Euclidean space, such as those concerning convolution of functions in various $L^{p}$ classes, extend to be valid for such groups $G$. Sections 6–8 concern the representation theory of compact groups. Section 6 develops the elementary theory of finite-dimensional representations and includes some examples, Schur orthogonality, and properties of characters. Section 7 contains the Peter–Weyl Theorem, giving an orthonormal basis of $L^{2}$ in terms of irreducible representations and concluding with an Approximation Theorem showing how to approximate continuous functions on a compact group by trigonometric polynomials. Section 8 shows that infinite-dimensional unitary representations of compact groups decompose canonically according to the irreducible finite-dimensional representations of the group. An example is given to show how this theorem may be used to take advantage of the symmetry in analyzing a bounded operator that commutes with a compact group of unitary operators. The same principle applies in analyzing partial differential operators. #### Chapter information Source Anthony W. 
Knapp, Advanced Real Analysis, Digital Second Edition, Corrected version (East Setauket, NY: Anthony W. Knapp, 2017), 212-274.
First available in Project Euclid: 21 May 2018.
Permanent link to this document: https://projecteuclid.org/euclid.bia/1526871320
Digital Object Identifier: doi:10.3792/euclid/9781429799911-6
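The convolution discussed in Section 5 has a very concrete finite analogue: on a finite group, Haar measure is just counting measure and convolution is a finite sum. The Python sketch below is my own toy illustration (it is not from the book); it convolves two functions on the cyclic group Z/6Z.

```python
# Convolution on the finite cyclic group Z/nZ with counting measure as Haar measure:
# (f * g)(x) = sum over y of f(y) * g(x - y mod n).
n = 6
f = [1, 2, 0, 0, 1, 0]
g = [0, 1, 1, 0, 0, 0]

conv = [sum(f[y] * g[(x - y) % n] for y in range(n)) for x in range(n)]
print(conv)   # the convolution f * g as a list of its six values
```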
# Optimal inventory control with fixed ordering cost for selling by internet auctions

We study an optimal inventory control problem for a seller who sells a replenishable product via sequential Internet auctions. At the beginning of each auction, the seller may purchase his good from an outside supplier with a fixed ordering cost. There is a holding cost for inventory, and backordering is not allowed. We address the total expected discounted criterion in both finite and infinite horizons and the average criterion in an infinite horizon. We show that the classic $(j, J)$ policy is optimal for each criterion. Moreover, we obtain integer programs with bounded decision variables $j$ and $J$ for computing the optimal $(j, J)$ policies for both the discounted and average criteria in an infinite horizon. Finally, numerical results show that it is beneficial for the seller to reduce randomness in the number of buyers while keeping the average number of arriving buyers unchanged, but to increase randomness in the buyers' valuations.

Mathematics Subject Classification: Primary: 90B05; Secondary: 90C40.
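The $(j, J)$ policy named in the abstract is the auction analogue of the classic $(s, S)$ rule. As a rough illustration (my own sketch with made-up numbers, not code or notation from the paper), the ordering decision at the start of each auction can be written as follows: order up to $J$ whenever on-hand inventory has fallen to $j$ or below, otherwise order nothing.

```python
# Toy (j, J) ordering rule: reorder up to J once inventory drops to j or below.
def order_quantity(on_hand: int, j: int, J: int) -> int:
    """Units to order at a review epoch under a (j, J) policy."""
    return J - on_hand if on_hand <= j else 0

# Example trajectory with j = 2 and J = 10 (illustrative values only).
for on_hand in [8, 5, 2, 0]:
    print(on_hand, "->", order_quantity(on_hand, j=2, J=10))
```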
Do social and economic reforms change socioeconomic inequalities in child mortality? A case study: New Zealand 1981–1999
Caroline Shaw, Tony Blakely, June Atkinson, Peter Crampton
Department of Public Health, Wellington School of Medicine and Health Sciences, University of Otago, Wellington, New Zealand
Correspondence to:
 Dr T Blakely
Department of Public Health, Wellington School of Medicine and Health Sciences, University of Otago, PO Box 7343, Wellington, New Zealand; tblakely@wnmeds.ac.nz

## Abstract

Background: Socioeconomic inequalities in child mortality are known to exist; however, the trends in these inequalities have not been well examined. This study examines the trends in child mortality inequality between 1981 and 1999 against the background of the rapid and dramatic social and economic restructuring in New Zealand during this time period.

Methods: Record linkage studies of census and mortality records of all New Zealand children aged 0–14 years on census night 1981, 1986, 1991, 1996, each followed up for three years for mortality between ages 1–14 years. Socioeconomic position was measured using maternal education, household income, and highest occupational class in the household. Standardised mortality rates, rate ratios, and rate differences as well as regression based measures of inequality were calculated.

Results: Mortality in all socioeconomic groups fell between 1981 and 1999. Socioeconomic inequality in child mortality existed by all measures of socioeconomic position; however, only trends by income suggested a change over time: the relative index of inequality increased from 1.5 in 1981–84 to 1.8 in 1996–99 (p trend 0.06), but absolute inequality remained stable (slope index of inequality 15/100 000 in 1981–84 and 14/100 000 in 1996–99).

Conclusions: Dramatic changes in income in New Zealand possibly translated into increasing relative inequality in child mortality by income, but not by education or occupational class. The a priori hypothesis that socioeconomic inequalities in child mortality would have increased in New Zealand during a period of rapid structural reform and widening income inequalities was only partly supported.

• child mortality • income • education • occupational class • trends • New Zealand • socioeconomic position

The existence of socioeconomic inequalities in child mortality is confirmed by most studies.1,2,3,4,5,6,7,8,9,10,11,12,13,14 However, temporal trends in socioeconomic inequalities in child mortality remain largely unquantified. Some studies have suggested an increase in inequalities in all cause mortality over time,2,15 one found a decline,16 and some found different trends by sex.8,17 Many of these studies have methodological problems: for example, being susceptible to the ecological fallacy,16 or to differential misclassification of socioeconomic position (SEP) over time because of use of area based measures of SEP,15,17 failing to use methodology that adjusts for changing socioeconomic group size over time,2,18 or being susceptible to numerator-denominator bias.2

New Zealand is of particular interest in the context of trends in inequalities in child mortality. It underwent a significant period of economic and social restructuring through the 1980s and 1990s, similar to but more extensive than the (neo-liberal) changes that many other OECD countries experienced.19,20 There is evidence that the distribution of social determinants of child health has changed over this period (see table 1).
Despite these changes, child mortality has continued to fall, from 42/100 000 in 1980 to 24.6/100 000 in 2000 in 1–14 year olds (New Zealand Health Information Service), although it remains high by OECD standards.21 In contrast child health has deteriorated over this period by some measures, for example, there was an increase in avoidable hospital admissions and infectious disease admissions.22,23 Table 1 Changes to social determinants of child health over 1980s and 1990s in New Zealand There is reason to hypothesise that socioeconomic inequalities in child mortality may have increased over this time period. Socioeconomic inequalities in adult mortality in relative terms increased in most developed countries, including New Zealand, over the 1980s and 1990s.17,28,29 If social and economic changes have an impact on inequalities in health, then it is plausible that inequalities in child mortality will respond more rapidly than inequalities in adult mortality. Why? Because there is less elapsed time in the life of a child for life course influences on health to have accumulated (putting aside intergenerational influences), perhaps increasing the ability to detect the recent impacts of changing socioeconomic conditions. Furthermore, the effects of health selection (that is, poor health causing a change in socioeconomic position, thereby inducing a (partly) spurious association of socioeconomic position and health) are largely removed as most child deaths are attributable to injury and the socioeconomic measures are based on parental characteristics. However, the specific mechanisms by which macro level social and economic policy changes could translate into changing child mortality inequalities are not clear. Research suggests that political ideology (and therefore policy) is related to social inequalities and levels of health/mortality.30 Additionally there is increasing evidence of the detrimental effects of economic and social upheaval on adult and child health.31,32 However, there are few, if any, studies that have tried to directly link policy changes at the macro level with changes in socioeconomic inequalities in child mortality at the individual level. ## METHODS The data in this study came from the New Zealand census mortality study. Four population cohorts were constructed by anonymously and probabilistically linking individual census and mortality records over four time periods from 1981 to 1996.33,34 The New Zealand Health Information Service provided mortality data for 0–14 year olds for the periods 1981–84, 1986–89, 1991–94, and 1996–99. Four cohorts were created, following up children aged 0–14 years on census night for three years, with analysis being conducted on those deaths that occurred in children aged 1–14 years. (Note that this study is not well suited to the study of infant mortality as it is a closed cohort.) The percentage of eligible mortality records linked ranged from 66% to 71%, and the positive predictive value of the linkage was in excess of 96%.35,36 Linkage varied by age, rurality, ethnicity, and small area deprivation, so linkage weights were applied to overcome any potential misclassification bias of mortality outcome caused by differential success of linkage.35 For example if 20 of 30 deaths in one cell were linked then the weights applied to those deaths that were linked was 30/20 = 1.5. 
The weights were calculated in multiple small cells and then the non-linked census respondents were weighted down slightly to ensure that the total weighted number of children in the cohorts equalled the census night population. To be included in the analysis children must have been at their usual residence on census night, which had to be a private dwelling. All family types were included in the analysis, however an adult over the age of 16, who was also in their usual residence, had to be present on census night. These restrictions resulted in the exclusion of 7%–9% of children in each cohort. The “exposure”, socioeconomic position, was measured at the household and parental level. Three different measures of socioeconomic position were used. When income was available on all adults in the house, it was collated and equivalised for household size using the New Zealand specific Jensen equivalisation index.38 Incomes on households with children were consumer price index adjusted to 1996 and then attached to each child in the household. All children were ranked based on income and then divided into three equal sized income groups, with cut points of low (<$20 600), medium (⩾$20 600 to ⩽$33 000), and high (>$33 000) for calculation of the standardised rates. Maternal education was classified using an intercensal classification of educational qualifications: no qualifications, school qualifications, and post-school qualifications.34 The determination of the child’s mother was probabilistic, as family relationships within a household were not recorded in all censuses. This was performed by identifying the woman in the household who was between 15 and 45 years older than the oldest and youngest children. The variable was tested against the 1986 cohort, in which family relationships were identified, resulting in a sensitivity of 93%, specificity of 71%, and positive predictive value of 96%. The highest occupational class in the household was coded using the New Zealand specific Elley Irving occupational ranking.39 Standardised rates, rate ratios, rate differences, and 95% confidence intervals were calculated across levels of the socioeconomic factors,40 using the age and ethnic group composition of the 1991 NZ census population as the external standard. Results were standardised by ethnicity, as: ethnicity is a strong determinant of socioeconomic position; ethnicity is also a strong determinant of health independent of socioeconomic position; and the ethnic composition of New Zealand children changed over this period. The number of children identified as Maori or Pacific increased by 20.7% and 45% respectively, compared with a 13% decline in non-Maori/non-Pacific children between 1981 and 1999. Results are presented for both sexes together to maximise statistical power and because it is not possible for sex to confound the relation between socioeconomic position and child mortality—that is, while sex predicts child mortality it is not associated with household measures of socioeconomic position. 
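As a small illustration of the cell-level linkage weighting described above (this sketch is mine, with made-up numbers, and is not NZCMS code): each linked death in a cell is weighted by the ratio of eligible to linked deaths in that cell, so that the weighted deaths add back up to the eligible total.

```python
# Hypothetical cells of (eligible deaths, linked deaths). Each linked death gets the
# weight eligible / linked; for example 30 eligible and 20 linked gives 30 / 20 = 1.5.
cells = {"cell A": (30, 20), "cell B": (12, 10), "cell C": (8, 8)}

for name, (eligible, linked) in cells.items():
    weight = eligible / linked
    print(f"{name}: weight {weight:.2f}, weighted deaths {linked * weight:.0f} of {eligible} eligible")
```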
To overcome the problem of changing socioeconomic group size over time, the relative and slope indices of inequality (RII and SII, respectively) were used to calculate population inequality in relative and absolute terms, respectively, in each cohort.41 The RII is equivalent to a relative risk measure for the poorest compared with the richest (or people with lowest compared with highest educational qualification or class), but uses mortality rates across all levels of income (and education or class) using regression. The SII is the absolute difference in mortality rates between the two extreme ends of the socioeconomic continuum. To increase the accuracy of the RIIs and SIIs we used five level groupings of income and education (that is, quintiles), and a four level grouping of occupational class, in the underlying regression models. The programme of work of the New Zealand census mortality study has approval from the Wellington Ethics Committee (reference number 98/7). ## RESULTS Table 2 shows the number of deaths and person time in each cohort. Between 1981 and 1999 there was a change in the distribution of education and occupational class. Standardised mortality rates are shown in figure 1 and table 2. Mortality declined in all groups, but socioeconomic differences in child mortality existed during all cohorts and for all socioeconomic factors (except education and occupational class in 1986–89). Table 2 Number of deaths, person time, and age and ethnicity standardised mortality rates 1–14 years both sexes for each socioeconomic variable, by cohort period Figure 1 All cause standardised mortality rates of children aged 1–14 by income, education, and occupational class 1981–1999. Mortality in all income groups declined from 1981–84 to 1996–99, although more in the high income group (41%, p trend 0.03) than the middle (26% p trend 0.08) and low income groups (35% p trend 0.03). Trends in mortality inequality by income are seen in table 3. These show an increase in the relative index of inequality from 1.5 to 1.8 between 1981–84 and 1996–99, which is of borderline significance (p trend 0.06). There is also overlap of the confidence intervals of these values. There was little, if any change, in absolute inequality over time as measured by the slope index of inequality. Table 3 Changes in inequality in all cause mortality ages 1–14 both sexes, by cohort Mortality rates declined in the no qualification and school qualification groups, but showed some variation in the post-school qualification group (possibly because of the small numbers of children in this group in the earlier cohorts). The effect measures show the presence of both relative and absolute inequality in child mortality in all cohorts, except 1986–89, but there is no clear trend over time. • Socioeconomic inequalities in child mortality exist in many countries for most causes of death. • There is increasing evidence that in developed countries adult socioeconomic inequalities are increasing, but trends over time in child inequalities remain unclear. • New Zealand has experienced significant social and economic upheaval over the 1980s and 1990s, which directly affected the social determinants of child health, particularly household income. • Overall child mortality rates fell during the 1980s and 1990s in New Zealand, but trends in socioeconomic inequalities in child mortality are unknown. The occupational class groups also each showed a decline in mortality. 
There was weaker evidence of mortality gradients within occupational class, and the 95% confidence interval of the rate ratio only excluded 1.0 in 1991–94. However, the RII and SII, which take into account the changing group size and use a greater number of groups, are more suggestive of mortality gradients by occupational class. There was no clear trend in mortality inequality by occupational class over time.

## DISCUSSION

Mortality rates decreased for children in all socioeconomic groups between 1981–84 and 1996–99. However, socioeconomic gradients in mortality were present in most cohorts and by most measures of socioeconomic position. These results are suggestive (though not conclusive) of an increase in relative (but not absolute) child mortality differences by income in New Zealand between 1981–84 and 1996–99. However, by maternal education and parental occupational class there was no clear trend in socioeconomic inequalities in child mortality.

A strength of this study, in relation to previous studies looking at trends in child mortality, is the use of direct measurement of a child’s socioeconomic position by individual and household census data, not the reliance on neighbourhood or ecological measures of socioeconomic position. Only one other study of child mortality inequalities has had access to such individual or household level data, and that study compared much earlier time periods in Sweden (1961–66 with 1981–86).8 Secondly, the four cohorts used in the comparisons over time were essentially identical with regard to study design. Thirdly, we adjusted for the changing distribution of socioeconomic factors over time using the relative and slope indices of inequality.

• Relative inequalities in child mortality by income increased from 1981–84 to 1996–99, although the change was of borderline statistical significance. Conversely, absolute inequalities were stable or even decreasing slightly over time.
• Inequalities in child mortality by maternal education and highest occupational class of the household were unstable over the 1980s and 1990s, with no clear trend.
• Reasons why relative inequalities in child mortality may have increased only by income include: income being the dominant axis of increasing socioeconomic inequalities for children; a shorter time lag between income and mortality risk as compared with class and maternal education; better measurement of income as compared with class and education.
• The diversity of trends by the different measures of socioeconomic position suggests that no one measure should be used in isolation when studying time trends.
• The theoretical and physical links between policy changes and changes in child mortality inequalities need to be explored further.

Despite using an entire population study sample there were not many deaths in this age range (744 in the 1981–84 cohort, 648, 537, and 537 respectively in the subsequent cohorts). The small number of deaths led to wide confidence intervals around the effect measures, making the interpretation of trends more difficult. The small number of deaths in the cohorts also precluded analysis of trends by subgroup (for example, by ethnicity). This is unfortunate, as there is evidence that the social and economic reforms experienced were differential in their impact on ethnic groupings of children.
For example, income declined more in Maori families compared with non-Maori.24 Analyses by specific cause of death will be presented in another paper, however the overall trends presented in this paper are similar to those for unintentional injury deaths—the most common cause of death. Our analyses are based on weighted numbers of deaths to adjust for any linkage bias during the formation of the cohorts. Workings presented elsewhere for adult mortality suggest these weights work well.35 To further ensure that these weights worked for children we performed checks and made adjustments specific for causes of child mortality, hence we are confident the results presented in this paper for children are not substantially distorted by linkage bias. ### Policy implications Policy makers need to consider inequalities in child mortality, not just adult mortality, when designing and implementing policies that have an impact on the socioeconomic determinants of health—particularly income distribution. The decline in child mortality in all socioeconomic groups between 1981 and 1984 and 1996 to 1999 despite the (largely unfavourable) changes to social determinants of health illustrates that there are other determinants and/or buffers of child mortality. For example, while injury mortality rates fell between 1981 and 1999, this was not mirrored in hospitalisation rates, which may have increased slightly.42,43 This paradox may reflect improvements in trauma care of children—that is, the health system acting as a barrier between risk factors for injury (which may include social determinants) and mortality outcome. There is some international evidence suggesting that healthcare systems can partially ameliorate the effect of social inequalities, at least for infant mortality.44 Societal changes resulting in decreased exposure to “risk” activities, such as fewer children cycling or walking to school, may also influence mortality. In addition public health interventions, such as immunisation and injury prevention, may also contribute to declining child mortality. This study has shown evidence that despite these secular falls in mortality, socioeconomic disparities exist at all points in time by education, income, and (less convincingly) occupational class. The lack of consistency in trends by these socioeconomic variables is perhaps surprising. However, there are a number of theoretical and practical reasons that might explain why an increase in inequalities in child mortality was only seen by income, even if it was a more modest effect than might have been anticipated and only for relative inequalities. From a theoretical perspective changes in child mortality inequalities might be expected to centre on income if income was the major axis of change of inequality. Table 1 illustrates the pronounced deterioration in all domains of income over this time period, although whether it was the primary axis of inequality is debatable. In addition social and economic change may result in a more immediate alteration in income in comparison with educational status, which is fixed earlier in life. Finally for pragmatic reasons income may be a more consistent measure of socioeconomic position over the time period studied, as it was more stable in social meaning and comparable because of consumer price index adjustment as well as having more levels across which to assess inequality. 
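For readers unfamiliar with the regression-based summary measures used throughout the results, the sketch below (my own, following one common convention and using toy numbers; it is not the NZCMS analysis code) shows how a slope index and relative index of inequality can be computed from grouped rates.

```python
# Slope index (SII) and relative index (RII) of inequality for grouped data:
# regress each group's rate on the midpoint of its cumulative population share
# (groups ordered from most to least advantaged) and compare the fitted rates
# at the two ends of that 0-1 scale. In practice the fit is weighted by group size.
import numpy as np

def sii_rii(pop_shares, rates):
    pop_shares = np.asarray(pop_shares, dtype=float)
    rates = np.asarray(rates, dtype=float)
    ridit = np.cumsum(pop_shares) - pop_shares / 2     # midpoint of each cumulative range
    slope, intercept = np.polyfit(ridit, rates, 1)     # simple unweighted linear fit
    rate_bottom = intercept + slope                    # fitted rate at the disadvantaged end
    rate_top = intercept                               # fitted rate at the advantaged end
    return rate_bottom - rate_top, rate_bottom / rate_top   # (SII, RII)

# Toy example: three equal-sized income groups, highest income first, rates per 100 000.
print(sii_rii([1/3, 1/3, 1/3], [20.0, 28.0, 35.0]))
```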
Regarding the lack of temporal changes in child mortality inequality seen by maternal education, it is possible that the methods used to create the maternal education variable in this study introduced misclassification that obscured any trends over time. However, using a “dominant” approach to education (that is, highest in the house) the same findings were noted (data not presented). A possible explanation for the inconsistent findings lies in the dramatic reshuffle of the education system in the past 20 years in New Zealand, which has resulted in a large increase in school and post-school qualifications in younger age groups, whose children are in this study. It is possible that the increasing homogeneity of educational qualifications over the time period studied means that this is a less discerning measure of inequality than income. Although there is some evidence to suggest that, at least in Norway for infant mortality, mothers with no formal qualifications are a more marginalised group than previously, resulting in an increase in relative inequality.45 With regards to the findings by occupational class there are a number of important points to highlight. Firstly, occupational class appears to be less important than labour force status as a determinant of child mortality in New Zealand. This is supported by the higher mortality rates seen in the non-labour force group and the weak evidence of occupational class gradients (the 95% confidence intervals included 1 in all cohorts except 1991–94). The linked nature of these data allows us to establish that the non-labour force group in this study consists largely of children who live in single parent households, where no parent was active in the labour force (at least as it is formally defined). Previous analyses of NZCMS data have shown that this excess mortality in single parent households is attributable to low income, rather than the nature of the household.1 Given the increase in the number of single parent households over time this has important implications for policy makers. Secondly, the increasing number of children with parents in this non-active group means that the trends by occupational class are inherently inaccurate. This study shows an almost doubling in the proportionate size of this non-labour force group between 1981–84 and 1996–99, and measures of trends in inequality by occupational class only use data from children within the occupational group. In adults exclusion of the economically inactive is estimated to underestimate the social class gradient by 25% for men and 60% for women.46 The exclusion of this group of children with the highest risk of mortality and the progressive increase in the size of this group would suggest that occupational class might not be an appropriate variable to be monitoring temporal trends in socioeconomic inequalities in child mortality in a population. In theory, selection of a measure of socioeconomic position should be based on prior conceptualisation of the pathways between socioeconomic position and the health outcome of interest.47 Although pragmatism often prevails in selection of a measure of socioeconomic position, the differing trends and inherent problems within each measure of socioeconomic position in this study suggest that to adequately monitor socioeconomic inequalities in child mortality, measurement of multiple dimensions of socioeconomic position is required. 
Considering the increase in relative inequalities in mortality by income, there are innumerable pathways by which changes in social determinants in child mortality may have acted. For example, it is possible that decreases in absolute income could place children in injury promoting environments (that is, unsafe cars, unfenced sections, and unsupervised playing on streets). However, while specific pathways offer explanations, socioeconomic gradients occur in multiple causes of child mortality.6,8 Explanations of changes in inequalities in mortality must both encompass the underlying universal process and the micro-level pathways to each of these diverse causes of death and ill health. This explanation and theory is where ongoing efforts need to be focused.

## STATISTICS NEW ZEALAND SECURITY STATEMENT

The New Zealand census mortality study (NZCMS) is a study of the relation between socioeconomic factors and mortality in New Zealand, based on the integration of anonymised population census data from Statistics New Zealand and mortality data from the New Zealand Health Information Service. The project was approved by Statistics New Zealand as a Data Laboratory project under the Microdata Access Protocols in 1997. The datasets created by the integration process are covered by the Statistics Act and can be used for statistical purposes only. Only approved researchers who have signed Statistics New Zealand’s declaration of secrecy can access the integrated data in the Data Laboratory. (A full security statement is in a technical report at http://www.wnmeds.ac.nz/nzcms-info.html.) For further information about confidentiality matters in regard to this study please contact Statistics New Zealand.

## Footnotes

• Funding: Caroline Shaw acknowledges salary support from the Australasian Faculty of Public Health Medicine during the course of this research. The New Zealand census-mortality study was initially funded by the Health Research Council of New Zealand. The Ministry of Health New Zealand is now the primary funding agency for this study.
• Competing interests: none.
• Ethical approval: the programme of work of the New Zealand census mortality study has approval from the Wellington Ethics Committee (reference number 98/7).
### Zienkiewicz-Zhu error estimator

The Zienkiewicz-Zhu error estimator [100], [101] tries to estimate the error made by the finite element discretization. To do so, it calculates for each node an improved stress and defines the error as the difference between this stress and the one calculated by the standard finite element procedure.

The stress obtained in the nodes using the standard finite element procedure is an extrapolation of the stresses at the integration points [19]. Indeed, the basic unknowns in mechanical calculations are the displacements. Differentiating the displacements yields the strains, which can be converted into stresses by means of the appropriate material law. Due to the numerical integration used to obtain the stiffness coefficients, the strains and stresses are most accurate at the integration points. The standard finite element procedure extrapolates these integration point values to the nodes. The way this extrapolation is done depends on the kind of element [19]. Usually, a node belongs to more than one element. The standard procedure averages the stress values obtained from each element to which the node belongs.

To determine a more accurate stress value at the nodes, the Zienkiewicz-Zhu procedure starts from the stresses at the reduced integration points. This applies to quadratic elements only, since only for these elements does a reduced integration procedure exist (for element types other than C3D20R the ordinary integration points are taken instead). The reduced integration points are superconvergent points, i.e. points at which the stress is an order of magnitude more accurate than in any other point within the element [7]. To improve the stress at a node, an element patch is defined, usually consisting of all elements to which the node belongs. However, at boundaries and for tetrahedral elements this patch can contain other elements too. Now, a polynomial function is defined consisting of the monomials used for the shape function of the elements in question. Again, to improve the accuracy, other monomials may be considered as well. The coefficients of the polynomial are defined such that the polynomial matches the stress as well as possible at the reduced integration points of the patch (in a least squares sense). Finally, an improved stress at the node is obtained by evaluating this polynomial. This is done for all stress components separately. For more details on the implementation in CalculiX the user is referred to [53].

In CalculiX one can obtain the improved CalculiX-Zhu stress by selecting ZZS underneath the *EL FILE keyword card. It is available for tetrahedral and hexahedral elements. If a node belongs to tetrahedral, hexahedral and any other type of elements, only the hexahedral elements are used to define the improved stress; if the node does not belong to hexahedral elements, the tetrahedral elements are used, if any.
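To make the patch-recovery idea concrete, here is a deliberately simplified sketch (my own, not CalculiX source code, and restricted to one dimension with a linear polynomial): given stress values at a patch's sampling points, fit the monomial coefficients by least squares and evaluate the fitted polynomial at the node.

```python
# Simplified 1-D Zienkiewicz-Zhu style recovery: fit sigma(x) ~ a0 + a1*x by least squares
# to the stresses at the patch's sampling points, then evaluate the fit at the node.
import numpy as np

x_points = np.array([0.21, 0.79, 1.21, 1.79])   # sampling point coordinates in the patch (made up)
sigma    = np.array([10.5, 12.1, 13.8, 15.6])   # stresses computed at those points (made up)

A = np.column_stack([np.ones_like(x_points), x_points])   # monomials 1 and x
coeffs, *_ = np.linalg.lstsq(A, sigma, rcond=None)         # least-squares coefficients a0, a1

x_node = 1.0                                               # the node shared by the patch elements
print(round(coeffs[0] + coeffs[1] * x_node, 3))            # improved stress estimate at the node
```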
# Deterministic tensor completion with hypergraph expanders

23 Oct 2019

We provide a novel analysis of low-rank tensor completion based on hypergraph expanders. As a proxy for rank, we minimize the max-quasinorm of the tensor, which generalizes the max-norm for matrices. Our analysis is deterministic and shows that the number of samples required to approximately recover an order-$t$ tensor with at most $n$ entries per dimension is linear in $n$, under the assumption that the rank and order of the tensor are $O(1)$. As steps in our proof, we find a new expander mixing lemma for a $t$-partite, $t$-uniform regular hypergraph model, and prove several new properties about tensor max-quasinorm. To the best of our knowledge, this is the first deterministic analysis of tensor completion. We develop a practical algorithm that solves a relaxed version of the max-quasinorm minimization problem, and we demonstrate its efficacy with numerical experiments.
Standard deviation is a measure of how spread out the numbers in a data set are: "deviation" just means distance from the normal, that is, from the mean. It is a popular measure of variability because it is expressed in the original units of the data. A small standard deviation means the values cluster tightly around the mean; a large one means they vary widely.

A population is the entire group of subjects we are interested in; a sample is just a sub-section of that population. The population standard deviation takes every data point into account (all N of them), while the sample standard deviation is a statistic estimated from the sample alone. Usually we are interested in the population but only have a sample: say you are a watermelon farmer studying how dense the seeds are and can only cut open a few melons, or a teacher with the test scores of only 5 students from a large class. In that case we calculate the sample standard deviation and use it to infer the spread of the entire population. Because a sample tends not to include the population's extremes, dividing by n would give a biased estimate that consistently underestimates variability, and the calculated value would tend to be lower than the real standard deviation of the population. The fix is Bessel's correction: divide by n - 1 instead of n. This corrects the bias in the estimation of the population variance and partially corrects the bias in the estimation of the population standard deviation; the corrected estimator still has noticeable bias for very small samples (n < 10), but it is a much better estimate than the uncorrected version, and it is what is generally meant by "the sample standard deviation."

The sample standard deviation is

$$s = \sqrt{\frac{\sum (X - \bar{X})^2}{n - 1}}$$

where X takes on each value in the data set, $\bar{X}$ (the sample mean) is the average of those values, and n is the sample size. For grouped data, the population version is often written $\sigma = \sqrt{\sum f D^2 / N}$, where f is the frequency of each observation, D = X_i - mean is its deviation, and N is the total frequency.

If you are ever asked to do a problem like this on a test, it is often easier to remember a step-by-step process than to memorize the formula (a table with one column for the values and one for the squared differences helps keep everything organized):

1. Calculate the mean (average) of the data set.
2. Subtract the mean from each data value and list the differences.
3. Square each of the differences and make a list of the squares.
4. Add the squares together.
5. Divide the sum by one less than the number of data values (n - 1). This is the sample variance.
6. Take the square root of that quotient. This is the sample standard deviation.

Example 1. Suppose you are given the data set 1, 2, 2, 4, 6. The mean is (1 + 2 + 2 + 4 + 6)/5 = 15/5 = 3. Subtracting 3 from each value gives -2, -1, -1, 1, 3; the squares are 4, 1, 1, 1, 9, and they add up to 16. We started with five values, so we divide by 5 - 1 = 4, giving 16/4 = 4. The sample standard deviation is the square root of 4, which is 2.

Example 2. Five students were surveyed about how many pencils they use each week and gave the responses 3, 2, 5, 6, 4. The sample mean is (3 + 2 + 5 + 6 + 4)/5 = 4, and the squared deviations are (3 - 4)² = 1, (2 - 4)² = 4, (5 - 4)² = 1, (6 - 4)² = 4, (4 - 4)² = 0, which sum to 10. Dividing by 4 gives 2.5, so the sample standard deviation is √2.5 ≈ 1.58.

Example 3. A survey of 10 people in a large office recorded the ages 23, 27, 33, 28, 21, 24, 36, 32, 29, 25. The sample mean is 278/10 = 27.8, and

$$s = \sqrt{\frac{23.04 + 0.64 + 27.04 + 0.04 + 46.24 + 14.44 + 67.24 + 17.64 + 1.44 + 7.84}{10 - 1}} \approx 4.78.$$

Example 4. Five pirates have 4, 2, 5, 8, 6 gold coins. Treating the pirates as a sample, the mean is 5, the squared deviations are 1, 9, 0, 9, 1 (sum 20), the sample variance is 20/4 = 5, and the sample standard deviation is √5 ≈ 2.24. (In the classic fulmar example, N = 6 females give a denominator of 6 - 1 = 5, a sample variance of 7.5, and a sample standard deviation of √7.5 ≈ 2.7386.) For more practice, try the sample of measurements (in inches) 50, 47, 52, 46, 45; you should get √8.5 ≈ 2.92.

Your calculator may have a built-in standard deviation button, typically labelled $s_x$. In Excel, STDEV and STDEV.S (the S stands for Sample) estimate the sample standard deviation, while STDEVP, STDEV.P and STDEVPA treat the data as an entire population; all of them take number1 as a required argument and up to 254 further optional number arguments. For instance, =STDEV.S(A2:A10) returns the sample standard deviation of one range, =STDEV.P(C5:C14) and =STDEV.S(C5:C14) give the population and sample values of the same range, and =STDEV.P(B3:B14, D3:D14, F3:F14) combines several ranges as a population (returning 2,484.05 in the worked Excel example). You can also use Insert Function (fx) on the Formulas tab and select STDEV.S from the Statistical category; a dialog box will appear in which you choose the data range.

Finally, keep the standard error of the mean (SEM) distinct from the standard deviation itself. The sample mean is a random variable: it has its own probability distribution, with a mean and a standard deviation of its own. The SEM, an approximation of that spread estimated from the sample, equals the sample standard deviation divided by the square root of the sample size; a high standard error corresponds to a wider spread of the data for the sample at hand.
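To make the six steps concrete outside of a spreadsheet, here is a minimal Python sketch of the calculation, added as an illustration (it is not part of any of the sources quoted above). Python's standard library also ships statistics.stdev and statistics.pstdev, which return the same sample and population values.

```python
import math

def sample_std(data):
    """Sample standard deviation: divide by n - 1 (Bessel's correction)."""
    n = len(data)
    mean = sum(data) / n
    squared_devs = [(x - mean) ** 2 for x in data]   # step 2 and 3
    return math.sqrt(sum(squared_devs) / (n - 1))    # steps 4, 5 and 6

def population_std(data):
    """Population standard deviation: divide by N."""
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / n)

data = [1, 2, 2, 4, 6]
print(sample_std(data))       # 2.0, matching Example 1 above
print(population_std(data))   # about 1.79, smaller because it divides by 5
```

The only difference between the two functions is the denominator, which is exactly the Bessel's correction discussed above.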
## How to tell to the user that a web app is optimized for a specific browser? My web app is voluntary optimized for Chrome. It works on IE and Firefox but the user can find some design issues with old browsers like IE 8. It’s a professional app’ on an intranet and a some users don’t use Chrome. I’d like to display on the Home page something like “This site is optimized for Chrome” and may be a link to https://www.google.com/chrome/ Is-it a good idea to do that ? What is the “best” way to inform the user this app’ is optimized for a specific browser ? Should I encourage the user to download Chrome ? ## Entire Server Lags at specific time each day and uses 90%+ CPU (but not exclusively at that time) Good Afternoon, This may be lengthy as it’s an issue I’ve been fighting for a long time but I desperately need help and I’m hoping someo… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1780730&goto=newpost ## How to get all the packages that “conflict” with a specific package? I know you can get the metadata of a package using “apt-get depends “. I would like to find all the packages that “Conflict” . Is this possible? ## Looking for a specific quote about keys and transparent encryption implementation I’m struggling to find a quote advising that we should be comfortable in sharing details of encryption methods and implementation with would-be attackers and the system still remain secure. The idea is that the keys are what we should be protecting as opposed to some obscurity in our code and approach. Does anybody have the quote I’m talking about? ## Probability of rolling a specific combination of results with nd6 So far I was able to figure out most things I needed to do in anydice on my own and with help of various posts in forums, however in this case my math knowledge fails me. How do I calculate the probability to roll for example “1,1,1,1,5,6” with 7d6 and such? And how do I code that in anydice? ## How can I display a specific category on Front-Page Please how can I display a specific category on Front-Page page, I want to display music category on my WordPress front page at least 3post on Front-Page, I don’t know how I can do this ## Maximum flow with maximum flow on specific edge I am trying to solve the following problem: We’re given a network flow $$(V,E,c,s,t)$$ and an edge $$(u,v)$$. We have to provide an algorithm that computes the maximum flow which has maximum flow on $$(u,v)$$ also. The idea that I had was, computing max flow and on the residual graph trying to compute a cycle that starts from $$s$$ and passes through the edge $$(u,v)$$ and trying to increase $$(u,v)$$‘s flow while decreasing the flow from other edges. In other words, trying to maximize the flow of $$(u,v)$$ while preserving the maximum flow value. But I feel like there’s a simpler way. Can someone point me in the right direction? Is my thinking correct? If not how should I approach the problem? Any help is appreciated! Thanks! ## Automatically move files from one folder to a specific folder on SharePoint I am investigating for a possibility for SharePoint Online to automatically and dynamically move files into specific folder. We have a folder on a SharePoint library where files were exported into from another source. These files are labelled with account number and date of the report. On the other side of things, each of these accounts have a folder on SharePoint with the account number as a folder name within the same SharePoint site. 
Is it possible for SharePoint (SharePoint Online) to automatically move these files into corresponding folders? I am thinking about SharePoint Workflows, but SharePoint Designer doesn't seem to have an "out of the box" solution, or perhaps I am missing something. Can someone help? Many thanks.

## How do I install a specific patch number version of Python?

This question here tells me how to install a specific version, i.e. x.y. I would like to install a specific version of Python x.y.z (e.g. 3.6.3) while the default in Ubuntu 18.04 is Python 3.6.8. Is there a way to install a specific patch number of a Python version? I tried the selected answer in the related question and I cannot install it properly.

```
sudo apt install python3.6.3
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python3.6.3
E: Couldn't find any package by glob 'python3.6.3'
E: Couldn't find any package by regex 'python3.6.3'
```

I am able to install a particular version, just not a patch number:

```
sudo apt install python3.5
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libpython3.5-minimal libpython3.5-stdlib python3.5-minimal
Suggested packages:
  python3.5-venv python3.5-doc binfmt-support
The following NEW packages will be installed:
  libpython3.5-minimal libpython3.5-stdlib python3.5 python3.5-minimal
0 upgraded, 4 newly installed, 0 to remove and 501 not upgraded.
Need to get 4,113 kB of archives.
After this operation, 22.5 MB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
```

```
sudo apt install python3.5.2
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python3.5.2
E: Couldn't find any package by glob 'python3.5.2'
E: Couldn't find any package by regex 'python3.5.2'
```

## DOD PKI CA access outside of those specific certs?

I work for the government and mostly everyone installs the DOD PKI Certs on their private computers to be able to check email, admin stuff, etc. I'm getting into networking, security, white hat hacking more and more and from my research, if I'm not mistaken, it sounds like if you install someone's certs they have access to all data that passes through HTTPS. I was always under the assumption that they could only access that data if you were accessing an application that they controlled (e.g. government email). If the case is that they can still MITM attack everything once you've installed the certs then this makes me fairly uneasy. Can someone explain how this is possible? Thanks!
### Linear Algebra and its Applications, 5th edition, David C. Lay, Steven R. Lay and Judi J. McDonald

# Question: Let A be the n × n matrix with 0's on the main diagonal and 1's everywhere else. For an arbitrary vector $\vec{b}$ in $\mathbb{R}^n$, solve the linear system $A\vec{x} = \vec{b}$, expressing the components $x_1, \dots, x_n$ of $\vec{x}$ in terms of the components of $\vec{b}$. See Exercise 69 for the case n = 3.

The solution of the linear system $A\vec{x} = \vec{b}$ is
$$x_i = \frac{b_1 + \dots + b_n}{n-1} - b_i, \qquad i = 1, 2, \dots, n.$$

## Step 1: Consider the case n = 3.

If A is an n × m matrix with row vectors $\vec{\omega}_1, \dots, \vec{\omega}_n$ and $\vec{x}$ is a vector in $\mathbb{R}^m$, then
$$A\vec{x} = \begin{bmatrix} \vec{\omega}_1 \\ \vdots \\ \vec{\omega}_n \end{bmatrix}\vec{x} = \begin{bmatrix} \vec{\omega}_1 \cdot \vec{x} \\ \vdots \\ \vec{\omega}_n \cdot \vec{x} \end{bmatrix}.$$

For n = 3 the system $A\vec{x} = \vec{b}$ reads
$$\begin{cases} y + z = a \\ x + z = b \\ x + y = c \end{cases}$$
with augmented matrix
$$\left[\begin{array}{ccc|c} 0 & 1 & 1 & a \\ 1 & 0 & 1 & b \\ 1 & 1 & 0 & c \end{array}\right].$$
Its solution is
$$x = \frac{b+c-a}{2}, \quad y = \frac{a+c-b}{2}, \quad z = \frac{a+b-c}{2}.$$

## Step 2: Compute the general system.

For general n, the i-th equation of $A\vec{x} = \vec{b}$ contains every unknown except $x_i$:
$$x_1 + \dots + x_{i-1} + x_{i+1} + \dots + x_n = b_i, \qquad i = 1, \dots, n.$$
Adding all n equations, each $x_j$ appears n - 1 times, so
$$(n-1)(x_1 + \dots + x_n) = b_1 + \dots + b_n, \qquad x_1 + \dots + x_n = \frac{b_1 + \dots + b_n}{n-1}.$$
Subtracting the i-th equation from this total leaves only $x_i$, so
$$x_i = \frac{b_1 + \dots + b_n}{n-1} - b_i, \qquad i = 1, 2, \dots, n.$$
Hence this is the solution of the linear system $A\vec{x} = \vec{b}$.
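To sanity-check the closed form, here is a small numerical sketch (my own illustration, not part of the textbook solution; it assumes numpy is available). It builds the matrix with zeros on the diagonal and ones elsewhere for a few sizes and random right-hand sides, and compares the formula against a generic solver.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (3, 5, 8):
    A = np.ones((n, n)) - np.eye(n)      # 0's on the diagonal, 1's elsewhere
    b = rng.standard_normal(n)
    x_formula = b.sum() / (n - 1) - b    # x_i = (b_1 + ... + b_n)/(n-1) - b_i
    x_solver = np.linalg.solve(A, b)     # generic linear solve for comparison
    assert np.allclose(x_formula, x_solver)
print("formula matches np.linalg.solve for all tested sizes")
```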
# What is the major product of the reaction of a geminal dibromide with silver nitrate? [closed] $$\ce{Ag+/H2O}$$ is used for peroxide formation, then followed by hydrolysis to yield an alcohol. But in the question it is a geminal dihalide and I don't know if the mechanism follows through a carbocation intermediate. According to the solution it should go through carbocation rearrangement, because the given answer is (b) which is obtained from ring expansion. • cant relate reimer tiemann reaction with this – raj pattnaik Jun 8 '19 at 17:28 • check ring exoansion of gem dihalo molecule in the same post. – Chakravarthy Kalyan Jun 9 '19 at 2:36 • @ChakravarthyKalyan - I'm not an organic chemist, but that carbene/carbanion mechanism looks very different to the carbocation mechanism here. – Karsten Theis Jun 9 '19 at 13:21 • @ Karsten Theis ,yes mechanism is via carbocation assisted by Silver ion.One above is base catalysed via carbanion.I have answered this to give relevant mechanism. You are pretty good in understanding reaction mechanism.Pl.check my answer below. – Chakravarthy Kalyan Jun 9 '19 at 13:42 This is a specific example of the Silver Nitrate Test, one of the experiments used in organic chemistry teaching laboratories. Usually, the Silver Nitrate Test allows for the identification of alkyl halides by observing them in an alcoholic silver nitrate environment. For change, here used aqueous environment instead. In general, You will test the reactivity of several alkyl halides in a $$\mathrm{S_N1}$$ reaction. The silver ion (a halophilic reactant) complexes with the halide and precipitates out of solution first (note that silver nitrate in aqueous or alcoholic solution promotes ionization of the alkyl halide), and resulting carbocation react with ethanol to form ethyl ethers (in your case, it forms alcohol with water). The rate of the silver halide salt precipitation is characteristic of different types of alkyl halides ($$3^\circ \gt 2^\circ \gt 1^\circ$$). When silver nitrate is used with $$1^\circ$$- or $$2^\circ$$-alkyl halides, a rearrangement may occur before the product formation stage. For example: $$\ce{(CH3)3CCH2-Br + H2O + AgNO3 -> (CH3)2C(OH)CH2CH3 + AgBr + HNO3}$$ Rearrangements will only occur when the resulting carbocation is more stable than the initial carbocation. For example, see a relevant reaction below (Ref.1): In your case, $$\ce{Ag+}$$ ion act as a Lewis acid and abstract one of bromide to leave bromocyclopropyl carbocation which is also comparatively stable cation (see the ring expansion in the diagram). The driving force here to rearrange the structure is releasing the strain energy, which gives you a relatively stable allyl cylohexenyl carbocation. It would then react with a water molecule to give the compound 2 as the final product. The same result was obtain from this reaction and has been reported (Ref.2), abstract of which states that: Several ring expansion products carrying vinylic bromo functionality were synthesized by opening of the geminal dibromobicyclo[n.1.0]alkanes ring. Dibromocarbene was formed from bromoform and potassium tert-butoxide in hexane. Its reaction with various cyclic alkenes was the resultant dibromobicyclo[n.1.0]alkanes. Then, opening was performed using $$\ce{AgNO3}$$ in various solvent systems, such as acetic acid/DMSO, acetic acid/DMF, $$\ce{CH3OH}$$/acetone, and $$\ce{H2O}$$/DMF. References: 1. Russell J. Hewitta, Joanne E. 
Harvey, "Synthesis of C-furanosides from a D-glucal-derived cyclopropane through a ring-expansion/ring-contraction sequence," Chem. Commun. 2011, 47(1), 421–423 (DOI: 10.1039/C0CC02244F). 2. Mesut Boz, Hafize Çalişkan, Ömer Zaim, "Silver Ion-Assisted Ring Expansions in Different Solvent Systems," Turk. J. Chem. 2009, 33(1), 73–78 (DOI: 10.3906/kim-0807-3). • @MathewMahindaratne, quoting from your post, "The driving force here to rearrange the structure is releasing the strain energy" is a logical explanation for the ring expansion from 5 to 6. In the example in Ref. 1, the ring contraction from 7 to 5 seems to be the opposite. Any mechanism for this change would help. – Chakravarthy Kalyan Jun 9 '19 at 5:52 • The ring expansion I mentioned is of the cyclopropyl ring; ring opening gives the more stable cyclohexene instead. In the Turk. J. Chem. examples, ring opening gives an even larger ring, because releasing strain energy is important. – Mathew Mahindaratne Jun 9 '19 at 19:59

The question is summarised in the paper "Silver Ion-Assisted Ring Expansions in Different Solvent Systems" (Ref. 1). The same reaction has been reported (Ref. 2), and the paper states that:

Opening 6,6-dibromobicyclo[3.1.0]hexane (7) in a methanol/acetone solvent system resulted in 1-bromo-6-methoxy-1-cyclohexene (9) and 1,6-dibromo-1-cyclohexene (10). Another solvent system (H2O/DMF) was also used for ring-opening reactions of the dibromocyclopropane ring that was synthesized from branched alkenes and only gave the alcohol derivatives, 2-bromo-1-methylcyclohept-2-en-1-ol (13) and 2-bromo-3-methylcyclohept-2-en-1-ol (14).

The possible mechanism for this ring expansion is shown below.

References

1. Mesut Boz, Hafize Çalişkan, Ömer Zaim, "Silver Ion-Assisted Ring Expansions in Different Solvent Systems," Turk. J. Chem. 2009, 33(1), 73–78.
2. Mesut Boz, Hafize Çalişkan, Ömer Zaim, "Silver Ion-Assisted Ring Expansions in Different Solvent Systems," Turk. J. Chem. 2009, 33(1), 73–78.
# Measurement of gamma using B -> K pi pi and B -> K K Kbar decays

Bhubanjyoti Bhattacharya, David London and Maxime Imbeault · Published 24 June 2013 · arXiv: High Energy Physics - Phenomenology

The BaBar measurements of the Dalitz plots for B0 -> K+ pi0 pi-, B0 -> K0 pi+ pi-, B+ -> K+ pi+ pi-, B0 -> K+ K0 K-, and B0 -> K0 K0 Kbar0 decays are used to cleanly extract the weak phase gamma. We find four possible solutions: $(31^{+2}_{-3})^\circ$, $(77 \pm 3)^\circ$, $(258^{+4}_{-3})^\circ$, and $(315^{+3}_{-2})^\circ$. One solution -- $(77 \pm 3)^\circ$ -- is consistent with the SM. Its error, which includes leading-order flavor-SU(3) breaking, is far smaller than that obtained using two…
## No LaTeX header contained?

kile · Pyrates · 2005-05-02 – 2012-07-03

• Pyrates - 2005-05-02

Hi there! Kile complains. The error message is "this document doesn't contain a latex header. It should probably be used with a master document. Proceed anyway?". Of course, this is a stand-alone document. When I started using Kile recently, I had to change the option "LaTeX" in Tools, so that Kile wouldn't expect every document to end in ".tex" (now I have ".ltx", that's what I use, but it's strange anyhow, I should be able to use any ending, right?). Could it have something to do with this? The document compiles fine... Cheers Philipp

• Jeroen Wijnhout - 2005-05-05

Does the file have a LaTeX header like this: \documentclass{article} etc., or do you include the header using \input{...}? In the latter case you should disable "check for latex header" in Settings->Configure Kile->Build->LaTeX. best Jeroen

• Pyrates - 2005-05-06

No, it has a proper "documentclass{ }" in the first line. Dunno what else I should check... Cheers Philipp
# How To Find The Least Common Multiple Of Fractions

To find the least common multiple of a set of fractions, you need to work out the LCM of the numerators and the HCF of the denominators:

LCM of fractions = LCM of numerators / HCF of denominators

That's it: find the LCM of the numerators and the HCF of the denominators individually, then apply them in the formula. The HCF (highest common factor) of two numbers is the largest number that is a factor of both. When we need to find a common denominator for a given set of fractions, the LCM of the denominators is called the least common denominator (LCD); the LCD must be determined before two fractions can be added or subtracted, and in practice it is the first common multiple we find between the denominators.

### Relating LCM And HCF

For whole numbers a and b, the LCM and HCF are related by the formulas:

LCM(a, b) = a × b / HCF(a, b)
HCF(a, b) = a × b / LCM(a, b)

### A Worked Example With Prime Factorisation

Find the LCM of 150, 210, 375. Writing down the prime factorisation of each number:

150 = $$5 \times 5 \times 3 \times 2$$
210 = $$5 \times 2 \times 7 \times 3$$
375 = $$5 \times 5 \times 5 \times 3$$

Taking each prime to its highest power gives LCM = $$2 \times 3 \times 5^3 \times 7 = 5250$$.

### Finding The Least Common Multiple In Class

Use these worksheets when teaching students to calculate the least common multiple, or LCM, of a group of numbers. In each problem, one bottom number is a multiple of the other (for example, the multiples of 5 are 5, 10, 15, 20, 25, ...). Record the results of comparisons with symbols >, =, <, and justify the conclusions, e.g., by using a visual fraction model.
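As a quick illustration of the numerator/denominator rule above, here is a small Python sketch (my own example, not from the original page; the helper names lcm and lcm_of_fractions are made up for this illustration).

```python
from fractions import Fraction
from math import gcd

def lcm(a, b):
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

def lcm_of_fractions(fracs):
    """LCM of fractions = LCM of the numerators / HCF (GCD) of the denominators."""
    top = fracs[0].numerator
    bottom = fracs[0].denominator
    for f in fracs[1:]:
        top = lcm(top, f.numerator)
        bottom = gcd(bottom, f.denominator)
    return Fraction(top, bottom)

print(lcm_of_fractions([Fraction(1, 2), Fraction(3, 4)]))   # 3/2
print(lcm_of_fractions([Fraction(2, 3), Fraction(4, 9)]))   # 4/3
```

In both printed cases, the result divided by each input fraction is a whole number, which is exactly what "common multiple" means for fractions.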
# How to refer to Support Staff

Some call them "Sirs"
Some call them "Masters"
What is the proper term?

Note by Tim Ye · 5 years, 1 month ago

Sort by:

Interesting question Tim. The staff of Brilliant receive varied salutations from the diverse constituency on this site. Truthfully we are not very picky. We care much more about being addressed respectfully and amiably than we care about any kind of formal address. We do not mind being called "Sir," but it is also not necessary. Perhaps when we have full-time support staff we will give them clever titles, and they will have a very specific way of being addressed. If you have any ideas let me know. For now, feel free to address the staff by our names. E.g. I am Peter. When contacting [email protected] ask for Bradan, and if there is a need to address us generally, just say, "hello out there" or "to whom it may concern," or really whatever you feel like.

Staff - 5 years, 1 month ago
No.10987403

Hello /sci/, I am interning at a middle school as a sub teacher and I want to explain to 7th graders how to derive the quadratic formula. What do you think? Is this clear enough for them to get the main idea? (It is an AP algebra class.) I made this derivation myself since a typical derivation you find on the web is too cryptic and doesn't explain the main idea well.

Problem: derive the solution of the quadratic equation $ax^2 + bx + c = 0.$

Our derivation is based on the identity:
$(a+b)^2 = a^2 + 2ab + b^2 \qquad (1)$

Any quadratic equation can be transformed to the following form:
$(x + m)^2 = d \qquad (2)$

Why is it important? Because m and d are constants and we have to solve for just one unknown variable x by taking the square root on both sides:
$x + m = \pm \sqrt{d}$
$x = -m \pm \sqrt{d}$

Let's consider a simple example
$x^2 + 6x + 5 = 0$
How do we convert it to the form of identity (1)? Well, we can first re-write it as
$x^2 + 2\cdot x\cdot 3 + 5= 0$
Notice the second term looks like the second term in (1), so b=3. But obviously $b^2$ then needs to be 9. So we can add 4 to both sides of the equation:
$x^2 + 2\cdot x\cdot 3 + 5 + 4 = 4$
$x^2 + 2\cdot x\cdot 3 + 3^2 = 4$
$(x + 3)^2 = 4$
This is the same as form (2). We can now solve it:
$x +3 = \pm \sqrt{4}$
$x = -3 \pm 2$
$x=-1, x=-5$

Now we can apply the same technique to solving the general equation $ax^2 + bx +c = 0$. We first divide by a:
$x^2 + \dfrac{bx}{a} + \dfrac{c}{a} = 0$
Now we take $\dot{b} = \dfrac{b}{a}, \dot{c} = \dfrac{c}{a}$, and the equation becomes:
$x^2 + \dot{b}x + \dot{c} = 0$
Let's add a certain unknown number $d$ to both sides:
$x^2 + \dot{b}x + \dot{c} + d = d$
[cont'd]
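The derivation above is easy to sanity-check symbolically. Here is a short sketch (my own addition, not part of the original post; it assumes sympy is installed) that confirms the completed-square form matches the worked example and that completing the square on the general monic equation reproduces the quadratic formula.

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

# Worked example: x^2 + 6x + 5 = 0  ->  (x + 3)^2 = 4  ->  x = -1, -5
assert sp.expand((x + 3)**2 - 4) == sp.expand(x**2 + 6*x + 5)
assert set(sp.solve(x**2 + 6*x + 5, x)) == {-5, -1}

# General case: completing the square on x^2 + (b/a)x + (c/a) = 0
m = b / (2*a)                 # the "m" in (x + m)^2 = d
d = m**2 - c/a                # the "d" that balances the added constant
difference = sp.expand((x + m)**2 - d) - sp.expand(x**2 + (b/a)*x + c/a)
assert sp.simplify(difference) == 0

# Solving the general quadratic prints the familiar quadratic formula roots
print(sp.solve(a*x**2 + b*x + c, x))
```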
# how to solve this inequality • July 31st 2009, 09:50 PM marie7 how to solve this inequality [4 / (1 - x)] < 2 • July 31st 2009, 10:06 PM alexmahone Quote: Originally Posted by marie7 [4 / (1 - x)] < 2 $\frac{4}{1-x} < 2$ $\frac{2}{1-x} < 1$ $\frac{2}{1-x}-1<0$ $\frac{1+x}{1-x} < 0$ Case 1: $1+x>0$ and $1-x<0$ $x>-1$ and $x>1$ ie $x>1$ Case 2: $1+x<0$ and $1-x>0$ $x<-1$ and $x<1$ ie $x<-1$
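Combining the two cases, the solution set is x < -1 or x > 1. A quick numerical check (my own sketch, not part of the thread) confirms this by testing the inequality on a grid of points:

```python
import numpy as np

xs = np.linspace(-5, 5, 2001)
xs = xs[xs != 1]                          # exclude x = 1, where 4/(1-x) is undefined
holds = 4 / (1 - xs) < 2                  # where the original inequality is true
predicted = (xs < -1) | (xs > 1)          # the case analysis: x < -1 or x > 1
print(np.array_equal(holds, predicted))   # True
```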
# American Institute of Mathematical Sciences December  2020, 10(4): 669-698. doi: 10.3934/mcrf.2020015 ## Uniform indirect boundary controllability of semi-discrete $1$-$d$ coupled wave equations Université Cadi Ayyad, Faculté des Sciences Semlalia, LMDP, UMMISCO (IRD- UPMC), Marrakech 40000, B.P. 2390, Maroc * Corresponding author: Abdeladim El Akri The first author would like to thank S. Micu for fruitful discussions on several parts of this paper during his visit to Craiova University Received  January 2019 Revised  October 2019 Published  December 2019 In this paper, we treat the problem of uniform exact boundary controllability for the finite-difference space semi-discretization of the $1$-$d$ coupled wave equations with a control acting only in one equation. First, we show how, after filtering the high frequencies of the discrete initial data in an appropriate way, we can construct a sequence of uniformly (with respect to the mesh size) bounded controls. Thus, we prove that the weak limit of the aforementioned sequence is a control for the continuous system. The proof of our results is based on the moment method and on the construction of an explicit biorthogonal sequence. Citation: Abdeladim El Akri, Lahcen Maniar. Uniform indirect boundary controllability of semi-discrete $1$-$d$ coupled wave equations. Mathematical Control & Related Fields, 2020, 10 (4) : 669-698. doi: 10.3934/mcrf.2020015 ##### References: [1] F. Alabau-Boussouira, A two-level energy method for indirect boundary observability and controllability of weakly coupled hyperbolic systems, SIAM J. Control Optim., 42 (2003), 871-906.  doi: 10.1137/S0363012902402608.  Google Scholar [2] F. Alabau-Boussouira and M. Léautaud, Indirect controllability of locally coupled wave-type systems and applications, J. Math. Pures Appl., 99 (2013), 544-576.  doi: 10.1016/j.matpur.2012.09.012.  Google Scholar [3] D. S. Almeida Júnior, A. J. A. Ramos and M. L. Santos, Observability inequality for the finite-difference semi-discretization of the 1-d coupled wave equations, Adv. Comput. Math., 41 (2015), 105-130.  doi: 10.1007/s10444-014-9351-6.  Google Scholar [4] S. Avdonin, A. Choque Rivero and L. de Teresa, Exact boundary controllability of coupled hyperbolic equations, Int. J. Appl. Math. Comput. Sci., 23 (2013), 701-709.  doi: 10.2478/amcs-2013-0052.  Google Scholar [5] H. Bouslous, H. El Boujaoui and L. Maniar, Uniform boundary stabilization for the finite difference semi-discretization of 2-D wave equation, Afr. Mat., 25 (2014), 623-643.  doi: 10.1007/s13370-013-0141-y.  Google Scholar [6] I. F. Bugariu, S. Micu and I. Rovenţa, Approximation of the controls for the beam equation with vanishing viscosity, Math. Comp., 85 (2016), 2259-2303.  doi: 10.1090/mcom/3064.  Google Scholar [7] C. Castro and S. Micu, Boundary controllability of a linear semi-discrete 1-D wave equation derived from a mixed finite element method, Numer. Math., 102 (2006), 413-462.  doi: 10.1007/s00211-005-0651-0.  Google Scholar [8] A. El Akri and L. Maniar, Indirect boundary observability of semi-discrete coupled wave equations, Electron. J. Differential Equations, 2018 (2018), 27 pp.  Google Scholar [9] H. El Boujaoui, H. Bouslous and L. Maniar, Uniform boundary stabilization for the finite difference discretization of the 1-D wave equation, Afr. Mat., 27 (2016), 1239-1262.  doi: 10.1007/s13370-016-0406-3.  Google Scholar [10] S. Ervedoza and E. 
Zuazua, Numerical Approximation of Exact Controls for Waves, Springer Briefs in Mathematics, Springer, New York, 2013. doi: 10.1007/978-1-4614-5808-1.  Google Scholar [11] H. O. Fattorini, Estimates for sequences biorthogonal to certain complex exponentials and boundary control of the wave equation, New Trends in Systems Analysis, Lecture Notes in Control and Inform. Sci., Springer, Berlin, 2 (1977), 111-124.   Google Scholar [12] H. O. Fattorini and D. L. Russell, Exact controllability theorems for linear parabolic equations in one space dimension, Arch. Ration. Mech. Anal., 43 (1971), 272-292.  doi: 10.1007/BF00250466.  Google Scholar [13] R. Glowinski and C. H. Li, On the numerical implementation of the Hilbert uniqueness method for the exact boundary controllability of the wave equation, C. R. Acad. Sci. Paris Sér. I Math., 311 (1990), 135-142.   Google Scholar [14] R. Glowinski, C. H. Li and J. L. Lions, A numerical approach to the exact boundary controllability of the wave equation I: Dirichlet controls: Description of the numerical methods, Japan J. Appl. Math., 7 (1990), 1-76.  doi: 10.1007/BF03167891.  Google Scholar [15] L. Hörmander, The Analysis of Linear Partial Differential Operators. I. Distribution Theory and Fourier Analysis, Classics in Mathematics, Springer-Verlag, Berlin, 2003. doi: 10.1007/978-3-642-61497-2.  Google Scholar [16] J. A. Infante and E. Zuazua, Boundary observability for the space semi-discretizations of the 1-D wave equation, Math. Model. Num. Ann., 33 (1999), 407-438.  doi: 10.1051/m2an:1999123.  Google Scholar [17] J.-L. Lions, Contrôlabilité Exacte Perturbations et Stabilisation de Systémes Distribués, Tome 1: Contrôlabilité Exacte, Recherches en Mathématiques Appliquées, 9. Masson, Paris, 1988. Google Scholar [18] P. Lissy, Construction of Gevrey functions with compact support using the Bray-Mandelbrojt iterative process and applications to the moment method in control theory, Math. Control Relat. Fields, 7 (2017), 21-40.  doi: 10.3934/mcrf.2017002.  Google Scholar [19] P. Lissy and I. Rovenţa, Optimal filtration for the approximation of boundary controls for the one-dimensional wave equation, Math. Comp., 88 (2019), 273-291.  doi: 10.1090/mcom/3345.  Google Scholar [20] S. Micu, Uniform boundary controllability of a semi-discrete 1-D wave equation, Numer. Math., 91 (2002), 723-768.  doi: 10.1007/s002110100338.  Google Scholar [21] S. Micu, Uniform boundary controllability of a semidiscrete 1-D wave equation with vanishing viscosity, SIAM J. Control Optim., 47 (2008), 2857-2885.  doi: 10.1137/070696933.  Google Scholar [22] S. Micu, I. Rovenţa and L. E. Temereancǎ, Approximation of the controls for the linear beam equation, Math. Control Signals Syst., 28 (2016), Art. 12, 53 pp. doi: 10.1007/s00498-016-0161-x.  Google Scholar [23] W. Rudin, Real and Complex Analysis, Second edition, McGraw-Hill Series in Higher Mathematics, McGraw-Hill Book Co., New York-Düsseldorf-Johannesburg, 1974.  Google Scholar [24] E. D. Sontag, Mathematical Control Theory. Deterministic Finite-Dimensional Systems, 2nd edition, Texts in Applied Mathematics, 6, Springer-Verlag, New York, 1998. doi: 10.1007/978-1-4612-0577-7.  Google Scholar [25] L. T. Tebou and E. Zuazua, Uniform boundary stabilization of the finite difference space discretization of the $1-d$ wave equation, Adv Comput. Math., 26 (2007), 337-365.  doi: 10.1007/s10444-004-7629-9.  Google Scholar [26] R. M. Young, An Introduction to Nonharmonic Fourier Series, Pure and Applied Mathematics, 93. 
Academic Press, Inc., New York-London, 1980.   Google Scholar
# Lesson 1 Describing and Graphing Situations

• Let’s look at some fun functions around us and try to describe them!

### 1.1: Bagel Shop

A customer at a bagel shop is buying 13 bagels. The shopkeeper says, “That would be $16.25.” Jada, Priya, and Han, who are in the shop, all think it is a mistake.

• Jada says to her friends, “Shouldn’t the total be $13.25?”
• Priya says, “I think it should be $13.00.”
• Han says, “No, I think it should be $11.25.”

Explain how the shopkeeper, Jada, Priya, and Han could all be right.

Your teacher will give you instructions for completing the table.

| number of bagels | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|

### 1.2: Be Right Back!

Three days in a row, a dog owner tied his dog’s 5-foot-long leash to a post outside a store while he ran into the store to get a drink. Each time, the owner returned within minutes. The dog’s movement each day is described here.

• Day 1: The dog walked around the entire time while waiting for its owner.
• Day 2: The dog walked around for the first minute, and then lay down until its owner returned.
• Day 3: The dog tried to follow its owner into the store but was stopped by the leash. Then, it started walking around the post in one direction. It kept walking until its leash was completely wound up around the post. The dog stayed there until its owner returned.
• Each day, the dog was 1.5 feet away from the post when the owner left.
• Each day, 60 seconds after the owner left, the dog was 4 feet from the post.

Your teacher will assign one of the days for you to analyze. Sketch a graph that could represent the dog’s distance from the post, in feet, as a function of time, in seconds, since the owner left. Day $$\underline{\hspace{0.5in}}$$

From the graph, is it possible to tell how many times the dog changed directions while walking around? Explain your reasoning.

### 1.3: Talk about a Function

Here are two pairs of quantities from a situation you’ve seen in this lesson. Each pair has a relationship that can be defined as a function.

• time, in seconds, since the dog owner left and the total number of times the dog has barked
• time, in seconds, since the owner left and the total distance, in feet, that the dog has walked while waiting

Choose one pair of quantities and express their relationship as a function.

1. In that function, which variable is independent? Which one is dependent?
2. Write a sentence of the form “$$\underline{\hspace{0.5in}}$$ is a function of $$\underline{\hspace{0.5in}}$$.”
3. Sketch a possible graph of the relationship on the coordinate plane. Be sure to label and indicate a scale on each axis, and be prepared to explain your reasoning.

### Summary

A relationship between two quantities is a function if there is exactly one output for each input. We call the input the independent variable and the output the dependent variable.

Let’s look at the relationship between the amount of time since a plane takes off, in seconds, and the plane’s height above the ground, in feet.

• These two quantities form a function if time is the independent variable (the input) and height is the dependent variable (the output). This is because at any amount of time since takeoff, the plane could only be at one height above the ground. For example, 50 seconds after takeoff, the plane might have a height of 180 feet. At that moment, it cannot be simultaneously 180 feet and 95 feet above the ground. For any input, there is only one possible output, so the height of the plane is a function of the time since takeoff.
• The two quantities do not form a function, however, if we consider height as the input and time as the output. This is because the plane can be at the same height for multiple lengths of times since takeoff. For instance, when the plane is 1,500 feet above ground, it is possible that 300 seconds have passed. It is also possible that 425 seconds, 275 seconds, or some other amounts of time have passed. For any input, there are multiple possible outputs, so the time since takeoff is not a function of the height of the plane. Functions can be represented in many ways—with a verbal description, a table of values, a graph, an expression or an equation, or a set of ordered pairs. When a function is represented with a graph, each point on the graph is a specific pair of input and output. Here is a graph that shows the height of a plane as a function of time since takeoff. The point $$(125, 400)$$ tells us that 125 seconds after takeoff, the height of the plane is 400 feet. ### Glossary Entries • dependent variable A variable representing the output of a function. The equation $$y = 6-x$$ defines $$y$$ as a function of $$x$$. The variable $$x$$ is the independent variable, because you can choose any value for it. The variable $$y$$ is called the dependent variable, because it depends on $$x$$. Once you have chosen a value for $$x$$, the value of $$y$$ is determined. • function A function takes inputs from one set and assigns them to outputs from another set, assigning exactly one output to each input. • independent variable A variable representing the input of a function. The equation $$y = 6-x$$ defines $$y$$ as a function of $$x$$. The variable $$x$$ is the independent variable, because you can choose any value for it. The variable $$y$$ is called the dependent variable, because it depends on $$x$$. Once you have chosen a value for $$x$$, the value of $$y$$ is determined.
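The “exactly one output for each input” test from the summary can also be stated operationally. The short sketch below is not part of the lesson; it simply reuses the time-height values mentioned in the summary (50 s at 180 ft, 125 s at 400 ft, and 275 s, 300 s, and 425 s all at 1,500 ft) and checks the relationship in both directions:

    # Time-height pairs taken from the summary's plane example.
    readings = [(50, 180), (125, 400), (275, 1500), (300, 1500), (425, 1500)]

    def is_function(pairs):
        """Return True if every input is assigned exactly one output."""
        outputs = {}
        for x, y in pairs:
            if x in outputs and outputs[x] != y:
                return False
            outputs[x] = y
        return True

    print(is_function(readings))                       # True: height is a function of time
    print(is_function([(h, t) for t, h in readings]))  # False: time is not a function of height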
## NIRS Cookbook

PANDA tutorial, for students and coworkers: a recipe to analyze NIRS data

UNDER CONSTRUCTION

# Cookbook

In principle, you need to:
1. load the data (from an oxy3- or xls-file)
2. preprocess the data (remove motion artefacts, bandpass-filter, detrend, downsample)
3. determine important parameters to analyze (maximum response, time to max, GLM amplitude)
4. analyze (determine differences between conditions)

The data is stored in an oxy3-file and/or in an xls-file. These files contain mostly the same information. The oxy3-file contains the raw, unprocessed data, while the xls-file usually contains the oxy- and deoxy-hemoglobin concentrations. Several MATLAB routines can be used to load the data. For now, we will stick with using the Excel files and pa_nirs_read.m:

fname = 'nirs-subject1-0001.xls'; % filename
nirs = pa_nirs_read(fname); % read the data

The data nirs-subject1-0001.xls can be found on the Biophysics website, here. This data is from a NIRS experiment in which a children's story is told in a video. The video is broken down into several 20-s blocks, and the subject is randomly shown the video, audio or audiovideo components in each block. Six optodes (2 receivers, 4 transmitters) were placed, three over the left and three over the right temporal cortex (T3 and T4 in the EEG 10/20 system) to allow for bilateral measurements over human auditory cortex.

Loading this data into MATLAB will yield a structure, which above we termed nirs, with the fields:

fsample % sample frequency (Hz)
label % label of recorded channels, e.g. 'Rx1a - Tx1 OHb'
event % contains fields with details on stimulus events
hdr % some header information in a structure
trial % the data in a matrix (number of channels x number of samples + 1 row with sample number)
time % a time vector
cfg % a structure with some configuration information, not used

For example, let's plot the oxyhemoglobin (O2Hb) channel on the right side (Rx1b) with the transmitter (Tx) optode 35 mm (Tx2) away from the receiver (Rx).

sel = strncmp(nirs.label,'Rx1b - Tx2 O2Hb',15); % first find the correct row of the data matrix
chan = nirs.trial(sel,:); % and select it, storing it in the chan variable
% then plot this
plot(chan,'k-');
% some pretty modifications
axis square;
box off;
set(gca,'TickDir','out');
xlim([1 numel(chan)]);
% and some necessary labels
xlabel('Time (samples)');
ylabel('Oxy-hemoglobin concentration');

This will yield figure 1. This is clearly a raw, oversampled, unfiltered signal that can be improved by preprocessing (in the next step).

# 2. Filter

After the generation of a signal, you generally want to filter it, either to bandpass the signal:
• pa_lowpass - low-pass filter the signal (remove high-frequency components that are not heard, to prevent aliasing, or for specific research questions)
• pa_highpass - high-pass filter the signal (important to remove low-frequency signals that usually end up distorted when played by the speaker)
or to equalize the signal:
• pa_equalizer - equalize the signal (see the examples below)

# 3. Ramp

To prevent onset distortions, you need to ramp the data:
• pa_ramp - add a squared-sine onset and squared-cosine offset

# 4. Level Ramp

Because of an idiosyncrasy in our set-up, we need to prepend and append 20 milliseconds of zeros to the signal. This is because the second TDT RP2.1 DA-channel is used to control the sound level. Switching this on will also create a sudden onset peak that needs to be prevented. This is solved by a level-ramp within the RCO.
We need to consider this level-ramp by applying:
• pa_levelramp - prepend the 20-ms zero vector to the signal (see the examples below)

All steps (generation, filtering, equalizing, and ramping) can be done by TDT, but at present this takes precious time.

# 5. Save wav-file

And when this is done, we need to save the data, so that it can be used by the TDT system:
• pa_writewav - saves the stimulus as a wav file for the DA converter (this also applies a normalization)

You can check whether the stimulus generation has succeeded with:
• wavread - MATLAB routine to read an existing wav-file in MATLAB format
• audioplayer - MATLAB routine to play sounds (wavplay)
• psd - MATLAB routine to determine power spectral density
• pa_getpower - get power spectrum

# Some examples

Let's generate a GWN stimulus: 3.0 sec long, bandwidth from 0.5 to 20 kHz, with on- and offset envelopes. In MATLAB:

Noise = pa_gengwn(3); % default envelope (250 pnts) and filter
% noise is lp-filtered at 20 kHz
% and hp-filtered at 500 Hz
psd(Noise);

Frequencies below 500 Hz are never useful, and produce distortions on speakers, so they should be filtered out with a highpass filter. This is already done by default in pa_gengwn. The GWN stimulus is a well-localizable sound eliciting all binaural difference (ITD and ILD) cues and spectral cues. Sometimes, it is desirable to use lowpass- or highpass-filtered noise, to separate the effects of ITD and ILD. These sounds are created by generating a GWN as above and filtering it with the functions pa_lowpass and pa_highpass:

HP = pa_highpass(Noise); % noise is hp-filtered at 3 kHz
psd(HP);
LP = pa_lowpass(Noise); % noise is lp-filtered at 3 kHz
psd(LP);

You might want to equalize the signal:

Noise = pa_equalizer(Noise);
HP = pa_equalizer(HP);
LP = pa_equalizer(LP);

Then apply a ramp:

Noise = pa_ramp(Noise);
HP = pa_ramp(HP);
LP = pa_ramp(LP);

In the FART1-setup, speakers will have an onset ramp produced by the TDT-system to ramp to a voltage on the speakers. You will have to take this into account by putting zeros in front of the stimuli (about 20 ms, worth 20*48.8828125 ≈ 978 samples):

Noise = pa_levelramp(Noise); % this prepends the zero vector to the Noise
HP = pa_levelramp(HP);
LP = pa_levelramp(LP);

After this, you should save the matrix to a wav-file that can be played with the TDT-system:

pa_writewav(Noise,'snd001.wav');
pa_writewav(HP,'snd002.wav');
pa_writewav(LP,'snd003.wav');
# Aperture of ISO telescope

by Quagz
Tags: aperture, telescope

P: 9
A bit random, but could any of you give me an example aperture for a telescope collecting ISO far-infrared radiation? (to assist a theory about Andromeda)

Mentor P: 11,827
Quote by Quagz: A bit random, but could any of you give me an example aperture for a telescope collecting ISO far-infrared radiation? (to assist a theory about Andromeda)
An example aperture? What do you mean?

P: 9
The aperture (light/radiation gathering area in meters squared) of a telescope collecting ISO far-infrared radiation :)

Mentor P: 11,827
Quote by Quagz: The aperture (light/radiation gathering area in meters squared) of a telescope collecting ISO far-infrared radiation :)
Yes, I know what an aperture is, but I don't know what you mean by asking for an example aperture. I'm sure apertures range from a few millimeters to multi-meter designs.

P: 9
The aperture is different for every telescope depending on what radiation it is gathering; I need to know an aperture for a telescope gathering ISO far-infrared radiation.

PF Gold P: 201
Quote by Quagz: The aperture is different for every telescope depending on what radiation it is gathering; I need to know an aperture for a telescope gathering ISO far-infrared radiation.
No, the aperture is in reference to how much radiation you want to gather, not what type of radiation. Normally bigger is better. Maybe you are thinking of the focal point; that does change for the different types of light. If not, then I am confused by your question also???

Mentor P: 11,827
Quote by Quagz: The aperture is different for every telescope depending on what radiation it is gathering; I need to know an aperture for a telescope gathering ISO far-infrared radiation.
As Sas3 said, the aperture isn't usually dependent on the type of light you want to capture. A bigger aperture simply captures more light overall, of any type.

Mentor P: 12,069
I've moved this discussion to a new thread, since it's not really related to Astrophotography.
Quote by Quagz: A bit random, but could any of you give me an example aperture for a telescope collecting ISO far-infrared radiation? (to assist a theory about Andromeda)
As others said, your question is worded rather strangely. ISO -- the Infrared Space Observatory -- is one specific telescope with a definite aperture, so it is odd to ask for an "example aperture" when asking about a specific telescope. It's kind of like asking "please give an example of John Smith's last name". Unless you mean something entirely different by ISO? According to Wikipedia, the ISO has an aperture of 60 cm.

HW Helper PF Gold P: 1,948
Also, try researching/googling "resolving power" of a telescope. The ability of a telescope to resolve two objects at a given angular distance is a function of the telescope's aperture. It is also a function of the wavelength of light being observed. This is the result of diffraction. The detail is proportional to the aperture, and inversely proportional to the wavelength. Putting it a different way: the bigger the aperture, the smaller the diffraction; the bigger the wavelength, the bigger the diffraction. Resolving power can be expressed as $$\sin \theta = 1.220 \frac{\lambda}{D}$$ where $\theta$ is the minimum angular separation in radians, $\lambda$ is the wavelength of the light, and $D$ is the telescope's aperture.
And since $\theta$ is bound to be small for any practical telescope application, you might want to make the approximation (for small $\theta$): $\sin \theta \approx \theta$.

Although the above is fine and good, it really represents the maximum resolving power. Other factors such as atmospheric "seeing" can reduce the effective resolution (for Earth-based telescopes) to something worse than what is given above.

P: 9
@collinsmark Many thanks for your reply and also for understanding the question unlike some :)
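For a concrete sense of scale, the Rayleigh criterion above can be evaluated for the telescope discussed in the thread. The 60 cm aperture is the value quoted above; the far-infrared wavelength used below (100 micrometres) is an illustrative assumption, not a number from the thread:

    import math

    # Rayleigh criterion from the thread: sin(theta) = 1.220 * wavelength / aperture
    wavelength = 100e-6   # metres; assumed far-infrared wavelength, for illustration only
    aperture = 0.60       # metres; ISO's 60 cm aperture, as quoted above

    theta = math.asin(1.220 * wavelength / aperture)   # minimum angular separation, radians
    arcsec = math.degrees(theta) * 3600
    print(f"{theta:.2e} rad  (about {arcsec:.0f} arcseconds)")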
# How many pups are in 6 litters of puppies?

###### Question: How many pups are in 6 litters of puppies?
$q_{\max}$ being twice the spacing between atoms. FOPTICS[9] is a faster implementation using random projections. [33] In 1905, Albert Einstein published the theory of the photoelectric effect that firmly established the quantization of light itself. Photographers must consider the reciprocity of the camera and the shot, which is summarized by the relation, In other words, the smaller the aperture (giving greater depth of focus), the less light coming in, so the length of time has to be increased (leading to possible blurriness if motion occurs). In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave. This partial polarization of scattered light can be taken advantage of using polarizing filters to darken the sky in photographs. Statistical Optics by Joseph W. Goodman: "Statistical Optics will be welcome as a guide to the parts of statistics needed in optics" —Nature. "This long awaited book by J W Goodman may eventually have as strategic an impact on the field of modern optics as did his first book Introduction to Fourier Optics published in 1986." [8], Another type of optical illusion exploits broken patterns to trick the mind into perceiving symmetries or asymmetries that are not present. Because rainbows are seen with the sun 180° away from the centre of the rainbow, rainbows are more prominent the closer the sun is to the horizon. [66] Non-birefringent methods, to rotate the linear polarization of light beams, include the use of prismatic polarization rotators which use total internal reflection in a prism set designed for efficient collinear transmission. The Kapitsa–Dirac effect causes beams of particles to diffract as the result of meeting a standing wave of light. Gaussian beam propagation is a simple paraxial physical optics model for the propagation of coherent radiation such as laser beams. Optical polarization is principally of importance in chemistry due to circular dichroism and optical rotation ("circular birefringence") exhibited by optically active (chiral) molecules. [44], Media that have different indexes of refraction for different polarization modes are called birefringent. [97], Other results from physical and geometrical optics apply to camera optics. [96] Early photography used media that had very low light sensitivity, and so exposure times had to be long even for very bright shots. This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. In this experiment a light beam (left) is emitted, consisting of two different, coupled fields: one electric and one magnetic, perpendicular to each other and, at the same time, perpendicular to the direction of propagation of the wave. The objectives of learning are prediction and understanding. Introduction. In 1962/63 he was a postdoctoral researcher in Norway … Photon statistics is the theoretical and experimental study of the statistical distributions produced in photon counting experiments, which use photodetectors to analyze the intrinsic statistical nature of photons in a light source. Refraction occurs when light travels through an area of space that has a changing index of refraction; this principle allows for lenses and the focusing of light.
[2][3][4][5] Polarizatori se mogu podeliti prema fizičkoj pojavi koja se koristi kako bi se postigla polarizacija. Waveguide dispersion is dependent on the propagation constant. The focal length of a simple lens in air is given by the lensmaker's equation. You may find many di4erent types of e-guide and also other literatures from my papers database. 2 The law also implies that mirror images are parity inverted, which we perceive as a left-right inversion. Combining a number of mirrors, prisms, and lenses produces compound optical instruments which have practical uses. The objective focuses an image of a distant object at its focal point which is adjusted to be at the focal point of an eyepiece of a much smaller focal length. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton. [84] This illusion so confounded Ptolemy that he incorrectly attributed it to atmospheric refraction when he described it in his treatise, Optics. By convention, "f/#" is treated as a single symbol, and specific values of f/# are written by replacing the number sign with the value. OPTICS-OF[4] is an outlier detection algorithm based on OPTICS. See below for an illustration of this effect. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. There are, however, twenty times more rod cells than cone cells in the retina because the rod cells are present across a wider area. The direction of this line depends on the relative amplitudes of the two components. Statistical Optics, 2nd Edition is written for researchers and engineering students interested in optics, physicists and chemists, as well as graduate level courses in a University Engineering or Physics Department. Related, but not strictly illusions, are patterns that occur due to the superimposition of periodic structures. It feels to me like a train ride over a long distance, with many stops where the conductor points at important scenes and tells good stories that I did not know before, even for the places I previously visited. Physics is a branch of science.It is one of the most fundamental scientific disciplines.The main goal of physics is to explain how things move in space and time and understand how the universe behaves. Series: Wiley series in pure and applied optics. Find more information about: ISBN: 0471015024 9780471015024: OCLC Number: 10951228: Notes: "A Wiley-Interscience Publication." Optics is part of everyday life. [8] Plutarch (1st–2nd century AD) described multiple reflections on spherical mirrors and discussed the creation of magnified and reduced images, both real and imaginary, including the case of chirality of the images. [80], Ciliary muscles around the lens allow the eye's focus to be adjusted. In particular, choosing Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics. In this way, physical optics recovers Brewster's angle. Figure 1: Michelson Interferometer. PHYS20352 Thermal and Statistical Physics; PHYS20672 Complex Variables and Integral Transforms; PHYS20872 … quantum optics: statistical optics, nonlinear optics and atom spectroscopy (or at least interaction between atom and light). 
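The lensmaker's equation mentioned in the passage above is only named, not written out. For reference, its standard thin-lens form (a textbook result, not taken from this text) is

$$\frac{1}{f}=(n-1)\left(\frac{1}{R_1}-\frac{1}{R_2}\right)$$

where $f$ is the focal length, $n$ the refractive index of the lens material relative to the surrounding air, and $R_1$, $R_2$ the radii of curvature of the two lens surfaces.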
{\displaystyle v_{g}} This makes dispersion management extremely important in optical communications systems based on optical fibres, since if dispersion is too high, a group of pulses representing information will each spread in time and merge, making it impossible to extract the signal. Rainbows and mirages are examples of optical phenomena. [32], The next development in optical theory came in 1899 when Max Planck correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called quanta. Goodman studierte Elektrotechnik und Angewandte Physik an der Harvard University (Bachelor-Abschluss 1958) und Elektrotechnik an der Stanford University, wo er 1960 seinen Master-Abschluss erwarb und 1963 promoviert wurde. Circularly polarised waves can rotate rightward or leftward in the direction of travel, and which of those two rotations is present in a wave is called the wave's chirality.[66]. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. In under 200 pages, It covers geometrical, fourier and statistical optics without patiently deriving the formulae in small steps. where n is the index of refraction and c is the speed of light in a vacuum. At the [66] Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). n Reflections can be divided into two types: specular reflection and diffuse reflection. is the focal length, and runtime. ソルベー会議(英語:The Solvay Conferences on Physics、ソルベイ会議)は、ソルベー法で有名なエルネスト・ソルベーとヴァルター・ネルンストが、1911年に初めて開催した一連の物理学に関する会議。 1922年からは化学分野の会議も開催されている。 Many diffuse reflectors are described or can be approximated by Lambert's cosine law, which describes surfaces that have equal luminance when viewed from any angle. The same general optical considerations apply to reflecting telescopes that applied to refracting telescopes, namely, the larger the primary mirror, the more light collected, and the magnification is still equal to the focal length of the primary mirror divided by the focal length of the eyepiece. 2 ε If D is greater than zero, the medium has negative dispersion. . course, this authoritative introduction to classical statistical optics is appropriate for students and professionals working with optical problems and communication theory. The closer the object is to the lens, the closer the virtual image is to the lens. The light then continues through the fluid just behind the cornea—the anterior chamber, then passes through the pupil. [57] In 1815 and 1818, Augustin-Jean Fresnel firmly established the mathematics of how wave interference can account for diffraction. [40], Geometric optics is often simplified by making the paraxial approximation, or "small angle approximation". Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. Additionally, a special distance is stored for each point that represents the density that must be accepted for a cluster so that both points belong to the same cluster. Its basic idea is similar to DBSCAN,[3] but it addresses one of DBSCAN's major weaknesses: the problem of detecting meaningful clusters in data of varying density. Many people benefit from eyeglasses or contact lenses, and optics are integral to the functioning of many consumer goods including cameras. 
ε Physics is a branch of science.It is one of the most fundamental scientific disciplines.The main goal of physics is to explain how things move in space and time and understand how the universe behaves. Other dramatic optical phenomena associated with this include the Novaya Zemlya effect where the sun appears to rise earlier than predicted with a distorted shape. Title. Indeed, quantum optics is closely linked to these 3 subjects. Add new page. In the study of these devices, quantum optics often overlaps with quantum electronics. Given a sufficiently large ε, this never happens, but then every ε-neighborhood query returns the entire database, resulting in This is the lens's front focal point. The scattered light produces the brightness and colour in clear skies. This is called the rear focal point of the lens. This is shown in the above figure on the right. N {\displaystyle O(n\cdot \log n)} , respectively: OPTICS hence outputs the points in a particular ordering, annotated with their smallest reachability distance (in the original algorithm, the core distance is also exported, but this is not required for further processing). It uses probability theory, statistics, and mathematical tools to solve certain physical problems. The incident and reflected rays and the normal lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. These effects are treated by the Fresnel equations. Cone cells are highly concentrated in the fovea and have a high visual acuity meaning that they are better at spatial resolution than rod cells. Adamson, Peter (2006). Joseph W. Goodman (* 1936) ist ein US-amerikanischer Physiker und Elektroingenieur, der sich mit Optik beschäftigt. To see diffraction patterns, x-rays with similar wavelengths to that spacing are passed through the crystal. The near point and far point define the nearest and farthest distances from the eye at which an object can be brought into sharp focus. In a statistical sense, elastic scattering of light by numerous particles much smaller than the wavelength of the light is a process known as Rayleigh scattering while the similar process for scattering by particles that are similar or larger in wavelength is known as Mie scattering with the Tyndall effect being a commonly observed result. < Quantum optics deals with the application of quantum mechanics to optical systems. This model predicts phenomena such as interference and diffraction, which are not explained by geometric optics. Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled. Images formed from reflection in two (or any even number of) mirrors are not parity inverted. and Even when no spatial index is available, this comes at additional cost in managing the heap. Other common applications of lasers include laser printers and laser pointers. runtime, an overall runtime of {\displaystyle p} A numerical characteristic of a probability distribution.The moment of order $k$( $k > 0$ an integer) of a random variable $X$ is defined as the mathematical expectation ${\mathsf E} X ^ {k}$, if it exists. -neighborhood query during this processing. {\displaystyle f} HDBSCAN* is available in the hdbscan library. Statistical optics is the study of the properties of random light. 
Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics. This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications. OPTICS II Section de Physique Cours: Pr. 0 [86], The optics of photography involves both lenses and the medium in which the electromagnetic radiation is recorded, whether it be a plate, film, or charge-coupled device. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarised images of whatever is viewed through them. {\displaystyle f} Professional telescopes generally do not have eyepieces and instead place an instrument (often a charge-coupled device) at the focal point instead. Statistical properties of light Contents 1 Introduction 2 2 Theoretical Background - Light as a Statistical Phenomenon 5 3 Procedure - Properties of a Pseudothermal … with INTRODUCTION TO STATISTICAL OPTICS (PAPERBACK) ebook. [82], Cognitive illusions include some which result from the unconscious misapplication of certain optical principles. is the distance from the object to the lens, Optics usually describes the behaviour of visible, ultraviolet, and infrared light. d Prominent subfields of optical engineering include illumination engineering, photonics, and optoelectronics with practical applications like lens design, fabrication and testing of optical components, and image processing. In its upper left area, a synthetic example data set is shown. ( ). The second law of thermodynamics states that the total entropy of an isolated system is constant or increasing. The laws of reflection and refraction can be derived from Fermat's principle which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. n Its applications include many problems in the fields of physics, biology, chemistry, neuroscience, and … [68], Specialty areas of optics research include the study of how light interacts with specific materials as in crystal optics and metamaterials. Introduction to statistical optics フォーマット: 図書 責任表示: by Edward L. O'Neill 言語: 英語 出版情報: Reading, Mass. The earliest known lenses, made from polished crystal, often quartz, date from as early as 2000 BC from Crete (Archaeological Museum of Heraclion, Greece). The most famous compound optical instruments in science are the microscope and the telescope which were both invented by the Dutch in the late 16th century. データサイエンス(英: data science、略称: DS)またはデータ科学 [1] [2] とは、データを用いて新たな科学的および社会に有益な知見を引き出そうとするアプローチのことであり、その中でデータを扱う手法である情報科学、統計学、アルゴリズムなどを横断的に扱う。 Generally, an additional source of illumination is used since magnified images are dimmer due to the conservation of energy and the spreading of light rays over a larger surface area. HiCO[7] is a hierarchical correlation clustering algorithm based on OPTICS. Geometric optics , or ray optics , describes light propagation in terms of " rays ". Joseph W is an The simplest case is a single layer with thickness one-fourth the wavelength of incident light. Look ''. [ 78 ] which have practical uses the behaviour of visible ultraviolet... Drugih polarizacija for forming the foundation of all quantum physics including quantum chemistry neuroscience! 
# Find in what ratio the line joining (3,4) and (5,-7) is cut by the coordinate axis jeew-m | College Teacher | (Level 1) Educator Emeritus Posted on The general equation for a line is y = mx+c Our line goes through (3,4) and (5,-7) So we can write: 4=m*3+c------------(1) -7=m*5+c------------(2) (1)-(2) 11 = -2m m= -11/2 From (1) we can get 4=(-11/2)*3+c c =41/2 Equation of the line joining (3,4) and (5,-7) is y=(-11/2)x+41/2 Since the y value changes from 4 to -7 between the two points, the line must cut the x axis. When y=0 then x=(41/2)(2/11) = 41/11 So the cutting point is (41/11,0) Let length AB = length from (3,4) to (41/11,0) and BC = length from (5,-7) to (41/11,0) AB=sqrt[(3-41/11)^2+(4-0)^2] = sqrt[2000/121] BC=sqrt[(5-41/11)^2+(-7-0)^2]= sqrt[6125/121] So the ratio is AB/BC =sqrt[2000/121]/sqrt[6125/121] =sqrt[2000/6125] = 4/7 So the coordinate axis cuts the line in the ratio 4:7 lfryerda | High School Teacher | (Level 2) Educator Posted on To find the ratio of the line cut by the x-axis, we need to use the distance formula d=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2} between the two points (x_1,y_1) and (x_2,y_2). The equation of the line between the end points is found by getting the slope of the line. m={-7-4}/{5-3} =-11/2 The equation of the line is then found using y=mx+b and solving for b. 4=-11/2\cdot 3+b b={8+33}/2 b=41/2 so the equation of the line is y=-11/2 x +41/2. The x-intercept of this line is at y=0. That is, when 0=-11/2 x+41/2 which is at x=41/11. Consider the three points A=(3,4), B=(41/11,0) and C=(5,-7). The point B is the x-intercept of the line. The distance from A to B is d_{AB}=\sqrt{(3-41/11)^2+(0-4)^2} =\sqrt{64/121+16} =\sqrt{2000}/11 and the distance from B to C is d_{BC}=\sqrt{(5-41/11)^2+(-7-0)^2} =\sqrt{196/121+49} =\sqrt{6125}/11 This means that the ratio of AB to BC, after cancelling out the common factor of 11, is \sqrt{2000/6125} =\sqrt{16/49} =4/7
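As a quick cross-check (not part of either answer above), the section formula gives the same ratio directly: if the x-axis divides the segment joining (3,4) and (5,-7) internally in the ratio k:1, then the y-coordinate of the division point must vanish, so $$0=\frac{k\cdot(-7)+1\cdot 4}{k+1}\quad\Rightarrow\quad 7k=4\quad\Rightarrow\quad k=\frac{4}{7},$$ which again gives the ratio 4:7.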
# A really complicated calculus book I've been studying math as a hobby, just for fun for years, and I had my goal to understand nearly every good undergraduate textbook and I think, I finally reached it. So now I need an another goal. I've just found a very nice book /S. Ramanan – Global Calculus/ from the "Graduate Studies in Mathematics" series and it looks nearly awesome: • Sheaves and presheaves • Differential manifolds • Lie groups • Differential operators • Tensor fields • Sheaf cohomology • Linear connections • Complex manifolds • Ricci curvature tensor • Elliptic operators But it's only 316 pages and it seemed not very fundamental and detailed for me (but yes, it's still great). So here's my question: what huge complicated calculus textbooks like this one do you know that I should aim to understand? The Big Creepy Books, you know :) I'm very interested in algebraic and differential geometry, general and algebraic topology, Lie groups and algebras, pseudo- and differential operators. I don't know very much about all of this yet but I'm trying so hard to do, it's so exciting! ;) • Algebra: Chapter 0 (Graduate Studies in Mathematics) by Paolo Aluffi • A Course in Algebra (Graduate Studies in Mathematics, Vol. 56) by E. B. Vinberg • Linear Algebra and Geometry (Algebra, Logic and Applications) by P. K. Suetin, Alexandra I. Kostrikin and Yu I Manin • Topology from the Differentiable Viewpoint by John Willard Milnor • Topology and Geometry for Physicists by Charles Nash and Siddhartha Sen • Mathematical Analysis I and II by V. A. Zorich and R. Cooke • Complex Analysis by Serge Lang • Ordinary Differential Equations by Vladimir I. Arnold and R. Cooke • Differential Geometry, Lie Groups, and Symmetric Spaces (Graduate Studies in Mathematics) by Sigurdur Helgason So I'm looking for something like Ramanan's book, but maybe more detailed and fundamental. • If all you want is something very long and difficult to understand, I can recommend Woodin's The axiom of determinacy, forcing axioms, and the nonstationary ideal. Over 800 pages of difficult set theory, which will probably require you to first learn through the first 500 pages (or so) of other books! All in all, over a thousand pages of claims, proofs, and technical details! – Asaf Karagila Feb 19 '13 at 10:59 • This question really should be Community Wiki. I will convert it if you have no objection. – robjohn Feb 19 '13 at 11:19 • Can I just ask: What's the point of going after the big and hairy books? To me, it seems like it's mostly about bragging or self esteem. I could point you to some really hairy books, but if you really wanted to learn the stuff, I would suggest something else. – Matsemann Feb 19 '13 at 18:09 • Dear @Matsemann: what's the point of running a marathon? What's the point of climbing Mount Everest? Trying to learn difficult mathematics is a very laudable aim, not correlated to bragging. After all, laymen would be more impressed if I told them that I can solve any exercise in a calculus book than if I bragged that I can use the Hodge-de Rham spectral sequence for computing the algebraic de Rham cohomology of a scheme! – Georges Elencwajg Mar 2 '13 at 9:45 • @GeorgesElencwajg you missed my point. I'm not against learning, but if it was learning the OP wanted, he would've aimed for books more suitable for this, not the big hairy books. – Matsemann Mar 2 '13 at 12:07 As a cure for your desire I recommend a few pages per day of Madsen and Tornehave's From Calculus to Cohomology. 
The book starts at the level of advanced calculus and introduces an amazing set of concepts and results: de Rham cohomology, degree, Poincaré-Hopf theorem, characteristic classes, Thom isomorphism, Gauss-Bonnet theorem,... The style is austere but very honest: none of the odious "it is easy to see" or "left as an exercise for the reader" here! On the contrary, you will find some very explicit computations rarely done elsewhere: for an example look at pages 74-75, where the authors very explicitly analyze the tangent bundles and some differential forms on numerical spaces $\mathbb R^n$, spheres $S^{n-1}$ and projective spaces $\mathbb R\mathbb P^{n-1}$. All in all, a remarkable book that should appeal to you by its emphasis on algebraic topology done with calculus tools. Edit: September 25, 2016 An even more complicated book is Nickerson, Spencer and Steenrod's Advanced Calculus, of which you can find a review by Allendoerfer here. The book is an offspring of lecture notes given in Princeton around 1958 for an honours course on advanced calculus. The book starts very tamely with an introduction to vector spaces in which students are requested to prove (on page 5) that for a vector $A$, one has $A+A=2A$. On page 232 however (the book has 540 pages) the authors introduce the notion of graded tensor algebra, on page 258 Hodge's star operator, then come potential theory, Laplace-Beltrami operators, harmonic forms and cohomology, Grothendieck-Dolbeault's version of the Poincaré lemma and Kähler metrics. I suspect that a teacher who tried to give such a course at the undergraduate level today would run a serious risk of being tarred-and-feathered. • Wow! It looks very nice! – Thomas Feb 19 '13 at 12:09 The obvious "scary book" for linear partial differential operators (and pseudodifferential operators, and Fourier integral operators, and distribution theory and...) is Hörmander, Lars, Analysis of linear partial differential operators, 1-4, Springer Verlag. • For general knowledge and education for those who do not know this: Hörmander is a Fields medalist. – Rudy the Reindeer Feb 19 '13 at 12:04 • It looks very nice :) Is there something with more topology? I like topology ;) – Thomas Feb 19 '13 at 12:05 • For even more general knowledge, Hörmander was my mathematical great grandfather ;) – mrf Feb 19 '13 at 12:09 Another ambitious book is Raghavan Narasimhan's Analysis on Real and Complex Manifolds. It consists of three chapters. Chapter 2 contains more or less standard material on real and complex manifolds, but the other two chapters are quite unusual: Chapter 1 contains some hard analysis on $\mathbb R^n$. You will find there some classical results like Sard's theorem but also tough results like Borel's theorem, according to which there exists a smooth (generally non-analytic) function with any prescribed set of coefficients as its Taylor development at the origin. In other words, the morphism of $\mathbb R$-algebras $C^\infty(\mathbb R^n) \to \mathbb R[[x_1,\dots,x_n]]$ from smooth functions to formal power series given by Taylor's formula is surjective. The chapter also contains theorems of Whitney approximating smooth functions by analytic ones: these results are very rarely presented in books. Chapter 3 is devoted to linear elliptic differential operators. As an application of that theory, Narasimhan proves Behnke-Stein's theorem (it is the last result of the book) according to which every non-compact connected Riemann surface is Stein.
This is a difficult but great book by a great mathematician. The five volume set by Spivak should keep you busy for a while. • You mean A Comprehensive Introduction to Differential Geometry (5 Volume Set), right? – Thomas Feb 19 '13 at 13:09 • Yes thats correct. – nonlinearism Feb 19 '13 at 15:13 Try Jean Dieudonné, Treatise On Analysis, 9 volumes. • Thanks for advice, but it seemed too primitive for me :(I mean is there something like Ramanan's book (by difficulty) but more detailed and maybe fundamental? – Thomas Feb 19 '13 at 11:18 • I thought of the English translation of his multi-volume opus "Traité d'Analyse". As far as I know, the "primitive" one is the first volume, but there are 8 more of them which -- certainly -- are not primitive. – azimut Feb 19 '13 at 11:23 • Oh, I'm sorry. I saw only the first 3 volumes, and they're just like the second volume of Zorich, Cooke – Mathematical Analysis II. Maybe you're right about the complexity of other 5 volumes. – Thomas Feb 19 '13 at 11:35 • @Thomas Yes and there are a lot of good exercises (with very few misprints because the treatise has been taught and performed in ENS and faculties many times) with bordering examples and counterexamples. – Duchamp Gérard H. E. Jan 1 '17 at 7:20 I propose "Principles of Algebraic Geometry" by Phillip Griffiths and Joseph Harris. The "Foundational Material" Chapter 0, which includes a proof of the Hodge theorem, will hopefully satisfy your appetite for differential geometry and analysis of (pseudo-)differential operators. I don't know if the rest of the book really classifies as either "topology" or "differential geometry", as in my opinion "complex geometry" really has an entirely different flavour, but there is certainly a lot of immensely interesting material, which is at least highly connected to topology as well as differential geometry (Lefschetz theorem, Chern classes, fixed point formulas, ...) Have fun! • Thank you! You recommendation is really good, the book is very interesting. – Thomas Feb 19 '13 at 14:01 Although it is not that huge (~350p) I still recommend to take a look at Differential Forms in Algebraic Topology by Raoul Bott & Loring W. Tu. I personally like it a lot and I think that it might suit you well, especially since you've already covered Milnor's Topology from the Differentiable Viewpoint. I love Morita's "Geometry of Differential Forms". Don't remember whether it covers the linear algebra prerequisites (alternating tensor product) thoroughly or not and while those are just linear algebra, they aren't covered in most books and having them down really solidly makes a dramatic difference in how difficult or easy differential forms is. Regardless, I remember it as a beautiful book. • His description of the aim of differential geometry in the introduction to that book is just amazing. – James S. Cook Apr 14 '14 at 2:31 I'd go for Rudin's "Real and Complex Analysis" and "Functional Analysis" • It's interesting, but there's too little topology, no differential forms, no manifolds etc. :( But thanks for advice! – Thomas Feb 19 '13 at 11:36 • There are differential forms in his introductory real analysis. – superAnnoyingUser Feb 19 '13 at 11:45 An interesting read that would give you considerable insight into some natural applications of mathematics in physics would be the three volume set Group Theory in Physics by John F. Cornwell. I think it is fair to say these contain some calculus which is probably not covered by the other beautiful books mentioned in answers here. 
In a similar vein, you might look at Quantum Fields and Strings: A Course for Mathematicians (2 Volume Set) (v. 1 & 2) by Pierre Deligne, David Kazhdan, Pavel Etingof, John W. Morgan, Daniel S. Freed, David R. Morrison, Lisa C. Jeffrey and Edward Witten. That'll keep you occupied for some time.
Lately I’ve found monads to be more and more useful in several programming projects. For example, Harlan’s type inferencer uses a monad to keep track of what variables have been unified with each other, among other things. It took me a while to really grok monads. One reason is that many of the tutorials I’ve seen start out with category theory and the monad laws. These things don’t strike me as all that useful when I’m trying to make my code better in some way. What I have found useful is to think of monads as a style of programming (like continuation passing style), or even a design pattern. I’ve found monads are really handy when you need to thread some object through a sequence of function calls. To see how this works, we’re going to start with a store-passing interpreter for a small language and show how to use monads to hide the store in the cases where we don’t need it. ## A Store Passing Interpreter Let’s start by looking at an interpreter. This is basically the first interpreter from this post, but we’ve added mutable references. (define value-of/store (lambda (e env store) (match e (,x (guard (symbol? x)) (cons (lookup x env) store)) (,n (guard (number? n)) (cons n store)) ((λ (,x) ,e) (cons (lambda (v store) (value-of/store e (extend-env x v env) store)) store)) ((ref ,e) (match (value-of/store e env store) ((,v . ,store) (let ((label (gensym))) (cons (ref ,label) (cons (cons label v) store)))))) ((deref ,e) (match (value-of/store e env store) (((ref ,label) . ,store) (cons (cdr (assq label store)) store)))) ((update ,e-ref ,e-val) (match (value-of/store e-ref env store) (((ref ,label) . ,store) (match (value-of/store e-val env store) ((,val . ,store) (cons (ref ,label) (cons (cons label val) store))))))) ((,rator ,rand) (match (value-of/store rator env store) ((,rator . ,store) (match (value-of/store rand env store) ((,rand . ,store) (rator rand store))))))))) There are a couple of high level changes from the standard environment passing interpreter. First, we’ve added an additional store parameter. This represents the state of the memory, as opposed to the environment, which tracks variables on the stack. Second, since operations such as ref and update can modify the store, in addition to returning the value of an expression, we also have to return a potentially updated store. We could use Scheme’s support for multiple return values, but I went with cons instead for simplicity’s sake. The first few cases are pretty straightforward. Starting on line 4, we have the variable lookup line. This reads a value from the environment and returns it. The store is left unchanged, so we just return that as is. Likewise with the number case in line 6. For the lambda case on line 8, we use Scheme’s lambda to represent functions in our language. There’s a slight subtly here. Instead of having the procedure only take a value argument (v), it also must take a store. The reason is that otherwise this procedure would modify the store that was in effect when it the procedure was created, rather than the store in effect at the call site. Next we have ref, deref and update. This is where the store starts to be interesting. We’ve represented the store as an association list between labels and values. We use ref to create a new reference (think of it like new in C++ or Java), which we implement by using gensym to create a new label, and then adding a new label:value pair to the store. The value of a ref expression is a value tagged as a reference, containing an address into the store. 
Think of these as pointers. For deref, we first evaluate the argument and use match to unpack the label from the returned reference value. Once we have the label, we use assq to look it up in the store then we return its value. The update expression requires us to evaluate two arguments: the destination location and the new value. We do this with two recursive calls to value-of/store. In the interpreter from previous posts, we could have used match’s catamorphism facility to make this quite a bit shorter, but here we need to be explicit about ordering because of a potentially changing store. When we get around to actually updating the store, we just prepend a new label:value pair. Because assq returns the first match it finds, this results in whatever value was previously there being ignored. Finally, the application case (line 30) is similar to before. The main changes are that we need to be explicit about the order of evaluation and we pass the current store into the function we are applying. What we have now is an interpreter for a small language with mutable references. You’ll notice we didn’t use any side effects in creating the interpreter, which is kind of cool. Still, a few things are less than ideal. To go from a language without mutable references to one with mutable references, we had to change basically every line in the interpreter. Every recursive call to value-of/store needs an additional store parameter, and every return from the interpreter has to return a store as well. This is the case even though only ref and update can modify the store. What we’d like is to only have to be aware of the store for cases that interact with the store. Monads can help us do that. ## Cleaning up the Interpreter To deal with some of the shortcomings in the previous paragraph, we are going to derive a monad through a series of correctness-preserving transformations. Some of the intermediate steps will look rather ugly, but the end should look pretty nice. The first thing we’re going to do is Curry (Schönfinkel?) the store argument. (define value-of/store (lambda (e env) (lambda (store) (match e (,x (guard (symbol? x)) (cons (lookup x env) store)) (,n (guard (number? n)) (cons n store)) ((λ (,x) ,e) (cons (lambda (v) (lambda (store) ((value-of/store e (extend-env x v env)) store))) store)) ((ref ,e) (match ((value-of/store e env) store) ((,v . ,store) (let ((label (gensym))) (cons (ref ,label) (cons (cons label v) store)))))) ((deref ,e) (match ((value-of/store e env) store) (((ref ,label) . ,store) (cons (cdr (assq label store)) store)))) ((update ,e-ref ,e-val) (match ((value-of/store e-ref env) store) (((ref ,label) . ,store) (match ((value-of/store e-val env) store) ((,val . ,store) (cons (ref ,label) (cons (cons label val) store))))))) ((,rator ,rand) (match ((value-of/store rator env) store) ((,rator . ,store) (match ((value-of/store rand env) store) ((,rand . ,store) ((rator rand) store)))))))))) As I said, we’re going to make things worse for a while. Curried functions in Scheme programs are always kind of messy. The nice thing is that now we can push the (lambda (store) ...) part inside each of the match branches. It will become clear why we might want to do this in a second. (define value-of/store (lambda (e env) (match e (,x (guard (symbol? x)) (lambda (store) (cons (lookup x env) store))) (,n (guard (number? 
n)) (lambda (store) (cons n store))) ((λ (,x) ,e) (lambda (store) (cons (lambda (v) (lambda (store) ((value-of/store e (extend-env x v env)) store))) store))) ((ref ,e) (lambda (store) (match ((value-of/store e env) store) ((,v . ,store) (let ((label (gensym))) (cons (ref ,label) (cons (cons label v) store))))))) ((deref ,e) (lambda (store) (match ((value-of/store e env) store) (((ref ,label) . ,store) (cons (cdr (assq label store)) store))))) ((update ,e-ref ,e-val) (lambda (store) (match ((value-of/store e-ref env) store) (((ref ,label) . ,store) (match ((value-of/store e-val env) store) ((,val . ,store) (cons (ref ,label) (cons (cons label val) store)))))))) ((,rator ,rand) (lambda (store) (match ((value-of/store rator env) store) ((,rator . ,store) (match ((value-of/store rand env) store) ((,rand . ,store) ((rator rand) store)))))))))) Notice how we have a couple of cases, such as the variable and number case, where all we do is cons some value onto an unchanged store. Since we repeat this pattern a few times, let’s pull it into a separate function that we’ll call… return. (define return (lambda (v) (lambda (store) (cons v store)))) We can rewrite our previous interpreter using return: (define value-of/store (lambda (e env) (match e (,x (guard (symbol? x)) (return (lookup x env))) (,n (guard (number? n)) (return n)) ((λ (,x) ,e) (return (lambda (v) (value-of/store e (extend-env x v env))))) ((ref ,e) (lambda (store) (match ((value-of/store e env) store) ((,v . ,store) (let ((label (gensym))) (cons (ref ,label) (cons (cons label v) store))))))) ((deref ,e) (lambda (store) (match ((value-of/store e env) store) (((ref ,label) . ,store) (cons (cdr (assq label store)) store))))) ((update ,e-ref ,e-val) (lambda (store) (match ((value-of/store e-ref env) store) (((ref ,label) . ,store) (match ((value-of/store e-val env) store) ((,val . ,store) (cons (ref ,label) (cons (cons label val) store)))))))) ((,rator ,rand) (lambda (store) (match ((value-of/store rator env) store) ((,rator . ,store) (match ((value-of/store rand env) store) ((,rand . ,store) ((rator rand) store)))))))))) For the first time, we’ve introduced a transformation that makes our program shorter! While I was at it, I also did a transformation called an eta reduction in the lambda line at lines 9-10. The result is that the first three cases in the interpreter make no mention of the store. This is real tangible progress towards our goal. Let’s look at the last case, the application case, for a bit. Here we take a store, pass it into a call to evaluate the operator, and then thread the resulting store into a call to evaluate operand, and then finally we thread that store into the actual application of the operand. In each case, we don’t really care about the store; we only care about the value of the expression. Let’s see if we can write a function that does the store-threading for us. We’ll call it… bind. (define bind (lambda (m f) (lambda (store) (match (m store) ((,v . ,store) ((f v) store)))))) Here, m is for monad, meaning it’s a function that does something when it receives a store. The f parameter is a function describing what to do next. It expects a value and returns a new monad. We get this value by applying a store to the m argument. We expect f to return a function expected a new store, and we apply this function to the updated store returned by m. It’s a bit complicated, but it simplifies our application line somewhat: (value-of/store (lambda (e env) (match e (,x (guard (symbol? 
x)) (return (lookup x env))) (,n (guard (number? n)) (return n)) ((λ (,x) ,e) (return (lambda (v) (value-of/store e (extend-env x v env))))) ((ref ,e) (lambda (store) (match ((value-of/store e env) store) ((,v . ,store) (let ((label (gensym))) (cons (ref ,label) (cons (cons label v) store))))))) ((deref ,e) (lambda (store) (match ((value-of/store e env) store) (((ref ,label) . ,store) (cons (cdr (assq label store)) store))))) ((update ,e-ref ,e-val) (lambda (store) (match ((value-of/store e-ref env) store) (((ref ,label) . ,store) (match ((value-of/store e-val env) store) ((,val . ,store) (cons (ref ,label) (cons (cons label val) store)))))))) ((,rator ,rand) (bind (value-of/store rator env) (lambda (rator) (bind (value-of/store rand env) (lambda (rand) (rator rand))))))))) Our interpreter just shrunk by another line. The only cases where we see the store actually appears are ref, deref, and update, which are exactly the cases that need to read or write memory. The rest of the interpreter is completely oblivious to the existence of memory. We can clean up the application line a little further. If you squint a little, you’ll noticed that the calls to bind look a little like the traditional expansion of (let ((x e)) b) to ((lambda (x) b) e). Let’s build some extra syntax to make it easier to chain lots of calls to bind together. We’ll call it… do*. (define-syntax do* (syntax-rules () ((_ ((x e) rest ...) body) (bind e (lambda (x) (do* (rest ...) body)))) ((_ () body) body))) This macro works a little like let*, where it performs a sequence of computations in order and names each of their results. Using this macro, our interpreter becomes: (define value-of/store (lambda (e env) (match e (,x (guard (symbol? x)) (return (lookup x env))) (,n (guard (number? n)) (return n)) ((λ (,x) ,e) (return (lambda (v) (value-of/store e (extend-env x v env))))) ((ref ,e) (lambda (store) (match ((value-of/store e env) store) ((,v . ,store) (let ((label (gensym))) (cons (ref ,label) (cons (cons label v) store))))))) ((deref ,e) (lambda (store) (match ((value-of/store e env) store) (((ref ,label) . ,store) (cons (cdr (assq label store)) store))))) ((update ,e-ref ,e-val) (lambda (store) (match ((value-of/store e-ref env) store) (((ref ,label) . ,store) (match ((value-of/store e-val env) store) ((,val . ,store) (cons (ref ,label) (cons (cons label val) store)))))))) ((,rator ,rand) (do* ((rator (value-of/store rator env)) (rand (value-of/store rand env))) (rator rand)))))) We’re going to cheat a little for the last few cases. We’re doing to define three new functions, create-ref, get-ref and set-ref, which do the store manipulation for us. Here they are: (define (create-ref val) (lambda (store) (let ((label (gensym))) (cons (ref ,label) (cons (cons label val) store))))) (define (get-ref ref) (lambda (store) (match ref ((ref ,label) (cons (cdr (assq label store)) store))))) (define (set-ref ref val) (lambda (store) (match ref ((ref ,label) (cons ref (cons (cons label val) store)))))) These feel like cheating to me because we got these functions basically by copying and pasting the body of the ref, deref and update cases. But, we can think of it as defining a clean interface to the store, which is just good software engineering. Plus, our interpreter was actually doing two things in each case. First it had to evaluate the arguments and then it had to access the store. Using these helper functions makes that separation clearer. Using these functions in the interpreter completes our transition over to monads. 
(define value-of/store (lambda (e env) (match e (,x (guard (symbol? x)) (return (lookup x env))) (,n (guard (number? n)) (return n)) ((λ (,x) ,e) (return (lambda (v) (value-of/store e (extend-env x v env))))) ((ref ,e) (do* ((val (value-of/store e env))) (create-ref val))) ((deref ,e) (do* ((ref (value-of/store e env))) (get-ref ref))) ((update ,e-ref ,e-val) (do* ((ref (value-of/store e-ref env)) (val (value-of/store e-val env))) (set-ref ref val))) ((,rator ,rand) (do* ((rator (value-of/store rator env)) (rand (value-of/store rand env))) (rator rand)))))) Our interpreter is now a whole 11 lines shorter than where we started! Granted, we’ve moved some of the functionality into helper functions, so the total length of the program may be longer. Still, this program seems much better factored to me. The interpreter itself shows how to evaluate different language forms, and things like order of evaluation are very clear. The details of how to manipulate the store-related data structures are all hidden in the monad. ## Conclusion I’ve found that thinking of deriving monads through a series of program transformations has helped them make a lot more sense to me. In functional programming it’s common to simulate stateful things by threading extra arguments around. This quickly gets tedious. Monads help you keep this hidden when you don’t care about it and cleanly expose it in cases where you do. Another benefit of this style of programming that I’ve found is that once your program is written in terms of return, bind and do*`, you can modify the monad without having to change the rest of your program. In rewriting the Harlan type inferencer, there were a few times I realized I had some more information to thread through the inferencer. In the old style, I would have had to change many lines of code to do this. Since the type inferencer was written monadically, however, I could modify a handful of functions while leaving the bulk of the code unchanged.
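The interpreter above relies on two environment helpers, lookup and extend-env, that the post never defines. Here is a minimal association-list sketch of those helpers plus one way to kick off an evaluation against an empty store; the definitions and the sample expression are my own illustration, not code from the original post:

```scheme
;; Minimal association-list environment (names match what the interpreter expects).
(define empty-env '())

(define (extend-env x v env)
  ;; Bind variable x to value v on top of the existing bindings.
  (cons (cons x v) env))

(define (lookup x env)
  ;; Return the most recent binding for x, or signal an error if unbound.
  (cond ((assq x env) => cdr)
        (else (error 'lookup "unbound variable" x))))

;; value-of/store returns a function waiting for a store, so we apply it to an
;; initial empty store and take the car of the resulting (value . store) pair:
;;
;;   (car ((value-of/store '((λ (r) (deref (update r 42))) (ref 0)) empty-env)
;;         '()))
;;   ; => 42, assuming the match macro and the store helpers from the post are loaded
```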
# set difference ## Definition Let $A$ and $B$ be sets. The set difference (or simply difference) between $A$ and $B$ (in that order) is the set of all elements of $A$ that are not in $B$. This set is denoted by $A\setminus B$ or $A-B$ (or occasionally $A\sim B$). So we have $A\setminus B=\{x\in A\mid x\notin B\}.$ Venn diagram showing $A\setminus B$ in blue ## Properties Here are some properties of the set difference operation: 1. If $A$ is a set, then $A\setminus\varnothing=A$ and $A\setminus A=\varnothing=\varnothing\setminus A.$ 2. If $A$ and $B$ are sets, then $B\setminus(A\cap B)=B\setminus A.$ 3. If $A$ and $B$ are subsets of a set $X$, then $A\setminus B=A\cap B^{\complement}$ and $(A\setminus B)^{\complement}=A^{\complement}\cup B,$ where ${}^{\complement}$ denotes complement in $X$. 4. If $A$, $B$, $C$ and $D$ are sets, then $(A\setminus B)\cap(C\setminus D)=(A\cap C)\setminus(B\cup D).$ ## Remark As noted above, the set difference is sometimes written as $A-B$. However, if $A$ and $B$ are sets in a vector space (or, more generally, a module (http://planetmath.org/Module)), then $A-B$ is commonly used to denote the set $A-B=\{a-b\mid a\in A,b\in B\}$ rather than the set difference.
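For a concrete instance (an example of my own, illustrating the definition and property 2): with $A=\{1,2,3\}$ and $B=\{2,4\}$ we get $$A\setminus B=\{1,3\},\qquad B\setminus A=\{4\},\qquad A\cap B=\{2\},\qquad B\setminus(A\cap B)=\{4\}=B\setminus A.$$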
Surds and indices chapter plays a crucial role in simplification problems of arithmetic. Here we would learn about its basic concepts and tricks to handle these problems efficiently in competitive examinations. Surds: A surd is a root of a whole number with an irrational value. In other words, when it is a root and irrational, it is surd. Surds can not be further simplified into whole numbers or integers. Example: $\sqrt{2}, \sqrt{3}, \sqrt[3]{5}, \sqrt{\frac{1}{7}}$, etc. Note: An irrational number is a real number that can not be written as a simple fraction. Example: $\pi, \sqrt{2},\sqrt{3}$, etc. If N is an irrational number then $N\neq\frac{p}{q}$, where p and q are integers and $q\neq0$. ## Types of Surds • Pure Surds: When surds have only one irrational number. Example: $\sqrt{2}, \sqrt{11}, \sqrt{17}$, etc. • Mixed Surds: When surds have both rational and irrational numbers. Example: $2\sqrt{5}, 4\sqrt{7}, 8\sqrt{3}$, etc. • Compound Surds: When there are two or more surds in one mathematical expression. Example: $4+\sqrt{5}, \sqrt{5}+\sqrt{6}, 3+2\sqrt{6}$, etc. Indices: The index (Plural: Indices) is the power (exponent) of a number. Example: $2^{5}$ has an index of 5 and 2 is its base. Here exponent 5 defines, how many times 2 has been multiplied by itself. ## Rules of Surds and Indices If a and b are any two real numbers, m and n are integers then (i) $a^{m}\times a^{n}=a^{m+n}$ Example: $2^{3}\times 2^{5}=2^{8}=256$ (ii) $\frac{a^{m}}{a^{n}}=a^{m-n}$ Example: $\frac{5^{3}}{5^{4}}=5^{-1}=\frac{1}{5}$ (iii) $(ab)^{m}=a^{m}\times b^{m}$ (iv) $x^{m^{n}}=x^{(m^{n})}$ (v) $(a^{m})^{n}=a^{m\times n}$ (vi) $a^{-m}=\frac{1}{a^{m}}$ (vii) $a^{0}=1$ (viii) $\sqrt[n]{a}=a^{\frac{1}{n}}$ (ix) $\sqrt[n]{x^m}=x^{\frac{m}{n}}$ (x) If $x^{\frac{1}{n}}=a$ then $x=a^{n}$ (xi) $\sqrt{a}\displaystyle\pm \sqrt{b} \neq \sqrt{a\displaystyle\pm b}$ (xii) $\sqrt{a}\times \sqrt{b}=\sqrt{ab}$ (xiii) $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ ## Rationalizing Surds Rationalization is the process of changing the denominator of a fraction to a rational number. Que: Rationalize $\frac{1}{\sqrt{5}+\sqrt{3}}$. Solution: Multiply the conjugate of the denominator by both, the numerator and the denominator. =$\frac{1\times (\sqrt{5}-\sqrt{3})}{(\sqrt{5}+\sqrt{3})\times (\sqrt{5}-\sqrt{3})}$ =$\frac{\sqrt{5}-\sqrt{3}}{(\sqrt{5})^{2}-(\sqrt{3})^{2}}$ =$\frac{1}{2}\times (\sqrt{5}-\sqrt{3})$ ## Surds and Indices problems Que 1: Find the value of $\frac{(243)^\frac{n}{5}\times 3^{2n+1}}{9^{n}\times 3^{n-1}}$. Solution: Numerator =$(3^{5})^{\frac{n}{5}}\times 3^{2n+1}=3^{n}\times 3^{2n+1}=3^{3n+1}$ Denominator=$3^{2n}\times 3^{n-1}=3^{3n-1}$ $\frac{3^{3n+1}}{3^{3n-1}}$$=3^{(3n+1)-(3n-1)}=3^{2}=9$ Que 2: If $x^{x\sqrt{x}}=(x\sqrt{x})^{x}$ then find the value of x? Solution: $x^{x^{1}\times x^{\frac{1}{2}}}=(x^{1}\times x^{\frac{1}{2}})^{x}$. $x^{x^{\frac{3}{2}}}=(x^{\frac{3}{2}})^{x}$. So, $x^{\frac{3}{2}}=\frac{3x}{2}$. After squaring both sides of the equation, $x^{3}=\frac{9x^{2}}{4}$. $x^{2}(x-\frac{9}{4})=0$. Now, $x=\frac{9}{4}$ is the correct answer (x=0 would give an indeterminate value) Que 3:  Find the value of $\frac{5^{n+3}-6\times 5^{n+1}}{9\times 5^{n}-5^{n}\times 2^{2}}$? Solution: $\frac{5^{n}(5^{3}-6\times 5)}{5^{n}(9-4)}$. =$\frac{125-30}{5}=19$ Que 4: Find the value of $[(\sqrt[3]{256^{2}})^{\frac{3}{2}}]^{\frac{1}{4}}$? Solution: $256=2^{8}$ =$[(2^{\frac{8\times 2}{3}})^{\frac{3}{2}}]^{\frac{1}{4}}$ =$(2^{8})^{\frac{1}{4}}=2^{2}=4$ Que 5:  Find the value of $\sqrt[3]{4\frac{12}{125}}$? 
Solution: $4\frac{12}{125}=\frac{512}{125}=(\frac{8}{5})^{3}$ So, the correct answer=$\frac{8}{5}=1\frac{3}{5}$ Que 6: If $\frac{\sqrt{4356\times \sqrt{x}}}{\sqrt{6084}}=11$, then find the value of x? Solution: After squaring both sides of the equation, $\frac{4356\times\sqrt{x}}{6084}=121$. $\sqrt{x}=169$. $x=169^{2}=28561$. Que 7: Arrange the following in ascending order if a>1 (i) $\sqrt[3]{\sqrt[4]{a^{3}}}$ (ii) $\sqrt[3]{\sqrt[5]{a^{4}}}$ (iii) $\sqrt{\sqrt[3]{a}}$ (iv) $\sqrt{\sqrt[5]{a^{3}}}$ Solution: $\sqrt[3]{\sqrt[4]{a^{3}}}=(\sqrt[4]{a^{3}})^{\frac{1}{3}}=(a^{\frac{3}{4}})^{\frac{1}{3}}=a^{\frac{1}{4}}$. $\sqrt[3]{\sqrt[5]{a^{4}}}=(\sqrt[5]{a^{4}})^{\frac{1}{3}}=(a^{\frac{4}{5}})^{\frac{1}{3}}=a^{\frac{4}{15}}$. $\sqrt{\sqrt[3]{a}}=(a^{\frac{1}{3}})^{\frac{1}{2}}=a^{\frac{1}{6}}$. $\sqrt{\sqrt[5]{a^{3}}}=(\sqrt[5]{a^{3}})^{\frac{1}{2}}=(a^{\frac{3}{5}})^{\frac{1}{2}}=a^{\frac{3}{10}}$. Now compare exponents: $(\frac{1}{4}, \frac{4}{15}, \frac{1}{6}, \frac{3}{10})\times 60=(15, 16, 10, 18)$. Here, LCM(4,15,6,10)=60 Now, 10<15<16<18 So, $\frac{1}{6}< \frac{1}{4}<\frac{4}{15}<\frac{3}{10}$ Correct ascending order= (iii)<(i)<(ii)<(iv) Que 8: Find the square root of $5+2\sqrt{6}$? Solution: $\sqrt{(\sqrt{3})^{2}+(\sqrt{2})^{2}+2\times\sqrt{3}\times\sqrt{2}}$. =$\sqrt{(\sqrt{3}+\sqrt{2})^{2}}$ =$\sqrt{3}+\sqrt{2}$ ## Important results of Surds and Indices 1. If $N=\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+......}}}}$ Then $N=\frac{1+\sqrt{1+4x}}{2}$ Proof: $N=\sqrt{x+N}$. $N^{2}=x+N$. $N^{2}-N-x=0$. $N=\frac{1\displaystyle\pm\sqrt{1+4x}}{2}$.   (Sridharacharya Formula) Note: N has a positive value so take the positive sign here. 2. If $N=\sqrt{x-\sqrt{x-\sqrt{x-\sqrt{x-......}}}}$ Then $N=\frac{-1+\sqrt{1+4x}}{2}$ 3. If $N=\sqrt{x\displaystyle\pm\sqrt{x\displaystyle\pm\sqrt{x\displaystyle\pm\sqrt{x\displaystyle\pm......}}}}$ then find the value of N? Here x=k×(k+1), k is a Natural Number (Positive Integer) Case 1: (+) sign N=k+1 Case 2: (-) sign N=k Example: $N=\sqrt{12+\sqrt{12+\sqrt{12+\sqrt{12+......}}}}=4$ $N=\sqrt{12-\sqrt{12-\sqrt{12-\sqrt{12-......}}}}=3$ Here, 12=3×4 4. If $N=\sqrt{x\sqrt{x\sqrt{x\sqrt{x......}}}}$ then N=x Proof: $N=\sqrt{x\cdot N}$. $N^{2}=x\cdot N$. $N(N-x)=0$. So, N=x    (For x>0, N can’t be 0) 5. If $K=\sqrt{x\sqrt{x\sqrt{x\sqrt{x.....n_{th}term}}}}$ then find the value of K? Number of terms= Number of square roots=n $K=(x)^{\frac{2^{n}-1}{2^{n}}}$. Example: $K=\sqrt{5\sqrt{5\sqrt{5\sqrt{5}}}}=(5)^{\frac{2^{4}-1}{2^{4}}}=(5)^{\frac{15}{16}}$. Question: $\frac{\sqrt{100!\sqrt{100!\sqrt{100!\sqrt{100!}}}}}{\sqrt{99!\sqrt{99!\sqrt{99!\sqrt{99!}}}}}=\sqrt[16]{10^x}$ Find the value of x? Solution: Numerator=$(100!)^\frac{15}{16}$. Denominator=$(99!)^\frac{15}{16}$. $\sqrt[16]{10^x}=(100!/99!)^{\frac{15}{16}}=(100)^{\frac{15}{16}}=\sqrt[16]{10^{30}}$ The value of x=30 1. ### What is the difference between Surds and Indices? Surds can be expressed as the nth root of a rational number while Indices refer to the power to which a number is raised. 2. ### Is a surd an irrational number? Yes, a surd is irrational because, in decimal form, it goes on without repetition. 3. ### How can we identify Surds? A root of a rational number, written with a radical sign ($\sqrt[n]{}$), whose value is irrational, is called a surd. 4. ### Can a Surd be expressed in Index form? Yes, a surd can be expressed in the form of a fractional index. Surd Form: $\sqrt[n]{a}$ Index Form: $a^{\frac{1}{n}}$ 5. ### Is the square root of ‘pi’ a Surd or not?
No. A surd is the nth root of a rational number, and since $\pi$ is irrational, $\sqrt{\pi}$ is not a surd.
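As a quick numerical check of results 1 and 3 above (an example of my own, with $x=6=2\times 3$): $$N=\sqrt{6+\sqrt{6+\sqrt{6+......}}}=\frac{1+\sqrt{1+4\times 6}}{2}=\frac{1+5}{2}=3,$$ which agrees with result 3 (here $k=2$, so the (+) sign gives $N=k+1=3$), and indeed $3^{2}=6+3$.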
# Big-O complexity when c is a tiny fraction Finding Big-O is pretty straightforward for an algorithm where $f(n)$ is $$f(n) = 3n^4 + 6n^3 + 10n^2 + 5n + 4$$ The lower powers of $n$ simply fall off because in the long run $3n^4$ outpaces all of them. And with $g(n) = 3n^4$ we say $f(n)$ is $O(n^4)$. But what would Big-O be if instead of 3 we were given a really small constant, for example $$f(n) = 0.0000000001n^4 + 6n^3 + 10n^2 + 5n + 4$$ Would we still say $f(n)$ is $O(n^4)$? • short answer yes – AJed Jan 17 '13 at 21:18 • This shows one of the weaknesses of Big-O applied to algorithm complexity. Just because it's bigger in the very (very) long run doesn't mean it's better with reasonable input. – Khaur Jan 18 '13 at 10:44 Medium answer - yes. As you said for the previous case, in the long run $n^4$ outpaces all of them. This is still true despite the constant in front. Check it out: plot. Also, remember that $n^3$ and $n^4$ are both $O(n^4)$, and in fact are both $O(n^{10})$ because big-O is an upper bound. So you might ask "is there any tighter big-O bound on this function than $O(n^4)$, like $O(n^3)$?", and the answer would be no. Remember, when we write $f(x) = O(g(x))$ we are saying that there are two positive constants, $c$ and $x_0$, such that $|f(x)| \le c|g(x)|$ for all $x \ge x_0$. Asymptotic analysis is concerned with how functions behave in the limit. $$f(n)=an^4+6n^3+10n^2+5n+4$$ This function is $O(n^4)$. Your question is "Can we change the value of the constant $a$ in such a way that $f(n)$ is no longer $O(n^4)$?" The answer is no. For any $a$, we can choose $c$ and $n_0$ such that $|f(n)| \le c|n^4|$ for all $n \ge n_0$. In fact, you have already stated this: The lower powers of $n$ simply fall off because in the long run $3n^4$ outpaces all of them. This holds true regardless of the value of $a$ in the function. It may take "longer" for $an^4$ to outpace the other terms, but it eventually will. • Being needlessly pedantic (we often must for introductory questions), but it is only "no" if $a > 0$. – Artem Kaznatcheev Jan 18 '13 at 5:25 • @ArtemKaznatcheev You are right to point this out. I believe that $f(n)$ being positive is often assumed/implied; in CS and algorithm analysis $n$ usually represents an input size so it doesn't make sense for it to be negative. But O notation describes the behavior of mathematical functions, not just algorithms, and the actual definition includes absolute values -- I have updated my answer to reflect this. – David Jan 18 '13 at 6:49
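To make the bound completely explicit (a worked instance of the definition quoted above, with my own choice of witnesses): for $f(n)=an^4+6n^3+10n^2+5n+4$ with $a>0$, every lower-order term is at most its coefficient times $n^4$ once $n\ge 1$, so $$f(n)\le (a+6+10+5+4)\,n^4=(a+25)\,n^4 \quad\text{for all } n\ge 1.$$ Taking $c=a+25$ and $n_0=1$ witnesses $f(n)=O(n^4)$; with $a=0.0000000001$ this is just $c\approx 25$, so the tiny leading coefficient changes nothing.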
## Fractions are Hard! ### 4.1 Egyptian Fractions Scholars of ancient Egypt (ca. 3000 BCE) were very practical in their approaches to mathematics and always sought answers to problems that would be of most convenience to the people involved. This led them to a curious approach to thinking about fractions. Consider the problem: Share 7 pies among 12 boys. Of course, given our model for fractions, each boy is to receive the quantity $$\dfrac{7}{12}$$ of a pie. But this answer has little intuitive feel. But suppose we took this task as a very practical problem. Here are the seven pies Is it possible to give each of the boys a whole pie? No. How about the next best thing – each boy half a pie? Yes! There are certainly 12 half pies to dole out. There is also one pie left over yet to be shared among the 12 boys. Divide this into twelfths and hand each boy an extra piece. Thus each boy receives $$\dfrac{1}{2}+\dfrac{1}{12}$$ of a pie and it is indeed true that $$\dfrac{7}{12}=\dfrac{1}{2}+\dfrac{1}{12}$$. Question: a) How do you think the Egyptian’s might have shared five pies among six girls? b) How might they have shared twelve pies among seven students? The Egyptians insisted on writing all their fractions as sums of fraction with numerators equal to $$1$$. They did this because fractions of the form $$\dfrac{1}{N}$$ have a good intuitive feel to them. For example $$\dfrac{3}{10}$$ was written as $$\dfrac{1}{4}+\dfrac{1}{20}$$ $$\dfrac{5}{7}$$ was written as $$\dfrac{1}{2}+\dfrac{1}{5}+\dfrac{1}{70}$$. That is, to share three pies among ten students, the Egyptians said to give each student one quarter of a pie and one twentieth of a pie. To share five pies among seven students, the Egyptians suggested giving out half a pie, one fifth of a pie, and one seventieth of a pie to each student. Question:  It is true that $$\dfrac{4}{13}=\dfrac{1}{4}+\dfrac{1}{18}+\dfrac{1}{468}$$. What does this say about how the Egyptians may have advised sharing four pies among thirteen girls? Comment: A fraction with numerator $$1$$ is today called a unit fraction. The Egypyians denoted their unit fractions with a raised dot. For example, $$\dot{2}=\dfrac{1}{2}$$ and $$\dot{103}=\dfrac{1}{103}$$. Thus $$\dfrac{3}{10}$$ was written as $$\dot{4}+\dot{20}$$, and so on. They expressed all fractions as sums of unit fractions – except for the fraction $$\dfrac{2}{3}$$, which had its own special symbol. (Probably because this fraction arose so often in day-to-day work.) Curiously, the Egyptians did not like to repeat fractions. Although it is obviously true that $$\dfrac{2}{5}=\dfrac{1}{5}+\dfrac{1}{5}$$ the Egyptians really did think it better to give each person receiving pie piece as large as possible, and so preferred to work with $$\dfrac{2}{5}=\dfrac{1}{3}+\dfrac{1}{15}$$ (even though it meant giving out a tiny piece of pie with that bigger piece). Question: Consider the fraction $$\dfrac{2}{11}$$ . a)       Show that $$\dfrac{1}{5}$$ is bigger than $$\dfrac{2}{11}$$. b)      Show that $$\dfrac{1}{6}$$ is smaller than $$\dfrac{2}{11}$$. c)       Work out $$\dfrac{2}{11}-\dfrac{1}{6}$$. d) Use c) to write $$\dfrac{2}{11}$$ the Egyptian way. Question: Consider the fraction $$\dfrac{2}{7}$$. a)       What is the biggest fraction $$\dfrac{1}{N}$$ that is still smaller than $$\dfrac{2}{7}$$? b)      Write $$\dfrac{2}{7}$$ the Egyptian way. Question: CHALLENGE  Write $$\dfrac{17}{20}$$ the Egyptian way. 
Question: CHALLENGE  What is the largest value of $$n$$ for which the fraction $$\dfrac{n-1}{n}$$ can be written in the form $$\dfrac{1}{a}+\dfrac{1}{b}$$ with $$a$$ and $$b$$ positive integers? Question: CHALLENGE  Find a formula for the number of ways one can write $$\dfrac{1}{2^{n}}$$ in the form $$\dfrac{1}{a}+\dfrac{1}{b}$$ with $$a$$ and $$b$$ positive integers, $$a<b$$. The Egyptians were adept at computing sums of unit fractions. As the exercises show, it is typically not easy to do. EXAMPLE: Write $$\dfrac{4}{13}$$ as a sum of distinct unit fractions. Answer: The Egyptians preferred to always “take out” the largest unit fraction possible from any given fraction at each stage. Note that $$\dfrac{4}{13}=\dfrac{1}{3\dfrac{1}{4}}$$ which shows that $$\dfrac{1}{3}$$ is larger than $$\dfrac{4}{13}$$, but $$\dfrac{1}{4}$$ isn’t. With some scratch-work on the side we see that $$\dfrac{4}{13}=\dfrac{1}{4}+\dfrac{3}{52}$$. Now $$\dfrac{3}{52}=\dfrac{1}{17\dfrac{1}{3}}$$ which shows that $$\dfrac{1}{18}$$ is the next largest unit fraction we can “take out.” We have $$\dfrac{3}{52}-\dfrac{1}{18}=\dfrac{1}{468}$$ and so $$\dfrac{4}{13}=\dfrac{1}{4}+\dfrac{1}{18}+\dfrac{1}{468}$$. We’re done! EXERCISE: Write $$\dfrac{3}{7}$$ as a sum of unit fractions by removing, at each stage, the largest unit fraction possible. Do the same for $$\dfrac{5}{11}$$. In 1202 Italian scholar Fibonacci questioned whether or not every fraction can be written as a sum of distinct unit fractions. Does the process of “taking out” the largest unit fraction always yield the desired outcome? He answered this question in the affirmative. CHALLENGE: Fibonacci’s Proof   Suppose we are working with a fraction $$\dfrac{a}{b}$$ with $$a<b$$, and suppose $$N$$ is the smallest integer for which $$\dfrac{a}{b}>\dfrac{1}{N}$$. (Thus $$\dfrac{a}{b}<\dfrac{1}{N-1}$$.) a) Write $$\dfrac{a}{b}-\dfrac{1}{N}$$ over a common denominator and show that this new fraction has a numerator that is both positive and smaller than $$a$$.   b) Explain why, if we repeat this process, we shall eventually obtain a fraction with numerator equal to $$1$$.   c) Explain why $$\dfrac{a}{b}$$ is then sure to equal a sum of (distinct) unit fractions. Question:  Use Fibonacci’s method to write the fraction $$\dfrac{1}{1}$$ as an infinite sum of distinct unit fractions. What appears? Can anything be said about the denominators that arise?
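A compact way to state the greedy step used in the worked example and in Fibonacci’s proof above (the algebra here is mine, not from the text): if $$N$$ is chosen so that $$\dfrac{1}{N}\le\dfrac{a}{b}<\dfrac{1}{N-1}$$, then $$\dfrac{a}{b}-\dfrac{1}{N}=\dfrac{aN-b}{bN}.$$ For $$\dfrac{4}{13}$$ this gives $$N=4$$ and $$\dfrac{4}{13}-\dfrac{1}{4}=\dfrac{3}{52}$$, then $$N=18$$ and $$\dfrac{3}{52}-\dfrac{1}{18}=\dfrac{2}{936}=\dfrac{1}{468}$$, reproducing the decomposition found above.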
# Investment strategies for uncharted waters Surrounded by unprecedented risks, family investors in Asia are looking to extract any available upside from a tricky environment In a year that has seen the onset of a pandemic, a near-global economic shutdown and a highly contentious US election, it should come as no surprise that market volatility has been – and remains – relatively elevated. However, even before these tumultuous events, many wealthy family investors across Asia were becoming more careful with their investments – wary that the longest bull market in history had to end sometime. Their prudence proved well judged when storm clouds started gathering over global markets back in March. “Family offices were a little bit cautious, even pre-Covid‑19, because it was late in the economic cycle,” says Anurag Mahesh, Singapore-based co-head of global family office coverage for Asia‑Pacific (Apac) at UBS. “So they were already better positioned going into March.” Fast-forward to the end of the year and private wealth clients are continuing to favour investment solutions that provide them with downside protection, while also retaining some upside exposure. This sentiment has translated into growing interest among the family office client segment in market-neutral and liability management strategies, as well as market access structures such as actively managed certificates. China has been another focus for family offices, given the strong performance of Chinese stocks this year. Apac family offices are already very active in China but, with recent relaxations to China’s qualified foreign institutional investor scheme, family offices across the region are now looking closely at new investment opportunities in the China market. This report takes a closer look at the reasons behind each of these emerging trends, and explains how such solutions can serve family offices with the tools they need to navigate a new and challenging investment environment. ### Overpriced Ask any of Asia’s family office executives about the key challenges and pitfalls facing them in today’s investment environment, and a common theme quickly emerges. Family offices are concerned about an uncertain economic outlook; anxieties that the strong rally seen in global equity markets since the Covid‑19-driven crash in the first quarter of 2020 has done little to assuage. In this mindset, decisions around which assets or securities to invest in becomes a complex predicament. “Valuations are pretty lofty across a lot of asset classes – that’s generally what is bothering me,” says Lawrence Lee, chief investment officer of Singapore-based independent asset manager Sino Suisse Capital. “And if you go underweight [on equities] you tend to miss out on the performance.” Manisha Bhargava, chief executive of Singapore-based Straits Investment Management, which manages money for Straits Trading Company, is wary that equity markets cannot rally in perpetuity. “We’re hedging a little more than we would normally, because the stock markets have had a great run,” he says. In portfolio allocations, the uncertainty precipitated a flight to quality in the first half of the year. According to UBS’s Global family office report 2020, based on the findings of a survey conducted between March and May, family offices ramped up their exposures to cash, but also to gold, real estate and developed market bonds, as well as equities and private equity. 
Kerri Lim, head of global family office sales, global markets, Asia‑Pacific at UBS, says a number of global family office clients have increased exposure to private equity, private credit or real estate, reasoning that these assets are providing a better risk/reward outcome compared with public assets in the current environment. UBS’s survey of family offices reveals the extent to which reallocation to better-performing assets has been a focus among global family office clients in the past year. In what was a busy trading period, roughly two-thirds of respondents (65%) reallocated up to 15% of their portfolios. Here investors were split into two camps: opportunistic investors exploiting market dislocations and those more risk-averse, who are looking to reduce pro-growth exposure by adding cash and gold. “Perhaps you’re not fully invested in equities, with maybe 90% of your typical allocation, and you have more in cash and money market exposure than usual,” says Munish Randev, founder of Cervin Family Office, a Mumbai-based multifamily office. “This stance balancing is very important for a family office, along with a focus on reallocating at an intra-asset class level, depending on the key risk changes.” ### Cash concerns Lim adds that, while the move to ramp up cash allocations is also understandable, it is unlikely to be beneficial in the longer term. Family offices’ clients suspect that central banks will continue to pursue accommodative monetary policies so inflation could, at some point, become problematic for cash investments. “So they have a conundrum,” she says. “They are cautious, but they don’t think that sitting on cash will help.” That is one reason many family offices have been increasing their exposures to assets such as gold, which offer a safe haven against inflation and currency debasement. “Holding gold is something they see as essential as a hedge in the current macroenvironment,” says Lim. Gold has rallied strongly since March, reaching a record high of $2,060 per ounce in August. After positive news on the development of vaccines for Covid‑19 bolstered hopes of a smooth economic recovery, the price fell back to$1,780 per ounce at the end of November. However, the precious metal is still seen as a safe haven and continues to be favoured by investors for its diversification properties, says Lim. A case could be made for a bout of deflation or inflation materialising in the not so distant future, says UBS’s Mahesh. Gold would be an ideal hedge against either scenario. If demand and therefore prices fail to pick up post-pandemic, causing deflation, the Federal Reserve would be expected to keep interest rates low, resulting in negative real rates and benefiting gold. On the other hand, gold is a customary hedge against inflation, because it typically preserves its value over time and therefore offers protection when currencies start depreciating, says Mahesh. Family office allocations to the precious metal remain relatively small, but have been growing over recent months from 1% to perhaps 2% or 3% on average, says LH Koh, Hong Kong-based co-head of global family office coverage for Apac at UBS ### Signs of recovery Family offices ought to keep in mind how quickly market sentiment can change on the back of events, and be ready to adjust their investment portfolio allocations accordingly. Pharmaceutical companies begining to roll out Covid‑19 vaccines, as happened in the UK in December, is one such game-changing event. 
The question on the minds of many family investors is when will it be time to rotate into so-called ‘recovery stocks’, such as cyclicals, as the virus’ impact appears to be receding? Mahesh says family offices have already been pondering this question for some months prior to the recent vaccine announcements. “[Family offices] had a huge run, so part of the profits from that are going towards buying some protection. They’re also moving from outright long positions into some relative-value opportunities. For example, they may not be long technology outright, but long technology and short some other sectors, such as financials.” Family offices know that at, some point, it will be time to reconsider the securities of companies in industries most heavily damaged by the pandemic, such as the travel and hospitality sectors. That said, some uncertainty remains around how long it will take for business to return to normal once effective vaccines arrive and the recovery trajectory of hard-hit sectors and the global economy in general. In this ‘new normal’, family offices are likely to continue pursuing cautious strategies that incorporate some downside protection while avoiding large directional bets on asset prices. As this report details, there are a host of new investment products now on the market that can help family offices achieve just that. Family office investing – Special report 2020
I am writing this update on my phone, from a beautiful, but currently rainy, Spanish town. The last couple weeks were quite eventful - let me summarize: First, I moved back from Russia to Germany. My wife and me will both do our masters in Munich! Then we celebrated our wedding a second time, with my German family and friends - and with a special guest who came all the way from Australia: Michael! Michael and me decided to spend the next week pair-programming Citybound, something that we already planned to do "some day". Now we had this opportunity and even though I was half-distracted by my day job which I took up again (gotta pay rent somehow) I would say we made a lot of meaningful progress. We even recorded a video outside together, but due to my mistake of ignoring the wind, the audio is ruined. And my rambling there is pretty incoherent anyways. That's why I'm writing this as a text post instead! Given that, I will go into a little more technical detail - even though I already hate writing on my phone. (I have to write it from my phone because I'm on holiday and I didn't bring more than my phone, because it's supposed to be holiday, right?) Anyways... ## The initial plan: multithreaded Citybound The goal we set ourselves for this week was to introduce multi-threading to Citybound, using a small part of the game that we would parallelize as a proof of concept. We didn't exactly get there, but we got a lot of things leading there done, and covered some other topics in the process. ## Moving to Electron.js Citybound runs inside a tweaked Chrome browser, which allows us to both ship the game as a single bundled executable and to make use of some features that are not enabled by default or completely not present in normal browsers. For most of the development, NW.js was used as this browser container, but recently Electron.js emerged and turned out to be the better-maintained, cleaner-designed, more full-featured and more up-to-date alternative. Moving all the code of Citybound to Electron went quite well and already paid off: we could already start using WebGL2, for example. ## Hardcore IPC: Shared Memory Modern JavaScript does offer WebWorkers as primitives for parallelization, but the way they communicate with each other or with the main thread is quite clunky: they can either share data by copying, or they have to completely give up all access to data that is handed over to another thread - these options are either too slow or too impractical, neither something that you can do repeatedly every simulation frame. Michael had the inspiration and the bravery/foolishness to use a much lower-level way of sharing data: shared memory. This would definitely give us the required speed, but it would make us directly responsible for not getting into any concurrency issues like race conditions. Because we are using a tweaked browser like Electron.js we can make almost arbitrary functionality available in JavaScript. In our case there even already existed an NPM package which did just what we needed (shared memory, using mmapped files, to be precise), but only on Linux and Mac. Implementing this also for Windows should be "kinda straightforward", but as this oxymoron tells you, it is not one of the things I look forward to. Michael played around with this NPM package and made a simple test case work, with a main thread and a worker thread, writing and reading shared memory.
## How we want Citybound to be multithreaded

Now that we had the low-level tools to make parallelization possible, we needed to come up with a strategy to correctly make the whole game work in a parallelized context. Our proposed solution is quite simple: the whole city is spatially subdivided into chunks, each of which functions like a small city itself. Each chunk can be processed by a different thread. The chunks only need to communicate when an agent passes the border between two chunks: they then exchange information regarding this agent and its current trip - it is completely handed over. This means that each chunk processes only the agents inside it - only it has write access to their underlying data. Still, every thread can read-access the information of all agents in all chunks of the city, which might be needed as reference information. In my oversimplified imagination of the whole idea this completely avoids all race conditions and everything is perfect. Because the "surface" of each chunk is much smaller than its "volume", the number of agents crossing the border between chunks should be much lower than the number of agents that stay within one chunk. This should keep the overhead of communication between chunks fairly low.

## Storing people in binary data

One problem with the shared memory that we are using is that it is just flat byte buffers - there is no easy way to store (potentially quite complex) high-level JavaScript objects. What I ended up working on for most of the week was writing a proxy wrapper that allows accessing this huge shared byte buffer as if it was a collection of high-level objects - making sure that in the background all information is correctly serialized to, and rebuilt from, a very compact binary representation, using a simple data layout schema (which each entity type in the game will have). This sounds at first like a lot of hassle for no new functionality, but it is what really allows us to use shared memory in our high-level simulation, and it has some pleasant side effects: First, it gives us savegames for free, since the serialization/parsing that they would require happens all the time anyways. All that needs to be done is to dump all shared memory to disk on save and to copy a stored buffer back into shared memory on load. If all nontransient game state lives in this shared memory (which I hope), then savegames are indeed easy. Another benefit is that we might be able to port parts or all of the simulation to another programming language, running in a different thread, but also just accessing this shared memory - and the existing JavaScript code won't even notice the difference. Finally, I think that some WebGL-related stuff might become simpler or faster because all of the simulation data already exists in raw binary form, ready to be sent to the GPU (for example for instanced/batch drawing of cars). My progress on this I would classify with the ever-so-vague "mostly done".

## A day in the life of a citizen

Thinking about the parallelizability of the simulation forced us to come up with clear architectural plans for the economy, and my existing model for that still had some holes. One such hole was how to organize a citizen agent in the code - so far I had just assumed that all of a citizen's behavior would be contained in a simple class and that all the points where an agent can make a choice would be explicitly hardcoded.
Stated like that, it already sounds bad - Michael had the idea to instead model citizens as state machines and sketched out what the states and transitions for a normal day of a citizen would look like. Our state machines are not state machines in the strictest sense: they are probabilistic (meaning each transition has a probability of being chosen relative to alternative transitions) and transitions can include side effects that change properties of the citizen, their family, or the whole world. The probabilities of transitions can then also depend on such external properties. This state machine model made the whole process of defining an agent's behavior much more declarative and extensible. There turned out to be a couple of important hub states that a lot of transitions lead to and depart from (for example "leavehome") - which would also work really well as hook points for mods to easily extend the behavior of citizens, starting at any existing state in their day and bringing them back to an existing state. In fact, this structure seemed to have hit the creativity sweet spot between freedom and constraint. Especially after adding a visualization of our current state/transition graph, it became very easy to spot mistakes like dead-end states, as well as to come up with new states and transitions just by looking at the graph for long enough. The biggest surprise was that even very complex behaviors like getting married or moving out of the parents' home could be modeled there quite easily. Even rare or obscure events, like a pyromaniac setting a couple of houses on fire and then returning home, happily fit into this framework as well. The possibility space for mods there is certainly inspiring!

## What's next

Now we have to take all these ideas, half-implemented pieces, architecture plans and programmatic sketches and turn them into working code. This will take some time, but we'll keep you up to date, and once I've settled in enough, I definitely want you to participate in livestreams again! Until then, I hope the perspective I laid out here gives you some food for thought and continued excitement for the wonderful game that this will become! See you all soon!

Discussion on Reddit · on Simtropolis · on Something Awful

Any amount supports the development ♥

All going according to plans! (Sorry...)

Discussion on Reddit · on Simtropolis · on Something Awful

Any amount supports the development ♥

Long time no see! Text summary coming soon.

Discussion on Reddit · on Simtropolis · on Something Awful

Any amount supports the development ♥

I've been much quieter than I want to be or promised to be - sorry for that. I have been very busy with under-the-cover programming and preparations for the (actually very complex) economy system that Citybound will have. Most of this work requires deep thinking and thus doesn't lend itself well to livestreams, and so far it hasn't produced any visible results - that's why I didn't write about it. Today, however, Michael was so kind as to post a really extensive and well-written report on what both of us have been programming and thinking about behind the scenes. It's very technical, but it also gives a very good impression of how Michael and I are working together. And: it even hints at some upcoming fundamental gameplay ideas for Citybound - so I can only encourage you to read it!
See you soon, Anselm

# Progress Update

## by Michael Lucas-Smith

First off, internally I upgraded our 2D drawing library, which is primarily used for debugging interesting geometric problems when they occur. Unlike the canvas2d that comes with HTML5, this drawing library knows what dots and line segments and rays are. It auto-scales the internal content, while keeping things like rays and text the right size. This has sped up my work in particular a lot. It was around this time that Anselm started to plot out his ideas for the economy. After a bit of back and forth on the design, Anselm settled on creating an abstracted bigraph of the transportation network. The farther out you zoom, the simpler the network becomes. On the last live stream there was a moment where a red line was visible - that was part of the abstract lower-level-of-detail network. This will be used to map out economic information. It's going to be wicked when it's done. Sometime in this period Anselm started using my straight skeleton code for part of the zoning. It worked out of the box for him, which I was very surprised about. I'm still working on this stuff, but more on that later in this status post. While this was going on I was finally getting to the degenerate test cases for straight skeletons, such as +, L and U shapes. The deeper I got into the degenerate test cases, the more I started to notice that perfectly parallel geometric structures did not have perfectly parallel coordinates. I realised that gl-matrix, a JavaScript library we've been using for vectors and matrices, was to blame. How was it to blame, though? This is interesting in a computer-programmer kind of way. gl-matrix boasts speed by way of its design. Internally it tries to use the best representation of vectors it can for maximum efficiency. In this case, it was using Float32Array. Now, we graduated from 32-bit to 64-bit in our development quite a while back, but a Float32Array is (naturally) 32-bit and the standard number representation in JavaScript is 64-bit. What this meant was that every time we put numbers into a gl-matrix vector and took them out again, we'd be introducing error (for example, 0.1 stored in a Float32Array reads back as roughly 0.10000000149) - more and more error the more that happened. It happened a lot. This spurred me to suggest I make a new vector and matrix library for Citybound... yes, really. I've implemented a few of these over the years and I didn't look forward to implementing determinants and inverts and matrix multiplication again - so I went for a meta approach. Instead of writing out the expanded versions of those functions by hand, getting it wrong, and writing a billion tests to make sure I finally get it right, I made a code generator using static single assignment. Then I wrote code that builds code for vectors and matrices. The short of it is that on startup we actually generate our vector and matrix classes, and we can generate combined functions that keep variables unfolded and optimised... without having to write everything three times for each different size of vector. This new vector and matrix library, boringly named numerics, keeps everything in 64-bit, and numerous bugs I was seeing in my straight skeleton code disappeared. Much to our delight, numerous bugs in Anselm's code disappeared too. This could have been a very nasty bug to track down the farther we got into the development process, so I'm glad it was caught sooner rather than later. Somewhere in the mix here we started using more and more ES6 features and simply loving them.
Going back to Ecmascript 5 would be pure torture and pain, and it's not going to happen. However, whenever we ended up in a debugger at a for-of, the VM would crash out. Ouch, this was really painful. As the earliest adopter of ES6 for-of in our code base, I was dealing with it a lot. When Anselm started hitting it too, enough was enough. Anselm then integrated the babel transpiler. What a cool name. It lets us use way more features of ES6 than V8 currently supports, and it also fixed up some of our debugging issues in the long run too. It was Anselm's turn to be ahead on ES6 adoption for several weeks (until today, in fact!). Anselm also simplified our module definitions by introducing Object.adopt as a replacement for Object.defineProperties, which I had been using extensively. This is a really nice addition, but I won't go into its details here. The interesting point of note is that it doesn't allow Anselm to do auto-complete in WebStorm. LOL!.. so he went on a quest and started to try out GitHub's Atom. I was part of the beta for Atom, and at the time it was slow, buggy and lacking features compared to Sublime Text. It has come a long way since then and I am now using it as my main text editor for Citybound. Anselm is still not happy with his autocomplete situation. Meanwhile I introduced computed properties so that I could fix up some fundamental issues in the straight skeleton code, and for the first time after putting them in, I was able to run the random shape generator without a crash and all the degenerate tests passed. I have now begun working on open paths, such as zoning along the side of a road. After that will come support for curves in some manner. We did some design on how we'd implement [a planning mode for the game] and I can tell you right now, wow, you guys are going to absolutely love what we have in store. If you're still reading this far, feel free to enjoy a bit of hype, because planning is now an integral part of the gameplay and will enrich every part of the design. Next up, Anselm built something really cool on top of my static single assignment meta-programming library, which is called CodeBuilder. He used the babel transpiler's parser to parse his method code and then recompile it using the CodeBuilder, which allows even simpler templating of numerical vector or matrix operations. This is seriously cool. To round this out, the latest work we've been touching on is iterators and reducers. Since ES6 gives us a lot more power with iterators and generators, and using them is really lightweight on the garbage collector, using them in general is a good idea. But we wanted more power with them. A few random thoughts from several weeks back materialised into real code, and now we're discussing the second iteration of that, where we extend the built-in generators to allow chaining of generators for more literate programming in the long term. In short, a heck of a lot has been going on, and I didn't even talk in depth about the stuff Anselm has been implementing on top of all this technology: economy, road networks, marketplaces, simulation life cycles, demand and, of course, the (hype) planning mode stuff.

Cheers, Michael

Discussion on Reddit · on Simtropolis · on Something Awful

Any amount supports the development ♥

The past week I spent a lot of time reading through papers and literature on urban economics - I want the economy in Citybound to be implemented as closely to actual research as possible.
There are many approaches and methodologies - but one struck me as the most simple and elegant: the tried and true monocentric city model.

The Monocentric city model is the cornerstone of urban economics since its formulation in the decade of 1960 by Alonso, Muth and Mills. (Source)

I wasn't quite sure if I could trust such comparatively old research, but many modern approaches refer to this model and still praise its accuracy:

The implications of the monocentric model, especially for the relations between distance to the Central Business District (CBD) and population density, housing prices, land rent and capital/land ratio are widely known and have been tested many times for a great number of cities and countries. (Source)

When I started to read about how the model works in detail, and how that could be represented in gameplay in Citybound, I became very excited!

• Consider a city in a featureless plain
• The optimal, cost-minimizing shape of a city is a circle
• The city has a fixed population level N
• All workers must commute to the central business district (CBD), which is assumed to be a point

(Source)

## Citybound: 2D is enough!

This allowed for some radical simplifications for Citybound:

• since the terrain is considered featureless, we can switch from 3D to 2D graphics, which will make a lot of things in the game engine easier and will make future development much quicker
• roads can only be drawn directly to the CBD or in concentric circles around it - greatly simplifying the road geometry code and pathfinding
• there are only two types of zones: CBD and residential. You can only zone CBD in the center of the map.

Here is an early screenshot of my first iteration of Citybound towards this new model:

## Even better than 2D: 1D

Soon, I found an even more drastic simplification, as described by Ogawa and Fujita:

[...] with the assumption of a linear or circular city, the spatial characteristics of each location in the city can be described simply by the distance from the CBD. (Source)

A whole city - described by just a line! Suddenly my dream of simulating multi-million-resident cities fluently, even on old hardware, became graspable! Porting all of our graphics and game logic to 1D will take some time, but I was able to create an already impressive sneak preview of the transition progress: a full-scale, 1D, monocentric representation of zoning and traffic in a 3-million-resident city, running smoothly in Citybound:

## Citybound - pure minimalistic gameplay

The shift to 1D is not everything. In accordance with the model, the number of citizens and jobs is considered constant. Gameplay thus reaches its minimalist peak: you simply set the size of your city, the number of residents and the number of jobs, and watch the city reach its economic equilibrium. A truly zen-like experience.

## What Michael's been up to

As always, Michael quickly adapted to this shift in vision for the game and is already porting his polygon-skeleton algorithm (and the whole geometry library!) to 1D. This means that we will be able to have fully procedurally generated, complex buildings in Citybound, even in one dimension! I hope you're all as excited about these changes as I am - you will hear from me soon!

Discussion on Reddit · on Simtropolis · on Something Awful
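For readers curious about the substance behind the model quoted above, the core spatial-equilibrium relation of the monocentric model (often called the Alonso-Muth condition) can be sketched as follows; this is background from the urban-economics literature rather than anything stated in the post itself:

$$\frac{\mathrm{d}p(x)}{\mathrm{d}x}\,h(x) \;=\; -\,T'(x),$$

where $x$ is the distance from the CBD, $p(x)$ the price per unit of housing, $h(x)$ the amount of housing a household consumes at $x$, and $T(x)$ the commuting cost. In equilibrium, the housing-cost savings from moving slightly farther out exactly offset the extra commuting cost, which is what produces the declining rent and density gradients mentioned in the quoted sources.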
# conda config and context

The context object is central to many parts of the conda codebase. It serves as a centralized repository of settings. You normally import the singleton and access its (many) attributes directly:

    from conda.base.context import context

    context.quiet
    # False

This singleton is initialized from a cascade of different possible sources. From lower to higher precedence:

1. Default values hardcoded in the Context class. These are defined via class attributes.
2. Values defined in the configuration files (.condarc), which have their own precedence.
3. Values set by the corresponding command line arguments, if any.
4. Values defined by their corresponding CONDA_* environment variables, if present.

The mechanism implementing this behavior is an elaborate one, with several types of objects involved.

## Anatomy of the Context class

conda.base.context.Context is a conda-specific subclass of the application-agnostic conda.common.configuration.Configuration class. This class implements the precedence order for the instantiation of each defined attribute, as well as the overall validation logic and help message reporting. But that's it: it is merely a store of ParameterLoader objects which, in turn, instantiate the relevant Parameter subclasses in each attribute. Roughly:

    class MyConfiguration(Configuration):
        list_of_int_field = ParameterLoader(SequenceParameter([1, 2, 3], int))

When MyConfiguration is instantiated, those class attributes are populated by the .raw_data dictionary that has been filled in with the values coming from the precedence chain stated above. The raw_data dictionary contains RawParameter objects, subclassed to deal with the specifics of their origin (YAML file, environment variable, command line flag). Each ParameterLoader object will pass the RawParameter object to the .load() method of its relevant Parameter subclass, which is designed to return the corresponding LoadedParameter object counterpart. It's a bit confusing, but the delegation happens like this:

1. The Configuration subclass parses the raw values of the possible origins and stores them as the relevant RawParameter objects, which can be:
   • EnvRawParameter: for those coming from an environment variable
   • ArgParseRawParameter: for those coming from a command line flag
   • YamlRawParameter: for those coming from a configuration file
   • DefaultValueRawParameter: for those coming from the default value given to ParameterLoader
2. Each Configuration attribute is a ParameterLoader, which implements the property protocol via __get__. This means that, upon attribute access (e.g. MyConfiguration.list_of_int_field), the ParameterLoader can execute the loading logic: finding potential type matches in the raw data, loading them as LoadedParameter objects, and merging them with the adequate precedence order.

The merging policy depends on the (Loaded)Parameter subtype. Below is a list of available subtypes:

• PrimitiveParameter: holds a single scalar value of type str, int, float, complex, bool or NoneType.
• SequenceParameter: holds an iterable (list) of other Parameter objects.
• MapParameter: holds a mapping (dict) of other Parameter objects.
• ObjectParameter: holds an object with attributes set to Parameter objects.

The main goal of the Parameter objects is to implement how to typify and turn the raw values into their Loaded counterparts.
These implement the validation routines and define how parameters for the same key should be merged:

• PrimitiveLoadedParameter: the value with the highest precedence replaces the existing one.
• SequenceLoadedParameter: extends with no duplication, keeping precedence.
• MapLoadedParameter: cascading updates, highest precedence kept.
• ObjectLoadedParameter: same as Map.

After all of this, the LoadedParameter objects are typified: this is when type validation is performed. If everything goes well, you obtain your values just fine. If not, the validation errors are raised. Take into account that the result is cached for faster subsequent access. This means that even if you change the value of the environment variables responsible for a given setting, this won't be reflected in the context object until you refresh it with conda.base.context.reset_context().

Do not modify the Context object! ParameterLoader does not implement the __set__ method of the property protocol, so you can freely override an attribute defined in a Configuration subclass. You might think that this will redefine the value after passing through the validation machinery, but that's not true. You will simply overwrite it entirely with the raw value, and that's probably not what you want. Instead, consider the context object immutable. If you need to change a setting at runtime, it is probably A Bad Idea. The only situation where this is acceptable is during testing.

## Setting values in the different origins

There's some magic behind the possible origins for the settings values. How these are tied to the final Configuration object might not be obvious at first. This is different for each RawParameter subclass:

• DefaultValueRawParameter: Users will never see this one. It only wraps the default value passed to the ParameterLoader class. Safe to ignore.
• YamlRawParameter: This one takes a YAML file and parses it as a dictionary. The keys in this file must match the attribute names in the Configuration class exactly (or one of their aliases). Matching happens automatically once this is properly set up. How the values are parsed depends on the YAML Loader, set internally by conda.
• EnvRawParameter: Values coming from certain environment variables can make it to the Configuration instance, provided they are formatted as <APP_NAME>_<PARAMETER_NAME>, all uppercase. The app name is defined by the Configuration subclass. The parameter name is defined by the attribute name in the class, transformed to upper case. For example, context.ignore_pinned can be set with CONDA_IGNORE_PINNED. The value of the variable is parsed in different ways depending on the type:
  • PrimitiveParameter is the easy one. The environment variable string is parsed as the expected type. Booleans are a bit different, since several strings are recognized as such, in a case-insensitive way:
    • True can be set with true, yes, on and y.
    • False can be set with false, off, n, no, non, none and "" (empty string).
  • SequenceParameter values can specify their own delimiter (e.g. ,), so the environment variable string is processed into a list.
  • MapParameter and ObjectParameter do not support being set with environment variables.
• ArgParseRawParameter: These are a bit different because there is no automated mechanism that ties a given command line flag to the context object. This means that if you add a new setting to the Context class and you want it available in the CLI as a command line flag, you have to add it yourself.
If that’s the case, refer to conda.cli.conda_argparse and make sure that the dest value of your argparse.Argument matches the attribute name in Context. This way, Configuration.__init__ can take the argparse.Namespace object, turn it into a dictionary, and make it pass through the loading machinery.
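Putting the last two origins together, here is a minimal sketch (assuming a working conda installation; the --ignore-pinned wiring below is illustrative and not necessarily how conda's own CLI declares it):

```python
import argparse
import os

from conda.base.context import context, reset_context

# Environment-variable origin: CONDA_<PARAMETER_NAME>, parsed according to the rules above.
os.environ["CONDA_IGNORE_PINNED"] = "yes"   # "yes", "true", "on" and "y" all parse as True
reset_context()                             # values are cached, so refresh the context
print(context.ignore_pinned)                # True

# Command-line origin: the argparse `dest` must match the Context attribute name.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--ignore-pinned",
    action="store_true",
    dest="ignore_pinned",           # matches Context.ignore_pinned
    default=argparse.SUPPRESS,      # absent flag -> key absent -> lower-precedence origins win
)
args = parser.parse_args(["--ignore-pinned"])
print(vars(args))                   # {'ignore_pinned': True}
```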
# Tag Info

Score 4: I assume that $y$ is constrained to the interval $[0,1]$. (You did not state this explicitly.) Let's assume that you have selected values $r_i$ such that $0 = r_1 < r_2 < \dots < r_n = 1$. If your solver supports SOS2 constraints, you can make $w_1, \dots, w_n$ nonnegative variables with the constraint $\sum_i w_i = 1$ and declare $\lbrace w_1, \dots, ...$

Score 4: Adding the last constraint is required to guarantee that only one of the $r_i$ values is selected for $y$. However, you need an additional constraint to establish the relationship between $y$, its piecewise linearisation variables and the rest of the problem constraints (especially $y = f(x)$), such as: $$y = \sum_{i=1}^{n} r_i \times w_i$$

Score 3: Without the entire problem description it is hard to provide a complete answer, but you will probably need a variable $x_t \in \mathbb{N}$ for the number of operators hired at time period $t$. With these variables, and taking into account the fact that once an operator is hired, he is hired for the entire time period, the extra cost at a given time $t$ ...

Score 1: If the original problem was feasible, the Benders master problem should never become infeasible. You may have a formulation error in the Benders decomposition, or you may be generating the cuts incorrectly. If you can identify a feasible (not necessarily good) solution to the problem by solving the original formulation, you can use that to locate any errors ...

Score 6: By request, here's the SAS code I used for three different objectives (the first two are commented out with /* and */ delimiters):

    proc optmodel;
    num numMachines = 21;
    num groupSize = 3;
    set MACHINES = 1..numMachines;
    set GROUPS = 1..numMachines/groupSize;
    call streaminit(1);
    num p {MACHINES} = rand('INTEGER',0,10);
    print p;
    var X {...

Score 6: This is a well-known problem with existing heuristics: https://en.wikipedia.org/wiki/Multiway_number_partitioning Edit: For partitioning into groups of limited sizes (e.g. $S_{max} \le M/G+1$) see https://en.wikipedia.org/wiki/Balanced_number_partitioning and in the special case of partitioning into groups of $S \le 3$ see: https://en.wikipedia.org/wiki/...

Score 2: Optional decision variables are part of the building blocks of scheduling within CPLEX CP Optimizer. For instance:

    using CP;
    dvar interval s size 1;
    dvar interval e size 1;
    dvar interval itvs optional size 7;
    maximize presenceOf(itvs);
    subject to {
    startOf(s)==1;
    startOf(e)==8;
    startBeforeStart(itvs,s);
    startBeforeEnd(e,itvs);
    }
    int isPresent=...

Score 4: Let $\overline{P}$ be the average (mean) productivity of all machines. The average productivity of a group will be $S\overline{P}$. Let $y_g$ be nonnegative variables defined by the constraints $$y_g \ge T_g - S\overline{P}$$ and $$y_g \ge S\overline{P} - T_g$$ for all $g$. In the solution, $y_g$ will be $\vert T_g - S\overline{P}\vert$. You can minimize ...

Score 11: Here are two ideas: Minimize $\max_g T_g$. This will naturally even out the productivities of each group. To do this you can minimize a variable $z$ and add the constraint $z \ge T_g \; \forall g$. Add constraints $T_{min} \le T_g \le T_{max}$ where $T_{min}$ and $T_{max}$ are lower and upper bounds on $T_g$, respectively. You will have to determine a "...

Score 5: Minimize the greatest $T_g$: \begin{align}\min&\quad T_\text{max}\\&\quad T_g \le T_\text{max} \qquad \forall g\end{align} The drawback is that it will minimize $T_g$, and maybe that is not what you want. As @RobPratt suggested in the comments, minimize the difference between the greatest and the smallest $T_g$: \begin{align}\min&\quad T_\text{...
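To make the min-max idea from the answers above concrete, here is a small, self-contained sketch in Python with PuLP; the data are random and the model is an illustration rather than any answerer's actual code:

```python
import random
import pulp

random.seed(1)
num_machines, group_size = 21, 3
machines = range(num_machines)
groups = range(num_machines // group_size)
p = {m: random.randint(0, 10) for m in machines}          # productivity of each machine

model = pulp.LpProblem("balanced_groups", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (machines, groups), cat="Binary")
t_max = pulp.LpVariable("t_max", lowBound=0)

model += t_max                                            # minimize the largest group total
for m in machines:
    model += pulp.lpSum(x[m][g] for g in groups) == 1     # each machine in exactly one group
for g in groups:
    model += pulp.lpSum(x[m][g] for m in machines) == group_size
    model += pulp.lpSum(p[m] * x[m][g] for m in machines) <= t_max

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("largest group productivity:", pulp.value(t_max))
```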
Score 3: A binary variable $z_{j} \in \{0,1\}$ equal to 1 represents that variable $b$ belongs to interval $j$. You can linearize the interval relation as follows: $$A_{j-1} \cdot z_{j} + \epsilon \cdot z_{j} \le b_{j} \le A_{j} \cdot z_{j} \quad \forall j \in \{1,2,3,4\}$$ Here, this leads to 4 binary variables $z_{j}$, and the values of $A_0$, $A_1$, $A_2$, $A_3$, $A_4$ are $0$, $a$, $2a$, $...

Score 8: Since $W$ is a binary variable, it follows that $$\sum_k \delta_k \le W \le 1$$ And so you are in the presence of a clique constraint. @RobPratt shows how to strengthen the second group of constraints in this case, yielding the first constraint. A simple example: take $\delta_k = 0.9$ for every $k$. It is easy to see that such a solution is valid with the ...

Score 7: Something like: \begin{align} & c_i \le x_i + M(1-y_i)\\ & c_i \le My_i \end{align} $M$ can be interpreted as an upper bound on $c_i$. If you don't like the big-$M$'s, consider using indicator constraints. See the comments below for some improvements on this!

Score 1: Maybe you could approach it from the opposite direction? Instead of imposing a penalty on uneven distributions, first generate a set of highly even candidate work schedules. Then if the optimizer fails to find a solution, iteratively generate and add less evenly distributed work schedules. This method would allow you to use whatever metric you like for "...

Score 3: If you cannot enforce a specific maximum-days policy as Rob Pratt suggests, another possibility is to penalize the lumpiness of the work distribution. Pick a window size $d$ (Rob's "$d$-day period"), and add two new variables $y$ and $z$ along with the constraints $$y \ge \sum_{i=t}^{t+d-1} x_i \quad \forall t \in \lbrace1,\dots,D-d+1\rbrace$$ and $$...

Score 5: You can enforce "no more than $w$ workdays in any consecutive $d$-day period" via linear constraints $$\sum_{i=t}^{t+d-1} x_i \le w \quad \text{for all } t$$

Score 5: This problem can be elegantly formulated through Constraint Programming (CP). This problem does not have an objective function: it's a Constraint Satisfaction Problem, not a Constraint Optimization Problem. CP would be a natural choice for this problem, since CP, similar to how humans would solve this problem, relies on a technique called 'inference'. In CP, ...

Score 4: You can solve this using a constraint satisfaction/constraint programming (CP) solver (and possibly a modeling language). In R, you might use the rminizinc package, which links to the open-source MiniZinc language, which comes with a number of solvers. CP models (which can be optimization models but are often just constraint satisfaction models) can be ...

Score 7: This is similar to the well-known Zebra Puzzle. You can solve it using integer programming techniques as follows: Define binary variables $x_{p,n}^h$ that take value $1$ if and only if player $p \in \{Bill,...,Tony\}$ has nickname $n \in \{Slats,...,Tree\}$ and height $h \in \{6,...,6'6\}$. So $x_{p,n}^h = 1$ if and only if combination $(p,n,h)$ is valid. ...

Top 50 recent answers are included
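As a small illustration of the rolling-window constraint quoted above, here is a sketch in Python with PuLP; the horizon, window length and limit are made up, and the objective is just a placeholder:

```python
import pulp

D, d, w = 14, 7, 5                 # horizon, window length, max workdays per window
x = [pulp.LpVariable(f"x_{t}", cat="Binary") for t in range(D)]

model = pulp.LpProblem("workday_windows", pulp.LpMaximize)
model += pulp.lpSum(x)             # e.g. maximize total days worked
for t in range(D - d + 1):
    model += pulp.lpSum(x[t:t + d]) <= w   # no more than w workdays in days t .. t+d-1

model.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x], sum(int(v.value()) for v in x))  # at most 10 days worked
```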
• Explicit formulas in Schubert calculus: Hari Bercovici

### Explicit formulas in Schubert calculus

Faculty Mentor: Hari Bercovici

Description: Recent work has shown that the solutions to certain classes of intersection problems can be found explicitly by classical projective geometry constructions. The resulting formulas are often quite complex, and there are some open questions about cases when such constructions "commute". (The simplest commutation result is equivalent to modularity of the lattice of subspaces in a finite dimensional space. More complex ones cannot be deduced from modularity.)

Prerequisites: The prospective student participating in this project will need to learn some of the basics of the combinatorics involved, and its relations to intersection theory and group representations.

• Torsion Subgroups of CAT(0) Groups: Chris Connell

### Torsion Subgroups of CAT(0) Groups

Faculty Mentor: Chris Connell

Description: In recent years, CAT(0) groups have generated an enormous amount of interest among geometers and topologists alike. First, a CAT(0) space is a geodesic metric space whose triangles are at least as "thin" point-wise as the Euclidean triangle with the same side lengths. A CAT(0) group is then a discrete subgroup of isometries of a CAT(0) space whose quotient is compact. A simple example of such a group is the lattice of integers Z^2, which acts on the plane by translations with quotient space R^2/Z^2, a torus. Finally, a torsion subgroup of a group is a subgroup whose elements all have finite order. It has been conjectured that torsion subgroups of CAT(0) groups always have finite cardinality. This might sound like hard-core group theory, but it is really mostly about geometry. Motivated in part by this conjecture, we will aim to understand better how these torsion subgroups act on a CAT(0) space and their attendant structures, such as the "boundary at infinity".

Prerequisites: A course in group theory and/or topology is helpful, but not absolutely necessary.

• Computational methods and models in mathematical biology: Michael Lynch

### Computational methods and models in mathematical biology

Faculty Mentor: Michael Lynch

Description: Depending on the student's interests, the project will involve: 1) the development of statistical / computational methods for estimating (in as unbiased a way as possible) levels of DNA sequence variation (and covariation) from whole-genome surveys that are now generating very high-throughput, but error-prone, data; 2) developing models for estimating the vulnerability of organisms to cancer with increasing levels of organismal complexity (i.e., more cell divisions); or 3) developing models for the evolution of various types of genomic elements in populations of various sizes (which influence the role of chance in evolution).

Prerequisites: A background in calculus, differential equations, and some probability theory is needed. Some familiarity with computer coding would be highly desirable.

• Studies in natural logic: Larry Moss

### Studies in natural logic

Faculty Mentor: Larry Moss

Description: The term 'natural logic' covers systems of logic that are designed to be as close as possible to natural language. More standard systems of logic such as first-order logic were designed, and are currently studied, with an eye towards a different field, the foundations of mathematics.
The idea with natural logic is to study logical systems that look more like ordinary language, and also systems which are decidable (unlike first-order logic). This leads to an area with connections to linguistics, computer science, philosophy, and psychology. And because it is fairly new, there are still lots of interesting mathematical problems to solve.

Prerequisites: The more classes in discrete mathematics, the better. Although it would be good to have had a logic class that presented the completeness theorem of propositional logic in detail, this isn't strictly needed. Classes in theoretical computer science, algebra, and/or combinatorics would be a plus. The REU project itself would depend on the student, so the wider the background, the more possibilities would present themselves.

• Complex dynamics and Smale's mean value conjecture: Kevin Pilgrim

### Complex dynamics and Smale's mean value conjecture

Faculty Mentor: Kevin M. Pilgrim

Description: Suppose $f(z) = z + a_2z^2 + ... + a_dz^d$ is a complex polynomial having a fixed point of derivative 1 at the origin. One version of Smale's mean value conjecture asserts that there exists a critical point $c$ of $f$ satisfying $|f(c)|/|c| \leq 1-1/d$. Complex dynamics suggests a candidate for this critical point. Does this candidate always satisfy this inequality?

Prerequisites: Multivariable calculus (e.g. finding minima of functions of more than one variable) is essential. Beyond this, the project can be tailored to suit a variety of backgrounds. Complex analysis and some programming experience would be helpful, but not essential.

• The Geometry of Tilings: Matthias Weber

### The Geometry of Tilings

Faculty Mentor: Matthias Weber

Description: In general, a tiling problem consists of a set of tiles, with the purpose of filling a given space with these tiles completely so that the tiles do not seriously overlap. There is a vast literature (Grunbaum/Shephard: Tilings and Patterns is a wonderful book to look at). Many results are exhaustive in the sense that the authors conducted an exhaustive search for tilings with given properties, but there are also stunning results that reveal deep connections to other areas. Let me just mention aperiodic tilings and their relation to quasicrystals, and polyomino tilings related to finitely generated groups, as discovered by Conway and Lagarias. There are many unsolved problems, and we will consider some of those, both in their original, typically Euclidean, context, as well as in other spaces such as the hyperbolic plane.

Prerequisites: While not essential, the following would be helpful: familiarity with Euclidean, hyperbolic and spherical geometry; basic group theory and the concept of a group action on a set; linear algebra; some familiarity with Mathematica.

• Participants

From left to right: Garrett Proffitt, Hayley Miles-Leighton, Michael Anselmi, Nathan Dowlin, Hamza Ghadyali, Linh Truong, Elizabeth Kammer, Komi Messan
# Consider the Situation Shown in the Figure. Suppose the Circular Loop Lies in a Vertical Plane. The Rod Has a Mass m. - Physics

Consider the situation shown in the figure. Suppose the circular loop lies in a vertical plane. The rod has a mass m. The rod and the loop have negligible resistances, but the wire connecting O and C has a resistance R. The rod is made to rotate with a uniform angular velocity ω in the clockwise direction by applying a force at the midpoint of OA in a direction perpendicular to it. Find the magnitude of this force when the rod makes an angle θ with the vertical.

#### Solution

When the circular loop is in the vertical plane, the rod tends to rotate in the clockwise direction because of its weight. Let the applied force be F, directed perpendicular to the rod. The component of the weight mg along the direction of motion is mg sin θ; the magnetic force on the current-carrying rod opposes the motion, i.e. it acts opposite to F and to mg sin θ.

The induced emf is $e = \frac{1}{2}B\omega a^2$, so the current in the rod is

$i = \frac{B a^2 \omega}{2R}$

The magnetic force on the rod is

$F_B = iBa = \frac{B^2 a^3 \omega}{2R}$

This force is distributed along the rod, but its torque about O is $\int_0^a iBr\,\mathrm{d}r = \tfrac{1}{2}iBa^2 = (iBa)\,\tfrac{a}{2}$, so it can be treated as the total force $iBa$ acting at the midpoint of OA, the same point where F is applied and where the weight component effectively acts.

Net force acting at the midpoint, along the direction of motion:

$F - \frac{B^2 a^3 \omega}{2R} + mg \sin \theta$

Net torque on the rod about the centre O:

$\tau = \left( F - \frac{B^2 a^3 \omega}{2R} + mg \sin\theta \right)\frac{OA}{2}$

Because the rod rotates with a constant angular velocity, the net torque on it is zero, i.e. τ = 0:

$\left( F - \frac{B^2 a^3 \omega}{2R} + mg \sin\theta \right)\frac{OA}{2} = 0$

$\therefore F = \frac{B^2 a^3 \omega}{2R} - mg \sin\theta$

Concept: Force on a Current-Carrying Conductor in a Uniform Magnetic Field

#### APPEARS IN

HC Verma Class 11, Class 12 Concepts of Physics Vol. 2, Chapter 16 Electromagnetic Induction, Q 61 | Page 311
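As a quick cross-check (not part of the original solution), a power balance gives the same result. With the sign convention above (the weight component aiding the rotation at this instant), the applied force and the weight component both act at the midpoint, which moves at speed ωa/2, while the induced emf $e = \tfrac{1}{2}B\omega a^2$ dissipates $e^2/R$ in the resistor:

$$\left(F + mg\sin\theta\right)\frac{\omega a}{2} \;=\; \frac{\left(\tfrac{1}{2}B\omega a^{2}\right)^{2}}{R} \;=\; \frac{B^{2}a^{4}\omega^{2}}{4R} \quad\Longrightarrow\quad F \;=\; \frac{B^{2}a^{3}\omega}{2R} - mg\sin\theta,$$

in agreement with the torque argument.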
# Doug Van Houweling: Building the NSFNet

Charles Severance, University of Michigan

Pages: 7–9

Abstract—Doug Van Houweling describes how the NSFNet went from connecting a few supercomputers to becoming “the Internet.” The first Web extra at http://youtu.be/uY7dUJT7OsU is a video interview in which author Charles Severance speaks with Doug Van Houweling about how the NSFNet went from connecting a few supercomputers to becoming “the Internet.” The second Web extra at http://youtu.be/hgmSyjQwhG4 is an audio recording of author Charles Severance reading his Computing Conversations column, in which he discusses his interview with Doug Van Houweling about how the NSFNet went from connecting a few supercomputers to becoming “the Internet.”

Keywords—Doug Van Houweling; Internet; NSFNet; Arpanet; supercomputers; Computing Conversations

The Arpanet connected ARPA's computers to researchers during the 1970s and 1980s. In the mid-1980s, the National Science Foundation (NSF) decided to deploy shared supercomputing resources at several universities around the country. It connected those centers with a TCP/IP network that would eventually become known as the NSFNet and later evolve into the public Internet. Doug Van Houweling was the University of Michigan's CIO back in the 1980s and was instrumental in bringing together several partners to craft the grant that greatly broadened the NSFNet—he was also involved in guiding the project through 1995. Visit www.computer.org/computingconversations to view our discussion.

## Starting with Supercomputers

In the mid-1980s, the NSF issued a request for proposals from universities to host supercomputer centers, and the University of Michigan was one of many that wanted in. However, the inclusion of the Japanese-built IBM-370–compatible computer in its proposal was a risk because it turned out that the US government wasn't inclined to spend scarce research dollars purchasing major computing equipment from a company outside the US: I was visiting the NSF and had gotten to know Erich Bloch, its director at the time, so we talked about Michigan's proposal. It was clear to me from our conversation that there was no way that the Michigan proposal would be funded. I told Erich that it might be even better for Michigan if we could run the network that would connect all the centers together. At the time, I was chairman of the board at Merit, Michigan's statewide network. Over the years, in parallel with the packet-switching protocol developments that had been involved in the Arpanet, Merit had developed its own packet-switching network, using its own communications processors built on Digital Equipment Corporation systems. Although Merit wasn't deeply involved in the early Arpanet project, it had extensive experience in packet-switched networks and helped to operate the 56-Kbit first-generation TCP/IP-based NSFNet backbone that initially connected the five supercomputer centers starting in 1986.

## New Partners

The team at Merit wanted to keep the budget for the project under $15 million to make sure the proposal was financially attractive to the NSF: As we thought about how we would create this proposal, we realized very rapidly that $15 million would only fund a 56-Kbit network, which we already knew would be insufficient. So we immediately started thinking about how we could expand the envelope for the proposal.
Merit started looking for partners who would be willing to contribute hardware, software, services, and money to expand the project's scope while staying within budget. Van Houweling had a friend named Al Weis who worked at IBM Research: I called Al and described this as a great opportunity, but IBM wasn't going to be successful here, so I needed his help. Al rallied some folks at IBM Research—people who were actually working on TCP/IP protocols. We had another meeting, after which some of us admitted that some people in IBM do know something about TCP/IP, and yes, they could be partners. We got a tentative agreement from IBM that it would contribute the hardware and the software to create the network's routing structure. Continuing to work through his IBM contacts, Van Houweling was introduced to a former IBM employee named Dick Liebhaber, who was then the CTO and chief network operations officer for MCI. Together, they approached MCI to donate the communications lines for the project: At that time, MCI was a fledgling organization that some people had described as a law office trying to create an environment that could offer telecommunications up against AT&T's lobbying efforts. It had just succeeded in reaching that goal and had started establishing facilities across the US. Dick thought being part of the NSFNet proposal was an opportunity to move MCI into the big time. With IBM providing the hardware and software and MCI providing the connectivity, Van Houweling also got a commitment of $1 million per year from the State of Michigan: We submitted a proposal to the NSF for $14.7 million—we knew the budget was $15 million. But by including all this in-kind activity, it was actually more like a $55 million proposal. And it wasn't designed to be 56 Kbits—we could start at T1 or 1.5 Mbits with planned upgrades over the period of the network's life.

## A Unique Proposal

With an unlikely set of partners and large in-kind contributions, the University of Michigan/Merit Network offering was quite different from the rest of the proposals to build the NSFNet: We subsequently learned that our proposal was received with considerable skepticism by the reviewers at the NSF, because IBM was thought of as the enemy of the Internet, so focused was it on its own proprietary protocols. The reviewers really wondered about our technical ability to pull this off. The first review was conducted without reference to the actual funding pattern, so when the wraps came off about the amount of resources being committed by our partners, we went to the top of the list. But once the proposal was awarded, Merit, IBM, and MCI needed to deliver on their promises: When we started the network, we had T1 circuits, but there were no cards for computers that would go at 1.5 Mbits, so we had to build our initial routers with 448-Kbit cards, subdivide the T1 circuits into three 448-Kbit circuits, and build a mesh network among all the routers. It took about a year for IBM to build prototype cards that would go at 1.5 Mbits. When we put the 1.5-Mbit cards into our test network, they worked just fine, but when we put them into the production network, it started failing. After a lot of testing, we discovered that the folks who had built the T1 hardware for MCI had planned on using certain bit patterns for diagnosis on the network and had never anticipated someone using the full 1.5 Mbits as a single channel.
## Moving on Up

Over the first few years of the NSFNet, these technical details got worked out, and the network started to take off as regional networks formed and campuses were connected. By 1990, the T1 circuits were filling up, so it was time to move to DS3 (45-Mbit) connections. This would require entirely new router software and hardware technologies to be developed: Merit was still the principal investigator on the grant, but it subcontracted the development of this new 45-Mbit network to Advanced Network Services [ANS], another not-for-profit organization we created and headquartered in Armonk, New York. IBM, MCI, and Nortel each contributed $3 million to the founding of this new organization, so it had the staff and facilities to do the innovation necessary to get us up to 45 Mbits. Once the NSFNet was upgraded to 45-Mbit communication links, it had enough bandwidth to handle traffic growth for the life of the project. But as the 1990s progressed, there was increasing pressure to move management and operation of the “national Internet” to the private sector: The NSFNet was decommissioned in 1995 when Congress decided that the federal government shouldn't be in the business of supporting something that by that time, in its view, should have been a commercial facility. I won't ever forget sitting in a House hearing room in the Capitol next to Mitch Kapor and the CEO of a small Internet startup who were complaining that it was inappropriate for the NSFNet to be funded by the NSF because the startup could provide a national backbone as a commercial service. Meanwhile, the commercial backbone networks were using the NSFNet as their backup to carry traffic when their much less reliable networks failed. As Merit, MCI, and IBM transitioned away from daily operations and maintenance, they were still in possession of the world's fastest and most reliable router technologies. MCI used its expertise and reputation to quickly become a successful national backbone network provider. IBM had to decide if it wanted to evolve its market-leading routing hardware and software into a commercial product: In a classic “innovator's dilemma” moment, IBM, which was the leader in high-speed routing technology for Internet backbones at that time, decided to kill all the work it had done in developing these routers because it threatened the company's proprietary network efforts. Canceling the router effort within IBM was almost certainly responsible for the fact that Cisco became the dominant router provider in the US rather than IBM. Looking back, it's easy to imagine that our current networking environment might have been quite different if the first research-centered national TCP/IP backbone had been limited to a $15 million budget between 1985 and 1990. But when the NSFNet award was given to an unlikely group of collaborators, we ended up with a national network that was fast enough, for nearly a decade, to function as a platform for innovations such as Gopher and the World Wide Web, leading us to the shared, free, open, and nondiscriminatory global network infrastructure that we enjoy today.

Charles Severance, Computing Conversations column editor and Computer's multimedia editor, is a clinical associate professor and teaches in the School of Information at the University of Michigan. Follow him on Twitter @drchuck or contact him at [email protected].
# Global least squares solution of matrix equation $\sum_{j=1}^s A_jX_jB_j = E$

Document Type: Research Article

Abstract: In this paper, an iterative method is proposed for solving the matrix equation $\sum_{j=1}^s A_jX_jB_j = E$. This method is based on the global least squares (GL-LSQR) method for solving linear systems of equations with multiple right-hand sides. To apply the GL-LSQR algorithm to the above matrix equation, a new linear operator, its adjoint and a new inner product are defined. It is proved that the new iterative method obtains the least norm solution of the mentioned matrix equation within finitely many iteration steps in exact arithmetic, when the matrix equation is consistent. Moreover, the optimal approximate solution $(X_1^*, X_2^*, \ldots, X_s^*)$ to a given set of matrices $(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_s)$ can be derived by finding the least norm solution of a new matrix equation. Finally, some numerical experiments are given to illustrate the efficiency of the new method.
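For orientation (this is a standard construction and may differ in detail from the paper's own definitions): to apply a global LSQR-type method, one typically views the left-hand side as a single linear operator acting on the tuple of unknowns,

$$\mathcal{A}(X_1,\dots,X_s) \;=\; \sum_{j=1}^{s} A_j X_j B_j,$$

whose adjoint with respect to the (blockwise) Frobenius inner product $\langle X, Y\rangle = \operatorname{tr}(X^{\mathsf T}Y)$ is

$$\mathcal{A}^{*}(Z) \;=\; \bigl(A_1^{\mathsf T} Z B_1^{\mathsf T}, \dots, A_s^{\mathsf T} Z B_s^{\mathsf T}\bigr),$$

since $\langle \mathcal{A}(X_1,\dots,X_s), Z\rangle = \sum_j \operatorname{tr}(B_j^{\mathsf T} X_j^{\mathsf T} A_j^{\mathsf T} Z) = \sum_j \langle X_j, A_j^{\mathsf T} Z B_j^{\mathsf T}\rangle$.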
# Article

Title: $N_2$-locally connected graphs and their upper embeddability (English)
Author: Nebeský, Ladislav
Language: English
Journal: Czechoslovak Mathematical Journal
ISSN: 0011-4642
Volume: 41
Issue: 4
Year: 1991
Pages: 731-735
Category: math
MSC: 05C10
idZBL: Zbl 0760.05030
idMR: MR1134962
Date available: 2008-06-09T15:42:51Z
Last updated: 2012-05-30
Stable URL: http://hdl.handle.net/10338.dmlcz/102504

References:
[1] M. Behzad, G. Chartrand, L. Lesniak-Foster: Graphs & Digraphs. Prindle, Weber & Schmidt, Boston 1979. MR 0525578
[2] G. Chartrand, R. E. Pippert: Locally connected graphs. Časopis pěst. mat. 99 (1974), 158-163. Zbl 0278.05113, MR 0398872
[3] A. D. Glukhov: On chord-critical graphs (in Russian). In: Some Topological and Combinatorial Properties of Graphs. Preprint 80.8. IM AN USSR, Kiev 1980, pp. 24-27. MR 0583198
[4] N. P. Homenko, A. D. Glukhov: One-component 2-cell embeddings and the maximum genus of a graph (in Russian). In: Some Topological and Combinatorial Properties of Graphs. Preprint 80.8. IM AN USSR, Kiev 1980, pp. 5-23. MR 0583197
[5] N. P. Homenko, N. A. Ostroverkhy, V. A. Kusmenko: The maximum genus of graphs (in Ukrainian, English summary). In: $\varphi$-Transformations of Graphs (N. P. Homenko, ed.). IM AN URSR, Kiev 1973, pp. 180-210. MR 0422065
[6] M. Jungerman: A characterization of upper embeddable graphs. Trans. Amer. Math. Soc. 241 (1978), 401-406. Zbl 0379.05025, MR 0492309
[7] L. Nebeský: Every connected, locally connected graph is upper embeddable. J. Graph Theory 5 (1981), 205-207. MR 0615009
[8] L. Nebeský: A new characterization of the maximum genus of a graph. Czechoslovak Math. J. 31 (106) (1981), 604-613. MR 0631605
[9] L. Nebeský: On locally quasiconnected graphs and their upper embeddability. Czechoslovak Math. J. 35 (110) (1985), 162-166. MR 0779344
[10] Z. Ryjáček: On graphs with isomorphic, non-isomorphic and connected $N\sb 2$-neighbourhoods. Časopis pěst. mat. 112 (1987), 66-79. MR 0880933
[11] J. Sedláček: Local properties of graphs (in Czech). Časopis pěst. mat. 106 (1981), 290-298. MR 0629727
[12] D. W. VanderJagt: Sufficient conditions for locally connected graphs. Časopis pěst. mat. 99 (1974), 400-404. Zbl 0294.05123, MR 0543786
[13] A. T. White: Graphs, Groups, and Surfaces. North-Holland, Amsterdam 1984. Zbl 0551.05037, MR 0780555
[14] N. H. Xuong: How to determine the maximum genus of a graph. J. Combinatorial Theory Ser. B 26 (1979), 217-225. Zbl 0403.05035, MR 0532589

Files: CzechMathJ_41-1991-4_14.pdf (444.2 KB, application/pdf)
Help me with this inequality: $$\frac{b(c+a)}{c(a+b)}+\frac{c(b+d)}{d(b+c)}+\frac{d(c+a)}{a(c+d)}+\frac{a(d+b)}{b(d+a)}\geq 4$$
# Prove that there is a subgroup $K$ : $k! \mid [H:K]$ [duplicate]

Consider a group $G$ and a subgroup $H \le G$ with $[G:H] = k$. Prove that there exists a normal subgroup $K$ in $H$ such that $k! \mid [G:K]$.

Actually I have no idea. Any hints?

• @DietrichBurde edited – openspace Sep 20 '17 at 20:37
• Is it not the other way around, i.e., $[G:K]\mid k!$? – Dietrich Burde Sep 20 '17 at 20:40
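A standard hint, with the divisibility in the direction suggested in the comment ($[G:K] \mid k!$, which is the usual statement of this result): let $G$ act on the $k$ left cosets of $H$ by left multiplication. This gives a homomorphism $\varphi\colon G \to S_k$ whose kernel $K = \ker\varphi = \bigcap_{g \in G} gHg^{-1}$ is a normal subgroup of $G$ contained in $H$, and by the first isomorphism theorem $G/K$ embeds into $S_k$, so

$$[G:K] = |G/K| \;\Bigm|\; |S_k| = k!.$$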
Search Results: 1 - 10 of 100 matches. All listed articles are free for downloading (OA Articles).

Physics, 2005. Abstract: This is another approach to realize Maxwell's "demon" hypothesis. Two Ag-O-Cs thermal electron ejectors, A and B, are settled in a vacuum tube. A non-uniform magnetic field exerted on the tube provides a one-way channel for the thermal electrons. Ejector A, losing electrons, charges positively, while ejector B, getting electrons, charges negatively, resulting in an electric voltage. In flying from A to B, the speed of the electrons decreases, and part of their thermal kinetic energy converts into electric potential energy. Thus, the temperature of the whole electron tube drops down slightly, and that can be compensated by the heat attracted from the ambient air. The device can provide a small but macroscopic power to an external load, violating Kelvin's statement of the second law.

Physics, 1996, DOI: 10.1016/S0920-5632(96)00728-1. Abstract: The interior of the color flux tube joining a quark pair can be probed by evaluating the correlator of a pair of Polyakov loops in a vacuum modified by another Polyakov pair, in order to check the dual superconductivity conjecture which predicts a deconfined, hot core. We also point out that at the critical point of any 3D gauge theories with a continuous deconfining transition the Svetitsky-Yaffe conjecture provides us with an analytic expression of the Polyakov correlator as a function of the position of the probe inside the flux tube. Both these predictions are compared with numerical results in the 3D Z2 gauge model, finding complete agreement.

Physics, 2001. Abstract: In these notes we discuss the origin of shot noise ('Schroteffekt') of vacuum tubes in detail. It will be shown that shot noise observed in vacuum tubes and first described by W. Schottky in 1918 is a purely classical phenomenon. This is in pronounced contrast to shot noise investigated in mesoscopic conductors, which is due to quantum mechanical diffraction of electron waves.

Advances in Materials Science and Engineering, 2012, DOI: 10.1155/2012/808210. Abstract: A solar cooling tube using a thermal/vacuum emptying method was experimentally studied in this paper. The coefficient of performance (COP) of the solar cooling tube was mostly affected by the vacuum degree of the system. In past research, the thermal vacuum method, using an electric oven and iodine-tungsten lamp to heat up the adsorbent bed and H2O vapor to expel the air from the solar cooling tube, was used to manufacture solar cooling tubes. This paper presents a novel thermal vacuum combined with vacuum pump method allowing an increased vacuum state for producing solar cooling tubes. The following conclusions are reached: the adsorbent bed temperature of the solar cooling tube can reach up to 233°C, and this temperature is sufficient to meet desorption demand; the refrigeration power of a single solar cooling tube varies from 1 W to 12 W; the total supply refrigerating capacity is about 287 kJ; and the COP of this solar cooling tube is about 0.215.

1. Introduction. With the improvement of people's living standard, the demand for air conditioning is increasing. Use of CFCs for refrigeration compression has global warming potential (GWP) and ozone depletion potential (ODP), so their use should be minimized.
The energy problem has become a major problem facing human development, and has led to efforts to reduce fossil fuel usage. Solar energy, one of the most abundant resources, has many advantages, most importantly that it is environmentally friendly. This has led to attention from the worldwide research community. Adsorption refrigeration uses natural working pairs as refrigerants and solar energy as a heat source, so it consumes no fossil fuels during the refrigeration process and is environment-friendly. Ferreira Leite et al. [1] presented the characterization and the pre-dimensioning of an adsorption chiller as part of a 20 kW air conditioning central unit for cooling a set of rooms that comprises an area of 110 m². The adsorption chiller's expected coefficient of performance (COP) was found to be around 0.6. Khattab [2] presented the description and operation of a simple structure, low cost solar-powered adsorption refrigeration module. Test results showed that a module using bed technique Type 4 and reflector arrangement Type C provided the best performance. Wang et al. [3, 4] used a compound adsorbent of CaCl2 and activated carbon as working pairs to produce an ice-making test unit for fishing boats. At evaporating temperatures of −35°C and −25°C, the cooling powers are 0.89 and 1.18 kW respectively. Clausse [5] explored the possibility to perform

Victor-Otto de Haan. Physics, 2011, Abstract: A proposal for the realization of Santilli's comparative test of the gravity of electrons and positrons via a horizontal supercooled vacuum tube is described. Principle and requirements are described concerning the sources, vacuum chamber electromagnetic shielding and pressure, and the position sensitive detector. It is concluded that with current technology the experiment is perfectly feasible.

Physics, 2007, DOI: 10.1142/S0218271809014273 Abstract: In this paper we analyse the effect produced by the temperature in the vacuum polarization associated with a charged massless scalar field in the presence of a magnetic flux tube in the cosmic string spacetime. Three different configurations of magnetic fields are taken into account: $(i)$ a homogeneous field inside the tube, $(ii)$ a field proportional to $1/r$ and $(iii)$ a cylindrical shell with $\delta$-function. In these three cases, the axis of the infinitely long tube of radius $R$ coincides with the cosmic string. Because of the complexity of this analysis in the region inside the tube, we consider the thermal effect in the region outside. In order to develop this analysis, we construct the thermal Green function associated with this system for the three above mentioned situations, considering points in the region outside the tube. We explicitly calculate, in the high-temperature limit, the thermal average of the field square and the energy-momentum tensor.

Physics, 2006, Abstract: We propose a new thermodynamic equality and several inequalities concerning the relationship between work and information for an isothermal process with Maxwell's demon. Our approach is based on the formulation a la Jarzynski of the thermodynamic engine and on the quantum information-theoretic characterization of the demon. The lower bound of each inequality, which is expressed in terms of the information gain by the demon and the accuracy of the demon's measurement, gives the minimum work that can be performed on a single heat bath in an isothermal process. These results are independent of the state of the demon, be it in thermodynamic equilibrium or not.
Physics, 2013, DOI: 10.1017/S0022377813001256 Abstract: In this paper we analyze the motion of charged particles in a vacuum tube diode by solving linear differential equations. Our analysis is based on expressing the volume charge density as a function of the current density and coordinates only, while in the usual scheme the volume charge density is expressed as a function of the current density and electrostatic potential. Our approach gives the well known behavior of the classical current density proportional to the three-halves power of the bias potential and inversely proportional to the square of the gap distance between the electrodes, and does not require the solution of the nonlinear differential equation normally associated with the Child-Langmuir formulation.

Physics, 2003, Abstract: This paper addresses two seemingly unrelated problems, (a) What is the entropy and energy accounting in the Maxwell Demon problem? and (b) How can the efficiency of markets be measured? Here we show, in a simple model for the Maxwell Demon, the entropy of the universe increases by an amount eta=0.839995520 in going from a random state to an ordered state and by an amount eta*=2.731382 in going from one sorted state to another sorted state. We calculate the efficiency of an engine driven by the Maxwell sorting process. The efficiency depends only on the temperatures of the particles and of the computer the Demon uses to sort the particles. We also show the approach is general and create a simple model of a stock market in which the Limit Trader plays the role of the Maxwell Demon. We use this model to define and measure market efficiency.

Geoffrey Grimmett. Mathematics, 2009, Abstract: A number of tricky problems in probability are discussed, having in common one or more infinite sequences of coin tosses, and a representation as a problem in dependent percolation. Three of these problems are of `Winkler' type, that is, they ask about what can be achieved by a clairvoyant demon.
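A side note on the vacuum-tube diode abstract above: the "three-halves power" behavior it refers to is the Child–Langmuir law for a planar diode. The explicit form is background knowledge, not something stated in the abstract itself:
$$J \;=\; \frac{4\,\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^2},$$
with $V$ the bias potential across the gap and $d$ the electrode separation.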
# Math Help - Determining absolute extrema of a function

1. ## Determining absolute extrema of a function

$f(x) = x^2 + 2x - 4$, interval is $[-1, 1]$.

What I did:
$f'(x) = 2x + 2$, so $x = -1$.
Plugged in: one of my minimums is $(-1, 0)$.

I'm confused on how to get the maximum now. I did
$0 = x^2 + 2x - 4$
$4 = x(x + 2)$
$4/(x + 2) = x$
But I don't think this is correct. Thanks!

2. ## Re: Determining absolute extrema of a function

Global maxima and minima can occur either at stationary points or at endpoints of the function. You have found the stationary point. Now evaluate each of the endpoints and see which of the three values is the largest and which is the smallest.

3. ## Re: Determining absolute extrema of a function

Originally Posted by Prove It
Global maxima and minima can occur either at stationary points or at endpoints of the function. You have found the stationary point. Now evaluate each of the endpoints and see which of the three values is the largest and which is the smallest.

Ok, still kind of confused. For example, if I had $-x^2 + 3x$ and the interval is $[0, 3]$: what I did was find the derivative, which is $-2x + 3$, so $x = 3/2$. I plugged that back into the original equation, so the maximum is $(3/2, 9/4)$. The minimums are $(0, 0)$ and $(3, 0)$, but I'm a bit confused how they got to the minimum. Just like with this problem as well. I kind of need help, so a little more advice would be nice. Thanks!

4. ## Re: Determining absolute extrema of a function

Like ProveIt said, you need to look at the points where the derivative is zero and ALSO the endpoints. You got a maximum where the derivative is zero, and the minima come from the endpoints.

- Hollywood
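A quick numerical check of the endpoint-plus-stationary-point recipe from the replies, using the second example from the thread ($-x^2+3x$ on $[0,3]$). This is a minimal sketch, not part of the original forum posts:

```python
# Candidates for absolute extrema on a closed interval: stationary points
# inside the interval plus the two endpoints (per the replies above).
def f(x):
    return -x**2 + 3*x

a, b = 0.0, 3.0
stationary = [1.5]                 # from f'(x) = -2x + 3 = 0
candidates = [a, b] + [x for x in stationary if a <= x <= b]

values = {x: f(x) for x in candidates}
print("max at", max(values, key=values.get), "->", max(values.values()))
print("min at", min(values, key=values.get), "->", min(values.values()))
# max at 1.5 -> 2.25; min value 0.0, attained at both endpoints 0.0 and 3.0
```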
# Determine the dimension of $U+W$ and of $U \cap W$. Which sums are direct sums?

Problem: Determine the dimension of the sum $U + W$ and of the intersection $U \cap W$ of the following subspaces $U$ and $W$. Which sums are direct sums?

1) $U = \text{span}\left\{(1,1,1)\right\}$ and $W = \text{span}\left\{(1,-1,2),(3,1,0)\right\}$, subspaces of $\mathbb{R}^3$;

2) \begin{align*} U = \text{span} \left\{ \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\right\} \end{align*} and \begin{align*} W = \text{span} \left\{ \begin{pmatrix} 0 & 0 \\ 1 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\right\} \end{align*}

Attempt at solution:

1) Since we see that the vectors are linearly independent, we have that $\dim(U) = 1$ and $\dim(W) = 2$. Furthermore we have $U \cap W = \left\{(0,0,0)\right\}$, since this is the only vector they have in common (not sure about this one). So $\dim(U \cap W) = 0$. On the basis of the equation $\dim(U+ W) = \dim(U) + \dim(W) - \dim(U \cap W)$ we then have that $\dim(U + W) = 3$. Also, this is a direct sum since any vector in $U+W$ can be written uniquely as a sum of a vector in $U$ and a vector in $W$.

2) I would say here that $\dim(U) = 2$ and $\dim(W) = 2$. But I'm not sure how to determine $U \cap W$. How does one handle problems like this, where a subspace is given in terms of the span of some vectors?

In 1), you are correct that $U\cap W=\{(0,0,0)\}$, since the equation $a(1,1,1)=b(1,-1,2)+c(3,1,0)$ has only the trivial solution. (Equivalently, the 3 vectors are linearly independent.)

Any vector in $U$ has the form $\begin{pmatrix}a&b-a\\-b&0\end{pmatrix}$ and any vector in $W$ has the form $\begin{pmatrix}d&0\\c&-c-d\end{pmatrix}$, so any vector in $U\cap W$ satisfies $a=b=d$ and $c=-a$; so it has the form $\begin{pmatrix}a&0\\-a&0\end{pmatrix}$ and therefore $\text{dim}(U\cap W)=1$.

• I see. So that means $\dim(U + W) = 3$ for the second question. Also, I was wondering, for the first question, was I correct in saying that $(U+W)$ is a direct sum? How can I know this for sure? Because I guessed it. We had that $U \cap W = \left\{(0,0,0)\right\}$, but this doesn't prove the second condition yet, i.e. that any vector can be written in a unique manner as a sum of a vector in $U$ and a vector in $W$. – Kamil Jun 30 '15 at 22:19
• You are right about $\text{dim}(U+W)$, and you were correct in part 1 when you said that $U+W$ is a direct sum, since this follows from $U\cap W=\{\vec{0}\}$. – user84413 Jun 30 '15 at 22:24
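A quick rank-based check of both parts (a sketch using NumPy, not part of the original answer): $\dim(U+W)$ is the rank of the stacked spanning vectors, and $\dim(U\cap W)$ then follows from the dimension formula.

```python
import numpy as np

def dims(U_rows, W_rows):
    U, W = np.array(U_rows, float), np.array(W_rows, float)
    dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))            # dim(U + W)
    dim_int = np.linalg.matrix_rank(U) + np.linalg.matrix_rank(W) - dim_sum
    return dim_sum, dim_int

# Part 1: vectors in R^3
print(dims([[1, 1, 1]], [[1, -1, 2], [3, 1, 0]]))                 # (3, 0) -> direct sum

# Part 2: 2x2 matrices flattened row-wise to vectors in R^4
U = [[1, -1, 0, 0], [0, 1, -1, 0]]
W = [[0, 0, 1, -1], [1, 0, 0, -1]]
print(dims(U, W))                                                 # (3, 1) -> not direct
```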
/sci/ - Science & Math

File: sci_rec_bingo.png

Formerly >>11963300
New chart (subject to further revision) edition.

>what is /sqt/ for
Questions regarding math and science, plus appropriate advice requests.
>where do I go for other SFW questions and requests?
>>>/wsr/, >>>/g/sqt, >>>/diy/sqt, >>>/adv/, etc.
>books?
libgen.is (warn me if the link breaks)
https://stitz-zeager.com/
>articles?
>book recs?
https://4chan-science.fandom.com/wiki//sci/_Wiki
>how do I post math symbols?
https://i.imgur.com/vPAp2YD.png
>how do I succesfully post math symbols?
https://imgur.com/a/LpgxGsz
>a google search didn't return anything, is there anything else I should try before asking the question here?
>where do I look up if the question has already been asked on /sci/?
>>/sci/
https://boards.fireden.net/sci/
>how do I optimize an image losslessly?
https://trimage.org/
https://pnggauntlet.com/
>attach an image
>if a question has two or three replies, people usually assume it's already been answered
>check the Latex with the Tex button on the posting box
>if someone replies to your question with a shitpost, ignore it

Stuff:
Meme charts: https://imgur.com/a/kAiPAJx
Serious charts: https://imgur.com/a/Bumj2FW (Post any that I've missed.)
Verbitsky: https://imgur.com/a/QgEw4XN https://pastebin.com/SmBc26uh
Graphing: https://www.desmos.com/
Calc solver: https://www.wolframalpha.com/
Tables, properties, material selection: https://www.engineeringtoolbox.com/ http://www.matweb.com/

>> Anonymous Sun Aug 9 14:40:02 2020 No.11991214
File: hqdefault (1).jpg
lmao look at this dude he made so much effort and nobody is posting in his thread

>> Anonymous Sun Aug 9 14:42:45 2020 No.11991223
>>11991214
go away newfag, you don't understand what a general is

>> Anonymous Sun Aug 9 14:45:14 2020 No.11991233
Correct me if I'm wrong. The dot product of two vectors yields a new vector, and the cross product of the same two vectors yields a different vector. The vector produced by the dot product multiplication can be interpreted, somehow, as representing the "similarity" between the two vectors yielding it, whereas the vector produced by cross product multiplication can be interpreted as representing the "difference" between them? How and why?

>> Anonymous Sun Aug 9 14:48:47 2020 No.11991255
File: __komeiji_satori_touhou_drawn_by_retota__2e3ffc1cfd46cf90c79b5eb9eb899d1a.png
Unanswered questions:
Math questions: >>11973108 >>11975951 >>11980729 >>11981130 >>11982940 >>11984281 >>11989175 >>11990373
Physics questions: >>11972244 >>11977623 >>11979845 >>11987253 >>11990322
Chemistry questions: >>11986533
Biology questions: >>11969425
Engineering questions: >>11963359 >>11980350 >>11982092
/g/ questions: >>11973798 >>11975926
Stupid questions: >>11963558 >>11971042 >>11971213 >>11972453 [Yes, my question is stupid.] >>11972649 >>11973407 >>11973933 >>11974039 >>11974311 [No.
1^2 > 0^2, doesn't imply that f(x) = x^2 is increasing.]>>11979865>>11980694>>11985450>>11986241>>11990386>>11990707>>11991214No, not like this, /sqt/ bros...>>11991233>The dot product of two vector yields a new vector,Dot product yields a scalar. >> Anonymous Sun Aug 9 15:04:47 2020 No.11991340 >>11991165Gelfand is missing from the chart >> Anonymous Sun Aug 9 15:07:07 2020 No.11991352 >>11991340Gelfand isn't a single book. >> Anonymous Sun Aug 9 15:49:25 2020 No.11991522 is calculus always an approximation? >> Anonymous Sun Aug 9 16:39:46 2020 No.11991714 Biologyfags: How many cases are there of animals evolving to have traits which specifically help inhibit reproduction? Humans do this artificially via condoms to obvious benefit, so there's already a proof of concept. Females of various other species have folds which make insemination more difficult, which by the prior line, I assume is a matter of selection rather than mere coincidence. I haven't found a clear male case of this though. Does the fuck-and-run option they generally have prevent this? Has the impregnating-gender of any species besides humans ever been shown to deliberately avoid insemination? I figured territorial species might do this to save resource, but it seems like they tend to just slaughter the kids or something instead. >> Anonymous Sun Aug 9 17:22:06 2020 No.11991848 >>11991714>folds>female snakes have multiple pockets in which sperm can be stored for up to five year and then, by preference, selected>humans invent a little finger-glove instead >> Anonymous Sun Aug 9 17:46:33 2020 No.11991896 File: 23 KB, 1549x601, amiretarded.png [View same] [iqdb] [saucenao] [google] [report] >tfw previously stupid questionSo I have two perfectly synced clocks with binary code assigned as specific intervals (this could be increased given clock precision) and I send a single photon between them assigned 1 in binary (assume instantaneous for simplicity) when the photon is received it is assigned a new binary (1010) in this case, in doing so I have effectively increased the amount of data per bit sent or compressed the data to a single bit, In doing so is the timing of the event not used as additional information? or am I wrong in this case? >> Anonymous Sun Aug 9 17:49:31 2020 No.11991902 I think this is the most appropriate place to post my question. Just curious and seeking to understand people's opinions. What does /sci/ think of statistics? Do you consider it Mathematics or is it something less?I've always loved math and wanted to study a master's in math at a good uni. In all the higher ranked Universities I applied to I was only accepted into their statistics program. I don't know if I'll enjoy it as much as other applied math fields. >> Anonymous Sun Aug 9 17:53:17 2020 No.11991911 I need to learn everything there is to know about MOSFETs and IGFETs on a device level, Suggest books plz, already read Streetman's solid state electronics >> Anonymous Sun Aug 9 17:55:58 2020 No.11991918 >>11991902I think it's frequently mischaracterized >> Anonymous Sun Aug 9 18:52:26 2020 No.11992033 >>11991902My opinion is that statistics is one of the most boring parts of maths, but it's still maths.>I don't know if I'll enjoy it as much as other applied math fields.Relax, most applied maths is boring. Not inverse problem kino and integral equations, tho, those are great. 
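Picking up the clock-timing scheme from >>11991896 a few posts up: assigning bit patterns to agreed time slots is essentially pulse-position modulation, so the arrival time of the single pulse does carry the extra information. A toy sketch with hypothetical parameters, ignoring clock jitter, photon loss, and propagation delay (none of this is from the thread itself):

```python
# Toy pulse-position modulation: one pulse per frame, its slot index encodes bits.
N_SLOTS = 16                                # 16 agreed-upon time slots per frame
BITS_PER_PULSE = N_SLOTS.bit_length() - 1   # log2(16) = 4 bits per single pulse

def encode(bits: str) -> int:
    """Map a bit string to the index of the slot in which the pulse is sent."""
    assert len(bits) == BITS_PER_PULSE
    return int(bits, 2)

def decode(slot: int) -> str:
    """Recover the bit string from the observed arrival slot."""
    return format(slot, f"0{BITS_PER_PULSE}b")

if __name__ == "__main__":
    msg = "1010"
    slot = encode(msg)                       # pulse goes out in slot 10
    assert decode(slot) == msg
    print(f"one pulse in slot {slot} of {N_SLOTS} carries {BITS_PER_PULSE} bits")
```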
>> Anonymous Sun Aug 9 19:56:14 2020 No.11992161 >>11991522What do you mean?>>11989175Go through Khan Academy or the MIT OCW course - they'll have a linear progression that you can work towards for the next few months. Just keep regularly doing problems, and you'll be good. Google is your friend. I also recommend Paul's Online Math Notes, which explain things really well.>>11990373I'm not an expert, but this PDF seems to answer your second question: https://sites.math.washington.edu/~mitchell/Algf/whyrep.pdf >> Anonymous Sun Aug 9 21:08:28 2020 No.11992328 >>11992161since it uses derivatives is it an approximate? >> Anonymous Sun Aug 9 21:13:49 2020 No.11992338 >> Anonymous Sun Aug 9 21:13:52 2020 No.11992339 >>11992328Not really. Derivatives use the notion of a "limit," which means we can see the number a function or something approaches at a certain point. Something like the slope of the tangent line of $x^2$ at $x=3$ is *exactly* $6$, not approximately. >> Anonymous Sun Aug 9 21:18:36 2020 No.11992352 >>11992339>limitSo you mean close to but not exactly. An approximate. >> Anonymous Sun Aug 9 21:22:44 2020 No.11992357 >>11992352I think you're confusing the limit with the difference quotient $\frac{f(x+h)-f(x)}{h}$. The limit itself is exact, but the values of the quotient as it approaches the limit are not. >> Anonymous Sun Aug 9 21:54:28 2020 No.11992409 Why is college algebra such a pain in the dick? I'm up to Calc 2 and never had as much trouble as i did in algebra. >> Anonymous Sun Aug 9 23:51:32 2020 No.11992616 >>11992352>So you mean close to but not exactly. An approximate.noa limit gets past all of the approximationsa limit is exact >> Anonymous Mon Aug 10 01:42:25 2020 No.11992819 >>11991911Go to /diy/ and find /ohm/ >> Anonymous Mon Aug 10 01:46:27 2020 No.11992824 Are 2+2 and 3+1 the same thing? >> Anonymous Mon Aug 10 02:00:53 2020 No.11992856   'aight, I'm gonna repost my unanswered question from last time.What's representation theory in a nutshell? What makes the subject interesting?Assume undergrad-tier knowledge in group theory. (Also, I've studied some Fourier analysis on LCA groups, and I've been told that it's closely related to representation theory. I'd love to see how) >> Anonymous Mon Aug 10 02:55:18 2020 No.11992931 File: 12 KB, 400x438, pwm.gif [View same] [iqdb] [saucenao] [google] [report] >>11980350This is called PWM. It's a modulation that can be used to digitalize information, but in this context is used to easily control how much power do you want to feed into the stove.The power is controlled by the duty cycle. If the stove has a 30% duty cycle, it means that it spends 30% of the time turned on and only spends 30% of the maximum power. The duty cycle can easily and digitally be changed. Futhermore, you can build a control system with a feedback loop to monitor the temperature and manipulate the duty cycle instantly on real time to maintain the temperature constant, and all this logic can fit inside a chip.To do this with AC and transformers is just a pain in the ass and it's too big and expensive. A fucking automatic variable responsive transformer? For a stovetop? Nope. >> Anonymous Mon Aug 10 04:54:58 2020 No.11993100 >>11992824No. They both equal the same number but they are different expressions. >> Anonymous Mon Aug 10 05:28:41 2020 No.11993146 I was wondering. 
If I want to just roughtly know the pH of a solution to the 0.00 magnitude, should I buy some cheap pH meter or a bench pHmeter.I mean, the cheap pH meter should simply give the lecture as soon as I insert it into the solution. But the bench pH meter, while surely will offer more resolution it means that I have to use a buffer calibration solution, then adjust the pH to calibrate the sensor, then insert the desired solution(with a buffer) in the sensor and add a solution of a strong base or acid until I reach the desired pH.The thing is that they explained me the process but not the point to it or the difference between a cheap sensor and a bench one, and taking into account that is ten time more expensive I just want to know whats the point for homelab applications. >> Anonymous Mon Aug 10 07:12:37 2020 No.11993268 In Freyd's theorem that a category C with products of size up to Mor(C) is a preorder, how do we conclude from $f,g:A\to B$, $f\neq g$ that the homset $A\to \prod_{m: \mathrm{Mor}(C) }B$ contains at least $2^{ | \mathrm{Mor}(C) |}$ distinct arrows? >> Anonymous Mon Aug 10 07:34:02 2020 No.11993299 >>11993268Thinking it over a bit, I think the argument goes as follows: for each $m:Mor(C)$ we can choose one of f,g to precompose with the projection $\pi_m$, and the $2^{ \mathrm{Mor}(C) }$ cones thus formed are pairwise distinct, otherwise f,g would lift it non-uniquely to the product cone $\{\pi_m\}$. The theorem follows from the universal property of $\{\pi_m\}$, which ensures that a (distinct) lift in $A\to \prod_{m: \mathrm{Mor}(C) }B$ exists for every (distinct) cone.Is this right? >> Anonymous Mon Aug 10 08:02:59 2020 No.11993348 >>11993299Sounds correct. >> Anonymous Mon Aug 10 08:42:31 2020 No.11993400 how do we know that the properties of numbers are intrinsic and don't just manifest from the way we've chosen to represent them?as in, why is a number always prime, or irrational, or transcendental, regardless of how we choose to represent it in decimal, binary, octal, sexagesimal, etc? >> Anonymous Mon Aug 10 09:58:32 2020 No.11993560 >>11993400You can do all of the construction of the numbers and a lot of the investigation of their properties in a representation-agnostic way. When you get to the reals though you might have to choose a family of representations (ie base n but you don't need to specify what n is). >> Anonymous Mon Aug 10 10:00:58 2020 No.11993567 Got some problems with rotations and quaternions Let say the rotation of a cube in 3D space is defined by a quaternion Q1(x,y,z,w), it is not oriented the way I want, I want to rotate it permanently on the Z axis by 90° during my whole process Do I only have to set its rotation by a multiplication with another quaternion Q2 (normalized) which will be oriented by 90° on the Z axis ? newQ1 = Q2.Q1.Q2* >> Anonymous Mon Aug 10 10:59:16 2020 No.11993739 Why can't you divide vectors? >> Anonymous Mon Aug 10 11:01:59 2020 No.11993752 I want to symbolically solve$x = \sqrt[2]{2\sqrt[3]{3\sqrt[4]{4\sqrt[5]{5\dots}}}}$Which is easily seen to be the infinite product$x = \prod_{n=1}^\infty n^{1/n!}$Taking the log on both sides yields$\ln(x) = \sum_{n=1}^\infty \frac{\ln(n)}{n!}$But I can't get much further than that. Anyone have a clue/hint how to evaluate this sum or product? >> Anonymous Mon Aug 10 11:06:51 2020 No.11993770 >>11993752Are you sure that the infinite product is correct? 
The roots are nested, not just multiplied together >> Anonymous Mon Aug 10 11:09:55 2020 No.11993783 >>11993770looks correct imo, nested implies that the exponents are multiplied together, hence the factorial >> Anonymous Mon Aug 10 11:16:02 2020 No.11993805 Hi, can I become retarded from listening to youtube while playing games? I heard that multitasking is bad for my brain but is it that bad and is that a serious form of multitasking? >> Anonymous Mon Aug 10 11:16:25 2020 No.11993807 >>11993770Well $(2\cdot 3^{1/3})^{1/2} = 2^{1/2}\cdot(3^{1/3})^{1/2} = 2^{1/2}3^{1/6}$ right? And this pattern continues, which gives the product, right? >> Anonymous Mon Aug 10 11:18:04 2020 No.11993815 >>11993783Yeah sorry I missed the factorial, my bad >> Anonymous Mon Aug 10 11:24:09 2020 No.11993837 >>11993752Another anon already suggested it, but you really should post this on stack exchange. >> Anonymous Mon Aug 10 11:28:33 2020 No.11993848 >>11993837Yeah, seems like i might have to. >> Anonymous Mon Aug 10 11:34:18 2020 No.11993860 >>11993752>Anyone have a clue/hint how to evaluate this sum or product?Is it even possible? Wolfram Alpha doesn't give an answer: https://www.wolframalpha.com/input/?i=Sum+from+1+to+infinity+ln%28x%29%2F%28x%21%29 >> Anonymous Mon Aug 10 11:40:36 2020 No.11993875 I would think so, yes. Wolfram alpha isn't perfect, I've solved series that has closed form solutions where wolfram would just estimate the solution.But it is entirely possible that it has no closed form solution. >> Anonymous Mon Aug 10 11:43:41 2020 No.11993882 >>11993860well, it gave to me...https://www.wolframalpha.com/input/?i=sum+ln%28k%29+%2F+%28k%21%29%2C+k+from+1+to+inftyapparently, [eqn] \ln(x) = \sum_{k=1}^{\infty} \frac{\ln(k)}{k!} = 0.603783 [/eqn]so, $x \simeq 1.82902$ >> Anonymous Mon Aug 10 11:53:20 2020 No.11993906 >>11993882That's super weird. I just rechecked my link >>11993860 and it worked this time. Might be because I'm using my phone and had a poor internet connection >> Anonymous Mon Aug 10 12:53:42 2020 No.11994065 >>11993805what>>11993848that was me! but yeah i think stackexchange would be the most helpful >> Anonymous Mon Aug 10 13:29:54 2020 No.11994161 >>11991165Are these books just really meme books though?Also, scientifically speaking, how much dick can your anus handle OP? >> Anonymous Mon Aug 10 13:34:17 2020 No.11994177 >>11992409I hit a minor roadblock in calc 2 but algebra word problems fucked me harder than anything, such as: If it takes 10 people 3 hours to paint a wall how long does it take 5 people to paint 4 walls? Or - If a ball is dropped and travel 5 m/s and goes through water at 3 m/s from a height of 20m, how long was it in water (something like that). I'll know how to get the answer "intuitively," but I always fuck up writing it out into an equation and solving it how they want. It's a blog post, but I thought I could relate. >> Anonymous Mon Aug 10 14:04:14 2020 No.11994270 File: 75 KB, 871x544, Screen Shot 2020-08-10 at 11.01.17 AM.png [View same] [iqdb] [saucenao] [google] [report] Why isn't the last answer correct? The eigenvalues are 1,3, so the basis for the eigenspace of the smallest eigenvalue should be the column that corresponds to the smallest eigenvalue, which I've entered (and yes, I tried the second column as well). >> Anonymous Mon Aug 10 14:10:33 2020 No.11994287 >>11994270The eigenspace corresponding to the eigenvalue 1 is 2-dimensional. >> Anonymous Mon Aug 10 14:11:17 2020 No.11994288 why do heating mantles have a fiberglass lining? 
why not have like, a steel mesh?fiberglass is going to abrade and shed glass filiments, so it'll degrade with time. plus a metal mesh would better conduct heat around the flask so hot spots would be eliminated. and there's not a risk of electrical shock because you could just ground the mantle. >> Anonymous Mon Aug 10 14:11:28 2020 No.11994291 >>11994270So you would need both vectors. >> Anonymous Mon Aug 10 14:26:26 2020 No.11994316 If we have an implication, and the consequent has some property, then does the antecedent also have that property?Something like "increasing the amount of cakes in my house has the property of being good. If I buy cakes, then I increase the amount of cakes in my house. Therefore, buying cakes has the property of being good". Is this logic valid (I don't care about soundness). Or does the fact that the consequent have some property not imply that the antecedent has that property? >> Anonymous Mon Aug 10 14:31:15 2020 No.11994334 What's the deal with NRC Research Press? Trying to view an article on my phone and they've just given me a loading wheel for 10+ minutes. No login/credentials prompt, no nothing.Anyone else familiar with this site? address is just the name with no spaces and a .com >> Anonymous Mon Aug 10 14:38:19 2020 No.11994357 >>11994334no idea about them in particular but i almost always just use scihub to get the real papers. >> Anonymous Mon Aug 10 15:13:50 2020 No.11994466 >>11992161That pdf is from my topology professor at UW. He is dead now. ;_;7 >> Anonymous Mon Aug 10 16:36:45 2020 No.11994840 File: 711 KB, 2062x1160, __konpaku_youmu_and_konpaku_youmu_touhou_drawn_by_pegashi__d8116bc22aa598ef87249fca4b3f43cf.jpg [View same] [iqdb] [saucenao] [google] [report] >>11994161>Are these books just really meme books though?I'm sorry, I think I might be going blind. Does the chart say meme book somewhere? "The books inside are memes", "the chart contains popular /sci/ meme books" or something similar enough? >> Anonymous Mon Aug 10 17:08:42 2020 No.11994946 >>11994840You're right. Just because it doesn't say meme and because they're shilled definitely doesn't mean they are memes.You didn't answer my second question about your gay faggot butt either >> Anonymous Mon Aug 10 17:11:53 2020 No.11994955 >>11993739you can come up with mathematically consistent ways to "divide vectors". it just doesn't have any useful meaning, so there's no point. >> Anonymous Mon Aug 10 17:15:22 2020 No.11994973 Jesus christ I just learned our galaxy is on course to crash with Andromeda, how do we prepare? Why isn't this on the news? >> Anonymous Mon Aug 10 17:19:12 2020 No.11994983 >>11994973>predicted to occur in about 4.5 billion years>why isn't this on the news??? >> Anonymous Mon Aug 10 18:40:29 2020 No.11995228 >>11994973galaxies don't really "crash," they just sort of get mixed upand by the time that happens, earth'll probably have produced multiple space-faring intelligent species, so it won't really be an issue. >> Anonymous Mon Aug 10 20:14:16 2020 No.11995528 Please help a brainlet with integration. I know $\int \dfrac{1}{x} dx = ln|x|$ and that $\int x dx = \dfrac{x^2}{2}$. But how do they 'interact'? For example $\int \dfrac{x}{x}dx$.It would seem logical if it was $\dfrac{x^2 ln|x|}{2}$. But that obviously isn't the case and I can't see the pattern. 
>> Anonymous Mon Aug 10 20:41:22 2020 No.11995614 >>11995528the integral of a product of functions is *not* equal to the product of the integrals of such functionsin order to evaluate the one you suggested correctly, the best shot is to apply integration by parts. taking $\color{red}{u \equiv x}$ and $\color{blue}{\mathrm{d}v \equiv 1/x \, \mathrm{d}x}$, we have[eqn] \int \color{red}{x} \cdot \color{blue}{\frac{1}{x} \, \mathrm{d}x} = \color{red}{x} \cdot \color{blue}{\ln (x)} - \int \color{blue}{\ln (x)} \hspace{4pt} \color{red}{\mathrm{d}x} [/eqn]one can show that $\int \ln(x) \hspace{3pt} \mathrm{d}x = x \big[ \ln(x) - 1 \big]$. hence,[eqn] \int \frac{x}{x} \, \mathrm{d}x = x \, \ln(x) - x \, \ln(x) + x = x \, , [/eqn]which reproduces the expected result. >> Anonymous Mon Aug 10 20:58:43 2020 No.11995662 Would you do linear algebra or calc 2 after completing calc 1? I'm in community college and their suggested math pathway is calc 1, then doing whatever you please until you complete both, of which I can then do calc 3. Which should I do first? >> Anonymous Mon Aug 10 21:13:56 2020 No.11995698 File: 70 KB, 798x383, 8.png [View same] [iqdb] [saucenao] [google] [report] My question is only about C. Since initial height is not given, could I place the x-axis on the final position of the ball? That would make the initial height 3cm and final height 0cm. >> Anonymous Mon Aug 10 21:15:00 2020 No.11995704 Are there significant differences between isopropanol and propanol besides the isomeric difference in their structure? Like I see isopropyl alcohol used all the time in sanitation agents, but not 1 propanol. Why is that? >> Anonymous Mon Aug 10 21:19:56 2020 No.11995729 Is it possible to create a drug that makes girls insatiable nymphomaniacs who become dumber each time they have sex? Because I think such a drug would be beneficial to society. >> Anonymous Mon Aug 10 21:26:33 2020 No.11995749 >>11995704propene is easier to hydroxylate at secondary carbons than primary (markovnikov vs anti-markovnikov), thus iPrOH is cheaper than nPrOHalso check the msds, idk if nPrOH is safe or not >> Anonymous Mon Aug 10 21:35:52 2020 No.11995771 >>11995749Sick. Thanks bruv. I'm doing a project on IPA for my gen chem 3 class. I don't know what markovnikov is, and I wouldn't have understood your explanation if not for the articles I've been reading specifically about how IPA is manufactured industrially. Either through the direct or indirect hydration of propene, or through the hydrogenation of acetone. I assume hydroxylate and hydrate are synonymous here, I can't imagine either means anything different than "to affix an OH group." >> Anonymous Mon Aug 10 21:43:14 2020 No.11995786 >>11995662Probably calc 2. linalg is more important in calc 3 imo, so it'd be good to take it sooner to your calc 3 so it's fresher in your mind.>>11995698Yes. We're just looking for the difference in their heights, which would be 3cm. This would be plugged into $mg\Delta h$. >> Anonymous Mon Aug 10 22:06:05 2020 No.11995816 >>11995786>This would be plugged into mgΔhmgΔh.Cool, thanks. >> Anonymous Mon Aug 10 23:06:54 2020 No.11995938 Looking up differential amplifiers. What does "suppresses any voltage common to the two inputs" mean? Does it suppress the AC part of the inputs or what? Why the fuck am I going through a mostly practical circuit class when I'm not an engineer, why are we being assigned the theory instead of it being explained more in depth? 
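On the differential-amplifier question just above: "suppresses any voltage common to the two inputs" means the output responds strongly to the difference of the inputs and only weakly to their average (the common-mode part), which is how hum or bias shared by both inputs gets rejected. A toy numerical sketch with made-up gains, not a model of any particular amplifier:

```python
# Idealized two-input amplifier: large differential gain, tiny common-mode gain.
A_diff = 1000.0      # differential gain (hypothetical)
A_cm   = 0.01        # common-mode gain (hypothetical); CMRR = A_diff / A_cm

def v_out(v_plus, v_minus):
    v_d  = v_plus - v_minus            # differential component (amplified)
    v_cm = 0.5 * (v_plus + v_minus)    # common-mode component (suppressed)
    return A_diff * v_d + A_cm * v_cm

# 1 mV signal of interest riding on 1 V of hum picked up by *both* inputs:
print(v_out(1.0 + 0.001, 1.0))   # ~1.0 V from the signal, only ~0.01 V from the hum
```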
>> Anonymous Mon Aug 10 23:15:06 2020 No.11995960 File: 24 KB, 333x499, calculus Keisler.jpg [View same] [iqdb] [saucenao] [google] [report] >>11991165Hello,i want to learn calculus with the infinitesimal approach, but before i need to refresh some precalculus. I have 3 different options and i was wondering if you could help me pick the best one or the most efficent at least.1)Khan academy precalculus (https://www.khanacademy.org/math/precalculus)2)Stewart precalculus: mathematics for calculus3)Simmons Precalculus mathematics in a nutshell.I don't know if it helps; but, i did precalc, calc I and II in uni. Sadly, i don't remember very much. >> Anonymous Mon Aug 10 23:34:12 2020 No.11996023 Between chemical engineering and EE, which would involve less hands on work (with this I mean building things, simulating things, lab work, etc.) Which of the two would be more pen and paper focused, that is, which would have the more courses where I can just read the theory, understand it and apply it to problems without having to build, simulate, or experiment with anything.Thanks >> Anonymous Tue Aug 11 00:58:37 2020 No.11996176 >>11995960I would personally recommend Khan Academy, since it's very interactive, and leaving the textbooks as extra references if you need to double check something. >> Anonymous Tue Aug 11 02:16:44 2020 No.11996293 >>11995938different DC biases may cause different linearity responses, in a shitty amplifier. w2aew has a video on itidk what your second question is, do you care or not care about practical circuit design? >> Anonymous Tue Aug 11 02:20:15 2020 No.11996298 >>11996023just drop out and be an actuary already >> Anonymous Tue Aug 11 03:06:18 2020 No.11996374 >>11995729yeah it is the simp wet dream, but girls dont get dumber, since they are the ones organizing the competition among men to pick up the men who will please them sexually the most. it's like wanting a drug allowed by referees to make the referees dumber >> Anonymous Tue Aug 11 03:14:26 2020 No.11996389 >>11994316>If we have an implication, and the consequent has some property, then does the antecedent also have that property?yes in logic the usual property is ''being sent to true in the model''there are logics where some other property is required to be passed over to the consequent.learn more about weird inference rules >> Anonymous Tue Aug 11 04:38:16 2020 No.11996529 >>11991165The definition of the imaginary unit is i^2 = -1, in other words: any number whose square equals -1 is the imaginary unit. Clearly -i also satisfies this equation, which means that, by definition, -i = i, and therefore i = 0. Where is the error, from a rigorous perspective? Do I misunderstand how implicit definitions work? >> Anonymous Tue Aug 11 05:22:27 2020 No.11996582 File: 15 KB, 400x300, tessaract_4F41A045-E7DB-C89F-2250ABF12EF703E0.jpg [View same] [iqdb] [saucenao] [google] [report] is the fourth dimension basically just drawing a line and pretending it's 90 degrees from other axis when it's 45? no spacetime pls >> Anonymous Tue Aug 11 06:06:33 2020 No.11996634 {0} ≠{ } why? Also, suggest books/websites for high school physics, chemistry, math and electronics other than khan academy. Thank you. >> Anonymous Tue Aug 11 06:16:31 2020 No.11996641 >>11996634>{0} ≠{ }check the elements of each set >> Anonymous Tue Aug 11 06:42:19 2020 No.11996686 >>11996582No, it's actually 90° with respect to every other axis. This cannot be done in 3D just like a cube cannot have three angles mutually 90° drawn on paper. 
What you're looking at a is a projection onto a lower-dimensional space. >> Anonymous Tue Aug 11 07:11:57 2020 No.11996733 >>11992352Look at that fucking faggot, so sure of that claim. Any half decent beginner book in this beginner course would give you that understanding, in its beginning chapters. Right thread for you indeed. >> Anonymous Tue Aug 11 08:01:35 2020 No.11996822 >>11996529> in other words: any number whose square equals -1 is the imaginary unitNo. One of them is the imaginary unit, the other is its negation. Similar to how i^2=1 has 1 and -1 as its solutions. Except that 1 and -1 are distinguished by the fact that 1 is the multiplicative unit, i.e. 1*x=x, whereas -1 isn't. There is no such distinction for i vs -i, nor does it matter. >> Anonymous Tue Aug 11 08:23:36 2020 No.11996862 1. How do chemicists remove dirty protective gloves?2. Anyone manufactures protective gloves from PTFE? >> Anonymous Tue Aug 11 09:16:26 2020 No.11996957 Organic chemistry questions1. Does anyone know where I can find a chart or table ranking common functional groups by frontier orbital (HOMO/LUMO) energy levels? I remember seeing a chart out there once2. Does anyone know where I can find a chart/table showing reaction rates for common rxns? I vaguely remember seeing one that grouped the fastest ones as "diffusion controlled" >> Anonymous Tue Aug 11 09:35:18 2020 No.11996989 >>11991165how are you doing OP? What are you studying atm? >> Anonymous Tue Aug 11 09:36:35 2020 No.11996993 >>11996862>1. How do chemists remove dirty protective gloves?take one off by grabbing it by the cuff, up until the finger tips, then with the partially gloved hand do the same for the other glove, they should now be both basically off, then you can bin them/scruntch them into a ball and bin them. >> Anonymous Tue Aug 11 11:20:34 2020 No.11997267 File: 65 KB, 455x683, 0362016ba44703ae6782d834264dfd32c.jpg [View same] [iqdb] [saucenao] [google] [report] >>11996989The usual. Got some stuff to solve. The kind of stuff which you can't go and just mechanically solve yourself, it actually requires some form or another of human interaction.Currently studying C++ programming.How about you? Doing anything in particular? >> Anonymous Tue Aug 11 11:57:33 2020 No.11997384 What's the point of factoring polynomials and functions in general?my professors don't like when i don't factor a function, even if it's correct, they say it's incomplete if i don't factor them, why is this? >> Anonymous Tue Aug 11 12:15:44 2020 No.11997427 >>11997267not a great deal just some problem sets I meant to get done ages ago, boring ass aPplIeD CS bullshit >> Anonymous Tue Aug 11 13:20:15 2020 No.11997601 >>11991165how should I cite books which aren't finished/published yet? for example "Vector Bundles and K-theory" by A.Hatcher >> Anonymous Tue Aug 11 13:22:36 2020 No.11997607 >>11991255first time going on /sci/ in a few months, glad to see youre still around, 2hu bro >> Anonymous Tue Aug 11 13:26:17 2020 No.11997613 >>11996862grab the cuff of the right glove with your left hand, and pull the glove completely off. you are now holding an inside out glove by the cuff in your left hand. now, grab the left cuff *through the inside out glove*, like a doggy-bag, and take the left glove off >> Anonymous Tue Aug 11 13:44:28 2020 No.11997677 >>11997384It is much easier to reason about, and the professors probably want all the anwers of a similar form for easier grading. 
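On the factoring question at the end here: beyond grading convenience, a fully factored answer exposes the roots and sign changes at a glance. A small illustration with a hypothetical cubic (SymPy, assuming it is installed; not from the thread):

```python
import sympy as sp

x = sp.symbols('x')
p = x**3 - 6*x**2 + 11*x - 6          # hypothetical example polynomial

print(sp.factor(p))                    # (x - 1)*(x - 2)*(x - 3)
print(sp.solve(p, x))                  # [1, 2, 3] -- roots readable from the factors
```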
>> Anonymous Tue Aug 11 14:24:19 2020 No.11997809
File: Screen Shot 2020-08-11 at 10.55.58 AM.png
I keep getting this problem wrong. Can someone help me find P? It does not accept answers from any calculator I've tried. The eigenvalues are correct, and are 0, 0 and -3. So the row reduced forms of the matrices after subtracting the eigenvalues along the diagonal are the identity matrix for -3, and for 0 it is:
$\begin{bmatrix} -1 & -3 & 11 \\ 0 & 0 & 0\\0 & 0 & 0\end{bmatrix}$
I'm not really clear on how to solve for P from here. I know it has to do with setting the rref matrices above to Ax = 0, and solving for x, but I tried that and my answer was still wrong.

>> Anonymous Tue Aug 11 14:32:33 2020 No.11997830
>>11992824
>>11993100
Any recommendations for the linguistic and semiotic (syntax, semantics) aspects behind maths?

>> Anonymous Tue Aug 11 14:34:37 2020 No.11997837
>>11997809
nevermind, got it.

>> Anonymous Tue Aug 11 14:46:07 2020 No.11997876
>>11997601
the same way you would cite published books

>> Anonymous Tue Aug 11 14:48:55 2020 No.11997884
I'm taking chemistry 101 this fall semester. But one of the required texts for the lab part is something called "LABORATORY NOTEBOOK 100 CARBONLESS SET". What exactly is that? It doesn't seem like a textbook. Is it just a special kind of notebook for chemistry? If so, are there other, cheaper (it's like \$26 for that specific one listed) ones I could get instead? The exact ISBN is 9781506647401. I would just email the lab professor, but I've already emailed him like 3 times over the past few months and he hasn't responded yet.

>> Anonymous Tue Aug 11 14:56:21 2020 No.11997903
>>11991165
For anyone that didn't know yet, most books are available at the website
>gen.lib.rus.ec

>> Anonymous Tue Aug 11 15:27:16 2020 No.11998009
The other answers are correct, but I can't figure out what's supposed to go in place of the function. It complains when it's not a number.

>> Anonymous Tue Aug 11 15:29:27 2020 No.11998016
File: Screen Shot 2020-08-11 at 12.26.32 PM.png

>> Anonymous Tue Aug 11 15:33:23 2020 No.11998028
>>11998016
Did you try swapping x^2 and 2x?

>> Anonymous Tue Aug 11 15:34:45 2020 No.11998034
>>11998028
all the answers for A-F are correct, it's just the function I'm having trouble with.

>> Anonymous Tue Aug 11 15:35:15 2020 No.11998037
What's the exact order in which it is preferable to read the books in the picture?

>> Anonymous Tue Aug 11 15:36:05 2020 No.11998041
>>11998034
Oh, right. Put 1 in.

>> Anonymous Tue Aug 11 15:36:37 2020 No.11998044
>>11993752
I have a solution but need time to TeX it out

>> Anonymous Tue Aug 11 15:37:57 2020 No.11998051
>>11998037
1. Copi's Introduction to Logic / How to Solve It
2. Pinter's A Book of Set Theory / Halmos' Naive Set Theory
3. Wooton's Analytic Geometry / Mishra's Functions and Graphs
4. Uspensky's Theory of Equations / Lang's Introduction to Linear Algebra
5. Spivak's Calculus / Courant's Analysis
6. Kurosh's Algebra
7. Kwak's Linear Algebra
8. Apostol's Analysis
9. Ross's Courses in Probability
Optional:
2.5. Evan Chen's Euclidean Geometry / Brualdi's Combinatorics
3.5. Andreescu's Complex Numbers / Kiselev's Geometry
4.5. Andreescu's Number Theory
5.5. Costas Efthimiou's Functional Equations / Niven's Maxima and Minima
6.5. Demidovich's Analysis Problems
7.5. Wallace's Groups, Rings and Fields / Scott's Group Theory
8.5. Krasnov's Integral Equations
9.5. Hay's Vector and Tensor Analysis / Munkres' Topology

>> Anonymous Tue Aug 11 15:38:22 2020 No.11998054
>>11998041
I don't understand why it was a constant?

>> Anonymous Tue Aug 11 15:42:33 2020 No.11998066
>>11998054
The integral of the constant 1 function along a "space" returns the "space"'s "volume". For example:
$\int _{a}^b 1 ~ dx = b - a$
For another example, this time the area between $x = a$, $x = b$, $0 < y < f(x)$ is $\int_a ^b \int _0 ^{f(x)} 1 ~ dy ~ dx = \int _a ^b f(x) ~ dx$

>> Anonymous Tue Aug 11 15:44:24 2020 No.11998075
>>11998066
ohhhh I see, thanks!

>> Anonymous Tue Aug 11 16:12:23 2020 No.11998165

>> Anonymous Tue Aug 11 16:45:39 2020 No.11998263
File: __yakumo_ran_touhou_drawn_by_chanta_ayatakaoisii__sample-506ff3a5c6693b46c1b0fba342a0479d.jpg
>>11997427
Best of luck.
>>11998037
Left to right, top to bottom, but if you chose Pinter you'll want to go back and read one of the other algebra books later. And if you choose Spivak, you still need to read some book about vector calc. Also, you might want to take two minutes to read the Picard-Lindelof theorem proof somewhere at some point. Preferably before Lee's Smooth Manifolds, since iirc he used it to prove the existence of flows for vector fields. The chart wasn't made to be actually followed and is provided as is with no guarantees express or implied.
>>11998051
>has book about functional equations
Literally impossible to take seriously.
>has a book about integral equations
List redeemed.

>> Anonymous Tue Aug 11 17:11:05 2020 No.11998358
File: Screen Shot 2020-08-11 at 2.09.20 PM.png
Can't figure this one out either. It's a paraboloid along y where only the first octant is being considered. It complains my domain for $uz$ is wrong, which makes sense because the domain goes from -x to x, which is out of the first octant. But I'm not sure what else to write, or if that's even the problem.

>> Anonymous Tue Aug 11 17:12:08 2020 No.11998361
>>11998358
$uz=3$
Oops, that was me just entering shit to check error messages. I had $\sqrt{y-x^2}$ originally.

>> Anonymous Tue Aug 11 17:17:00 2020 No.11998388
What's the (topology) interior of $S^1$? Is it $S^1$ minus a point? Which point would it be?

>> Anonymous Tue Aug 11 17:21:23 2020 No.11998407
>>11984281
Yes you can, but the whole point of using different coordinate systems is to take advantage of the symmetry of the given problem. Not saying that it can't be done, but it's probably not the most practical way to go.

>> Anonymous Tue Aug 11 17:22:08 2020 No.11998414
>>11998263
Okay, but what iq do you need to be to successfully get through all these books?

>> No one Tue Aug 11 17:22:48 2020 No.11998417
Thanks fag for your service you're very usefull. Keep it up!

>> Anonymous Tue Aug 11 17:23:15 2020 No.11998420
>>11998388
....

>> Anonymous Tue Aug 11 17:31:22 2020 No.11998441
>>11998420
Ok I just cleared that up, I was confusing the subspace topology on $S^1$ instead of $S^1$ as a subset of $\mathbb{R}^2$. I am truly the lowest of brainlets.
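To spell out the resolution of the $S^1$ question above (a brief note, not from the thread): in its own subspace topology $S^1$ is open in itself, so its interior is all of $S^1$; viewed as a subset of the plane, however,
$$\operatorname{int}_{\mathbb{R}^2}(S^1)=\varnothing,$$
since every open ball $B(p,\varepsilon)$ around a point $p\in S^1$ contains points such as $(1+\varepsilon/2)\,p$ that lie off the circle, so no point of $S^1$ is an interior point in $\mathbb{R}^2$.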
>> Anonymous Tue Aug 11 19:08:40 2020 No.11998776 >>11996023ChemE>EE, in the sense that people studying ChemE have higher IQs than thos studying EE. The fact that so few ChemE's browse this board proves it. >> Anonymous Tue Aug 11 19:46:29 2020 No.11998885 Can we escape this universe into another one? Could the laws of physics be different in another universe? Could fire freeze things? >> Anonymous Tue Aug 11 19:47:58 2020 No.11998891 >>11998885feasibly, nocan the laws be different? sure. but it would have to be a different universe which we aren't sure exists or not. there's nothing stating the laws of physics have to be what they are, they're just observed to be truecould fire freeze things? what the fuck does this mean >> Anonymous Tue Aug 11 20:48:56 2020 No.11999111 >>11997677>easier gradingUsed to think this was stupid, until I started working as a grader myself. When you've gone through 100 exams and more than half of the students don't follow directions for factoring their answer completely, it tends to get on your nerves. >> Anonymous Tue Aug 11 22:36:47 2020 No.11999375 File: 210 KB, 1500x688, do you tho.png [View same] [iqdb] [saucenao] [google] [report] >> Anonymous Wed Aug 12 01:10:47 2020 No.11999703 >>11998358bump >> Anonymous Wed Aug 12 01:27:37 2020 No.11999737   >>11999703It's $\sqrt{10}$ or $\sqrt{10-y}.$ >> Anonymous Wed Aug 12 01:35:27 2020 No.11999754   >>11999703It's either $\sqrt{10}$, or it's $\sqrt{y-10}.$ I'm leaning towards the latter. >> Anonymous Wed Aug 12 01:38:03 2020 No.11999763   File: 13 KB, 684x107, mama.png [View same] [iqdb] [saucenao] [google] [report] I'm apparently setting integral up wrong, pls halp >> Anonymous Wed Aug 12 01:42:23 2020 No.11999777 >>11999703You need to slice the graph on the $yz-$plane, and you'll get $\sqrt{y}.$ >> Anonymous Wed Aug 12 01:44:05 2020 No.11999780 >>11999777ah shit thanks homie, noted >> Anonymous Wed Aug 12 03:35:28 2020 No.11999994 Is there any material for learning how to use a proof assistant? >> Anonymous Wed Aug 12 04:16:11 2020 No.12000094 where is it?? >>12000000 >> Anonymous Wed Aug 12 05:31:40 2020 No.12000225 >>11991233The scalar product of two vectors measures the projection of one vector onto the other one. Since projection depends on the angle in Euclidean space, you can think about measuring the angle between the vectors. If they are 'similar' (= have similar direction), the projection will be large. If they are perpendicular to each other, projection is zero. The concept is very useful because 1) it allows to talk about angles (or projections, rather) in vector spaces other than Euclidean; 2) any scalar (or, rather, inner) product induces norm which induces metric - and so we can talk about distances between abstract vectors. One of the most useful examples is a vector space where vectors are functions (like cos(x)). As soon as you introduce an inner product, you can project one function onto another, and compute distances between functions. This allows us to perform analysis - convergence, Fourier series, functional derivatives, etc. A vector product, by definition, is a vector in a direction that is orthogonal to both of the original vectors, and has length equal to the area of the parallelogram spanned by the two vectors. Vector product is an analog of a much more general concept of an exterior product which is something that allows us to compute volumes. 
Tthe exterior product on a generic vector space can only be defined for forms and not vectors (form is something that eats vectors and gives you a number, and is linear). The result of an outer product is also a form and not a vector. In particular, a product of two forms is a 2-form which eats two vectors and gives oriented area of a parallelogram. This explains why vector product gives you area. To transform the product output (the 2-form) into a vector, you have to have some additional structure (like a scalar product). However, this only allows for assigning a vector to a 2-form in either a 3-dimensional or 7-dimensional space. Historically vector product appeared from Hamilton's quaternions. >> Anonymous Wed Aug 12 05:48:55 2020 No.12000276 File: 67 KB, 770x444, FBD.png [View same] [iqdb] [saucenao] [google] [report] >>11991165Sort of an engineering/kinematics question. What the hell is a kinetic diagram? I'm supposed to make them for my summer class and I keep getting it wrong. Google doesn't really specify and I know it's not a FBD because I got that wrong. Can anyone explain or maybe sketch one out of a simple pendulum?pic related is a FBD since idk how to make a KD and my professors comment >> Anonymous Wed Aug 12 07:35:44 2020 No.12000567 >>12000276>kinetickinetic means when t is fixed, so fix a time and draw the forces+ frame x, y, z >> Anonymous Wed Aug 12 07:54:45 2020 No.12000627 >>11998044Oh wow! Please share >> Anonymous Wed Aug 12 08:13:54 2020 No.12000665 why do i feel fucking brain damaged after having been exposed to a 28-30C indoor temperature?>inb4 git gud pussyit's clearly warmer than ideal for human productivity, and warmer countries tend to be less developed and be populated by lazy brainlets >> Anonymous Wed Aug 12 08:25:06 2020 No.12000685 >>11993146Go for the calibrated one if you're doing anything quantitative. >> Anonymous Wed Aug 12 08:27:52 2020 No.12000689 >>11993805It won't make you retarded, it'll just make you worse at what you're trying to do. If I try to do my homework while I cook, I'll make mistakes in my homework and/or fuck up my meal, but I won't lose cognitive function. >> Anonymous Wed Aug 12 08:31:54 2020 No.12000698 >>12000665relax niggameditate and slow your metabolism >> Anonymous Wed Aug 12 09:58:40 2020 No.12000894 Could someone explain to me what support functions are, I've heard they generalize tangents in higher dimensions but that doesn't make sense since they are defined as the distance between a set and it's hyperplane? What's a hyperplane? >> Anonymous Wed Aug 12 10:25:32 2020 No.12000983 File: 61 KB, 1920x1080, draft.png [View same] [iqdb] [saucenao] [google] [report] why is there a gust of wind (the blinds on my open window move a bit, but I can not feel it) when my neighbour opens and closes the door?my own door is fairly well-isolatedpressure has always fucked with my head >> Anonymous Wed Aug 12 10:41:34 2020 No.12001022 >>12000665you'll get used to it in a week >> Anonymous Wed Aug 12 10:45:31 2020 No.12001039 File: 82 KB, 208x208, 208x208.png [View same] [iqdb] [saucenao] [google] [report] How does one do mathematics?I read the essay A Mathematician's Lament by Paul Lockhart, and it got me thinking on what it really means to do math. I myself am an data science student, with a math minor. But I've always been interested in mathematics, and decided to use the free time coronavirus gave me to self learn the stuff I won't learn in school, like abstract algebra. 
However, I came across this essay, and it got me thinking of what it truly means to do math.I understand what it means to do coding/writing/data analysis: you have to make something. You can't learn coding only by reading books, you can't learn to write stories only by reading fantasy, and you can't analysis data only by seeing what others do: these are all steps of the process, not the process itself.But from what I understand, (again, from what I, a noob, understand) there's nothing like that for math. There's competitions, doing exercises in the textbook, or research, and competitions in my experience is just really intense exercises. Is self studying from a textbook, like how the wiki says, https://4chan-science.fandom.com/wiki/Mathematics says, enough to say you do mathematics? >> Anonymous Wed Aug 12 10:50:52 2020 No.12001044 I came up with an interesting-to-me problem that I'm having trouble conceptualizing an answer for.I'm trying to come up with something that, when given two other functions f(x) and g(x), outputs the function that always stays directly between the other two functions.A physical intuition I've come up with is that f(x) and g(x) both have a "pull" between them. The more one of the function pulls, the harder it would deflect the line between the two.I've noticed two properties for the thing. One, the output function should always pass between the points of intersection of the two functions. Two, if f(x) and g(x) are inverses, the output function would be the identity function y=xI don't know how to actually solve this though. "Pull" tips me off that it's going to deal with second order derivatives, but since this is supposed to take in any two functions, I don't think a regular second-order differential equation is sufficient.Would this be solved with the Calculus of Variations? >> Anonymous Wed Aug 12 10:54:37 2020 No.12001051 >>12001044That's just the convex property of functions, check out convex optimization, it actually uses calculus of variations and lagrange multipliers, if you did math up to calculus 4c you'd know this. >> Anonymous Wed Aug 12 10:55:25 2020 No.12001053 >>12001051that explains it, I've only done up to ODEs and calc II >> Anonymous Wed Aug 12 11:18:47 2020 No.12001103 Do I need to use Stoke's Theorem to find what points belong to the boundary of the set of an annulus? >> Anonymous Wed Aug 12 11:19:27 2020 No.12001105 File: 2 KB, 169x110, 1573658353685.png [View same] [iqdb] [saucenao] [google] [report] I honestly don't see how you simplify the top to the bottom I'm feeling extra dumb today. >> Anonymous Wed Aug 12 11:21:55 2020 No.12001112 >>12001105Multiply both top and bottom by the denominator but switch from adding to subtracting. >> Anonymous Wed Aug 12 11:33:49 2020 No.12001147 File: 95 KB, 283x343, 1596526225700.png [View same] [iqdb] [saucenao] [google] [report] >>12001112I got $-2\sqrt{3}-10$ now and I keep finding errors so I know I made some big fuck up in my early calculations.I think I'm just going to take a shower and a nap, I shouldn't have left all the semester homework for the last week honestly, I got the dumb. >> Anonymous Wed Aug 12 11:38:00 2020 No.12001157 >>12001147You should get -2*sqrt(3)-4 as your new numerator and -2 as your new denominator. 
Then simplify and you get 2+sqrt(3) >> Anonymous Wed Aug 12 11:45:20 2020 No.12001180 >>11993752A296301 >> Anonymous Wed Aug 12 11:50:38 2020 No.12001195 File: 27 KB, 700x467, 1583878742628.png [View same] [iqdb] [saucenao] [google] [report] >>12001105>>12001147https://www.bbc.co.uk/bitesize/guides/zg6vcj6/revision/6Just gotta keep track of the minus signs bud >> Anonymous Wed Aug 12 12:04:49 2020 No.12001225 >>12001180No closed form, though :(>>11998044Do share! >> Anonymous Wed Aug 12 12:09:58 2020 No.12001241 >>12001157>>12001195after taking a shower and getting some hot cocoa I fucking did it, I feel so dumb now for asking simple shit.Thanks anons, you guys are amazing. >> Anonymous Wed Aug 12 12:14:34 2020 No.12001249 >>12001039>https://4chan-science.fandom.com/wiki/Mathematics says, enough to say you do mathematics?dieudonné said a mathematician is guy a who published a novel proof of a theorem, that's it. >> Anonymous Wed Aug 12 12:42:14 2020 No.12001316 Probability of rolling a 1 at least 600 times out of 1000 rolls on a 4 sided dice? >> Anonymous Wed Aug 12 12:49:27 2020 No.12001332 Is degree in materials engineering worth it or will i be unemployed for life? >> Anonymous Wed Aug 12 13:17:52 2020 No.12001388 >>12001316CDF of the binomial distribution >> Anonymous Wed Aug 12 13:21:28 2020 No.12001398 Is medical research in America looking to be a good career path or will I just be cucked by pajeets? >> Anonymous Wed Aug 12 13:29:34 2020 No.12001425 >>12001332sounds pretty niche t b h >> Anonymous Wed Aug 12 13:48:22 2020 No.12001467 File: 20 KB, 248x248, 1569950729430.jpg [View same] [iqdb] [saucenao] [google] [report] how do you become smarter? >> Anonymous Wed Aug 12 13:57:39 2020 No.12001483 so what's the deal, do good GPAs not carry as much weight anymore due to COVID, since everyone's just cheating? >> Anonymous Wed Aug 12 14:01:04 2020 No.12001493 >> Anonymous Wed Aug 12 14:05:30 2020 No.12001505 >>12001483it will become easier to get a job without a degree with boomers retiring and more people getting redpilled about the college jewhttps://www.youtube.com/watch?v=iNlBizfi-jMhttps://www.youtube.com/watch?v=5Fh6LtBYmiIhttps://www.youtube.com/watch?v=HzoP_RO8PHIhttps://www.theguardian.com/technology/2020/mar/10/elon-musk-college-for-fun-not-learning >> Anonymous Wed Aug 12 14:30:57 2020 No.12001603 >>11987253Gravitons are hypothesized and generally accepted to exist, although specific details vary since we don't have a verified theory of quantum gravity yet. We have not experimentally observed gravitons and may never do so, because gravity is such an incredibly weak interaction. But an experimentally proven theory of quantum gravity may indirectly verify the existence of gravitons.>>11990322What do you mean by "undergrad branches"? This phrase doesn't make any sense to me. >> Anonymous Wed Aug 12 14:45:17 2020 No.12001645 For an unbounded self-adjoint operator $A$, is there any alternative (but equivalent) way to understand $f(A)$ besides the spectral theorem? Of course in the bounded case you can talk about a series expansion if it converges etc. (e.g. Taylor series of $f = \exp$), but this doesn't work for the unbounded case.I'm most curious about whether such an alternative approach, if it exists, could be used for practical computations. 
For example, in linear PDEs involving the Laplacian you take formal solution as being something like $e^{-\Delta t}$ acting on the initial conditions, which you actually compute by diagonalizing $\Delta$ (by separation of variables, for instance). I wonder if there is approach that will still give me a workable representation of $e^{-\Delta t}$. >> Anonymous Wed Aug 12 15:09:30 2020 No.12001695 >>12001645Hmm... I'm not sure if it fits your needs, but you should take a look at references on the heat kernel expansion. There are several methods using this expansion in spectral geometry. For a review, there is: https://arxiv.org/abs/hep-th/0306138 (but its more physics-related)To a more mathematical-inclined ref., there is a book called "Asymptotic Formulae in Spectral Geometry", by P. Gilkey (i think it can be found on libgen) >> Anonymous Wed Aug 12 15:43:57 2020 No.12001776 >>12001645>For an unbounded self-adjoint operator A>A, is there any alternative (but equivalent) way to understand f(A)>f(A) besides the spectral theorem?in predicative maths, there is only the spectral method, check Bas Spitters workhttps://users-cs.au.dk/spitters/jucs_11_12_2096_2113_spitters.pdfand his phd thesis. IF there is another method it will be only in classical maths. >> Anonymous Wed Aug 12 15:48:17 2020 No.12001791 File: 310 KB, 1920x964, J.png [View same] [iqdb] [saucenao] [google] [report] >cost function is a sum of dozens of functions>some are convex and some are concaveIs there an algorithm that is guaranteed to find the global optimum? >> Anonymous Wed Aug 12 15:49:53 2020 No.12001792 File: 4 KB, 845x168, volumeTriple.png [View same] [iqdb] [saucenao] [google] [report] So x goes from [0,2], y [0,2] and z [0,4]. So if my integral is dxdydz, the bounds of each integrand is just 0,2 for dx, 0,2 for dy, and 0,4 for dz right? >> Anonymous Wed Aug 12 15:50:55 2020 No.12001798 >> Anonymous Wed Aug 12 15:54:17 2020 No.12001810 >>12001792no, because then you're integrating over a cuboid of side lengths 2,2,4, instead of the triangular-shaped region defined by that plane.Your bounds of integration will not be constant for 2 variables. Say you start with the z integral, you'll be integrating from 0 to something like 4-2y-2x, etc.. >> Anonymous Wed Aug 12 15:56:13 2020 No.12001817 >>12001791yeah all the partial derivatives will be 0 at an extrema >> Anonymous Wed Aug 12 16:00:12 2020 No.12001835 >>12001810>cuboidahh, duh thanks. So if my order of integration is dzdydx, then I'd have z from [0, 4-2y-2x], x from [0, 4-z] and y from [0, 4-z]?If so I think I get it >> Anonymous Wed Aug 12 16:02:10 2020 No.12001844 >>12000983not sure which doors correspond to what in this diagram >> Anonymous Wed Aug 12 16:03:28 2020 No.12001849 >>12001835close, but whatever you integrate last can't have earlier variables as your bounds of integration, otherwise you're going to end up with an answer that isn't a number.so if your first integral is over z, you can't have z in any of your following integrals. >> Anonymous Wed Aug 12 16:06:13 2020 No.12001855 >>12001849so would I be doing dz [0, 4-2x-2y], dy [0, 2], dx [0,2]? or dy need to be something in terms of x? >> Anonymous Wed Aug 12 16:10:35 2020 No.12001864 >>12001798>>12001817>just solve a huge system of nonliear equations bro >> Anonymous Wed Aug 12 16:13:33 2020 No.12001871 >>12001855in general, it's helpful to think about the problem this way:if your bounds are [0,2] for x and y, then that means you're integrating up to the point 2,2,z. is this point valid geometrically? 
well if you impose the condition on z that you did, then yes.as long as this is true for any points x,y,you can pick, then your bounds of integration for x,y are just [0,2]. if you need to impose another condition for y to restrict your region, you can do that. >> Anonymous Wed Aug 12 16:14:35 2020 No.12001873 >>12001864yes that's literally what you have to doturn it into a linear algebra problem and solve it on a computer in 5 microseconds >> Anonymous Wed Aug 12 16:15:52 2020 No.12001874 >>12001645>Of course in the bounded case you can talk about a series expansion if it converges etc. (e.g. Taylor series of f=exp), but this doesn't work for the unbounded case.Couldn't you still talk about pointwise convergence, tho? Not $\exp (A) = \sum \frac{A^n}{n!}$, but $\exp (A) x = \sum \frac{A^n x}{n!}$?I wouldn't be surprised if the convergence of the usual Borel functional calculus of exponential of A evaluated at x existed if and only if this pointwise power convergence happened.>>12001791>Is there an algorithm that is guaranteed to find the global optimum?https://math.stackexchange.com/questions/13386/can-any-continuous-function-be-represented-as-a-sum-of-convex-and-concave-functiSee the first response.tl;dr I don't think your conditions actually help with anything. >> Anonymous Wed Aug 12 16:19:55 2020 No.12001883 Could someone explain to me what Stephen Wolfram's computation physics autism breakthrough means? Like, what exactly is he doing? What I gathered was that he's basically throwing shit at the wall until it sticks by taking really abstract yet fundamental math, and seeing which combinations of shit result in laws in math that results in the laws of physics that we empirically observe. Because he can disregard huge swathes of possible-systems that don't have laws we see in our world, he hopes to eventually narrow it down to a selection of systems that can be used to predict new phenomena. Did I get this right? Where am I wrong? >> Anonymous Wed Aug 12 16:20:04 2020 No.12001885 >>12001871thanks buddy >> Anonymous Wed Aug 12 16:34:10 2020 No.12001930 Any quantum physics/quantum computing nerds here? I'm trying to understand pulse level programming and when I would want to use it.If I'm understanding it correctly then it can help me perform better calibrations on qubits and optimize gate errors. However I've also seen discussion about transmon qubits and coherence times.I'm honestly just a little lost. >> Anonymous Wed Aug 12 17:38:17 2020 No.12002108 >>12001873>nonlinear system of equations>just turn it into a linear algebra problem, broRetard.>inb4 just solve the nonlinear equationsYou just went from testing a lot of initial guesses for the optimization solver to testing a lot of initial guesses for the equation solver.>>12001874>tl;dr I don't think your conditions actually help with anythingI'm not sure why you concluded this from the link you posted. >> Anonymous Wed Aug 12 18:37:52 2020 No.12002278 File: 15 KB, 834x152, tripleMOM.png [View same] [iqdb] [saucenao] [google] [report] Suprise suprise, it's me again. This time trying to check that I set this up correctly. \begin{align*} \int_{0}^{\frac{\pi}{2}} \int_{0}^{2\pi} \int_{0}^{3} (9-((\rho)^2sin^2(\phi))(cos^2(\theta)-sin^2(\theta))\rho ^2 sin(\phi) )d\rho d\theta d\phi \end{align*} >> Anonymous Wed Aug 12 19:15:34 2020 No.12002384 >>12002278Looks correct to me. >> Anonymous Wed Aug 12 20:03:16 2020 No.12002481 >>12002384Thanks, obviously I had factored the original expression after converting to spherical. 
But glad to know it looks good. >> Anonymous Wed Aug 12 20:10:55 2020 No.12002498 Cosmology question :Susskind in his talks say the universe is a closed system (or will be when the Hubble constant reaches equilibrium and stops changing) because, in a universe with constant H, objects can never leave nor enter the cosmological horizon (which horizon is he talking about precisely, I don't know). And that as such the cosmological event horizon is something like a big black hole surface where objects get closer and flattened but never cross it.How can that be true? I mean I understand why an object can't enter our universe because it would have to move faster than light, but why can't objects just leave? They can just leave it by staying immobile in space and let expansion do the work can't they?Or is it some GR fuckery I should just study GR to understand? >> Anonymous Wed Aug 12 21:39:09 2020 No.12002733 File: 61 KB, 980x1674, lft_reg.png [View same] [iqdb] [saucenao] [google] [report] Why wouldn't pic related work? Seems like the most obvious way to fit a linear fractional model, but no one does it. >> Anonymous Wed Aug 12 23:48:00 2020 No.12003038 >>12001844should not matterall doors are closed except for one which is opened and closedthe blinds shudder in a room that has a closed door as a result >> Anonymous Thu Aug 13 04:06:32 2020 No.12003546 File: 49 KB, 697x509, linkage.png [View same] [iqdb] [saucenao] [google] [report] I'm trying to find the work done by the motion constraint of the 4-bar linkage system(pic related).$W_{motion} = \tau_{motion, torque}\Delta\theta_O$I ran it in a simulation so I know the angles and angular velocities of all the components, but I'm not sure how to set the equation up, or what $\tau_{motion, torque}$ is really referring to. >> Anonymous Thu Aug 13 04:28:56 2020 No.12003581 I've got a little more than 15 days to study for a math class that involves the following>partial derivatives>double intergrals>triple intergrals>line intergralsany online resources that I can study that doesn't go on for 10000 pages with a bunch of definitions and just goes straight into the catch with some clear explanations for a beginner in this subject? >> Anonymous Thu Aug 13 04:53:46 2020 No.12003639 >>12003581paul's online math notes >> Anonymous Thu Aug 13 05:20:20 2020 No.12003681 >>11993752not solution, but might give ideashttps://www.youtube.com/watch?v=N5k6zBu6PFYhttps://www.youtube.com/watch?v=2opBQJqib-E >> Anonymous Thu Aug 13 06:37:07 2020 No.12003799 File: 24 KB, 789x667, cap flux.png [View same] [iqdb] [saucenao] [google] [report] In this charging capacitor with electric field confined to the space between the plates, is there a magnetic field outside the plates due to the displacement current or not?In the differential formulation, because the electric field at the point outside the plates is absent (and constant), any magnetic field there would be irrotational.In the integral formulation, it appears there would be a magnetic field out there because the point is on a closed path through which the electric flux is changing-- unless the total path integral is somehow zero.So is there a non-zero irrotational magnetic field there?>>12003546Do you know the masses or rotational inertias of the members? 
>> Anonymous Thu Aug 13 06:46:44 2020 No.12003820 >>12003799Yes I know the mass but rotational inertias of the members >> Anonymous Thu Aug 13 07:32:11 2020 No.12003936 File: 175 KB, 1024x765, 1594487491111.jpg [View same] [iqdb] [saucenao] [google] [report] Big Brain Question: The Custodian™ in your apartment had 14 packages of light bulbs. After using 22 bulbs, he has 8 packages and 2 bulbs left. How many bulbs are in each package? >> Anonymous Thu Aug 13 07:56:55 2020 No.12004002 >> Anonymous Thu Aug 13 08:20:50 2020 No.12004045 >>12004002That's fucking bullshit, W T FI was looking through some old papers (throwing crap out) and my younger brother answered the same fucking thing and got marked wrongIt was like 4AM and despite being a 2nd year Comp Sci moron I was like wait wat and then I was like it has to be 4 remedial college math is a fucking scam >> Anonymous Thu Aug 13 08:34:30 2020 No.12004071 >>12001873dumb dumb >> Anonymous Thu Aug 13 10:17:44 2020 No.12004287 Any opinions on chemical engineering? I'm from argentina if that changes anything. >> Anonymous Thu Aug 13 10:27:33 2020 No.12004312 >>12004287Good choice if you can find a sector you care about and can handle the university course. Make some good money. >> Anonymous Thu Aug 13 10:32:54 2020 No.12004319 >>12004287it's fun >> Anonymous Thu Aug 13 10:51:06 2020 No.12004359 File: 40 KB, 828x618, data.png [View same] [iqdb] [saucenao] [google] [report] what would this data distribution be called, where can I read up some info on how to analyze pic related? clearly it's got an upper and lower bounding line but is that the extent of what you can get out of it?if it matters each data point is a 'processor' that has a number of jobs and a time to finish. so the finish time is dependent on the number of jobs and the jobs are high variance. >> Anonymous Thu Aug 13 10:52:02 2020 No.12004360 Who has the table of universal (valid for all unit systems) Maxwell's equations? I remember seeing it once where each of the equations had an arbitrary constant multiplier like a kappa or a beta and then the constant multipliers were all constrained in their relationships to each other and to the speed of light or 4pi or something. >> Anonymous Thu Aug 13 11:04:59 2020 No.12004383 Literally every source of light is designed to flicker ina high frecuency. Old lightbulbs at 120hz, LED screens and lighbulbs at whatever the pwm thingy is set to (or 120hz if they have shitty circuits), fluorescent are even worse and have complerte shit frecuency, you can see the blinking in most. Wtf?? really. Why the fuck would you use PWM instead of less LEDs? Eficiency is probably not even really improved... same with the fag fluorescent bulbs. >> Anonymous Thu Aug 13 11:10:42 2020 No.12004397 >>12004359Take logs.Is there any other structure to the data? Are there any levels you can break it into? It looks like there are at least kind of clusters, are they meaningful groups? >> Anonymous Thu Aug 13 11:13:08 2020 No.12004403 >>12004397sorry, this was an MSpaint representation. there are no clusters, it's pretty much completely even throughout, only difference is the decreasing density >> Anonymous Thu Aug 13 11:21:21 2020 No.12004423 File: 28 KB, 970x500, 0857da586028805d8c0ddfbff41afff80.png [View same] [iqdb] [saucenao] [google] [report] >>12004359>>12004403Have you tried looking at the data in polar coordinates? It looks kind of like it could be approximated by uniformly choosing a norm and then uniformly choosing an angle between the two bounds. 
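A minimal numpy sketch of the generative picture suggested in >>12004423, for the data question in >>12004359. The numbers are placeholders: bounding slopes of 0.5 and 2 and a maximum norm of 10 are assumptions for illustration, not anything taken from the actual data.

import numpy as np

rng = np.random.default_rng(0)
n = 500
r = rng.uniform(0.0, 10.0, size=n)                    # norm chosen uniformly
lo, hi = np.arctan(0.5), np.arctan(2.0)               # assumed bounding slopes 0.5 and 2
theta = rng.uniform(lo, hi, size=n)                   # angle chosen uniformly between the bounds
jobs, finish = r * np.cos(theta), r * np.sin(theta)   # x = number of jobs, y = finish time
# every point lies in the wedge between finish = 0.5*jobs and finish = 2*jobs,
# and the point density thins out with distance from the origin

If a scatter of (jobs, finish) generated this way looks like your plot, then the two bounding slopes plus the radial density are probably most of the structure there is to extract.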
>> Anonymous Thu Aug 13 12:07:34 2020 No.12004540 File: 3.83 MB, 2048x1536, 5F230F1F-A43B-4F17-91CA-8BAB3635FDBB.png [View same] [iqdb] [saucenao] [google] [report] Hey does anybody know of book of like first year calculus with loads of pictures and explanations? I’m rarted and will die without some good pictures. >> Anonymous Thu Aug 13 12:24:25 2020 No.12004597 >> Anonymous Thu Aug 13 13:48:53 2020 No.12004855 >>12004287Hello fellow argieWhere are you going to study? >> Anonymous Thu Aug 13 13:53:06 2020 No.12004868 >>11992357you can compute the result explicitly for h =/= 0.Im not sure what you mean when you say it is not exact?I doubt the way you intend has a mathematical definition.The only thing i can think of with respect to exact is exact sequences which dont have so much to do with derivatives. >> Anonymous Thu Aug 13 15:08:52 2020 No.12005074 File: 1.60 MB, 1200x628, file.png [View same] [iqdb] [saucenao] [google] [report] what happens if you cut a vein? like pic related, if you were to take a knife and slice this thing in half, what happens? does the receiving end shrivel up and die? what about all the little branching paths? Do they deflate? and when it heals over, does it reconnect two the two halves, or does it just seal the ends of each? will a new vein grow to replace it if that's the case? >> Anonymous Thu Aug 13 15:20:53 2020 No.12005120 >>12004868> Im not sure what you mean when you say it is not exact?He means that the quotient for some finite h isn't (exactly) equal to the limit. >> Anonymous Thu Aug 13 15:57:02 2020 No.12005252 Is $f(x) = 2^{x-3}$ the same as $f(x-3)$ ? >> Anonymous Thu Aug 13 16:03:02 2020 No.12005267 >>12005252no, f(x) and f(x - 3) are different thingsif g(x) = 2^x, then f(x) = g(x - 3) >> Anonymous Thu Aug 13 16:15:12 2020 No.12005292 Im trying to find a harmonic conjugate of u(x,y)=x^2-y^2-x-1. My attempt: du/dx=2x-1=dv/dy => v(x,y)=2xy-y+c(x)dv/dx=2y-1+dc(x)/dx=-(du/dy)=-(-2y)=2y==> c(x)=x+c==>v(x,y)=2xy-y+x+cMy teacher told me this was incorrect but i cant seem to find where i went wrong. HELP >> Anonymous Thu Aug 13 16:17:33 2020 No.12005296 And why if i have an exponential function like $h(x) = 3^{x-3}$ then the graph moves 3 units to the right? shouldn't it move 3 units to the left (negative)? >> Anonymous Thu Aug 13 16:22:31 2020 No.12005309 >>12005292Check your work with $\frac{\mathrm{d}v}{\mathrm{d}x}$. >> Anonymous Thu Aug 13 16:27:29 2020 No.12005325 When you have a graze or light cut, where is the blood coming from? >> Anonymous Thu Aug 13 16:28:07 2020 No.12005327 >>12003799there is no "irrotational magnetic field" while the capacitor is charging, unless you consider spurious fields from edge effects.when it's charging you get a circulating magnetic field that circulates along the axis of the capacitor >> Anonymous Thu Aug 13 16:29:00 2020 No.12005329 >>120052963^0 = 1where does 0 go when you change from x to x-3 >> Anonymous Thu Aug 13 16:32:46 2020 No.12005335 >>12005296that's a common (but wrong) intuitiontake $f(x) = 3^{x}$ and $h(x) = 3^{x - 3}$. as you already understand, that's just a translation along the $x$-axis. so, the images of $f$ and $h$ are exactly the same, right? for a given value of $f(x)$, say, $f = 3$, we see that such value occurs at $x = 1$ ($f(1) = 3^{1} = 3$). now, what about $h(x)$? you can easily see that it actually occurs at $x = 4$ ($g(4) = 3^{4-3} = 3^{1} = 3$). 
so, in order to obtain a given height of $f$ using the other function, we had to evaluate it at $3$ units to the right >> Anonymous Thu Aug 13 16:37:50 2020 No.12005354 >>12005296>>12005296Think the other way around. You are not moving the function, you are moving the x axis. So, you really changed the axis 3 units to the left, which, from the perspective of the graph of the function, makes it move 3 units to the right >> Anonymous Thu Aug 13 16:45:26 2020 No.12005378 any1 have a resource, or image, with the magnetospheres of each planet in our solar system and the sun, so i can see them relative to one another?Particularly with orientation and strength and direction of rotation of the planet >> Anonymous Thu Aug 13 16:45:29 2020 No.12005379 >>11993752ok, let me say something, I don't know where this came from, but I tried everything and nothing of value came about. This probably cannot be cast in a closed form, using known constants &/or values of elementary functions. >> Anonymous Thu Aug 13 16:47:07 2020 No.12005386 >>12005329>>12005335>>12005354Thanks anons >> Anonymous Thu Aug 13 16:53:23 2020 No.12005408 File: 3 KB, 281x224, DiffY.gif [View same] [iqdb] [saucenao] [google] [report] What is a better way to word a "range interval" for a given function that more informatively denotes what it is? All I'm trying to do is describe the (dy) differential for a standard function of one variable, but I can't think of the most efficient way to word it. My first instinct was to call it the "difference in height" between the point on a line tangential to the given curve and another point along that line an arbitrary distance (dx) away" but to describe it as "height" completely misses the point. It's an interval of solutions or a solution set but even describing them as "solutions" implies that there was a problem that needed solving, which again is not relevant. How can I describe this interval of solutions in a way that doesn't miss the point? My mind is blank. Pic somewhat related. >> Anonymous Thu Aug 13 16:53:31 2020 No.12005409 >>12005309thank you very much! >> Anonymous Thu Aug 13 16:54:26 2020 No.12005413 >>11991165Where can I buy a 1Hz power supply? At least 75VA. Money is nothing to me. >> Anonymous Thu Aug 13 17:02:00 2020 No.12005435 >>12005413digikey >> Anonymous Thu Aug 13 17:11:44 2020 No.12005457 >>12005379Well, I took my chance in an estimate of a upper bound, using[eqn] \sum_{k \in \mathbb{Z}^{+}} \frac{\ln(k)}{k!} \leq \int_{1}^{\infty}\mathrm{d}\omega \hspace{2.8pt} \frac{\ln(\omega)}{\Gamma(\omega+1)} \, , [/eqn] but it appears that the integral also suffers from the same problem. Actually, I believe that this problem may be connected with the one related to the Fransén–Robinson constant $F$: https://en.wikipedia.org/wiki/Frans%C3%A9n%E2%80%93Robinson_constant > It is however unknown whether F can be expressed in closed form in terms of other known constants.Numerically, however, we can see that this upper bound is very close to the value of the series. 
WolframAlpha gave me something like [eqn] \left\lvert \sum_{k = 1}^{\infty} \frac{\ln(k)}{k!} - \int_{1}^{\infty}\mathrm{d}\omega \hspace{2.8pt} \frac{\ln(\omega)}{\Gamma(\omega+1)} \right\rvert \approx 0.0809088 [/eqn] >> Anonymous Thu Aug 13 17:26:03 2020 No.12005490 >>12005120ah thx >> Anonymous Thu Aug 13 17:27:06 2020 No.12005496 >>12005457be patient im writing out my sollutioni hope to have it done by tomorrow >> Anonymous Thu Aug 13 17:32:19 2020 No.12005512 Knowing that a group of bacteria triples every 20 minutes, and the initial population of bacteria was 20.000 i have to design a function that determines bacterias after a certain amount of hoursi came up with this:$b(t) = 20000*3^{ \frac{t}{20} }$and i think it works, but what do you guys think, is it a good way to put this? >> Anonymous Thu Aug 13 17:38:57 2020 No.12005537 >>12005512looks good to me >> Anonymous Thu Aug 13 18:05:03 2020 No.12005646 >>12005512That's right, but the time argument of your function is in minutes. So, in order to calculate, for instance, the population after 1 hour, you have $b(60)$ >> Anonymous Thu Aug 13 19:01:43 2020 No.12005788 I just wrote an ML exam, one of the questions asked us to find the covariance matrix of 3 data points: (-1,-1), (0,0), (1,1), find the eigenvalues, and eigenvectors, and the PCs. Isn't the variance 1, the covariance 0, the eigenvalues = 1, and the eigenvector = 0? And aren't the PC's also 0? >> Anonymous Thu Aug 13 20:26:47 2020 No.12005992 I have kind of a fun question w/ practical application about control engineering. I never took the class so sorry for being layman-ish.I have a plant that takes in 2 components and creates a widget outta em. I want to design a controller that will make the plant produce widgets at a certain rate. In summary I have a system that can take 2 inputs: the rates of change of the components, and has 1 output: the rate of change of widgets being produced. Assume I'm kind of more retarded than I actually am and I know nothing about how the plant actually works, I just want widgets flowing out. How do I design a controller to hit a quota of x widgets per second by controlling the inflow of components?Practical example: I'm playing Factorio producing green circuits and I want to dynamically adjust the inflow of iron and copper plating to my (infinitely many) assemblers such that I can reach a set amount of green circuits per second that I've chosen, without going to the wiki and crunching all the numbers by hand. >> Anonymous Thu Aug 13 21:58:43 2020 No.12006219 >>12004855UBA probably, or UTN. I'm thinking between EE or ChemE. I'm 21 though so I'll be graduating at 26if everything goes smoothly, that worries me a little but from what I've seen is not uncommon. These last two years I have been working as a mechanical technician (I'm from a technic school, electromechanic). >> Anonymous Thu Aug 13 21:58:50 2020 No.12006220 >>12004045>>12004002it's not 4 >> Anonymous Thu Aug 13 22:13:33 2020 No.12006251 >>12003936>>12004002>>12006220It is indeed 4.14P - 22B = 8P + 2B. P = nB. Find n. >> Anonymous Thu Aug 13 23:12:00 2020 No.12006422 >>12005325Capillaries are the smallest blood vessels in the bod, about one red blood cell in diameter, and are the last mile of blood delivery. Capillaries are small and have weak pressure, so light cuts and grazes are easy to repair by the clotting cascade. 
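On the covariance question in >>12005788: a quick numpy check (a sketch, using the sample covariance with divisor n-1 and taking the three points as rows of the data matrix) suggests the covariance is not 0 and the eigenvalues are not both 1.

import numpy as np

X = np.array([[-1.0, -1.0],
              [ 0.0,  0.0],
              [ 1.0,  1.0]])

C = np.cov(X, rowvar=False)          # [[1., 1.], [1., 1.]]
vals, vecs = np.linalg.eigh(C)       # eigenvalues [0., 2.], ascending
pc1 = vecs[:, -1]                    # ~ (1, 1)/sqrt(2), direction of maximum variance
scores = (X - X.mean(axis=0)) @ pc1  # projections: [-sqrt(2), 0, sqrt(2)]

So the first principal component points along (1, 1)/√2 with eigenvalue 2, and the second eigenvalue is 0 because the three points are perfectly collinear; with divisor n instead of n-1 the eigenvalues scale to 4/3 and 0 but the directions stay the same.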
>> Anonymous Thu Aug 13 23:56:37 2020 No.12006534 >>12006422human body anon, please answer >>12005074 too >> Anonymous Fri Aug 14 00:09:47 2020 No.12006558 >>11991165Good list for once. I really like munkers for some reason. >> Anonymous Fri Aug 14 00:50:49 2020 No.12006627 Do the stars in the milky way cause light pollution the same way it works here on earth. If our galaxy went dark will we be able to see more? Or if we have a satellite outside our galaxy will we be able to see much further away? >> Anonymous Fri Aug 14 00:57:57 2020 No.12006645 >>12005074you bleed everywhere and if you're that old you die>does it reconnect two the two halves, or does it just seal the ends of each?not unless you stitch/glue them together. >will a new vein grow to replace it if that's the case?hell no.You are seriously underestimating the gravity of that kind of injury >> Anonymous Fri Aug 14 04:17:20 2020 No.12006905 I'm a chemcuck who never really enjoyed calculus, but I've been told analysis makes it all betterIs analysis better for enjoying calculus >> Anonymous Fri Aug 14 05:08:08 2020 No.12006987 >>12006905not. fellow chemcuck >> Anonymous Fri Aug 14 05:15:55 2020 No.12006999 why do we use milliseconds but not kiloseconds >> Anonymous Fri Aug 14 05:22:41 2020 No.12007005 >>12006627You mean like the Zone of Avoidance? >> Anonymous Fri Aug 14 05:36:48 2020 No.12007034 >>12006999because napoleon >> Anonymous Fri Aug 14 05:43:22 2020 No.12007044 Read Zorich’s Mathematical Analysis. Specifically chapters 5 and 6, you can completely ignore the rest of the book and volume II unless you have the patience for the full development of calculus on manifolds in which case read both books. He presents the best possible modern treatment of differential calculus, leaps and bounds above that of Tao and Rudin. I think the exercises are much less abstract and so probably inferior for the purposes of a mathematics undergrad but much better suited for scientists. There are no solutions provided nor will you find any online but that is part of the fun of these russian math textbooks. >> Anonymous Fri Aug 14 05:44:33 2020 No.12007049 >>12007044meant for >>12006905 >> Anonymous Fri Aug 14 05:54:30 2020 No.12007072 >>120064221 blood cell, seriously? Do they get blocked up all the time then? >> Anonymous Fri Aug 14 06:03:23 2020 No.12007089 >>12007034explain >> Anonymous Fri Aug 14 06:24:25 2020 No.12007134 File: 827 KB, 589x600, decimal time clock.png [View same] [iqdb] [saucenao] [google] [report] >>12007089During the french revolution a movement was made to change the way we mesure things, you know, the metric systemthat extended to time as well, so for a time in france there was indeed "kiloseconds", so for writing time it was >XI. Le jour, de minuit à minuit, est divisé en dix parties, chaque partie en dix autres, ainsi de suite jusqu’à la plus petite portion commensurable de la durée.>XI. The day, from midnight to midnight, is divided into ten parts, each part into ten others, and so forth until the smallest measurable portion of duration.>La centième partie de l'heure est appelée minute décimale; la centième partie de la minute est appelée seconde décimale. 
(emphasis in original)>The hundredth part of the hour is called decimal minute; the hundredth part of the minute is called decimal second.whe Napoleon came to power he removed decimal time because it was too confusing, and also the church kept using the old time we still use >> Anonymous Fri Aug 14 06:27:14 2020 No.12007138 >>11998044>>12000627>>12001225nvm there was an error in my work :( >> Anonymous Fri Aug 14 06:36:10 2020 No.12007159 >>12007134wow, interesting. who started the movement and why did they demand it? doesn't seem like it had a lot of connections to the overall goal of the revolution >> Anonymous Fri Aug 14 06:36:46 2020 No.12007160 >>12007138I'm curious to see it anyway >> Anonymous Fri Aug 14 06:43:50 2020 No.12007171 >>12007159long story shortthe goal of the french revolution was to, as the name implies, revolutionize everything >> Anonymous Fri Aug 14 07:03:20 2020 No.12007215 >>12007171but why time? >> Anonymous Fri Aug 14 07:21:36 2020 No.12007252 >>12007159>>12007215A good number of the french revolution lads hated absolutely everything with any relation to christianity and or royalty and tradition. >> Anonymous Fri Aug 14 07:30:49 2020 No.12007267 >>12007215why not? the rest of mesurements (weight, distance etc) were changed too, it would not make sense to not also do so with timenowadays we have mathematical ways to reach all these values, and that also includes a way to messure time mind you >> Anonymous Fri Aug 14 07:41:42 2020 No.12007277 >>12007252so it's change for the sake of change?>>12007267>nowadays we have mathematical ways to reach all these valuesthat sounds like a backwards way to go about stuff, putting results first before methods >> Anonymous Fri Aug 14 07:57:14 2020 No.12007313 >>12007277>so it's change for the sake of change?Yes.https://en.wikipedia.org/wiki/French_Republican_calendar >> Anonymous Fri Aug 14 08:07:47 2020 No.12007333 >>12007313very cool >> Anonymous Fri Aug 14 08:31:50 2020 No.12007378 >>12006645is it really that bad? I see these things running all over the top of my hand. people get cuts all the time, are they just never deep enough to completely kill the vein then? >> Anonymous Fri Aug 14 12:21:32 2020 No.12007774 Worth it to pursue a chemical engineering degree at 26? I can afford it, however I'm not sure how it will be to be a new graduate at 30. Any opinions or siggestions? Anybody did something similar? I've been working in IT since I finished high school. It's not a bad job I have but I don't really feel too passionate about it. Of course, starting the degree would mean qutting it. >> Anonymous Fri Aug 14 12:31:36 2020 No.12007801 File: 18 KB, 550x156, Annotation 2020-08-14 123043.png [View same] [iqdb] [saucenao] [google] [report] pls help me cheat in mi homework >> Anonymous Fri Aug 14 12:37:45 2020 No.12007817 File: 12 KB, 228x221, 1420358221200.jpg [View same] [iqdb] [saucenao] [google] [report] I'm taking "physics for engineers" in two weeks and I've never directly studied physics in my life (outside of a few random physics applications thrown around in math classes here and there), how fucked am I? I did recently "pass" a Calc III class, though that means very little as my actual mathematical understanding is still pretty much Calc II level and I'm very shaky with vectors. Vectors are pretty important in physics, right? >> Anonymous Fri Aug 14 12:42:37 2020 No.12007832 >>12007817my physics for engineers was split into 2 parts. 
first class was basically reiterate highschool grade 12 physics class, and then sprinkle some new info on the end, so nothing hard, then the secod class was continuing on from there and basically all new stuff, wasn't that bad, though the calculations sucked ass. the math you do tends to be simpler, like if you're using integrals, they will all be solvable with the same method so you don't have to study anything else, or similar >> Anonymous Fri Aug 14 12:42:54 2020 No.12007836 >> Anonymous Fri Aug 14 12:44:19 2020 No.12007841 >>12007774>Worth it to pursue a chemical engineering degree at 26? I can afford it, however I'm not sure how it will be to be a new graduate at 30.A degree is a degree. Simply don't divulge to people how old you were when you got it. Don't expect to get hired at some prestigious high paying job unless your skill and knowledge happen to be DEMONSTRABLY off the charts. Hell, if you were that good, they'd hire you whether you had a degree or not. Basically just keep low realistic expectations and there shouldn't be an issue. >> Anonymous Fri Aug 14 12:50:37 2020 No.12007856 >>12007774if its what you're passionate about then there's no good reason not to do itas long as you don't have huge expectations of a better life and you realize that a degree is what you make of it, then there's no problemyou will fail only if you get stuck in a world of hypotheticals and baseless judgments from normies and your own mind >> Anonymous Fri Aug 14 13:05:32 2020 No.12007896 >>12007817speed = 1st order derivativeacceleration = 2nd order derivativeforce = mass times accelerationtorque = turning force >> Anonymous Fri Aug 14 13:28:12 2020 No.12007981 >>12007896I already knew all of that. Does that mean I'm going to pass? >> Anonymous Fri Aug 14 13:34:44 2020 No.12007999 >>12007981yes >> Anonymous Fri Aug 14 13:36:08 2020 No.12008005 >>12007981and vectors are just the things added up in each x,y,z dimension >> Anonymous Fri Aug 14 20:36:25 2020 No.12009305 File: 390 KB, 500x600, eirin_milk.png [View same] [iqdb] [saucenao] [google] [report] Is properly reading through Bryant's book on exterior differential systems worth it or should I just look up the big results on wikipedia? >> Anonymous Fri Aug 14 20:43:32 2020 No.12009326 >>11993752Maybe define x_n to be the Analogous expression that terminateS after the nth root Find a relation between consecutive terms and take a limit or find a steady state sort of thing. Not sure >> Anonymous Fri Aug 14 21:07:07 2020 No.12009378 How significant is weight in the top speed of production coupe type cars? Take, for example, a 4000 lb Veyron. How much rolling resistance would it have at 250 mph? Related question, how much horsepower does a car like that produce at top speed (how much total friction does it experience)? >> Anonymous Fri Aug 14 21:27:49 2020 No.12009428 >>11996529It depends on context. 1,-1,i and -i are all units within the Gaussian integers because they have inverses therein. >> Anonymous Fri Aug 14 21:28:22 2020 No.12009432 >>11991165Algorithms for optimal allocation of bets on many simultaneous eventshttps://betgps.com/blog/betting-library/Whitrow-Algorithms-for-Optimal-Allocation-of-Bets-on-Many-Simultaneous-Events.pdfI don't understand the results.If I have simultaneous bets:Should I bet each one the Kelly number?Should I bet a fraction of the Kelly? (but every bet is different) >> Anonymous Fri Aug 14 21:33:18 2020 No.12009441 >>11996529>this logic$1^2=(-1)^2$does 1=-1? so then 1=0? 
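On >>12009441: squaring isn't injective, so it can't be cancelled. All $x^2 = y^2$ gives is $(x-y)(x+y)=0$, i.e. $x = y$ or $x = -y$; with $x = 1$, $y = -1$ the true branch is $x = -y$, so nothing forces $1 = -1$, and $1 = 0$ certainly doesn't follow.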
>> Anonymous Fri Aug 14 21:38:40 2020 No.12009455 Are the legs in a split phase power systems necessarily an inverse of each other? The way I, vaguely, understand it is that they’re just hooked up to opposite sides of the winding so it’s similar to a single water pump where one side experiences a push and the other experiences an equal pull. So you can only get a 180 degree shift with split phase. Is this correct? Also, if not, is the way phase shifting occurs in in a two phase system significantly different than than in a split phase system? >> Anonymous Fri Aug 14 21:44:13 2020 No.12009470 >>12002733Seems reasonable to me. But I think E will be singular. You’ll need a minimum length x or some other regularization. You have a sign error in line 2. >> Anonymous Fri Aug 14 21:55:55 2020 No.12009504 File: 382 KB, 1492x765, sets.jpg [View same] [iqdb] [saucenao] [google] [report] >>11991165I found an old set theory question on stackexchange, and one of the answers seems really wrong, but I don't know enough about set theory to be sure. See pic related.As far as I can tell, 5 should be false regardless of whether it is referring to a subset or a proper subset.One of the guys answering says that it actually becomes true if the notation means "proper subset of," but isn't that just making the relationship *more* restrictive?To my understanding, Y is not a subset, proper or otherwise, of Z, because the null set is not in Z (although the null set is a subset of Z.)And when he says "the null set is contained in every set," isn't that wrong? I thought the null set is a *subset* of every set, but it's not *in* every set. Or in other words, the contents of the null set are in every set, but the null set itself is not. >> Anonymous Fri Aug 14 21:59:33 2020 No.12009521 File: 1.88 MB, 500x281, giphy.gif [View same] [iqdb] [saucenao] [google] [report] Are dark matter/energy real, or were they just invented by physicists to make the numbers add up? I know that our current theories of the universe don't hold up without them, but is there any evidence that they actually exist? If not, then it just seems like a desperate attempt to patch up incorrect theories. >> Anonymous Fri Aug 14 22:05:51 2020 No.12009537 >>12009521Dark matter is a placeholder for "something that we can't observe via light that acts as an apparent mass" or something like that. it basically means our theories are incomplete because we can't explain galactic behavior without introducing something new that we haven't observed directlythere are many candidates for what dark matter might be. the current favorite theory is axions, because they answer the Strong CP problem as well. >> Anonymous Fri Aug 14 22:19:19 2020 No.12009579 >>12009504For some reason they correctly define $\subseteq$ in the response to Q4, but then make a mess of it in Q5. I think you're correct though; $Y$ is an element of $Z$, not a subset. >> Anonymous Fri Aug 14 22:21:19 2020 No.12009586 is psychology a science? >> Anonymous Fri Aug 14 22:23:36 2020 No.12009593 >>12009586why wouldn't it be? >> Anonymous Fri Aug 14 22:26:15 2020 No.12009600 >>12009593idk some people say that it’s not a “real” science >> Anonymous Fri Aug 14 22:28:12 2020 No.12009609 >>12009600those are also the people posting iq threadsit's not a hard science (hard meaning "pure," not "difficult") but it's absolutely a science. 
just complicated by the fact that we barely understand the brain and people are unreliable test subjects >> Anonymous Fri Aug 14 22:35:22 2020 No.12009626 >>12009609oh okay, thanks! i was always confused if it actually was or wasn’t. >> Anonymous Fri Aug 14 22:49:18 2020 No.12009654 >>11991165someone posted a pic for the trivium, and like a dummy i didn't save it. anybody have that? >> Anonymous Fri Aug 14 23:17:51 2020 No.12009730 File: 477 KB, 1351x1054, file.png [View same] [iqdb] [saucenao] [google] [report] >>12009654I found it. Here it is: >> Anonymous Fri Aug 14 23:37:46 2020 No.12009782 File: 1.86 MB, 2801x3913, 20200814_202029.jpg [View same] [iqdb] [saucenao] [google] [report] When working with a transistor, what is the circuit that feeds the base and how do I find its Thevenin equivalent? Circuit is pic related.>>11996293I don't give a shit about circuit design, it's just an obligatory class for my Physics degree for some reason. I want to have an idea why. >> Anonymous Sat Aug 15 01:07:53 2020 No.12009972 File: 1.65 MB, 5500x6699, top.jpg [View same] [iqdb] [saucenao] [google] [report] I'm running a 10s simulation of a spinning top and trying to find the time period of the oscillation by plotting the center of mass x and z-position.Its almost entirely consistent from t=1.5s, but before that, the max and min amplitudes have a kind of zig zag spike before continuing with the consistent slope. pic related, top graph shows the first 2 seconds with the spikes, bottom shows first 4 seconds with consistent period.I think this is how the program approximates the results, but I'm not sure. Did I fuck up somewhere along the way? >> Anonymous Sat Aug 15 01:09:44 2020 No.12009978 File: 64 KB, 564x795, keikakudoorknob.jpg [View same] [iqdb] [saucenao] [google] [report] >>12009504I do not really understand the person who asked it either>1: \0 \in X>I know this is false because the null set is not an element in any set>2: \0 in Y>I don't understand why this is true>6: X=\0 and X \in Y>Same reason 2 is true. I understand this one.it is like the asker did not really think it throughstackexchange is such a shitshow >> Anonymous Sat Aug 15 01:11:13 2020 No.12009983 >>12009972is it still wobbling from just having spun it?it's hard to know without knowing what this program does. is it simply measuring x and z position? or doing some operation on it >> Anonymous Sat Aug 15 01:12:01 2020 No.12009986 >>11995614it's even color-coded, bless you anon! >> Anonymous Sat Aug 15 02:17:13 2020 No.12010074 File: 1.31 MB, 960x540, topsim.webm [View same] [iqdb] [saucenao] [google] [report] >>12009983Yeah, when I simulate it with a step size of 0.01, it noticeably wobbles at the beginning. When I run it with a step size of 0.05 it looks smooth and normal.The graph is tracing the center of mass of the top.webm attached shows same simulation, step size 0.01 and 0.05. >> Anonymous Sat Aug 15 02:18:49 2020 No.12010081 File: 284 KB, 500x606, 1576729093021.png [View same] [iqdb] [saucenao] [google] [report] why does nobody reply my stupid question>>12001467? >> Anonymous Sat Aug 15 02:21:39 2020 No.12010090 >>12010074it could be stabilizing? unless you're already initializing it in its stable motiona smaller step size should always be more accurate. the larger step size might look smoother but that's just because it's ignoring the initial wobblingis this a physics-based modeling? if not then forget everything I said because it shouldn't wobble if you're not including physics. 
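On the transistor question in >>12009782: I can't see the exact schematic, but assuming it's the usual two-resistor voltage-divider bias (R1 from Vcc to the base node, R2 from the base node to ground), the "circuit that feeds the base" is just that divider, and you Thevenize it by looking back into the base node with Vcc treated as an ideal source: $V_{Th} = V_{CC}\,\frac{R_2}{R_1+R_2}$ and $R_{Th} = R_1 \parallel R_2 = \frac{R_1 R_2}{R_1+R_2}$. You then redraw the input side as $V_{Th}$ in series with $R_{Th}$ feeding the base, which makes the KVL loop through the base-emitter junction easy to write.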
>> Anonymous Sat Aug 15 02:40:29 2020 No.12010124 File: 108 KB, 1404x919, Untitled.png [View same] [iqdb] [saucenao] [google] [report] I'm trying to proof that tower of hanoi can be done in $2^n-1$ steps. Can someone please check my proof? >> Cod off Wharf Sat Aug 15 03:08:58 2020 No.12010178 File: 687 KB, 1537x2134, C1750A9C-58E7-404E-81AD-496F2F3EBE90.jpg [View same] [iqdb] [saucenao] [google] [report] How to I remove the metal studs from these gluey vitality-sensing increments?I tried putting it in acetoneI tried boiling it in acetoneI tried burning it in acetoneI’m at a loss >> Anonymous Sat Aug 15 03:30:34 2020 No.12010207 >>12010090I'm not sure if that's the case, the was top started at an angle and was given an initial angular velocity of 40 Hz.I don't think I did anything wrong, so I assume that the wobble/spikes are result of a numerical approximationand this is physics based modeling. >> Anonymous Sat Aug 15 03:49:32 2020 No.12010229 >>12010124well, you made a lot of statements but then, on an unrelated note, you said that it was impossible that moving dk requires fewer than 2^(k-1) - 1 moves, without saying why it's impossible.I'll assume that it could be shown pretty easily with what you have if you just analyzed the recursive algorithm you're describing a little more, or maybe I'm just a brainlet. >> Anonymous Sat Aug 15 04:31:27 2020 No.12010280 Thinking of dropping my IT job and going in on Psychology at uni.Is this really wise? I want to be a clinical Psychology but i heard it takes almost 12 years to get to that point. >> Anonymous Sat Aug 15 06:03:40 2020 No.12010453 >>12010280How old are you? Whats your financial situation, do you have financial support? are you in the US? >> Anonymous Sat Aug 15 06:10:46 2020 No.12010467 >>12010178Mo chemists INT?Just Christopher Boone? >> Anonymous Sat Aug 15 08:08:39 2020 No.12010675 File: 171 KB, 724x1023, suwako_milk.jpg [View same] [iqdb] [saucenao] [google] [report] >>12010605I'm not saying you lads have to wait until I come over and make the thread right.I'm also not saying you need to copy and paste the pasta or put a 2hu image on the OP.But you need to put /sqt/ in the title, seriously.>>12010081Because it's asked every thread. >> Anonymous Sat Aug 15 08:13:19 2020 No.12010684 >>12010675if then why isnt the answer on the OP? >> Anonymous Sat Aug 15 09:56:06 2020 No.12010886 >>11991165Good fucking luck reading those books without the constant guidance of a high-ranked institutions. >> Anonymous Sat Aug 15 11:30:35 2020 No.12011099 >>12010178scissors? >> Anonymous Sat Aug 15 11:37:23 2020 No.12011117 I'm learning damped oscillations. This "variation of roots with damping ratio" diagram is confusing Can't figure out why the things circled in green are where they are. 4channel has blocked my country from uploading images, so here it is : https://i.imgur.com/5rGAC3W.png >> Anonymous Sat Aug 15 11:38:27 2020 No.12011119 >>12011117Would appreciate if an anon would directly post the image here https://i.imgur.com/5rGAC3W.png >> Anonymous Sat Aug 15 11:52:16 2020 No.12011151 File: 113 KB, 1078x891, 5rGAC3W.png [View same] [iqdb] [saucenao] [google] [report] >> Anonymous Sat Aug 15 11:54:53 2020 No.12011161 Say i have $a \cdot c = b$ I know that $a = \frac{b}{c}$ is valid but what about $c = \frac{b}{a}$ ? is this valid? i never got clear when is it valid to pass a product as a division in the other side ot the equation >> Anonymous Sat Aug 15 12:01:38 2020 No.12011177 >>12010684what have you tried? 
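For >>12010124 / >>12010229, the missing step is the lower bound, and it comes out of the same recursion (a sketch): let $T(n)$ be the minimum number of moves for $n$ disks. Before the largest disk can move for the first time, the $n-1$ smaller disks must all be stacked on the one peg not involved in that move, and getting them there from the initial stack costs at least $T(n-1)$ moves; the largest disk itself needs at least one move; and after its last move the $n-1$ smaller disks must be rebuilt on top of it, costing at least $T(n-1)$ again. So [eqn] T(n) \geq 2T(n-1)+1, \qquad T(1)=1, [/eqn] and the explicit recursive strategy achieves equality, so by induction $T(n) = 2T(n-1)+1 = 2(2^{n-1}-1)+1 = 2^n-1$.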
>> Anonymous Sat Aug 15 12:05:15 2020 No.12011188 >>12011177nothing, that's why i asked what you are supposed to do >> Anonymous Sat Aug 15 12:08:31 2020 No.12011191 >>12010684Never seen an answer to it that's actually worth archiving. Just grab some advice from google lmao.>>12011161>is this valid?Yes.>i never got clear when it is valid to pass a product as a division to the other side of the equationWhen the term you're passing is non-zero. >> Anonymous Sat Aug 15 12:11:13 2020 No.12011197 >>12009537>Strong CP problemPedobear is a physicist now? >> Anonymous Sat Aug 15 12:14:56 2020 No.12011208 >>12011151Thanks >>12011117Alright understood, it's just where one of the roots lies on the x axis when zeta equals that value.imo that's a really confusing way to label >> Anonymous Sat Aug 15 12:27:58 2020 No.12011230 >>12011191replace dot product with cross product, now is it still valid? >> Anonymous Sat Aug 15 12:28:36 2020 No.12011231 >>12011117The roots of s^2+2.ω0.ζ.s+ω0^2=0 are s=-ω0.ζ±ω0√(ζ^2-1). In the time domain, x(t)=a1.e^(s1.t)+a2.e^(s2.t) where s1, s2 are the roots.If ζ=0 (undamped), then s=±i.ω0; in the time domain, this is a constant-amplitude sinusoidal oscillation.If |ζ|=1 (critically damped), then s=-ω0.ζ, a duplicate root which is real and negative. In the time domain, this is a.t.e^(-ω0.t).If |ζ|<1 (underdamped), √(ζ^2-1) is imaginary so the roots form a complex conjugate pair; in the time domain, you have (e^-ηt).cos(ωt+φ), i.e. an exponentially-decaying sinusoid.If |ζ|>1 (overdamped), √(ζ^2-1) is real and you have two distinct real roots, meaning exponential decay in the time domain. >> Anonymous Sat Aug 15 12:31:55 2020 No.12011241 >>12011191and what if i have $a + bc = d$ could i do $a + b = \frac{d}{c}$ if $c \neq 0$ ? >> Anonymous Sat Aug 15 12:44:25 2020 No.12011263 >>12011241No, but you can do $\frac{a}{c} + b = \frac{d}{c}$.You shouldn't think of it as "passing to the other side", you should think of it as "dividing both sides by c". >> Anonymous Sat Aug 15 14:15:15 2020 No.12011500 >>12011231Thank you! Was confusing at first but now it makes sense >>
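A minimal numerical check of >>12011231, for anyone who wants to see the root migration in >>12011117 concretely (taking ω0 = 1 purely for illustration):

import numpy as np

w0 = 1.0
for zeta in (0.0, 0.5, 1.0, 2.0):
    roots = np.roots([1.0, 2.0 * zeta * w0, w0 ** 2])   # s^2 + 2*zeta*w0*s + w0^2 = 0
    print(zeta, roots)

# zeta = 0.0 -> +/- 1j               (undamped, on the imaginary axis)
# zeta = 0.5 -> -0.5 +/- 0.866j      (underdamped, complex conjugate pair)
# zeta = 1.0 -> -1.0 double root     (critically damped)
# zeta = 2.0 -> -3.73 and -0.27      (overdamped, two distinct real roots)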
# Thread: Maximize cone inside a cone

1. ## Maximize cone inside a cone

Given a right circular cone, you put an upside-down cone inside it so that its vertex is at the center of the base of the larger cone and its base is parallel to the base of the larger cone. If you choose the upside-down cone to have the largest possible volume, what fraction of the volume of the larger cone does it occupy? (Let H and R be the height and base radius of the larger cone, and let h and r be the height and base radius of the smaller cone. Hint: Use similar triangles to get an equation relating h and r.)

I'm guessing I should start with this? I'm not sure how to express the function using only one variable, though.

$$V = \pi r^2 \frac{h}{3} \\ r \in (0, R) \\ h \in (0, H)$$

2. ## Re: Maximize cone inside a cone

How do you apply similar triangles to this problem? I don't quite see a way, or then H and R should be involved... and I don't see how that's useful.

3. ## Re: Maximize cone inside a cone

Hello, maxpancho!

Given a right circular cone, you put an upside-down cone inside it so that its vertex is at the center of the base of the larger cone and its base is parallel to the base of the larger cone. If you choose the upside-down cone to have the largest possible volume, what fraction of the volume of the larger cone does it occupy?

Let $H$ and $R$ be the height and base radius of the larger cone. And let $h$ and $r$ be the height and base radius of the smaller cone.

Code: A - * - : /|\ : : / | \ H-h : / | \ : : / | r \ : H F *- - * - -* E - : / \ |G / \ : / \ h| / \ : / \ | / \ : / \|/ \ - B * - - - - * - - - - * C D R

Note that $\Delta ABC \sim \Delta AFE.$ Also that the height of $\Delta AFE$ is $H-h.$

We have: $\frac{H-h}{r} \:=\:\frac{H}{R}$

Solve for $h\!:\;\;h \:=\:\frac{H}{R}(R-r)$ [1]

The volume of the smaller cone is: $V \:=\:\frac{\pi}{3}r^2h$ [2]

Substitute [1] into [2]: $V \:=\:\frac{\pi}{3}r^2\left(\frac{H}{R}(R-r)\right)$

Simplify: $V \;=\;\frac{\pi H}{3R}(Rr^2-r^3)$

Can you finish now?

4. ## Re: Maximize cone inside a cone

Hum, I actually tried this myself, too, but not sure what to do about these H and R?

5. ## Re: Maximize cone inside a cone

Or do we treat them like constants? I haven't thought about that.
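To finish the maximization in post #3 (a sketch; H and R are the fixed dimensions of the outer cone, so yes, treat them as constants, which answers posts #4 and #5): differentiate and set the derivative to zero,

$$\frac{dV}{dr} \:=\:\frac{\pi H}{3R}\left(2Rr - 3r^2\right) \:=\: 0 \;\;\Rightarrow\;\; r \:=\:\frac{2R}{3}$$

(the other root $r = 0$ gives zero volume). Then from [1], $h = \frac{H}{R}\left(R - \frac{2R}{3}\right) = \frac{H}{3}$, so

$$V_{\max} \:=\:\frac{\pi}{3}\left(\frac{2R}{3}\right)^2\frac{H}{3} \:=\:\frac{4}{81}\pi R^2 H,$$

which is $\frac{4/81}{1/3} = \frac{4}{27}$ of the larger cone's volume $\frac{\pi}{3}R^2H$, i.e. about 14.8%.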
# Concept of Random Walk Reading the Wikipedia page for Random Walk, I was wondering what is the definition for a general random walk as a random process, so that all the concepts such as random walk on $\mathbb{Z}^d$, Gaussian random walk and random walk on a graph can fall into this general definition? (1) In Ross's book for stochastic process, he defined a random walk to be the sequence of partial sums of a sequence of i.i.d. real-valued random variables. So is a Gaussian random walk the sequence of partial sums of a sequence of i.i.d. real-valued random variables with a common Gaussian distribution? How does a random walk on a graph fit into Ross's definition? (2) I was thinking maybe the concept of random walk is equivalent to that of discrete-time Markov process, where the state space can be either continuous or discrete or even non-numeric (such as for a random walk on a graph)? Is it correct? So can I say the graph in "a random walk on a graph" is just the graphical model for the distribution of the random walk as a stochastic process? Also the graph in "a random walk on a graph" has nothing to do with random graph, right? (3) if you can come up with a definition for random walk, please also explain how various special kinds of random walks fit into the definition you provided. Thanks and regards! EDIT: Thanks to Dr Schmuland for the reply! Although I have chosen the best answer, I am still not clear about the following questions: (1) Is a random walk (including random walk on a group such as $\mathbb{Z}^d$ and random walk on a graph) equivalent to a discrete-time Markov process? If not, what makes random walks different from general discrete time Markov processes? (2) For a random walk defined on a group such as $\mathbb{Z}^d$, does it really require the increments between every two consecutive indices be i.i.d. (seems to be the definition in Ross's Stochastic processes) or just independence will be enough and identical distribution is not necessary? Thanks! Reply to Update of Dr Schmuland Thanks, Dr Schmuland, for providing a big picture that I have not clearly realized before! It took me some while to understand. Learning more makes me feel better. Generally when the state space is additive group, are these two properties - markov property and increment-independence - equivalent? Must time-homogeneity be defined only for Markov process? If it can be define for a general process, is it equivalent to the property that the increments over same length period have the same distribution? I also wonder if, for a random walk on a graph, at each vertex, the transition probability must be uniform? If yes, is the transition probability uniform at a vertex over a set of the neighbours and itself or just over a set of its neighbours (itself excluded, which means it is impossible to stay at the vertex)? As your last exercise, I think time-homogeneity and identical-increment-distribution are equivalent, and Markov property and increment-independence are equivalent, so every time-homogeneous Markov chain will be a random walk on the group {1,2,3}. But my answer will be different for it to be a random walk on the graph derived for {1,2,3} as you mentioned, the transition probability from a vertex to other vertices has to be uniform. - For those like me who are not able to view the Wikipedia page at this moment, please search random walk in google and view its cached page webcache.googleusercontent.com/…. 
I also find that there are similar questions raised in its dicussion page en.wikipedia.org/wiki/Talk:Random_walk –  Tim Nov 6 '10 at 14:55 It's a good question. The naive answer is that the two concepts are different. Random walks on ${\mathbb Z}^d$ or ${\mathbb R}$, etc. are examples of a random walk on an additive group $G$. There is a fixed distribution $\mu$ on $G$, and at each time you randomly select an element of the group (using distribution $\mu$) and add it to the current state. In contrast, for a random walk on a (locally finite) graph, you randomly choose an edge uniformly and follow that edge to the next state (node). However, the concepts may be related at a deeper level. The first chapter of Woess's Random Walks on Infinite Graphs and Groups explains how a random walk on a graph is related to a random walk on a quotient of its automorphism group, and also how a random walk on a group is related to a walk on its Cayley graph. I haven't read the book in detail, but my impression is that, nevertheless, the two concepts are not exactly equivalent. Update (Nov. 7 2010) Any stochastic process $(X_n)$ taking values in a countable state space can be built up out of the following pieces: (1) an initial distribution $P(X_0=x)$ that tells us how to choose the starting point. (2) transition kernels that tell us how to choose the next state given the history of the process up to the present, that is, $P(X_{n+1}=x_{n+1} | X_n=x_n, \dots, X_0=x_0)$. Now, a process is called Markov if the kernels only depend on $n$, $x_{n+1}$ and $x_n$, that is, $P(X_{n+1}=x_{n+1} | X_n=x_n, \dots, X_0=x_0) = P(X_{n+1}=x_{n+1} | X_n=x_n)$. This is a big assumption and a huge simplification. But enough models of real world random behaviour have this "memoryless" property to make Markov processes wide spread and extremely useful. A further simplification is to assume that the Markov process is time homogeneous, so that the transition rules only depend on the states and not on time. That is, we assume that $P(X_{n+1}=y | X_n=x)=P(X_1=y | X_0=x)$ for all $x,y$. The crucial point here is that $p(x,y):=P(X_{n+1}=y | X_n=x)$ is the same for all $n$. Let's now assume that the state space is an additive group. The transition kernels for a general process can be rewritten in terms of increments: $$P(X_{n+1}=x_{n+1} | X_n=x_n, \dots, X_0=x_0)$$ can be written $$P(X_{n+1}-X_n=x_{n+1}-x_n | X_n-X_{n-1}=x_n-x_{n-1}, \dots, X_0=x_0).$$ If the increments are independent, then this becomes $$P(X_{n+1}-X_n=x_{n+1} -x_n)=:\mu_n(x_{n+1}-x_n).$$ In this case, the process is automatically Markov and the transition kernels are an arbitrary sequence $(\mu_n)$ of probabilities on the state space. Finally, for a random walk we assume both time homogeneity and independent increments. Thus, the transition kernel $p(x,y)$ is represented by a single measure: $p(x,y)=\mu(y-x)$. This imposes a kind of homogeneity in space and in time. We are down to an extremely special, yet still interesting process, that only depends on the initial distribution and the single measure $\mu$. This is why Ross insists that for random walks the increments are identically distributed; just being independent is not enough. Because random walks are so special, they can be analyzed by mathematical tools that don't work for most Markov processes. In particular, tools from harmonic analysis are very fruitful here. A standard reference, though a little old, is Principles of Random Walk (2nd ed) by Frank Spitzer. 
A more recent treatment is given in Random Walk: A Modern Introduction by Gregory Lawler and Vlada Limic (2010). Here's an exercise to convince you that random walks are only a small part of all Markov chains. Take the state space to be the corners of a triangle, labelled $\{ 0, 1, 2\}$, considered as the group ${\mathbb Z}$ mod 3. Any 3 by 3 stochastic matrix (non-negative entries and row sums all equal to 1) defines a time homogeneous Markov chain on this state space. Which of these are random walks? - Thanks Dr Schmuland! Are the two concepts in your reply both discrete-time Markov processes? Conversely, can a discrete-time Markov process be represented as one of the two concepts of random walks? In other words, can the two concepts be unified into and become equivalent to discrete-time Markov process? Another question, is the graph in "a random walk on a graph" the graphical model for the random walk? These are in my second part of my original post. –  Tim Nov 6 '10 at 16:07 Yes, the two random walks are both discrete-time Markov processes. No, random walks form only a minuscule part of discrete time Markov processes. I don't know the answer to your third question. –  Byron Schmuland Nov 6 '10 at 16:22 Thanks! What properties distinguish random walks from other discrete time Markov processes? –  Tim Nov 6 '10 at 18:47
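To make the distinction in this answer concrete, here is a small Python sketch (my own illustration; the function names and matrices are made up, not taken from the thread). It simulates a random walk on $\mathbb{Z}$ as partial sums of i.i.d. steps, and checks Dr Schmuland's triangle exercise: a time-homogeneous chain on $\{0,1,2\} = {\mathbb Z}$ mod 3 is a random walk exactly when its transition matrix is circulant, i.e. $p(x,y)$ depends only on $y-x$ mod 3.

```python
import random

# Random walk on the group Z: partial sums of i.i.d. steps drawn from a fixed mu.
def walk_on_Z(mu_support, mu_probs, n_steps, start=0):
    x, path = start, [start]
    for _ in range(n_steps):
        x += random.choices(mu_support, weights=mu_probs)[0]
        path.append(x)
    return path

# Triangle exercise: a time-homogeneous chain on Z mod 3 is a random walk
# exactly when p(x, y) depends only on (y - x) mod 3, i.e. the matrix is circulant.
def is_random_walk_on_Z3(P):
    mu = P[0]                      # candidate step distribution: mu(k) = p(0, k)
    return all(abs(P[x][y] - mu[(y - x) % 3]) < 1e-12
               for x in range(3) for y in range(3))

print(is_random_walk_on_Z3([[0.2, 0.5, 0.3],
                            [0.3, 0.2, 0.5],
                            [0.5, 0.3, 0.2]]))   # True: circulant, hence a random walk
print(is_random_walk_on_Z3([[0.2, 0.5, 0.3],
                            [0.1, 0.1, 0.8],
                            [0.5, 0.3, 0.2]]))   # False: Markov, but not a random walk
```

A circulant stochastic matrix is determined by its first row alone, which is one way to see that random walks are a thin slice of all Markov chains even on a three-point state space.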
# What is the volume of earthwork for constructing a tank that is excavated in the level ground to a depth of 4 m ? The top of the tank is rectangular in shape having an area of 50 m × 40 m and the side slope of the tank is 2: 1 (horizontal: vertical). 1. 5461 m3 2. 6688 m3 3. 8866 m3 4. 5632 m3 Option 2 : 6688 m3 ## Calculation of Area and Volume MCQ Question 1 Detailed Solution Concept: The prismoidal formula is also known as Simpson’s rule. a) Trapezoidal Formula: Volume (v) of earthwork between a number of sections having areas A1, A2,…, An spaced at a constant distance d. $${\rm{V}} = {\rm{d}}\left[ {\frac{{{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}}}{2} + {{\rm{A}}_2} + {{\rm{A}}_3} + \ldots + {{\rm{A}}_{{\rm{n}} - 1{\rm{\;}}}}} \right]$$ b) Simpson’s formulae: Volume (v) of the earthwork between a number of sections having area A1, A2 … An spaced at constant distance d apart is $${\rm{V}} = \frac{{\rm{d}}}{3}[\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots + {{\rm{A}}_{{\rm{n}} - 1}}} \right) + 2\left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots + {{\rm{A}}_{{\rm{n}} - 2}}} \right)$$ c) Simple prismoidal rule: $$V = \frac{D}{3}\left[ {({A_1} + 4Am + {A_2})} \right]$$ Calculation: Given, A1 = 50 × 40 = 2000 m2, As the side slope is given 2:1 i.e. H:V So for a depth of 1 m, there is a change of 2 m in a Horizontal Direction. So at 4 m vertical depth Bottom Dimension is ( 50 - 8 ) = 42 m & (40 - 8) = 32 m ∴ The bottom area is 42 m × 32 m = 1344 m2 Mean area (A m) = (A1 + A2)/2 = (2000 + 1344) /2 = 1672 m2 According to simple Trapezoidal rule for volume, $$V = \frac{D}{2}\left[ {({A_1} + 2Am + {A_2})} \right]$$ $$V = \frac{2}{2}\left[ {(2000 + 2\times (1672) + 1344)} \right]$$ = 6688 m3 Note: For calculating A2 (as per slope 2: 1), 8 m is reduced from both ends. So, 16 m is reduced on both sides # Prismoidal correction, while surveying is always? 1. Exponentially subtractive 3. Subtractive Option 3 : Subtractive ## Calculation of Area and Volume MCQ Question 2 Detailed Solution Explanation: The volume of earthwork by trapezoidal method  = V1 V1 =  $$common\: distance\left \{ \frac{First\: area + Last\: area}{2}+the \:sum \:of\: remaining \:area \right \}$$ The volume of earthwork by prismoidal formula = V2 V2 = $$=\frac{Common\: distance}{3}\left \{ First\: area+ Last\: area + 2(Sum\: of odd\: area) + 4(Sum\: of even\: area)\right \}$$ Prismoidal correction: • The volume by the prismoidal formula is more accurate than any other method • But the trapezoidal method is more often used for calculating the volume of earthwork in the field. • The difference between the volume computed by the trapezoidal formula and the prismoidal formula is known as a prismoidal correction. • Since the trapezoidal formula always overestimates the volume, the prismoidal correction is always subtractive in nature is usually more than calculated by the prismoidal formula, therefore the prismoidal correction is generally subtractive. • Volume by prismoidal formula = volume by the trapezoidal formula - prismoidal correction Prismoidal correction (CP) $$C_{P}=\frac{DS}{6}\left \{ d- d_{1}\right \}^{2}$$ Where, D = Distance between the sections, S (Horizontal) : 1 (Vertical) = Side slope, d and d1 are the depth of earthwork at the centerline # The areas enclosed by the contours in a lake are as follows: Contour (m) 270 275 280 285 290 Area (m2) 50 200 400 600 750 The volume of water between the contours 270 m and 290 m by trapezoidal formula is _______. 1. 6400 m3 2. 8000 m3 3. 
16000 m3 4. 24000 m3 Option 2 : 8000 m3 ## Calculation of Area and Volume MCQ Question 3 Detailed Solution Concept: Trapezoidal Formula: Volume (v) of earthwork between a number of sections having areas A1, A2, …, An spaced at a constant distance d. $${\rm{V}} = {\rm{d}}\left[ {\frac{{{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}}}{2} + {{\rm{A}}_2} + {{\rm{A}}_3} + \ldots + {{\rm{A}}_{{\rm{n}} - 1}}} \right]$$ Simpson’s Formula: Volume (v) of the earthwork between a number of sections having area A1, A2, …, An spaced at constant distance d apart is $${\rm{V}} = \frac{{\rm{d}}}{3}\left[ {\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots {{\rm{A}}_{{\rm{n}} - 1}}} \right) + 2\left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots + {{\rm{A}}_{{\rm{n}} - 2}}} \right)} \right]$$ Calculation: The areas enclosed by the contours in a lake are as follows: Contour (m) 270 275 280 285 290 Area (m2) 50 200 400 600 750 Given contour interval (d) = 275-270 = 5 m So using trapezoidal formula: $${\rm{V}} = 5\left( {\left( {\frac{{50 + 750}}{2}} \right) + 200 + 400 + 600} \right) = 5 \times \left( {400 + 200 + 400 + 600} \right) = 8000{\rm{\;}}{{\rm{m}}^3}$$ # The Simpson’s rule for determination of areas is used when the number of offsets are: 1. 2 2. even 3. 3 4. odd Option 4 : odd ## Calculation of Area and Volume MCQ Question 4 Detailed Solution Explanation: Simpson's rule: This rule is based on the assumption that the figures are trapezoids. In order to apply Simpson's rule, the area must be divided in even number i.e., the number of offsets must be odd i.e., n term in the last offset 'On' should be odd. The area is given by Simpson's rule: $$Area = \frac{d}{3}\left[ {({O_1} + {O_n}) + 4({O_2} + {O_4} + ........ + {O_{n - 1}}) + 2({O_3} + {O_5} + ......{O_{n - 2}})} \right]$$ where O1, O2, O3, .........On is the offset Important Points • In case of an even number of cross-sections, the end strip is treated separately and the area of the remaining strip is calculated by Simpson's rule. The area of the last strip can be calculated by either trapezoidal or Simpson's rule. # A road embankment 10 m wide at the formation level with side slopes 2:1 and with an average height of 5 m is constructed with an average gradient of 1:40 from the contour 220 m to 280 m. Find the volume of earthwork. 1. 6,40,000 m3 2. 1,40,000 m3 3. 3,40,000 m3 4. 2,40,000 m3 Option 4 : 2,40,000 m3 ## Calculation of Area and Volume MCQ Question 5 Detailed Solution Concept: A gradient is the rate of rise or falls along the length of the road with respect to horizontal. It is expressed as ‘1' vertical unit to 'N' horizontal units. $$Tan\ \theta \ =\frac{{\bf{h}}}{{\bf{l}}}$$ $$\frac{{\bf{h}}}{{\bf{l}}} = \frac{1}{{N}}$$ Area of trapezoidal: According to the trapezoid area formula, the area of a trapezoid is equal to half the product of the height and the sum of the two bases. Area = ½ x (Sum of parallel sides) x (perpendicular distance between the parallel sides). 
Calculation: Average height = 5 m Difference in elevation (h) = 280 - 220 = 60 m Average gradient = $$\frac{1}{{40}}$$ $$\frac{{\bf{h}}}{{\bf{l}}} = \frac{1}{{40}}$$ $$\frac{{\bf{60}}}{{\bf{l}}} = \frac{1}{{40}}$$ L = 2400 m Average cross-sectional area (A) = $$\frac{1}{2} \times \left( {10 + 30} \right) \times 5$$ A = 100 m2 Volume of earthwork = A × L Volume of earthwork = 100 × 2400 ∴ Volume of earthwork = 2,40,000 m3 # Determine the approximate quantity of earthwork for a road in embankment having a length of 120 m on a uniform level ground. The width of formation is 10 m and side slopes are 3 ∶ 1. The heights of the bank at the ends are 1 m and 1.5 m, respectively. Use trapezoidal method considering average of areas at the two ends. 1. 1785 m3 2. 1485 m3 3. 1885 m3 4. 2085 m3 Option 4 : 2085 m3 ## Calculation of Area and Volume MCQ Question 6 Detailed Solution Concept: Volume of Sloped earthwork (V) = (bd + sd2) × L Where, B = Width, d = Depth and S = Side slope of the cross-section Calculation: Given, slope = s: 1 = 3: 1 b = 10 m L = 120 m d1 = 1.5 m and d2 = 1 m Volume of Sloped earthwork (V) = (bd + sd2) × L $$V = \;\frac{1}{2}\left[ {(b{d_1}\; + {\rm{ }}s{d_1}^2) + (b{d_2}\; + {\rm{ }}s{d_2}^2)} \right] \times 120$$ $$V = \;\frac{1}{2}\left[ {(10 \times 1.5\; + {\rm{ }}3 \times {1.5^2}) + (10 \times 1\; + {\rm{ }}3 \times {{1}^2})} \right] \times 120 = 2085{m^3}$$ The approximate quantity of earthwork = 2085 m3 # The formation width of a highway with no transverse slope on an embankment is B. The height of the embankment is d and side slope of the embankment is S: 1 The area of cross-section of the embankment is 1. B + d + S × d 2. B × D + S × d 3. B × d + S × d2 4. $$\frac{1}{2}\left( {B\times d + S{d^2}} \right)$$ Option 3 : B × d + S × d2 ## Calculation of Area and Volume MCQ Question 7 Detailed Solution Explanation: Given, B = Formation width d = Height of the embankment S:1 = Side slope of the embankment (H: V) The area of embankment = The area of the trapezium Area of Trapezium = $$\frac{1}{2}$$ × (Sum of Lengths of parallel sides) × Height Area of Embankment $$= \frac{1}{2}\left[ {B + \left( {B + 2sd} \right)} \right] \times d$$ A = Bd + sd2 # What is the volume of 8 m deep tank having rectangular shaped top 6 m × 4 m and bottom 4 m × 2 m by prismoidal formula is? 1. 123 m3 2. 122.6 m3 3. 132 m3 4. 
134 m3 Option 2 : 122.6 m3 ## Calculation of Area and Volume MCQ Question 8 Detailed Solution Volume of tank by prismoidal formula is given by: $$V = \frac{d}{3}\left( {{A_1} + 4{A_m} + {A_2}} \right)$$ d = step size = 4 m Am = Mean area of the tank A1 = 6 × 4 = 24 m2 A2 = 4 × 2 = 8 m2 Mean area is length = (6 + 4 )/2 = 5m , width = (4+2)/2 = 3 Mean area Am = 5 × 3 = 15 m2 $${A_m} = 15\;{m^2}$$ $$V = \;\frac{4}{3}\left( {24 + 4 × 15 + 8} \right)$$ $$V = 122.6\;{m^3}$$ Note: According to prismoidal formula, d=h/2 # If the cross-sectional area of an embankment at 30 m intervals are 20, 40, 60, 50, and 30 m2 respectively then the volume of the embankment on the basis of the prismoidal rule is 1. 5300 m3 2. 8300 m3 3. 4300 m3 4. 6300 m3 Option 1 : 5300 m3 ## Calculation of Area and Volume MCQ Question 9 Detailed Solution Prismoidal formula can also be said as Simpson’s rule applied for two intervals. Trapezoidal Formula: a) Volume (v) of earthwork between a number of sections having areas A1, A2,…,An spaced at a constant distance d. $${\rm{V}} = {\rm{d}}\left[ {\frac{{{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}}}{2} + {{\rm{A}}_2} + {{\rm{A}}_3} + \ldots + {{\rm{A}}_{{\rm{n}} - 1\:}}} \right]$$ b) Simpson’s formulae: Volume (v) of the earthwork between a number of sections having area A1, A2 … An spaced at constant distance d apart is $${\rm{V}} = \frac{{\rm{d}}}{3}[\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots + {{\rm{A}}_{{\rm{n}} - 1}}} \right) + 2\left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots + {{\rm{A}}_{{\rm{n}} - 2}}} \right)$$ Calculation: The given data can be depicted as below: ∴ V = (30/3) × (20 + 30 + 4 × (40 + 50) + 2 × (60)) = 5300 m3 # What is the volume of a 6 m deep tank having rectangular shaped top 6 m × 4 m and bottom 4 m × 2 m (computed through the use of prismoidal formula)? 1. 92 m3 2. 96 m3 3. 90 m3 4. 94 m3 Option 1 : 92 m3 ## Calculation of Area and Volume MCQ Question 10 Detailed Solution Concept: Prismoidal formula is also known as Simpson’s rule. Trapezoidal Formula: a) Volume (v) of earthwork between a number of sections having areas A1, A2,…,An spaced at a constant distance d. $${\rm{V}} = {\rm{d}}\left[ {\frac{{{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}}}{2} + {{\rm{A}}_2} + {{\rm{A}}_3} + \ldots + {{\rm{A}}_{{\rm{n}} - 1{\rm{\;}}}}} \right]$$ b) Simpson’s formulae: Volume (v) of the earthwork between a number of sections having area A1, A2 … An spaced at constant distance d apart is $${\rm{V}} = \frac{{\rm{d}}}{3}[\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots + {{\rm{A}}_{{\rm{n}} - 1}}} \right) + 2\left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots + {{\rm{A}}_{{\rm{n}} - 2}}} \right)$$ Calculation: Given: 2L = 6m ⇒ L = 3m Top area = 6 × 4 = 24 m2 = A1 Bottom area = 4 × 2 = 8 m2 = A2 $${{\rm{A}}_{\rm{m}}} = \left( {\frac{{6 + 4}}{2}} \right) \times \left( {\frac{{4 + 2}}{2}} \right) = 15{\rm{\;}}{{\rm{m}}^2}$$ $${\rm{Volume\;}} = \frac{{\rm{L}}}{3}\left( {{{\rm{A}}_1} + 4{{\rm{A}}_{\rm{m}}} + {{\rm{A}}_2}} \right)$$ $${\rm{Volume}} = \frac{3}{3}\left( {24 + 4 \times 15 + 8} \right) = 92{\rm{\;}}{{\rm{m}}^3}$$ # In determining the area of the curved boundary ________ rule is used to get accurate results. 1. Simpson’s 2. mid - ordinate 3. average ordinate 4. 
trapezoidal Option 1 : Simpson’s ## Calculation of Area and Volume MCQ Question 11 Detailed Solution Explanation (i) For irregular boundaries or curve boundary, Simpson’s rule is preferred over the trapezoidal rule to calculate the given area. (ii) According to this rule the short length of boundaries between the two adjacent ordinates is a parabolic arch. Important Points Simpson's rule: In order to apply Simpson's rule, the area must be divided in even number i.e., the number of offsets must be odd i.e., n term in the last offset 'On' should be odd. The area is given by Simpson's rule: $$Area = \frac{d}{3}\left[ {({O_1} + {O_n}) + 4({O_2} + {O_4} + ........ + {O_{n - 1}}) + 2({O_3} + {O_5} + ......{O_{n - 2}})} \right]$$ where O1, O2, O3, .........Ois the offset Note: (i) In case of an even number of cross-sections, the end strip is treated separately and the area of the remaining strip is calculated by Simpson's rule. The area of the last strip can be calculated by either trapezoidal or Simpson's rule. # A cross-section area of river flow can be calculated by using following formula ______. 1. Simpson's rule 2. Trapezoidal rule 3. Both (1) and (2) 4. Thumb rule Option 3 : Both (1) and (2) ## Calculation of Area and Volume MCQ Question 12 Detailed Solution Concept: The area can be calculated by following methods: 1. Mid-ordinate rule: Area = havg × L = (h1 + h2 + …. + hn)d = d(Σhi) Here, hi – Ordinate at the mid-point of each division and L – length of the baseline. 2. Average ordinate rule: Here, Oi – is the ordinate at regular intervals. $$Area = \frac{{sum\;of\;the\;ordinates}}{{no.\;of\;the\;ordiantes}} \times length\;of\;the\;base\;line$$ 3. Trapezoidal rule: Here, Oi – ordinates at equal interval d – the common difference $$Area = \frac{d}{2}\left[ {{O_o} + {O_n} + 2\left( {{O_1} + {O_2} + {O_3} + \; \ldots \ldots + {O_{n - 1}}} \right)} \right]$$ 4. Simpson’s rule: This method is used when: • The boundaries are not straight. • The number of area segments should be even in number. • The number of ordinates should be odd. $$Area = \frac{h}{3}\left[ {{y_1} + {y_n} + 4\left( {{y_{even}}} \right) + 2\left( {{y_{odd}}} \right)} \right]$$ Here, h = x2 – x1 = x3 – x2 # In earthwork computations on a longitudinal profile, the diagram prepared to work out the quantity of earthwork is: 1. double mass curve 2. mass haul diagram 3. Mollier diagram 4. flow net Option 2 : mass haul diagram ## Calculation of Area and Volume MCQ Question 13 Detailed Solution Explanation: Mass Haul Curve: This is a curve representing the cumulative volume of earthwork at any point on the curve, the manner in which earth to be removed. It is necessary to plan the movement of excavated soil of worksite from cuts to fill so that haul distance is minimum to reduce the cost of earthwork. The mass haul diagram helps to determine the economy in a better way. The mass haul diagram is a curve plotted on a distance base with the ordinate at any point on the curve representing the algebraic sum of the volume of earthwork up that point. A haul refers to the transportation of your project’s excavated materials. The haul includes the movement of material from the position where you excavated it to the disposal area or a specified location. A haul is also sometimes referred to as an authorized haul. Haul = Σ Volume of earthwork × Distance moved. # In a cross staff survey, the perpendicular offsets are taken on right and left of the chain line AD as shown in figure – all values are in ‘metres’. 
The area enclosed by ABCDEFA, computed by trapezoidal method is 1. 3650 m2 2. 3200 m2 3. 3475 m2 4. 3500 m2 Option 3 : 3475 m2 ## Calculation of Area and Volume MCQ Question 14 Detailed Solution Explanation The area bounded by ABCDEFA = Area bounded by ABCDA + Area bounded by AFEDA $$= \left[ {\frac{1}{2} \times 30 \times 20 + \frac{1}{2} \times(20+30) \times \left( {45 - 30} \right) + \frac{1}{2} \times 30 \times \left( {90 - 45} \right)} \right] + \left[ {\frac{1}{2} \times 35 \times 40 + \frac{1}{2}\left( {40 + 30} \right) \times \left( {65 - 35} \right) + \frac{1}{2} \times 30 \times \left( {90 - 65} \right)} \right]$$ = 1350 + 2125 = 3475 m2 # Which of the following methods is most suitable for area calculation when boundary line is irregular? 1. Prismoidal formula 2. Trapezoidal rule 3. Simpson’s rule 4. Mid-Ordinate method Option 3 : Simpson’s rule ## Calculation of Area and Volume MCQ Question 15 Detailed Solution Explanation For irregular boundaries, Simpson’s rule is preferred over the trapezoidal rule to calculate the given area. According to this rule the short length of boundaries between the two adjacent ordinates is a parabolic arch. Important Points Simpson's rule: In order to apply Simpson's rule, the area must be divided in even number i.e., the number of offsets must be odd i.e., n term in the last offset 'On' should be odd. The area is given by Simpson's rule: $$Area = \frac{d}{3}\left[ {({O_1} + {O_n}) + 4({O_2} + {O_4} + ........ + {O_{n - 1}}) + 2({O_3} + {O_5} + ......{O_{n - 2}})} \right]$$ where O1, O2, O3, .........Ois the offset Note: • In case of an even number of cross-sections, the end strip is treated separately and the area of the remaining strip is calculated by Simpson's rule. The area of the last strip can be calculated by either trapezoidal or Simpson's rule. # The expression for the total volume of earthwork for an embankment using Simpson’s one-third rule, if A1, A2, A3, A4 ……. An-1 and An are the areas at n sections at an interval of h, is ________. 1. $$\frac{{\rm{h}}}{3} \times {\rm{\;}}\left[ {\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots } \right) + 2\left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots } \right)} \right]$$ 2. $$\frac{{\rm{h}}}{3} \times \left[ {\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}) + 2\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots } \right) + ({{\rm{A}}_3} + {{\rm{A}}_5} + \ldots } \right)} \right]$$ 3. $$\frac{{\rm{h}}}{3} \times \left[ {\left( {({{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}} + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots } \right) + ({{\rm{A}}_3} + {{\rm{A}}_5} + \ldots .} \right)} \right]$$ 4. 
$$\frac{{\rm{h}}}{3} \times \left[ {\frac{{\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right)}}{{4{\rm{\;}}}} + \left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots } \right) + \left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots } \right)} \right]$$ Option 1 : $$\frac{{\rm{h}}}{3} \times {\rm{\;}}\left[ {\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots } \right) + 2\left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots } \right)} \right]$$ ## Calculation of Area and Volume MCQ Question 16 Detailed Solution Formula: a) Trapezoidal Formula: $${\rm{V}} = \frac{{\rm{h}}}{2} \times {\rm{\;}}\left[ {\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 2\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots+A_{n-1} } \right) } \right]$$ b) Mid-ordinate Formula: $${\rm{V}} = {\frac{d}{n}\rm{}}\left[ {{{{{\rm{A}}_1} + }} {{\rm{A}}_2} + {{\rm{A}}_3} + \ldots + {{\rm{A}}_{{\rm{n}} }}} \right]$$ c) Simpson’s Formula: $${\rm{V}} = \frac{{\rm{d}}}{3}\left[ {\left( {{{\rm{A}}_1} + {{\rm{A}}_{\rm{n}}}} \right) + 4\left( {{{\rm{A}}_2} + {{\rm{A}}_4} + \ldots + {{\rm{A}}_{{\rm{n}} - 1}}} \right) + 2\left( {{{\rm{A}}_3} + {{\rm{A}}_5} + \ldots + {{\rm{A}}_{{\rm{n}} - 2}}} \right)} \right]$$ Where, V is the volume of earthwork between a number of sections having areas A1, A2 …, An spaced at a constant distance d. # The instrument used to compute area of irregular shape marked on paper is known as: 1. Passometer 2. Planimeter 3. Pentagraph 4. Pedometer Option 2 : Planimeter ## Calculation of Area and Volume MCQ Question 17 Detailed Solution Planimeter: Planimeter is an instrument used in surveying to compute the area of any given plan. Planimeter only needs plan drawn on the sheet to calculate area. Generally, it is very difficult to determine the area of irregular plot. So, by using planimeter we can easily calculate the area of any shape. Pedometer: It is a portable device usually an electronic or electromagnetic that counts each steps of a person by detecting the motion of the person’s hip or hand. Passometer: It is an instrument shaped like a watch used to count the number of persons steps. # In Mass Haul diagram (Mass diagram), the term haul represents the 1. Sum of the product of each load by its distance 2. Distance at any time from the working face of an excavation to the tip end of the embankment 3. Distance from the centre of gravity of a cutting to that of tipped material 4. Horizontal distance through which the load is shifted Option 1 : Sum of the product of each load by its distance ## Calculation of Area and Volume MCQ Question 18 Detailed Solution Mass Haul Curve This is a curve representing the cumulative volume of earthwork at any point on the curve, the manner in which earth to be removed. It is necessary to plan the movement of excavated soil of worksite from cuts to fill so that haul distance is minimum to reduce the cost of earthwork. The mass haul diagram helps to determine the economy in a better way. The mass haul diagram is a curve plotted on a distance base with the ordinate at any point on the curve representing the algebraic sum of the volume of earthwork up that point. A haul refers to the transportation of your project’s excavated materials. The haul includes the movement of material from the position where you excavated it to the disposal area or a specified location. A haul is also sometimes referred to as an authorized haul. Haul = Σ Volume of earthwork × Distance moved. # Which of the following quantity is measured using a Planimeter? 1. Area 2.  
Bar diameter 3. Volume 4. Weight Option 1 : Area ## Calculation of Area and Volume MCQ Question 19 Detailed Solution Concept: The areas between curved boundaries are also computed by planimeter from the map of the area. Most commonly used is Amsler polar Planimeter. The formula for calculating areas of the map by using planimeter. Area (A) = M × (FR – IR ± 10 N + C) Where, M = Multiplying constant FR = Final reading of Planimeter N = Number of times zero of dial pass is index mark C = Constant marked above scale division on tracing arm. ± ⇒ + ve sign when zero is passed in clockwise direction – ve for when zero is passed in an anticlockwise direction. ∴ The planimeter is used to measure the area. # In the equation of a planimeter, ‘N’ is ______, when zero of the dial passes the index mark in a clockwise direction 1. + ve 2. - ve 3. 0 4. None of the above Option 1 : + ve ## Calculation of Area and Volume MCQ Question 20 Detailed Solution Concept: The areas between curved boundaries are also computed by planimeter from the map of the area. Most commonly used is Amsler polar Planimeter. The formula for calculating areas of the map by using planimeter. Area (A) = M × (FR – IR ± 10 N + C) Where, M = Multiplying constant, FR = Final reading of Planimeter, IR = Initial Reading, N = Number of times zero of dial pass is index mark, and C = Constant marked above scale division on tracing arm. ± ⇒ + ve sign when zero is passed in clockwise direction and – ve for when zero is passed in an anticlockwise direction.
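As a quick illustration of the planimeter relation above, here is a short Python sketch; the readings are made-up demonstration values, not taken from any question on this page.

```python
def planimeter_area(M, FR, IR, N, C, zero_passed_clockwise=True):
    """Amsler polar planimeter: A = M * (FR - IR +/- 10*N + C).
    N is the number of times the zero of the dial passes the index mark; the
    10*N term is added when the zero passes clockwise and subtracted otherwise.
    C is the constant marked on the tracing arm."""
    sign = 1 if zero_passed_clockwise else -1
    return M * (FR - IR + sign * 10 * N + C)

# Hypothetical readings, only to show the bookkeeping:
print(planimeter_area(M=100.0, FR=6.242, IR=2.118, N=1, C=0.0))  # 100*(4.124 + 10) = 1412.4
```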
# Degenerate distribution In mathematics, a degenerate distribution is the probability distribution of a discrete random variable whose support consists of only one value. Examples include a two-headed coin and rolling a die whose sides all show the same number. While this distribution does not appear random in the everyday sense of the word, it does satisfy the definition of a random variable. The degenerate distribution is localized at a point k0 on the real line. The probability mass function is given by: ${\displaystyle f(k;k_{0})=\left\{{\begin{matrix}1,&{\mbox{if }}k=k_{0}\\0,&{\mbox{if }}k\neq k_{0}\end{matrix}}\right.}$ The cumulative distribution function of the degenerate distribution is then: ${\displaystyle F(k;k_{0})=\left\{{\begin{matrix}1,&{\mbox{if }}k\geq k_{0}\\0,&{\mbox{if }}k<k_{0}\end{matrix}}\right.}$ ## Constant random variable In probability theory, a constant random variable is a discrete random variable that takes a constant value, regardless of any event that occurs. This is technically different from an almost surely constant random variable, which may take other values, but only on events with probability zero. Constant and almost surely constant random variables provide a way to deal with constant values in a probabilistic framework. Let  X: Ω → R  be a random variable defined on a probability space  (Ω, P). Then  X  is an almost surely constant random variable if ${\displaystyle \Pr(X=c)=1,}$ and is furthermore a constant random variable if ${\displaystyle X(\omega )=c,\quad \forall \omega \in \Omega .}$ Note that a constant random variable is almost surely constant, but not necessarily vice versa, since if  X  is almost surely constant then there may exist  γ ∈ Ω  such that  X(γ) ≠ c  (but then necessarily Pr({γ}) = 0, in fact Pr(X ≠ c) = 0). For practical purposes, the distinction between  X  being constant or almost surely constant is unimportant, since the probability mass function  f(x)  and cumulative distribution function  F(x)  of  X  do not depend on whether  X  is constant or 'merely' almost surely constant. In either case, ${\displaystyle f(x)={\begin{cases}1,&x=c,\\0,&x\neq c.\end{cases}}}$ and ${\displaystyle F(x)={\begin{cases}1,&x\geq c,\\0,&x<c.\end{cases}}}$ The function  F(x)  is a step function.
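A minimal Python sketch of the two functions just described, taking the distribution to be degenerate at k0 = 1 (my own coding of the "two-headed coin" example, with heads written as 1):

```python
def degenerate_pmf(k, k0):
    """P(X = k) for a random variable degenerate at k0."""
    return 1.0 if k == k0 else 0.0

def degenerate_cdf(k, k0):
    """P(X <= k): a unit step at k0."""
    return 1.0 if k >= k0 else 0.0

print([degenerate_pmf(k, 1) for k in (0, 1, 2)])   # [0.0, 1.0, 0.0]
print([degenerate_cdf(k, 1) for k in (0, 1, 2)])   # [0.0, 1.0, 1.0]  (the step function)
```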
### Geometry (Ray C. Jurgensen, Richard G. Brown, John W. Jurgensen; Student Edition, ISBN 9780395977279), Q4, page 292 # Find the value of $x$ from the given figure. The value of $x$ is $2\sqrt{10}$. ## Step 1. Given information. The two given sides are 9 and 11. ## Step 2. Theorem used. We use the Pythagorean Theorem, which states that in a right-angled triangle the square of the hypotenuse is equal to the sum of the squares of the other sides. ## Step 3. Now find the value. Substitute the values and calculate. $\begin{array}{c}{9}^{2}+{x}^{2}={11}^{2}\\ 81+{x}^{2}=121\\ {x}^{2}=40\end{array}$ Taking the square root we get $\begin{array}{l}x=±\sqrt{40}\\ x=±2\sqrt{10}\end{array}$ Since distance is a positive quantity, we neglect the negative sign. Therefore, the value of $x$ is $2\sqrt{10}$.
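A quick numerical check of this result (a small Python sketch, not part of the textbook solution):

```python
import math

leg, hyp = 9, 11
x = math.sqrt(hyp**2 - leg**2)      # x^2 = 11^2 - 9^2 = 40
print(x, 2 * math.sqrt(10))         # both print 6.3245..., i.e. x = 2*sqrt(10)
```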
# Using Noto Sans Symbols I have been using some symbols (card suits etc) from Deja Vu on a Windows system using Lualatex. But I don't see them when I switch the font to Noto Sans Symbols. The font appears to be correctly installed - I can see it in the Control Panel and it's working in MS Word e.g. "The quick ♣ ..." MWE: \documentclass[a4paper, 11pt, oneside]{memoir}% \usepackage{fontspec}% \usepackage{newunicodechar} \defaultfontfeatures{Scale=MatchUppercase} %\newfontfamily{\symbolfont}{Noto Sans Symbols} \newfontfamily{\symbolfont}{Deja Vu Sans} \DeclareRobustCommand\Ts{{\symbolfont ♣}} \newunicodechar{♣}{{\symbolfont♣}} \begin{document} Two clubs 2\Ts{} Two clubs 2♣ \end{document} The log file includes the following: Package fontspec Info: Font family 'NotoSansSymbols(0)' created for font 'Noto (fontspec) Sans Symbols' with options [Scale=MatchUppercase]. (fontspec) (fontspec) This font family consists of the following NFSS (fontspec) series/shapes: (fontspec) (fontspec) - 'normal' (m/n) with NFSS spec.: (fontspec) <->s*[0.9565830048011242]"NotoSansSymbols:mode=node;scrip t=latn;language=DFLT;" (fontspec) - 'small caps' (m/sc) with NFSS spec.: (fontspec) - 'bold' (bx/n) with NFSS spec.: (fontspec) <->s*[0.9565830048011242]"NotoSansSymbols/B:mode=node;scr ipt=latn;language=DFLT;" (fontspec) - 'bold small caps' (bx/sc) with NFSS spec.: (fontspec) - 'italic' (m/it) with NFSS spec.: (fontspec) <->s*[0.9565830048011242]"NotoSansSymbols/I:mode=node;scr ipt=latn;language=DFLT;" (fontspec) - 'italic small caps' (m/itsc) with NFSS spec.: (fontspec) - 'bold italic' (bx/it) with NFSS spec.: (fontspec) <->s*[0.9565830048011242]"NotoSansSymbols/BI:mode=node;sc ript=latn;language=DFLT;" (fontspec) - 'bold italic small caps' (bx/itsc) with NFSS spec.: Missing character: There is no ? (U+2663) in font NotoSansSymbols:mode=node;sc ript=latn;language=DFLT;! Missing character: There is no ? (U+2663) in font NotoSansSymbols:mode=node;sc ript=latn;language=DFLT;! • Did you check that it's actually Noto that has this char? Word sometimes uses fallbacks… Nov 28, 2018 at 20:44 • Yes, they are in e.g. fontinfo.opensuse.org/fonts/NotoSansSymbolsRegular.html as char 2663 etc. Nov 28, 2018 at 20:50 • that place is empty in NotoSansSymbols-Regular.ttf. The above list shows only what should be ... – user2478 Nov 28, 2018 at 20:56 • @Herbert That certainly explains the problem! Where can one find a list of what is actually implemented in the various Noto fonts? Nov 28, 2018 at 20:59 • @TeXnician I think you're right - Word seems to have silently pulled it from Lucida. Nov 28, 2018 at 21:04 Noto Sans Symbols2 seems to have your char: \documentclass[]{article} \usepackage{fontspec} \newfontfamily{\symbolfont}{Noto Sans Symbols2} \begin{document} \symbolfont \Uchar"2663 \end{document} • Yes, now I'll check for the others I need. Suits are all there \DeclareRobustCommand\Ks{\color{red}{\symbolfont\char"2666}} Nov 28, 2018 at 21:57 otfinfo -g DejaVuSans.ttf | grep club has an output, but otfinfo -g NotoSansSymbols-Regular.ttf | grep club not. With otfinfo -g <font> you'll get a list of the symbolic names. otfinfo needs the fontname with full path if you are not in the fonts directory. 
• That works fine C:\Windows\Fonts>otfinfo -g notosanssymbols2-regular.ttf | grep club club_gobliquestroke-phaistosDisc club Nov 28, 2018 at 22:32 There's a very useful search tool at https://www.fileformat.info Say, for example, you want to find what fonts support "less than or equal to" you input some or all of that text and find that the symbol is Unicode U+2264. Clicking on the link leads to fonts that support U+2264 In this case, Noto isn't listed which can't really be correct - such a common symbol must be supported. But you can click on to an Adobe Flash link to fonts installed on your local machine and it's in Noto Sans, Noto Serif, Noto Mono, Noto Symbol2 (and many other fonts if you prefer not to use Noto!)
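Putting the question and the accepted answer together, here is a sketch of a complete file that keeps the original \newunicodechar approach but points it at the font found above. It is untested here and assumes LuaLaTeX with "Noto Sans Symbols2" installed system-wide.

```latex
% Sketch only: original preamble with the symbol font swapped for Noto Sans Symbols2.
\documentclass[a4paper, 11pt, oneside]{memoir}
\usepackage{fontspec}
\usepackage{newunicodechar}
\defaultfontfeatures{Scale=MatchUppercase}
\newfontfamily{\symbolfont}{Noto Sans Symbols2}
\newunicodechar{♣}{{\symbolfont ♣}}
\begin{document}
Two clubs 2♣
\end{document}
```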
unclassified taxonomy rank

I have 16S rRNA data and taxonomy assignments for the bacteria in this data, but among the taxonomy I found something called: genus_unclassified_Porphyromonadaceae. Does this mean that the taxonomy is identified down to the family Porphyromonadaceae, but the genus is not identified and therefore the species is not identified [ f_Porphyromonadaceae g__ s__ ]? Or what does it mean? 16s taxonomy

I have two other questions related to this question: 1- If I find genus_unclassified_Clostridiales_Incertae.Sedis.XIII, is the taxonomy identified down to the order Clostridiales, or down to the family Clostridiales_Incertae.Sedis.XIII? 2- And if I find genus_Erysipelotrichaceae_incertae_sedis, is this taxonomy like genus_unclassified_Erysipelotrichaceae, i.e. identified down to the family Erysipelotrichaceae, or something else? I think if I knew the meaning of incertae_sedis both questions would be answered easily, but when I searched for its meaning I found that it is open nomenclature used for a taxonomic group whose broader relationships are unknown or undefined, so I am not sure whether it is the same as unclassified or not.

Incertae sedis indicates that the rank associated with the term is of unknown or uncertain classification. So, for question 1, Clostridiales_Incertae.Sedis.XIII is a family (from the Clostridiales order) of unknown or uncertain classification; for question 2, genus_Erysipelotrichaceae_incertae_sedis is a genus (from the Erysipelotrichaceae family) of unknown or uncertain classification.
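For readers post-processing such taxonomy tables, here is a small Python sketch of how labels like these can be interpreted programmatically. The parsing rules are assumptions based on the interpretation given in this thread, not an official format specification.

```python
def deepest_classified_rank(label):
    """Interpret a genus-level label the way the answers above describe."""
    if label.startswith("genus_unclassified_"):
        parent = label[len("genus_unclassified_"):]
        return "genus unknown; deepest classified rank is the parent taxon " + parent
    if label.lower().endswith("incertae_sedis"):
        return "a genus of uncertain placement within " + label[len("genus_"):-len("_incertae_sedis")]
    return "classified genus " + label[len("genus_"):]

for lab in ["genus_unclassified_Porphyromonadaceae",
            "genus_Erysipelotrichaceae_incertae_sedis",
            "genus_unclassified_Clostridiales_Incertae.Sedis.XIII"]:
    print(lab, "->", deepest_classified_rank(lab))
```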
# Geodesic inside a body Suppose that there is a body with a mass $m$. How could I get the geodesic inside the body (in Schwarzschild metric)? Because I have only described the metric outside a body before. We also know the radius $r$ of the body and the density is homogeneous. $$ds^2 = -\left[\frac{3}{2}\sqrt{1-\frac{2M}{R}} - \frac{1}{2}\sqrt{1-\frac{2Mr^2}{R^3}}\right]^2dt^2 + \frac{dr^2}{\left(1-\frac{2Mr^2}{R^3}\right)} + r^2 (d\theta^2 + sin^2\theta d\phi^2)$$
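One standard route, sketched here rather than worked out in full: compute the Christoffel symbols of the quoted interior (constant-density) metric and then integrate the geodesic equations $\ddot{x}^\mu + \Gamma^\mu_{\alpha\beta}\dot{x}^\alpha\dot{x}^\beta = 0$. A short SymPy sketch (symbol names are my own choices) that does the symbolic part:

```python
import sympy as sp

t, th, ph = sp.symbols('t theta phi')
r, M, R = sp.symbols('r M R', positive=True)
x = [t, r, th, ph]

# Interior metric quoted in the question.
A = sp.Rational(3, 2) * sp.sqrt(1 - 2*M/R) - sp.Rational(1, 2) * sp.sqrt(1 - 2*M*r**2/R**3)
g = sp.diag(-A**2, 1 / (1 - 2*M*r**2/R**3), r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

def Gamma(mu, a, b):
    """Christoffel symbol Gamma^mu_{ab} of the metric g."""
    return sp.simplify(sum(ginv[mu, s] * (sp.diff(g[s, a], x[b]) + sp.diff(g[s, b], x[a])
                                          - sp.diff(g[a, b], x[s])) for s in range(4)) / 2)

# Example: Gamma^r_{tt}, the term driving radial geodesic motion of a particle
# momentarily at rest; the geodesic equations then read x''^mu + Gamma^mu_{ab} x'^a x'^b = 0,
# with primes denoting derivatives with respect to proper time.
print(Gamma(1, 0, 0))
```

Outside the body ($r > R$) the same procedure applied to the exterior Schwarzschild metric gives the familiar geodesics, and the interior and exterior solutions are matched at the surface $r = R$.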
# Glee Distribution markets CDs of the performing artist Unique. At the beginning of October, Glee ... Glee Distribution markets CDs of the performing artist Unique. At the beginning of October, Glee had in beginning inventory 5,320 of Unique's CDs with a unit cost of $7. During October, Glee made the following purchases of Unique's CDs: Oct. 3, 6,650 @ $8; Oct. 9, 9,310 @ $9; Oct. 19, 7,980 @ $10; Oct. 25, 10,640 @ $11. During October, 28,994 units were sold. Glee uses a periodic inventory system. a) Determine the cost of goods available for sale. b) Calculate the cost per unit. (round to 2 decimals) c) Determine (1) the ending inventory and (2) the cost of goods sold under FIFO, LIFO, and Average-Cost. (round to 0 decimals) d) Which cost flow method results in (1) the highest inventory amount for the balance sheet and (2) the highest cost of goods sold for the income statement? And the amounts that go with them? Shahim B, Part a: Quantity | Rate | Amount. Opening inventory 5,320 @ $7 = $37,240; Purchases 03-Oct 6,650 ...
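Since the posted answer is cut off, here is a hedged Python sketch of how the periodic-system computations typically go (variable names are mine; the printed figures are simply what the code derives from the data above, so check them against your own working):

```python
layers = [(5320, 7), (6650, 8), (9310, 9), (7980, 10), (10640, 11)]  # chronological (qty, unit cost)
units_sold = 28994

available_units = sum(q for q, _ in layers)
goods_available = sum(q * c for q, c in layers)          # part (a)
avg_cost = goods_available / available_units             # part (b)
ending_units = available_units - units_sold

def ending_inventory_cost(layers, ending_units, newest_first):
    """Value the ending inventory from one end of the purchase history
    (newest layers for FIFO, oldest layers for LIFO, periodic system)."""
    order = list(reversed(layers)) if newest_first else layers
    remaining, cost = ending_units, 0
    for qty, unit_cost in order:
        take = min(qty, remaining)
        cost += take * unit_cost
        remaining -= take
        if remaining == 0:
            break
    return cost

fifo_ei = ending_inventory_cost(layers, ending_units, newest_first=True)
lifo_ei = ending_inventory_cost(layers, ending_units, newest_first=False)
avg_ei = round(avg_cost * ending_units)

print("Goods available:", goods_available, " Average cost:", round(avg_cost, 2))
for name, ei in [("FIFO", fifo_ei), ("LIFO", lifo_ei), ("Average", avg_ei)]:
    print(f"{name}: ending inventory = {ei}, COGS = {goods_available - ei}")   # part (c)/(d)
```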
# How to get phase shift in this task and in general? In the serial circuit connected to the 50 Hz frequency alternating voltage, effective voltage values 𝑈 = 220 V, known voltages 𝑈L = 660 V and 𝑈C = 500 V. The current in the circuit is 11 A. Determine the 𝑅, 𝐿 and 𝐶, and phase shift φ between voltage and current This is how circuit looks: I have calculated Xl , Xc R L and by applying ohm's rule R = U / I = 220 / 11 = 20 Ω. Xl = UL / I = 660 / 11 = 60 Ω , Xc = Uc / i = 500 / 11 = 45.45 Ω , and then to find L and C I used formula Xl = wL , where w represents angular frequency, w = 2 * pi * f = 100 pi rad/sec, and the same for C, Xc = 1/ wC. And to find phase shift I used formula Φ = tg-1(X / R) = tg-1( ( 60 - 45.45) / 20) = 36.04 ° Can someone help? • "The current in the circle ..." Should that read 'circuit'? Show your calculations and we'll see if we can spot the error. – Transistor Jul 21 '19 at 20:45 • Yes it does, I have corrected it – Petar Jul 21 '19 at 20:46 • I have calculated Xl , Xc R L and by applying ohm's rule R = U / I = 220 / 11 = 20 ohm's. Xl = UL / I = 660 / 11 = 60 , Xc = Uc / i = 500 / 11 = 45.45 , and then to find L and C I used formula Xl = wL , where w represents angular frequency, w = 2 * pi * f = 100 pi rad per sec, and the same for C Xc = 1/ wC. And to find phase shift I used formula Φ = tg-1(X / R) = tg-1( ( 60 - 45.45) / 20) = 36.04 degrees – Petar Jul 21 '19 at 20:56 • I know it would be much easier if I just send a picture of calculations but i do not carry phone with me to the library – Petar Jul 21 '19 at 20:57 • Put the calculations into your question rather than in the comments. That way readers don't have to rummage through the comments to understand your question. You can also use HTML &Omega;, &mu;, &deg;, etc. as well as <sup>...</sup> and <sub>...</sub> in the posts but they don't work in the comments. – Transistor Jul 21 '19 at 21:00
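For comparison, a short Python sketch of the standard series-RLC relations (my own illustration, not an official solution). Note that in a series circuit the 220 V source voltage is the phasor sum of the three drops, so the resistive drop is $U_R = \sqrt{U^2 - (U_L - U_C)^2}$ rather than the full 220 V; dividing U by I gives the impedance Z rather than R, which is likely where the attempt above goes astray.

```python
import math

f, U, U_L, U_C, I = 50.0, 220.0, 660.0, 500.0, 11.0
w = 2 * math.pi * f

X_L = U_L / I                 # 60 ohm
X_C = U_C / I                 # about 45.45 ohm
L = X_L / w                   # inductance from X_L = w*L
C = 1 / (w * X_C)             # capacitance from X_C = 1/(w*C)

U_R = math.sqrt(U**2 - (U_L - U_C)**2)        # resistive drop from the phasor triangle
R = U_R / I
phi = math.degrees(math.atan2(U_L - U_C, U_R))  # phase of voltage relative to current

print(f"R = {R:.2f} ohm, L = {L*1e3:.1f} mH, C = {C*1e6:.1f} uF, phi = {phi:.1f} deg")
```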
What is happening to rotational kinetic energy when moment of inertia is changed? I know this question is asked here a lot, but I just had to ask this to finalise the concept. When a system, let's say a rod of length $L$ and mass $M$, is rotating with angular speed $\omega_1$, its initial angular momentum is $L_1 = (1/12)ML^2\omega_1$ and its initial kinetic energy is $KE = (1/24)ML^2{\omega_1}^2$. Now after some time the rod is folded in half with its angular momentum kept conserved, i.e. without applying any external force or torque; its new angular velocity becomes $\omega_2 = 4\omega_1$ and its new kinetic energy becomes $KE_2 = (1/6)ML^2{\omega_1}^2$. This is 4 times the original kinetic energy even though no external force does work. Since the rod is folded (you could even say melted and re-formed into a smaller, denser rod), it has not undergone compression or expansion of any sort, but there is still a change in kinetic energy. The most sense I could make out of this was that all the particles while rotating felt a centripetal force, and the particles of half of the rod moved in the direction of this force and did some work which appears as the change in kinetic energy. I have written the same and a proof as an answer here. Now, if I am right in my concept, where did this energy come from? Tension was providing the centripetal force, but no work was done against tension as the rod was folded in half, not compressed. If I am wrong, then where did the energy come from? Extra: I also tried the analogy of this question in translatory motion: suppose there is a body of mass $m$ moving with velocity $v$; suddenly its mass becomes $m/2$, then its velocity becomes $2v$ and its KE becomes $4KE_1$. There is no need to explain the energy conservation here, since mass does not suddenly disappear into thin air; however, the moment of inertia can be changed, and hence the question. Addendum : Since folding the rod seems to bring about unnecessary questions about ways of folding, you can imagine that if the rod was melted and re-formed into a longer rod, all the while the system was rotating and angular momentum conserved, then the new length becomes $2L$, the new angular velocity becomes ${\omega_1}/4$ and the new KE becomes $(1/96)ML^2{\omega_1}^2$. This time the energy becomes ${1/4}^{th}$ of the initial; where did this energy go? Certainly movement against the centripetal force takes place, but since there is no extension in the existing rod, the energy cannot be stored as spring energy in it, or so I think. - Well you have done some work folding the rod, as there is an outward reaction force (to balance centripetal acceleration) on any part of the rod, and to fold it you must reduce its radius, thus do work against this force. Try it with a rod of length 4l, with joints at l and 3l (so that the centre of mass doesn't move along the rod) and you should hopefully see that this work done by you on the rod balances the excess kinetic energy. –  Zephyr Jan 13 at 19:04 How was work done against this force, which should be along the rod at all times, when the rod itself was folded? Remember it is neither being stretched nor compressed; there is no displacement along the force. –  Rijul Gupta Jan 13 at 19:09 the rod will not fold of its own accord. the easiest way to think of it folding would be little motors at the joints, and it's these motors that would be doing work. (we can also think of us folding it by hand, but then it's harder to picture not disturbing the system) –  Zephyr Jan 13 at 19:11 as for no displacement in the direction of the force. 
imagine putting a dot on each end of the rod, the final result of folding (or melting or whatever) is that the two dots will now be half the distance, therefore one or both dots must have moved in the radial direction. –  Zephyr Jan 13 at 19:12 In honor of the Winter Olympics: Consider an ice skater that has just gone into a spin with arms stretched out. If it helps, the skater is wearing lead bracelets on each wrist. As the skater spins, the angular momentum and angular kinetic energy are constant; friction with the air and with the ice can be eliminated. The skater must exert an inward centripetal force on the bracelets to keep them rotating in the same circle, but this force does no work, since the bracelets are not changing their radius of rotation. Then the skater pulls in his arms and the bracelets, closer to his axis of rotation. Several interrelated things happen: 1. since there is no external torque, the angular momentum of the system stays the same; 2. Since the various masses of the skater are all moving towards the axis of rotation, the total moment of inertia of the system decreases; 3. Combining 1 and 2, the angular velocity of the system increases; if the moment is halved, the angular velocity doubles 4. Combining 2 and 3, with some quantitative treatment, the angular kinetic energy increases; if the moment is halved and the angular velocity is doubled, the kinetic energy doubles; 5. The skater does work; his arms are now exerting an inward force through a distance as his arms draw the bracelets inward; the work done can be rigorously shown to be identical to the increase in kinetic energy... - If initially arms were near and later the skater moves then away, then how will the energy be stored ? Muscular ? –  Rijul Gupta Jan 13 at 21:01
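A small numeric sketch (Python, with arbitrary illustrative numbers of my own, not from the posts) of the bookkeeping claimed in this answer: when a point mass is pulled inward with angular momentum conserved, the work done against the centripetal requirement exactly equals the gain in kinetic energy, and the folded rod shows the same factor-of-4 arithmetic as in the question.

```python
def ke(I, w):                     # rotational kinetic energy
    return 0.5 * I * w**2

# Skater/bracelet picture: point mass m pulled from radius r1 in to r2 < r1.
m, r1, r2, w1 = 1.0, 1.0, 0.5, 3.0
I1, I2 = m * r1**2, m * r2**2
w2 = I1 * w1 / I2                              # angular momentum conserved
dKE = ke(I2, w2) - ke(I1, w1)

# Work done pulling the mass in against F(r) = m*w(r)^2*r with w(r) = w1*r1^2/r^2,
# i.e. W = -integral from r1 to r2 of F dr (closed form below).
W = 0.5 * m * w1**2 * r1**4 * (1.0 / r2**2 - 1.0 / r1**2)
print(dKE, W)                                  # both 13.5: the extra KE is exactly the work done

# The folded rod follows the same arithmetic: I -> I/4, so w -> 4w and KE -> 4*KE.
M, Lrod = 2.0, 1.2
I_rod, I_folded = M * Lrod**2 / 12, M * (Lrod / 2)**2 / 12
print(ke(I_folded, 4 * w1) / ke(I_rod, w1))    # 4.0
```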