# Normalization of states and bracket notation

In Peskin & Schroeder's QFT, if we set $$|p\rangle = \sqrt{2E_p} a^{\dagger}_p|0\rangle \tag{2.35}$$ as in equation 2.35, then how do I get to the next equation 2.36: $$\langle p|q\rangle = 2E_p (2\pi)^3 \delta^3(p-q) \tag{2.36}$$ Where I seem to be stuck is: $$\langle p|q\rangle =\sqrt{2E_p 2E_q}\langle0|a_pa_q^{\dagger}|0\rangle$$ I don't understand why $\langle0|a_pa_q^{\dagger}|0\rangle$ is proportional to the delta function.

## 1 Answer

Try commuting $a_p$ and $a_q^\dagger$, and keep in mind that $a_n|0\rangle = 0$.

• so I get $\sqrt{2E_p2E_q} \langle 0 |a_p a_q^{\dagger}|0\rangle = \sqrt{2E_p2E_q} \langle 0 |[a_p ,a_q^{\dagger}]|0\rangle + \sqrt{2E_p2E_q} \langle 0 |a_q^{\dagger} a_p |0\rangle$. The second term is zero since $a_n |0\rangle = 0 \ \forall n$. So we get, using $[a_p, a_q^{\dagger}] = 1$, $\sqrt{2E_p2E_q} \langle 0 |0\rangle$. But what is $\langle 0 |0\rangle$ equal to? Thank you – laguna Oct 23 '17 at 7:59
• Depending on whether you have Bosonic or Fermionic operators, the bracket inside the first term after the $=$ sign is either a commutator or an anti-commutator. But, regardless, that bracket is the delta function. So you have: $\sqrt{4E_pE_q}\langle 0|\delta^3(q - p)(2\pi)^3|0\rangle = \sqrt{4E_pE_q}\delta^3(q - p)(2\pi)^3\langle 0|0\rangle= \sqrt{4E_pE_q}\delta^3(q - p)(2\pi)^3 =$ the desired answer. – IcyOtter Oct 23 '17 at 8:09
• Thank you. And by definition $\langle s | s \rangle = 1$ for all states $s$, right? – laguna Oct 23 '17 at 8:20
• No problem! And yes, for any normalised state, we have $\langle s|s\rangle = 1$. In other words, a state projects onto itself 100%. – IcyOtter Oct 23 '17 at 10:12
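For reference, assembling the steps from the comments into one derivation (assuming bosonic operators with $[a_p, a_q^\dagger] = (2\pi)^3\,\delta^3(p-q)$ and a vacuum normalized so that $\langle 0|0\rangle = 1$):

$$\begin{aligned} \langle p|q\rangle &= \sqrt{2E_p\,2E_q}\,\langle 0|a_p a_q^{\dagger}|0\rangle = \sqrt{2E_p\,2E_q}\,\langle 0|[a_p, a_q^{\dagger}]|0\rangle + \sqrt{2E_p\,2E_q}\,\langle 0|a_q^{\dagger} a_p|0\rangle \\ &= \sqrt{2E_p\,2E_q}\,(2\pi)^3\,\delta^3(p-q) + 0 = 2E_p\,(2\pi)^3\,\delta^3(p-q), \end{aligned}$$

where the delta function lets us set $E_q = E_p$ in the last step.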
## anonymous 3 years ago Inverse Functions

1. anonymous For each function defined as one-to-one, write an equation for the inverse function in the form y = f^-1(x), graph f and f^-1 on the same axes, and give the domain and range of them.
2. anonymous And my function is: f(x) = -4x + 3
3. Mertsj f(x) = -4x + 3. Domain = all real numbers. Range = all real numbers. $x=-4y+3, y = f ^{-1}(x)=\frac{-1}{4}x-\frac{3}{4}$
4. anonymous I got this for the inverse: (sketch)
5. Mertsj Domain = all real numbers. Range = all real numbers.
6. Mertsj Yep. + 3/4 sorry
7. anonymous Thanks
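A quick way to verify an inverse like this is to solve x = f(y) for y symbolically and check the composition (a minimal sketch using sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = -4*x + 3

# Solve x = f(y) for y to get the inverse, then verify f(f^-1(x)) = x.
inv = sp.solve(sp.Eq(x, f.subs(x, y)), y)[0]
print(inv)                          # 3/4 - x/4, i.e. y = -(1/4)x + 3/4
print(sp.simplify(f.subs(x, inv)))  # x, confirming the inverse
```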
# Principal Component Analysis To Decompose Signals and Reduce Dimensionality (codes included)

We will learn the basics of principal component analysis and implement it to decompose a synthetic space-time signal and reduce its dimensionality

Principal Component Analysis is a technique that can extract the dominant patterns in a data matrix in terms of a set of new orthogonal variables called principal components, and the corresponding set of factor scores ordered by dominance (e.g. Kutz 2013; Abdi & Williams 2010). Some of its applications include data compression, image processing, exploratory data analysis, pattern recognition, time-series prediction, etc.

The data matrix contains the observations that are described by several inter-correlated quantitative dependent variables. The main goals of PCA are to extract the most important information (which may not be the most dominant) from the data matrix, and to compress the data matrix by removing orthogonal components that explain little of its variance. PCA computes the principal components, a new set of variables that are linear combinations of the original variables. The principal components are ordered so that their variances are in descending order.

## Apply PCA on a time-space function

Let us apply PCA to a time-space function borrowed from Kutz (2013).

### Create a function

$$f(x, t) = [1 - 0.5 \cos 2t]\,\operatorname{sech} x + [1 - 0.5 \sin 2t]\,\operatorname{sech} x \tanh x$$

Solutions of this form are often obtained, in a myriad of contexts, from full numerical simulations of an underlying physical system.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg

plt.style.use('seaborn')

x = np.linspace(-10, 10, 100)
t = np.linspace(0, 10, 30)
[X, T] = np.meshgrid(x, t)

def sech(X):
    return 1 / np.cosh(X)

# Build the space-time field: a sum of two separable terms.
f = sech(X) * (1 - 0.5 * np.cos(2 * T)) + (sech(X) * np.tanh(X)) * (1 - 0.5 * np.sin(2 * T))

vmin = np.amin(f)
vmax = np.amax(f)

from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=plt.figaspect(0.8))
ax = fig.gca(projection='3d')

# Plot the surface.
surf = ax.plot_surface(T, X, f, cmap='summer', linewidth=0, antialiased=True,
                       rstride=1, cstride=1, alpha=None)
ax.set_ylabel(r'$X$')
ax.set_xlabel(r'$T$')
surf.set_clim(vmin, vmax)
plt.colorbar(surf)
plt.tight_layout()
plt.savefig('PCA_space_time_func.png', dpi=300, bbox_inches='tight')
plt.close('all')
```

### Compute the PCA

We can apply PCA to investigate the dynamics of this system.

```python
# SVD of the data matrix: u holds the temporal modes, v the spatial
# modes, and s the singular values in descending order.
u, s, v = linalg.svd(f, full_matrices=False, check_finite=False)
eigen_values = s
s = np.diag(s)

# Rank-1 and rank-2 reconstructions of f.
pca_modes = []
for j in range(1, 3):
    ff = u[:, 0:j].dot(s[0:j, 0:j]).dot(v[0:j, :])
    pca_modes.append(ff)

vmin = np.min([np.amin(pca_modes[0]), np.amin(pca_modes[1])])
vmax = np.max([np.amax(pca_modes[0]), np.amax(pca_modes[1])])
```

### Plot the Spatial Pattern

```python
## Spatial behaviour
fig = plt.figure(figsize=plt.figaspect(0.5))
for jj in range(len(pca_modes)):
    ax = fig.add_subplot(1, 2, jj + 1, projection='3d')
    surf = ax.plot_surface(T, X, pca_modes[jj], cmap='summer', linewidth=0,
                           antialiased=True, rstride=1, cstride=1, alpha=None)
    surf.set_clim(vmin, vmax)
    ax.set_ylabel(r'$X$')
    ax.set_xlabel(r'$T$')
    ax.set_title('Var explained: {:.2f}%'.format(eigen_values[jj] / np.sum(eigen_values) * 100))
    plt.colorbar(surf)
plt.tight_layout()
plt.savefig('PCA_space_time_func_modes.png', dpi=300, bbox_inches='tight')
plt.close('all')
```

The first two modes capture 100% of the total surface energy in the system. Hence, the third mode is completely unnecessary.
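We can verify this directly: $f$ is a sum of exactly two separable products of a function of $x$ and a function of $t$, so the data matrix has rank 2. A minimal check, reusing the arrays defined above:

```python
# Only two singular values are non-negligible because f is a
# sum of exactly two separable terms.
print(np.linalg.matrix_rank(f))             # 2
print(linalg.svd(f, compute_uv=False)[:4])  # values beyond the second are ~0
```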
This suggests that the system can be easily and accurately represented by a simple two-mode expansion. Figure 2 shows the spatial behavior of the first two modes.

### Plot the temporal pattern

```python
## temporal behaviour
fig, ax = plt.subplots(2, 1, figsize=plt.figaspect(0.5))
ax[0].plot(x, v[0, :], 'k-', label='mode 1')
ax[0].plot(x, v[1, :], 'k--', label='mode 2')
ax[0].set_xlabel('x')
ax[0].set_ylabel('PCA modes')
ax[0].legend()
ax[1].plot(t, u[:, 0], 'k-', label='mode 1')
ax[1].plot(t, u[:, 1], 'k--', label='mode 2')
ax[1].set_xlabel('t')
ax[1].set_ylabel('PCA modes')
ax[1].legend()
plt.tight_layout()
plt.savefig('PCA_space_time_behaviour.png', dpi=300, bbox_inches='tight')
plt.close('all')
```

In Figure 3, the top panel shows the spatial behaviour of the system for the two modes, and the bottom panel shows the temporal evolution of the system for the two modes.

## References

1. Abdi, H., & Williams, L. J. (2010). Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4), 433–459. https://doi.org/10.1002/wics.101
2. Kutz, J. N. (2013). Data-driven modeling & scientific computation: methods for complex systems & big data. Oxford University Press.
# Prove that $c_0$ is Banach.

Let $(x^n)$ be a Cauchy sequence in $c_0$. $\rightarrow$ For $\epsilon> 0$ there is $N$ such that $n,m \ge N$ implies $$\lVert x^m -x^n\rVert< \frac {\epsilon} {2}.$$ We know that for every $k$ $$\lvert x^m_k -x^n_k\rvert \le \sup_{i\ge 1} \ \lvert x^m_i -x^n_i\rvert < \frac {\epsilon} {2},$$ so $(x^n_k)_n$ is Cauchy in $\mathbb R$, which is complete, so $x^n_k \rightarrow x_k \in \mathbb R$; letting $m \to \infty$ above gives $$\lvert x^n_k -x_k\rvert \le \frac {\epsilon} {2} \quad \text{for } n \ge N.$$ By this we can say that for $n\ge N$ $$\lVert x^n -x\rVert = \sup_{i\ge 1} \ \lvert x^n_i -x_i \rvert \le \frac {\epsilon} {2} < \epsilon,$$ which means $x^n \rightarrow x$.

Since $x^m \in c_0$ (take $m=N$), there is some $N'$ such that $|x_i^m| < {1 \over 2 } \epsilon$ for $i \ge N'$. Now, to show that $x\in c_0$: $$\lvert x_i\rvert \le \lvert x^m_i-x_i\rvert+ \lvert x^m_i\rvert <\frac {\epsilon} {2}+\frac {\epsilon} {2}=\epsilon \quad \text{for } i\ge N'.$$ This gives us $x_i\rightarrow 0$, so $x\in c_0$.

Is this correct?

The question has been modified based on this answer. The answer below is for the first version of the question.

First correction: $|a_n|<\epsilon /2$ for all $n$ and $a_n \to a$ does not imply $|a|<\epsilon /2$. It implies $|a|\leq \epsilon /2$.

Second correction: in the last part $|x|$ has no meaning for a sequence $x$. Use coordinates. You should write $|x_i| \leq |x_i^{n}|+|x_i^{n}-x_i|$. Then you should make the argument more rigorous as follows. First choose $n$ such that $|x_i^{n}-x_i|<\epsilon /2$ for all $i$. Fixing this $n$, choose $k$ such that $|x_i^{n}|<\epsilon /2$ for all $i \geq k$. Now you get $|x_i| <\epsilon$ for all $i \geq k$.

• Your last part is still not rigorous. There are two variables $i$ and $n$ and you have to state clearly that you first choose $n$ and then the final inequality becomes true for all $i$ sufficiently large. – Kabo Murphy Jan 6 at 0:38
• In the last part you did not specify $m$. If you take $m=N$ your proof will be correct. – Kabo Murphy Jan 6 at 4:47
### 2.2.1. Basic data types

We provide the following basic data types:

• $\mathrm{𝚊𝚝𝚘𝚖}$ corresponds to an atom. Predefined atoms are $\mathrm{𝙼𝙸𝙽𝙸𝙽𝚃}$ and $\mathrm{𝙼𝙰𝚇𝙸𝙽𝚃}$, which respectively correspond to the smallest and to the largest integer.
• $\mathrm{𝚒𝚗𝚝}$ corresponds to an integer value.
• $\mathrm{𝚍𝚟𝚊𝚛}$ corresponds to a domain variable. A domain variable $V$ is a variable that will be assigned an integer value taken from an initial finite set of integer values denoted by $\mathrm{𝑑𝑜𝑚}\left(V\right)$. $\underline{V}$ and $\overline{V}$ respectively denote the minimum and the maximum values of $\mathrm{𝑑𝑜𝑚}\left(V\right)$.
• $\mathrm{𝚏𝚍𝚟𝚊𝚛}$ corresponds to a possibly unbounded domain variable. A possibly unbounded domain variable is a variable that will be assigned an integer value from an initial finite set of integer values denoted by $\mathrm{𝑑𝑜𝑚}\left(V\right)$ or from the interval $(-\infty, +\infty)$. This type is required for declaring the domain of a variable. It is also required by some systems in the context of specific constraints like arithmetic or $\mathrm{𝚎𝚕𝚎𝚖𝚎𝚗𝚝}$ constraints.
• $\mathrm{𝚜𝚒𝚗𝚝}$ corresponds to a finite set of integer values.
• $\mathrm{𝚜𝚟𝚊𝚛}$ corresponds to a set variable. A set variable $V$ is a variable that will be assigned a finite set of integer values. Its lower bound $\underline{V}$ denotes the set of integer values that for sure belong to $V$, while its upper bound $\overline{V}$ denotes the set of integer values that may belong to $V$. $\mathrm{𝑑𝑜𝑚}\left(V\right)=\left\{{𝐯}_{\mathbf{1}},\cdots ,{𝐯}_{𝐧},{v}_{n+1},\cdots ,{v}_{m}\right\}$ is a shortcut for combining the lower and upper bounds of $V$ in a single notation:
  • Bold values designate those values that belong to $\underline{V}$.
  • Plain values indicate those values that belong to $\overline{V}$ and not to $\underline{V}$.
• $\mathrm{𝚖𝚒𝚗𝚝}$ corresponds to a multiset of integer values.
• $\mathrm{𝚖𝚟𝚊𝚛}$ corresponds to a multiset variable. A multiset variable is a variable that will be assigned a multiset of integer values.
• $\mathrm{𝚛𝚎𝚊𝚕}$ corresponds to a real number.
• $\mathrm{𝚛𝚟𝚊𝚛}$ corresponds to a real variable. A real variable is a variable that will be assigned a real number taken from an initial finite set of intervals. A real number is usually represented by an interval of two floating point numbers.

Besides domain, set, multiset and float variables, we have not yet introduced graph variables [Dooms06]. A graph variable is currently simulated by using one set variable for each vertex of the graph (see the third example of type declaration in 2.2.2).
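As an illustration of the set-variable convention above, a set variable can be represented by its two bounds, with $\underline{V}$ the values surely in $V$ and $\overline{V}$ the values possibly in $V$. The sketch below is a hypothetical representation, not the catalogue's own syntax:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SetVar:
    """A set variable V: 'lower' holds values surely in V,
    'upper' holds values that may belong to V."""
    lower: frozenset
    upper: frozenset

    def __post_init__(self):
        # The lower bound must always be contained in the upper bound.
        assert self.lower <= self.upper

# dom(V) = {1, 2, 3, 4} with bold values 1, 2: they surely belong to V,
# while 3 and 4 only possibly belong to V.
V = SetVar(lower=frozenset({1, 2}), upper=frozenset({1, 2, 3, 4}))
```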
College Algebra (6th Edition) Horizontal asymptote: $y=-\dfrac{3}{5}$ $f(x)=\dfrac{-3x+7}{5x-2}$ The degrees of the numerator and the denominator are the same. Divide the leading coefficient of the numerator by the leading coefficient of the denominator to obtain the horizontal asymptote of this function: $y=\dfrac{-3}{5}$ $y=-\dfrac{3}{5}$ Horizontal asymptote: $y=-\dfrac{3}{5}$
# Checkerboard Carpet Fractal

The fractal that I designed, aka the Checkerboard Carpet Fractal, is a simple 4×4 square with 2 unit squares removed. The two squares removed are at coordinates (2,1) and (1,2).

### Surface Area MATH

Length of one side is 4. Overall shape (true for all iterations): 4^2 = 16.

Iteration One: 2 squares with a side length of 1 removed: 2(1^2) = 2, so 16 - 2 = 14.

Iteration Two: 28 additional squares removed, each with a side length of 1/4: 28((1/4)^2) = 1.75, so 16 - 2 - 1.75 = 12.25.

Iteration Three: 392((1/16)^2) = 1.53125, so 16 - 2 - 1.75 - 1.53125 = 10.71875.

### Infinite Series MATH

Iteration One: 16 - [(2*1)] = 14. Solution = 14.

Iteration Two: 16 - [(2*1) + (2*(1/16))(14)]. Solution = 12.25.

Iteration Three: 16 - [(2*1) + (2*(1/16))(14) + (2*(1/16^2))(14)(14)]. Solution = 10.71875.

Formula: a + ar + ar^2 + ar^3 + ar^4 + … = a(1/(1-r)) (if |r| < 1)

Explanation: In iteration 2, the 16 is the area of the original square. The (2*1) is how many squares were removed in the first iteration (2) multiplied by the area of those boxes (1). In the second set of parentheses, the 2 tells us two more boxes were removed from each of the 14 boxes that survived iteration one. The 1/16 is the area of these smaller removed boxes. The 14 is the number of surviving boxes from which these 2 boxes were removed.

### Dimension MATH

Scaled down by 1/4, each piece is a smaller copy of the whole fractal, and the larger one is 14 copies of this smaller one.

(1/4)^D = (1/14)
(1/(4^D)) = (1/14)
4^D = 14
log4(14) = D
D ≈ 1.9037

In general: A^D = B, logA(B) = D
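A short numerical check of the areas and the dimension (a minimal sketch; the variable names are my own):

```python
import math

area = 16.0                      # area of the 4x4 starting square
removed, side = 2, 1.0           # iteration 1 removes 2 unit squares
for i in range(1, 4):
    area -= removed * side ** 2
    print(f"iteration {i}: remaining area = {area}")   # 14.0, 12.25, 10.71875
    removed *= 14                # each of the 14 surviving copies loses 2 more squares
    side /= 4                    # ...at one quarter the side length

D = math.log(14) / math.log(4)   # from (1/4)^D = 1/14
print(f"dimension ~ {D:.4f}")    # ~1.9037
```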
# One number is 4 times another number. If the sum of their reciprocals is equal to 5/12, find the two numbers

Guest Feb 28, 2015

#1 +81077 +5

Let x be the first number and 4x the second. So we have

x + 1/(4x) = 5/12    multiply both sides by 4x
4x^2 + 1 = (20/12)x
4x^2 + 1 = (5/3)x    multiply both sides by 3
12x^2 + 3 = 5x      rearrange
12x^2 - 5x + 3 = 0

This doesn't have a real solution......????

CPhill Feb 28, 2015

#2 0

Sorry but the guy above me is incorrect. The answer is actually x=1.5 thank you very much and goodnight. Now I can have the cool shades on

Guest Feb 28, 2015

#3 +1037 0

Well, Anon, maybe you should take off the blinders instead of putting more on. But I guess this is normal for you because you are a m***n. Too bad there isn't a "clothespin on the nose" emoto. One reason for the nausea of CDD is the stench. Morons with CDD are very foul!

Solution:

$$\frac{1}{x}+\frac{1}{x+4}= \frac{5}{12} \qquad \Leftarrow \text{ solve}$$
$$12(x+4)+12x=5x(x+4) \qquad \Leftarrow \text{ simplify using } 12x(x+4)$$
$$24x+48=5x^2+20x \qquad \Leftarrow \text{ expand}$$
$$-5x^2+4x+48=0 \qquad \Leftarrow \text{ subtract } 5x^2+20x \text{ and set} = 0$$
$$x = \frac{-(4)\pm \sqrt{(4^2)-4 \times(-5)(48)}}{2(-5)} = \frac{-4 \pm \sqrt{976}}{-10} \qquad \Leftarrow \text{ use quadratic formula}$$
$$x = -2.7240998703626618 \quad \text{OR} \quad x = 3.5240998703626616$$

Nauseated Feb 28, 2015

#4 +26406 +5

Hmm!  Your first expression isn't right either, Nauseated.

$$\\ \frac{1}{x}+\frac{1}{4x}=\frac{5}{12}\\\\\frac{4}{4x}+\frac{1}{4x}=\frac{5}{12}\\\\\frac{5}{4x}=\frac{5}{12}\\\\4x=12\\\\x=3$$

Alan  Feb 28, 2015

#5 +91481 0

Thanks Alan,

My goodness Nauseated, it seems you have become contaminated very badly with CDD. You must be hospitalised and quarantined immediately. Do not fret, the nurses wear face masks and nose pins so that they can handle the stench! Plus it is their calling, so they shall nurse you back to health while they struggle to control their nausea.

Melody  Feb 28, 2015

#6 +1037 0

OH, OH! Revenge is in the shadows! HAHAHA! I wondered why this was so complicated. Most of the time I can figure out the answer just by looking. Along with the clothespin, a pair of untinted spectacles might help. Well, CPhill, it looks like we have the same strain of CDD. Thanks Alan!

Nauseated  Feb 28, 2015
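For completeness, the intended setup can be checked mechanically (a minimal sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# "One number is 4 times another": the numbers are x and 4x.
sol = sp.solve(sp.Eq(1/x + 1/(4*x), sp.Rational(5, 12)), x)
print(sol)   # [3] -> the numbers are 3 and 12 (1/3 + 1/12 = 5/12)
```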
# JIMWLK evolution and small-x asymptotics of 2n-tuple Wilson line correlators

Preprint

### Abstract

The JIMWLK equation tells how gauge invariant higher order Wilson line correlators evolve at high energy. In this article we present a convenient integro-differential form of this equation for the 2n-tuple correlator, in which all real and virtual terms are explicit. The 'real' terms correspond to splitting (say at position z) of this 2n-tuple correlator into various pairs of 2m-tuple and (2n+2-2m)-tuple correlators, whereas 'virtual' terms correspond to splitting into pairs of 2m-tuple and (2n-2m)-tuple correlators. Kernels of virtual terms with m=0 (no splitting) and of real terms with m=1 (splitting with at least one dipole) have poles, and when integrated over z they generate ultraviolet logarithmic divergences, separately for real and virtual terms. Except in these two cases, the corresponding kernels, separately for real and virtual terms, have a rather softened ultraviolet singularity and do not generate ultraviolet logarithmic divergences when integrated over z. We then study the solution of the JIMWLK equation for the 2n-tuple Wilson line correlator in the strong scattering regime, where all transverse distances are much larger than the inverse saturation momentum, and show that it also exhibits geometric scaling, like the color dipole deep inside the saturation region.
### Author and article information

###### Journal

31 January 2019

###### Article

1901.11531
4-71.

Find all values of $θ$ in $[0, 2\pi]$ where the following equations are true. Sketching a picture of the unit circle may help.

1. $\text{sin }θ =\frac { 1 } { 2 }$

   $30\degree\text{ or }\frac{\pi}{6}$

   $150\degree\text{ or }\frac{5\pi}{6}$

2. $\text{cos }θ = −\frac { 1 } { 2 }$

3. $\text{sin }θ = - \frac { \sqrt { 3 } } { 2 }$

   What special angle(s) are involved? What does the negative sign mean?

4. $\text{cos }θ = 0$

   These angles terminate on the $y$-axis.
Evaluating a String in Sage

sage.misc.sage_eval.sage_eval(source, locals=None, cmds='', preparse=True)

Obtain a Sage object from the input string by evaluating it using Sage. This means calling eval after preparsing and with globals equal to everything included in the scope of from sage.all import *.

INPUT:

• source - a string or object with a _sage_ method
• locals - evaluate in the namespace of sage.all plus the locals dictionary
• cmds - string; sequence of commands to be run before source is evaluated.
• preparse - (default: True) if True, preparse the string expression.

EXAMPLES:

This example illustrates that preparsing is applied.

```
sage: eval('2^3')
1
sage: sage_eval('2^3')
8
```

However, preparsing can be turned off.

```
sage: sage_eval('2^3', preparse=False)
1
```

Note that you can explicitly define variables and pass them as the second option:

```
sage: x = PolynomialRing(RationalField(), "x").gen()
sage: sage_eval('x^2+1', locals={'x': x})
x^2 + 1
```

This example illustrates that evaluation occurs in the context of from sage.all import *. Even though bernoulli has been redefined in the local scope, when calling sage_eval the default meaning of bernoulli is used. Likewise for QQ below.

```
sage: bernoulli = lambda x: x^2
sage: bernoulli(6)
36
sage: eval('bernoulli(6)')
36
sage: sage_eval('bernoulli(6)')
1/42
sage: QQ = lambda x: x^2
sage: QQ(2)
4
sage: sage_eval('QQ(2)')
2
sage: parent(sage_eval('QQ(2)'))
Rational Field
```

This example illustrates setting a variable for use in evaluation.

```
sage: x = 5
sage: eval('4/3 + x', {'x': 25})
26
sage: sage_eval('4/3 + x', locals={'x': 25})
79/3
```

You can also specify a sequence of commands to be run before the expression is evaluated:

```
sage: sage_eval('p', cmds='K.<x> = QQ[]\np = x^2 + 1')
x^2 + 1
```

If you give commands to execute and a dictionary of variables, then the dictionary will be modified by assignments in the commands:

```
sage: vars = {}
sage: sage_eval('None', cmds='y = 3', locals=vars)
sage: vars['y'], parent(vars['y'])
(3, Integer Ring)
```

You can also specify the object to evaluate as a tuple. A 2-tuple is assumed to be a pair of a command sequence and an expression; a 3-tuple is assumed to be a triple of a command sequence, an expression, and a dictionary holding local variables. (In this case, the given dictionary will not be modified by assignments in the commands.)

```
sage: sage_eval(('f(x) = x^2', 'f(3)'))
9
sage: vars = {'rt2': sqrt(2.0)}
sage: sage_eval(('rt2 += 1', 'rt2', vars))
2.41421356237309
sage: vars['rt2']
1.41421356237310
```

This example illustrates how sage_eval can be useful when evaluating the output of other computer algebra systems.

```
sage: R.<x> = PolynomialRing(RationalField())
sage: gap.eval('R:=PolynomialRing(Rationals,["x"]);')
'Rationals[x]'
sage: ff = gap.eval('x:=IndeterminatesOfPolynomialRing(R);; f:=x^2+1;'); ff
'x^2+1'
sage: sage_eval(ff, locals={'x': x})
x^2 + 1
sage: eval(ff)
Traceback (most recent call last):
...
RuntimeError: Use ** for exponentiation, not '^', which means xor in Python, and has the wrong precedence.
```

Here you can see that eval simply will not work but sage_eval will.

TESTS:

We get a nice minimal error message for syntax errors, that still points to the location of the error (in the input string):

```
sage: sage_eval('RR(22/7]')
Traceback (most recent call last):
...
  File "<string>", line 1
    RR(Integer(22)/Integer(7)]
                             ^
SyntaxError: unexpected EOF while parsing
sage: sage_eval('None', cmds='$x = $y[3] # Does Perl syntax work?')
Traceback (most recent call last):
...
File "<string>", line 1 $x =$y[Integer(3)] # Does Perl syntax work? ^ SyntaxError: invalid syntax sage.misc.sage_eval.sageobj(x, vars=None) Return a native Sage object associated to x, if possible and implemented. If the object has an _sage_ method it is called and the value is returned. Otherwise str is called on the object, and all preparsing is applied and the resulting expression is evaluated in the context of from sage.all import *. To evaluate the expression with certain variables set, use the vars argument, which should be a dictionary. EXAMPLES: sage: type(sageobj(gp('34/56'))) <type 'sage.rings.rational.Rational'> sage: n = 5/2 sage: sageobj(n) is n True sage: k = sageobj('Z(8^3/1)', {'Z':ZZ}); k 512 sage: type(k) <type 'sage.rings.integer.Integer'> This illustrates interfaces: sage: f = gp('2/3') sage: type(f) <class 'sage.interfaces.gp.GpElement'> sage: f._sage_() 2/3 sage: type(f._sage_()) <type 'sage.rings.rational.Rational'> sage: a = gap(939393/2433) sage: a._sage_() 313131/811 sage: type(a._sage_()) <type 'sage.rings.rational.Rational'> Previous topic Support for persistent functions in .sage files Random testing
# Recent Posts

### Applying Network Science to Reddit: Key Content Detection (Part 2)

Finally we can jump into the analysis! The purpose of this post is to describe the essential model used to detect key content generators or posts, and to show what happens when we apply the techniques discussed in the previous post to the /r/uwaterloo subreddit.

The Model

I could dive straight into the model construction, but I think it is of much greater value to spend some time walking through my thought process.

### Applying Network Science to Reddit: Key Content Detection (Part 1)

My first investigation involves attempting to detect key content or users in a given community. I begin by establishing the model and various mathematical preliminaries.

Mathematical Preliminaries

Basic Definitions

Networks can be modelled by nodes that are connected together by edges. The mathematics of Algebraic Graph Theory gives us the tools we need for this section. In this case, we introduce the language used for directed graphs.

Definition. A graph $G = (V, E)$ is a pair consisting of a set of nodes $V$ and a set of edges $E$ that describe which nodes lead to which others.

# Selected Publications

### Dual Conditions for Local Transverse Feedback Linearization

Provides a set of conditions, in the algebra of differential forms, that can be used to test whether a smooth manifold can be transversely feedback linearized, in a local sense, under a given affine control system.

IEEE CDC, 2018

# Projects

#### Red Prism - Network Science Applied to Reddit

Course project applying the tools of network science to the social network graph of Reddit.

#### SE499 Independent Research Project

Investigating the performance difference between two different path following control techniques.
# ACT Math: All About Circles

There are four main things you need to know about circles to tackle any ACT Math question.

1. The definition of diameter and radius. For a pictorial illustration, see the diagram below.
2. The formula for circumference, given by (diameter)(π).
3. The formula for area, given by (π)(radius²).
4. Knowing what fraction of the circle a sector is. Note that a sector is the 'slice' of the circle; an example sector is the yellow wedge shown in the green circle. The white part of the circle is also known as a sector, even though it doesn't look like your typical 'slice'.

Let's try to apply the concepts to see if you understand.

### Circle Example Questions

Suppose the diameter of a circle is 12 cm. What would be its circumference? And what would be its area? Simply apply the formulas to get: circumference = (diameter)(π) = (12)(π) = 12π cm. Don't forget that radius is half of the diameter, so if the diameter is 12 cm, the radius is 6 cm, and the area is (π)(6²) = 36π cm².

The question might add another step by telling you that there is another circle with diameter 6 cm and ask how many times bigger, in terms of area, is the circle with diameter 12 cm than this circle? Just because one has diameter 12 cm and the other has diameter 6 cm does not mean that the bigger circle is twice as big. Use the formulas to figure out that the area of the new circle is 9π cm² while the area of the big circle is 36π cm², as we found out earlier. This means that the bigger circle is 4 times as large as the smaller circle.

The last concept involves applying your knowledge of circumference and area to sectors. Sectors are basically a fraction of the entire circle. If the yellow sector in the circle above had a central angle of 40°, that means it is 1/9 of the entire circle, because a circle has 360°. The implications of this are twofold:

1. The arc length of the yellow wedge is also 1/9th of the circumference.
2. The area of the yellow wedge is also 1/9th of the area of the circle.

Thus, if the radius of that circle was 5 cm, then its circumference would be 10π cm and its area would be 25π cm². Correspondingly, the arc length of the sector would be 10π/9 cm and the area of the sector would be 25π/9 cm². If the question wants you to find the perimeter of the yellow wedge, all you have to do is add the radius twice to the arc length you found, to get (10 + 10π/9) cm.

Keep these four concepts in mind and you'll have the tools to solve any circle problem!

### Circle Practice Questions

For practice, you could try solving these practice problems.

1. A circle has area 50π cm². A second circle has half the area of the first circle. What is the diameter of the second circle?
2. A circle has a shaded sector. The sector is 1/6 of the whole circle. What is the area of the circle if the sector has area 6π cm²? What is the diameter of the circle?
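The sector arithmetic above is easy to script (a minimal sketch in Python; the numbers match the worked example):

```python
import math

radius = 5.0                          # cm
angle = 40.0                          # central angle of the sector, in degrees
fraction = angle / 360.0              # 1/9 of the circle

circumference = 2 * math.pi * radius  # 10*pi  ~ 31.42 cm
area = math.pi * radius ** 2          # 25*pi  ~ 78.54 cm^2
arc_length = fraction * circumference # 10*pi/9 ~ 3.49 cm
sector_area = fraction * area         # 25*pi/9 ~ 8.73 cm^2
perimeter = 2 * radius + arc_length   # 10 + 10*pi/9 ~ 13.49 cm
print(circumference, area, arc_length, sector_area, perimeter)
```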
# Stretched-Grid Simulations

Note

Stretched-grid simulations are described in [Bindle et al., 2020]. The paper also discusses things you should consider, and offers guidance for choosing appropriate stretching parameters.

A stretched-grid is a cubed-sphere grid that is "stretched" to enhance its resolution in a region. To set up a stretched-grid simulation you need to do two things:

1. Create a restart file for your simulation.
2. Update runConfig.sh to specify the grid and restart file.

Before setting up your stretched-grid simulation, you will need to choose stretching parameters.

## Choose stretching parameters

The target face is the face of a stretched-grid that shrinks so that the grid resolution is finer. The target face is centered on a target point, and the degree of stretching is controlled by a parameter called the stretch-factor. Relative to a normal cubed-sphere grid, the resolution of the target face is refined by approximately the stretch-factor. For example, a C60 stretched-grid with a stretch-factor of 3.0 has approximately C180 (~50 km) resolution in the target face. The enhancement factor is approximate because (1) the stretching gradually changes with distance from the target point, and (2) gnomonic cubed-sphere grids are quasi-uniform, with grid-boxes at face edges being ~1.5x shorter than at face centers.

You can choose a stretch-factor and target point using the interactive figure below. You can reposition the target face by changing the target longitude and target latitude. The domain of refinement can be increased or decreased by changing the stretch-factor. Choose parameters so that the target face roughly covers the region that you want to refine.

Note

The interactive figure above can be a bit fiddly. Refresh the page if the view gets messed up. If the figure above is not showing up properly, please open an issue.

Next you need to choose a cubed-sphere size. The cubed-sphere size must be an even integer (e.g., C90, C92, C94, etc.). Remember that the resolution of the target face is enhanced by approximately the stretch-factor.

## Create a restart file

A simulation restart file must have the same grid as the simulation. For example, a C180 simulation requires a restart file with a C180 grid. Likewise, a stretched-grid simulation needs a restart file with the same stretched-grid (i.e., an identical cubed-sphere size, stretch-factor, target longitude, and target latitude).

You can regrid an existing restart file to a stretched-grid with GCPy's gcpy.file_regrid program. Below is an example of regridding a C90 cubed-sphere restart file to a C48 stretched-grid with a stretch-factor of 3, a target longitude of 260.0, and a target latitude of 40.0. See the GCPy documentation for this program's exact usage, and for installation instructions.

```
$ python -m gcpy.file_regrid \
    -i initial_GEOSChem_rst.c90_standard.nc \
    --dim_format_in checkpoint \
    -o sg_restart_c48_3_260_40.nc \
    --cs_res_out 48 \
    --sg_params_out 3.0 260.0 40.0 \
    --dim_format_out checkpoint
```

Description of arguments:

-i initial_GEOSChem_rst.c90_standard.nc : Specifies that the input restart file is initial_GEOSChem_rst.c90_standard.nc (in the current working directory).

--dim_format_in checkpoint : Specifies that the input file is in the "checkpoint" format. GCHP restart files use the "checkpoint" format.

-o sg_restart_c48_3_260_40.nc : Specifies that the output file should be named sg_restart_c48_3_260_40.nc.

--cs_res_out 48 : Specifies that the output grid has cubed-sphere size 48 (C48).
--sg_params_out 3.0 260.0 40.0 : Specifies the output grid's stretched-grid parameters, in the order stretch-factor (3.0), target longitude (260.0), target latitude (40.0).

--dim_format_out checkpoint : Specifies that the output file should be in the "checkpoint" format. GCHP restart files must be in the "checkpoint" format.

Once you have created a restart file for your simulation, you can move on to updating your simulation's configuration files.

## Update your configuration files

Modify the section of runConfig.sh that controls the simulation grid. Turn STRETCH_GRID to ON and update CS_RES, STRETCH_FACTOR, TARGET_LAT, and TARGET_LON for your specific grid.

```
#------------------------------------------------
#    Internal Cubed Sphere Resolution
#------------------------------------------------
# Primary resolution is an integer value. Set stretched grid to ON or OFF.
# 24 ~ 4x5, 48 ~ 2x2.25, 90 ~ 1x1.25, 180 ~ 1/2 deg, 360 ~ 1/4 deg
CS_RES=24
STRETCH_GRID=ON

# Stretched grid parameters
# Rules and notes:
#    (1) Minimum STRETCH_FACTOR is 1.0001
#    (2) Target lat and lon must be floats (contain decimal)
#    (3) Target lon must be in range [0,360)
STRETCH_FACTOR=3.0
TARGET_LAT=40.0
TARGET_LON=260.0
```

Next, modify the section of runConfig.sh that specifies the simulation restart file. Set INITIAL_RESTART to the restart file we created in the previous step.

```
#------------------------------------------------
#    Initial Restart File
#------------------------------------------------
# By default the linked restart files in the run directories will be
# used. Please note that HEMCO restart variables are stored in the same
# restart file as species concentrations. Initial restart files available
# on gcgrid do not contain HEMCO variables which will have the same effect
# as turning the HEMCO restart file option off in GC classic. However, all
# output restart files will contain HEMCO restart variables for your next run.
# INITIAL_RESTART=initial_GEOSChem_rst.c${CS_RES}_TransportTracers.nc

# You can specify a custom initial restart file here to overwrite:
INITIAL_RESTART=sg_restart_c48_3_260_40.nc
```

Lastly, execute ./runConfig.sh to update your run directory's configuration files.

```
$ ./runConfig.sh
```
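As a quick sanity check on your parameter choice, recall from above that the target-face resolution is approximately the cubed-sphere size times the stretch-factor. The script below is a hypothetical helper based on that rule of thumb, not part of GCHP or GCPy:

```python
# Approximate target-face resolution of a stretched grid.
cs_res = 48
stretch_factor = 3.0

effective_res = cs_res * stretch_factor   # ~C144 in the target face
deg_per_box = 90.0 / effective_res        # each cube face spans ~90 degrees
print(f"target face ~C{effective_res:.0f} (~{deg_per_box:.3f} deg per box)")
```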
Problem of the Week #233 - Nov 15, 2016

Euge MHB Global Moderator Staff member

Here is this week's POTW:

----- If $\omega$ is a closed two-form on the four-sphere, is $\omega^2$ (i.e., $\omega \wedge \omega$) nowhere vanishing? -----

The answer is no. Suppose to the contrary that $\omega^2$ is nowhere vanishing. Then $\omega^2$, being a nowhere vanishing 4-form on a compact oriented 4-manifold, has nonzero integral. The de Rham cohomology groups of the four-sphere are $\Bbb R$ in dimensions zero and four, and zero in every other dimension. Thus, since $H^2_{\mathrm{dR}}(S^4) = 0$, the closed form $\omega$ is exact. If $\omega = d\eta$, then $\omega^2 = d(\eta \wedge d\eta)$, because $d(\eta \wedge d\eta) = d\eta \wedge d\eta - \eta \wedge d(d\eta) = \omega^2$. So $\omega^2$ is exact; its integral is zero by Stokes's theorem. This is absurd.
Limit of two hypergeometric functions (2F1) Hi, Does anyone know whether there is a known function/distribution that corresponds to the limit: $\lim_{\epsilon\rightarrow0^+} \mathfrak{Re}\left[f(x+i\epsilon) - f(x-i\epsilon)\right]$ when $f(z)={}_2F_1(\frac12+\mu,\frac12-\mu,1,z)$? I think this is essentially the definition of a hyperfunction. In particular I am interested in the case where $\mu$ and $x$ are both purely imaginary. You can plot it in Mathematica and see the function for small values of epsilon. When $x$ is imaginary a nice function emerges, when $x$ is real there seems to be a Dirac delta function behaviour at the origin. I wonder whether there is a standard reference where I could look for distributional expressions of hyperfunctions other than the common ones like the Dirac delta or the Heaviside step function. (The context here is that of quantum field theory on curved manifolds - the commutator typically turns out to be a hyperfunction.) - According to the usual definition, the branch cut of the hypergeometric function is on the real axis extending from 1 to infinity. So there should not be any jump on the imaginary axis. –  Michael Renardy Apr 1 '11 at 19:44 what is $\mu$? for $\mu\in1/2\mathbb{N}$ $f$ should be just a polynomial and the expression would be 0. –  Marcel Bischoff Apr 2 '11 at 10:13 There may simply not be an expression involving functions less general than the ${}_2F_1$ itself. Perhaps what you are really after is not a name for this function (hyperfunction), but rather some specific properties of it. Support? Singular support? Asymptotics? You may want to refine your question. –  Igor Khavkine Apr 2 '11 at 17:55
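One way to explore the limit numerically is to evaluate the quantity directly for shrinking epsilon (a minimal sketch with mpmath; the parameter values are arbitrary illustrations, not from the question):

```python
from mpmath import hyp2f1, mpc, re

def f(z, mu):
    # 2F1(1/2 + mu, 1/2 - mu; 1; z)
    return hyp2f1(mpc(0.5) + mu, mpc(0.5) - mu, 1, z)

mu = mpc(0, 0.7)    # purely imaginary order, as in the question
x = mpc(0, 0.3)     # purely imaginary argument
for eps in (1e-2, 1e-4, 1e-6):
    jump = re(f(x + mpc(0, eps), mu) - f(x - mpc(0, eps), mu))
    print(eps, jump)    # watch how the value behaves as eps -> 0+
```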
:: Isomorphisms of Categories :: by Andrzej Trybulec :: :: Copyright (c) 1991-2018 Association of Mizar Users theorem :: ISOCAT_1:1 canceled; theorem :: ISOCAT_1:2 canceled; ::$CT 2 theorem Th1: :: ISOCAT_1:3 for A, B being Category for F being Functor of A,B for a, b being Object of A for f being Morphism of a,b st f is invertible holds F /. f is invertible proof end; theorem Th2: :: ISOCAT_1:4 for A, B being Category for F1, F2 being Functor of A,B st F1 is_transformable_to F2 holds for t being transformation of F1,F2 for a being Object of A holds t . a in Hom ((F1 . a),(F2 . a)) proof end; theorem Th3: :: ISOCAT_1:5 for A, B, C being Category for F1, F2 being Functor of A,B for G1, G2 being Functor of B,C st F1 is_transformable_to F2 & G1 is_transformable_to G2 holds G1 * F1 is_transformable_to G2 * F2 proof end; theorem Th4: :: ISOCAT_1:6 for A, B being Category for F1, F2 being Functor of A,B st F1 is_transformable_to F2 holds for t being transformation of F1,F2 st t is invertible holds for a being Object of A holds F1 . a,F2 . a are_isomorphic proof end; definition let C, D be Category; redefine mode Functor of C,D means :: ISOCAT_1:def 1 ( ( for c being Object of C ex d being Object of D st it . (id c) = id d ) & ( for f being Morphism of C holds ( it . (id (dom f)) = id (dom (it . f)) & it . (id (cod f)) = id (cod (it . f)) ) ) & ( for f, g being Morphism of C st dom g = cod f holds it . (g (*) f) = (it . g) (*) (it . f) ) ); compatibility for b1 being M3( bool [: the carrier' of C, the carrier' of D:]) holds ( b1 is Functor of C,D iff ( ( for c being Object of C ex d being Object of D st b1 . (id c) = id d ) & ( for f being Morphism of C holds ( b1 . (id (dom f)) = id (dom (b1 . f)) & b1 . (id (cod f)) = id (cod (b1 . f)) ) ) & ( for f, g being Morphism of C st dom g = cod f holds b1 . (g (*) f) = (b1 . g) (*) (b1 . f) ) ) ) by ; end; :: deftheorem defines Functor ISOCAT_1:def 1 : for C, D being Category for b3 being M3( bool [: the carrier' of b1, the carrier' of b2:]) holds ( b3 is Functor of C,D iff ( ( for c being Object of C ex d being Object of D st b3 . (id c) = id d ) & ( for f being Morphism of C holds ( b3 . (id (dom f)) = id (dom (b3 . f)) & b3 . (id (cod f)) = id (cod (b3 . f)) ) ) & ( for f, g being Morphism of C st dom g = cod f holds b3 . (g (*) f) = (b3 . g) (*) (b3 . f) ) ) ); theorem Th5: :: ISOCAT_1:7 for A, B being Category for F being Functor of A,B st F is isomorphic holds for g being Morphism of B ex f being Morphism of A st F . f = g proof end; theorem Th6: :: ISOCAT_1:8 for A, B being Category for F being Functor of A,B st F is isomorphic holds for b being Object of B ex a being Object of A st F . 
a = b proof end; theorem Th7: :: ISOCAT_1:9 for A, B being Category for F being Functor of A,B st F is one-to-one holds Obj F is one-to-one proof end; definition let A, B be Category; let F be Functor of A,B; assume A1: F is isomorphic ; func F " -> Functor of B,A equals :Def2: :: ISOCAT_1:def 2 F " ; coherence F " is Functor of B,A proof end; end; :: deftheorem Def2 defines " ISOCAT_1:def 2 : for A, B being Category for F being Functor of A,B st F is isomorphic holds F " = F " ; definition let A, B be Category; let F be Functor of A,B; redefine attr F is isomorphic means :: ISOCAT_1:def 3 ( F is one-to-one & rng F = the carrier' of B ); compatibility ( F is isomorphic iff ( F is one-to-one & rng F = the carrier' of B ) ) proof end; end; :: deftheorem defines isomorphic ISOCAT_1:def 3 : for A, B being Category for F being Functor of A,B holds ( F is isomorphic iff ( F is one-to-one & rng F = the carrier' of B ) ); theorem Th8: :: ISOCAT_1:10 for A, B being Category for F being Functor of A,B st F is isomorphic holds F " is isomorphic proof end; theorem :: ISOCAT_1:11 for A, B being Category for F being Functor of A,B st F is isomorphic holds (Obj F) " = Obj (F ") proof end; theorem :: ISOCAT_1:12 for A, B being Category for F being Functor of A,B st F is isomorphic holds (F ") " = F proof end; theorem Th11: :: ISOCAT_1:13 for A, B being Category for F being Functor of A,B st F is isomorphic holds ( F * (F ") = id B & (F ") * F = id A ) proof end; theorem Th12: :: ISOCAT_1:14 for A, B, C being Category for F being Functor of A,B for G being Functor of B,C st F is isomorphic & G is isomorphic holds G * F is isomorphic proof end; :: Isomorphism of categories definition let A, B be Category; pred A,B are_isomorphic means :: ISOCAT_1:def 4 ex F being Functor of A,B st F is isomorphic ; reflexivity for A being Category ex F being Functor of A,A st F is isomorphic proof end; symmetry for A, B being Category st ex F being Functor of A,B st F is isomorphic holds ex F being Functor of B,A st F is isomorphic proof end; end; :: deftheorem defines are_isomorphic ISOCAT_1:def 4 : for A, B being Category holds ( A,B are_isomorphic iff ex F being Functor of A,B st F is isomorphic ); notation let A, B be Category; synonym A ~= B for A,B are_isomorphic ; end; theorem :: ISOCAT_1:15 for A, B, C being Category st A ~= B & B ~= C holds A ~= C proof end; theorem :: ISOCAT_1:16 for A being Category for o, m being set holds [:(1Cat (o,m)),A:] ~= A proof end; theorem :: ISOCAT_1:17 for A, B being Category holds [:A,B:] ~= [:B,A:] proof end; theorem :: ISOCAT_1:18 for A, B, C being Category holds [:[:A,B:],C:] ~= [:A,[:B,C:]:] proof end; theorem :: ISOCAT_1:19 for A, B, C, D being Category st A ~= B & C ~= D holds [:A,C:] ~= [:B,D:] proof end; definition let A, B, C be Category; let F1, F2 be Functor of A,B; assume A1: F1 is_transformable_to F2 ; let t be transformation of F1,F2; let G be Functor of B,C; func G * t -> transformation of G * F1,G * F2 equals :Def5: :: ISOCAT_1:def 5 G * t; coherence G * t is transformation of G * F1,G * F2 proof end; correctness ; end; :: deftheorem Def5 defines * ISOCAT_1:def 5 : for A, B, C being Category for F1, F2 being Functor of A,B st F1 is_transformable_to F2 holds for t being transformation of F1,F2 for G being Functor of B,C holds G * t = G * t; definition let A, B, C be Category; let G1, G2 be Functor of B,C; assume A1: G1 is_transformable_to G2 ; let F be Functor of A,B; let t be transformation of G1,G2; func t * F -> transformation of G1 * F,G2 * F equals :Def6: :: 
ISOCAT_1:def 6 t * (Obj F); coherence t * (Obj F) is transformation of G1 * F,G2 * F proof end; correctness ; end; :: deftheorem Def6 defines * ISOCAT_1:def 6 : for A, B, C being Category for G1, G2 being Functor of B,C st G1 is_transformable_to G2 holds for F being Functor of A,B for t being transformation of G1,G2 holds t * F = t * (Obj F); theorem Th18: :: ISOCAT_1:20 for A, B, C being Category for G1, G2 being Functor of B,C st G1 is_transformable_to G2 holds for F being Functor of A,B for t being transformation of G1,G2 for a being Object of A holds (t * F) . a = t . (F . a) proof end; theorem Th19: :: ISOCAT_1:21 for A, B, C being Category for F1, F2 being Functor of A,B st F1 is_transformable_to F2 holds for t being transformation of F1,F2 for G being Functor of B,C for a being Object of A holds (G * t) . a = G /. (t . a) proof end; theorem Th20: :: ISOCAT_1:22 for A, B, C being Category for F1, F2 being Functor of A,B for G1, G2 being Functor of B,C st F1 is_naturally_transformable_to F2 & G1 is_naturally_transformable_to G2 holds G1 * F1 is_naturally_transformable_to G2 * F2 proof end; definition let A, B, C be Category; let F1, F2 be Functor of A,B; assume A1: F1 is_naturally_transformable_to F2 ; let t be natural_transformation of F1,F2; let G be Functor of B,C; func G * t -> natural_transformation of G * F1,G * F2 equals :Def7: :: ISOCAT_1:def 7 G * t; coherence G * t is natural_transformation of G * F1,G * F2 proof end; correctness ; end; :: deftheorem Def7 defines * ISOCAT_1:def 7 : for A, B, C being Category for F1, F2 being Functor of A,B st F1 is_naturally_transformable_to F2 holds for t being natural_transformation of F1,F2 for G being Functor of B,C holds G * t = G * t; theorem Th21: :: ISOCAT_1:23 for A, B, C being Category for F1, F2 being Functor of A,B st F1 is_naturally_transformable_to F2 holds for t being natural_transformation of F1,F2 for G being Functor of B,C for a being Object of A holds (G * t) . a = G /. (t . a) proof end; definition let A, B, C be Category; let G1, G2 be Functor of B,C; assume A1: G1 is_naturally_transformable_to G2 ; let F be Functor of A,B; let t be natural_transformation of G1,G2; func t * F -> natural_transformation of G1 * F,G2 * F equals :Def8: :: ISOCAT_1:def 8 t * F; coherence t * F is natural_transformation of G1 * F,G2 * F proof end; correctness ; end; :: deftheorem Def8 defines * ISOCAT_1:def 8 : for A, B, C being Category for G1, G2 being Functor of B,C st G1 is_naturally_transformable_to G2 holds for F being Functor of A,B for t being natural_transformation of G1,G2 holds t * F = t * F; theorem Th22: :: ISOCAT_1:24 for A, B, C being Category for G1, G2 being Functor of B,C st G1 is_naturally_transformable_to G2 holds for F being Functor of A,B for t being natural_transformation of G1,G2 for a being Object of A holds (t * F) . a = t . (F . a) proof end; theorem Th23: :: ISOCAT_1:25 for A, B being Category for F1, F2 being Functor of A,B st F1 is_naturally_transformable_to F2 holds for a being Object of A holds Hom ((F1 . a),(F2 . a)) <> {} by NATTRA_1:def 2; theorem Th24: :: ISOCAT_1:26 for A, B being Category for F1, F2 being Functor of A,B st F1 is_naturally_transformable_to F2 holds for t1, t2 being natural_transformation of F1,F2 st ( for a being Object of A holds t1 . a = t2 . 
a ) holds t1 = t2 by NATTRA_1:19; theorem Th25: :: ISOCAT_1:27 for A, B, C being Category for F1, F2, F3 being Functor of A,B for G being Functor of B,C for s being natural_transformation of F1,F2 for s9 being natural_transformation of F2,F3 st F1 is_naturally_transformable_to F2 & F2 is_naturally_transformable_to F3 holds G * (s9 * s) = (G * s9) * (G * s) proof end; theorem Th26: :: ISOCAT_1:28 for A, B, C being Category for F being Functor of A,B for G1, G2, G3 being Functor of B,C for t being natural_transformation of G1,G2 for t9 being natural_transformation of G2,G3 st G1 is_naturally_transformable_to G2 & G2 is_naturally_transformable_to G3 holds (t9 * t) * F = (t9 * F) * (t * F) proof end; theorem Th27: :: ISOCAT_1:29 for A, B, C, D being Category for F being Functor of A,B for G being Functor of B,C for H1, H2 being Functor of C,D for u being natural_transformation of H1,H2 st H1 is_naturally_transformable_to H2 holds (u * G) * F = u * (G * F) proof end; theorem Th28: :: ISOCAT_1:30 for A, B, C, D being Category for F being Functor of A,B for G1, G2 being Functor of B,C for H being Functor of C,D for t being natural_transformation of G1,G2 st G1 is_naturally_transformable_to G2 holds (H * t) * F = H * (t * F) proof end; theorem Th29: :: ISOCAT_1:31 for A, B, C, D being Category for F1, F2 being Functor of A,B for G being Functor of B,C for H being Functor of C,D for s being natural_transformation of F1,F2 st F1 is_naturally_transformable_to F2 holds (H * G) * s = H * (G * s) proof end; theorem Th30: :: ISOCAT_1:32 for A, B, C being Category for F being Functor of A,B for G being Functor of B,C holds (id G) * F = id (G * F) proof end; theorem Th31: :: ISOCAT_1:33 for A, B, C being Category for F being Functor of A,B for G being Functor of B,C holds G * (id F) = id (G * F) proof end; theorem Th32: :: ISOCAT_1:34 for B, C being Category for G1, G2 being Functor of B,C for t being natural_transformation of G1,G2 st G1 is_naturally_transformable_to G2 holds t * (id B) = t proof end; theorem Th33: :: ISOCAT_1:35 for A, B being Category for F1, F2 being Functor of A,B for s being natural_transformation of F1,F2 st F1 is_naturally_transformable_to F2 holds (id B) * s = s proof end; definition let A, B, C be Category; let F1, F2 be Functor of A,B; let G1, G2 be Functor of B,C; let s be natural_transformation of F1,F2; let t be natural_transformation of G1,G2; func t (#) s -> natural_transformation of G1 * F1,G2 * F2 equals :: ISOCAT_1:def 9 (t * F2) * (G1 * s); correctness coherence (t * F2) * (G1 * s) is natural_transformation of G1 * F1,G2 * F2 ; ; end; :: deftheorem defines (#) ISOCAT_1:def 9 : for A, B, C being Category for F1, F2 being Functor of A,B for G1, G2 being Functor of B,C for s being natural_transformation of F1,F2 for t being natural_transformation of G1,G2 holds t (#) s = (t * F2) * (G1 * s); theorem Th34: :: ISOCAT_1:36 for A, B, C being Category for F1, F2 being Functor of A,B for G1, G2 being Functor of B,C for s being natural_transformation of F1,F2 for t being natural_transformation of G1,G2 st F1 is_naturally_transformable_to F2 & G1 is_naturally_transformable_to G2 holds t (#) s = (G2 * s) * (t * F1) proof end; theorem :: ISOCAT_1:37 for A, B being Category for F1, F2 being Functor of A,B for s being natural_transformation of F1,F2 st F1 is_naturally_transformable_to F2 holds (id (id B)) (#) s = s proof end; theorem :: ISOCAT_1:38 for B, C being Category for G1, G2 being Functor of B,C for t being natural_transformation of G1,G2 st G1 is_naturally_transformable_to G2 
holds t (#) (id (id B)) = t proof end; theorem :: ISOCAT_1:39 for A, B, C, D being Category for F1, F2 being Functor of A,B for G1, G2 being Functor of B,C for H1, H2 being Functor of C,D for s being natural_transformation of F1,F2 for t being natural_transformation of G1,G2 for u being natural_transformation of H1,H2 st F1 is_naturally_transformable_to F2 & G1 is_naturally_transformable_to G2 & H1 is_naturally_transformable_to H2 holds u (#) (t (#) s) = (u (#) t) (#) s proof end; theorem :: ISOCAT_1:40 for A, B, C being Category for F being Functor of A,B for G1, G2 being Functor of B,C for t being natural_transformation of G1,G2 st G1 is_naturally_transformable_to G2 holds t * F = t (#) (id F) proof end; theorem :: ISOCAT_1:41 for A, B, C being Category for F1, F2 being Functor of A,B for G being Functor of B,C for s being natural_transformation of F1,F2 st F1 is_naturally_transformable_to F2 holds G * s = (id G) (#) s proof end; theorem :: ISOCAT_1:42 for A, B, C being Category for F1, F2, F3 being Functor of A,B for G1, G2, G3 being Functor of B,C for s being natural_transformation of F1,F2 for s9 being natural_transformation of F2,F3 for t being natural_transformation of G1,G2 for t9 being natural_transformation of G2,G3 st F1 is_naturally_transformable_to F2 & F2 is_naturally_transformable_to F3 & G1 is_naturally_transformable_to G2 & G2 is_naturally_transformable_to G3 holds (t9 * t) (#) (s9 * s) = (t9 (#) s9) * (t (#) s) proof end; theorem Th41: :: ISOCAT_1:43 for A, B, C, D being Category for F being Functor of A,B for G being Functor of C,D for I, J being Functor of B,C st I ~= J holds ( G * I ~= G * J & I * F ~= J * F ) proof end; theorem Th42: :: ISOCAT_1:44 for A, B being Category for F being Functor of A,B for G being Functor of B,A for I being Functor of A,A st I ~= id A holds ( F * I ~= F & I * G ~= G ) proof end; definition let A, B be Category; pred A is_equivalent_with B means :: ISOCAT_1:def 10 ex F being Functor of A,B ex G being Functor of B,A st ( G * F ~= id A & F * G ~= id B ); reflexivity for A being Category ex F, G being Functor of A,A st ( G * F ~= id A & F * G ~= id A ) proof end; symmetry for A, B being Category st ex F being Functor of A,B ex G being Functor of B,A st ( G * F ~= id A & F * G ~= id B ) holds ex F being Functor of B,A ex G being Functor of A,B st ( G * F ~= id B & F * G ~= id A ) ; end; :: deftheorem defines is_equivalent_with ISOCAT_1:def 10 : for A, B being Category holds ( A is_equivalent_with B iff ex F being Functor of A,B ex G being Functor of B,A st ( G * F ~= id A & F * G ~= id B ) ); notation let A, B be Category; synonym A,B are_equivalent for A is_equivalent_with B; end; theorem :: ISOCAT_1:45 for A, B being Category st A ~= B holds A is_equivalent_with B proof end; theorem Th44: :: ISOCAT_1:46 for A, B, C being Category st A,B are_equivalent & B,C are_equivalent holds A,C are_equivalent proof end; definition let A, B be Category; assume A1: A,B are_equivalent ; mode Equivalence of A,B -> Functor of A,B means :Def11: :: ISOCAT_1:def 11 ex G being Functor of B,A st ( G * it ~= id A & it * G ~= id B ); existence ex b1 being Functor of A,B ex G being Functor of B,A st ( G * b1 ~= id A & b1 * G ~= id B ) by A1; end; :: deftheorem Def11 defines Equivalence ISOCAT_1:def 11 : for A, B being Category st A,B are_equivalent holds for b3 being Functor of A,B holds ( b3 is Equivalence of A,B iff ex G being Functor of B,A st ( G * b3 ~= id A & b3 * G ~= id B ) ); theorem :: ISOCAT_1:47 for A being Category holds id A is Equivalence of A,A proof 
end; theorem :: ISOCAT_1:48 for A, B, C being Category st A,B are_equivalent & B,C are_equivalent holds for F being Equivalence of A,B for G being Equivalence of B,C holds G * F is Equivalence of A,C proof end; theorem Th47: :: ISOCAT_1:49 for A, B being Category st A,B are_equivalent holds for F being Equivalence of A,B ex G being Equivalence of B,A st ( G * F ~= id A & F * G ~= id B ) proof end; theorem Th48: :: ISOCAT_1:50 for A, B being Category for F being Functor of A,B for G being Functor of B,A st G * F ~= id A holds F is faithful proof end; theorem :: ISOCAT_1:51 for A, B being Category st A,B are_equivalent holds for F being Equivalence of A,B holds ( F is full & F is faithful & ( for b being Object of B ex a being Object of A st b,F . a are_isomorphic ) ) proof end; :: The elimination of the Id selector caused the necessity to :: introduce corresponding functor because 'the Id of C' is sometimes :: separately used, not applied to an object (2012.01.25, A.T.) definition let C be Category; deffunc H1( Object of C) -> Morphism of$1,$1 = id$1; func IdMap C -> Function of the carrier of C, the carrier' of C means :: ISOCAT_1:def 12 for o being Object of C holds it . o = id o; existence ex b1 being Function of the carrier of C, the carrier' of C st for o being Object of C holds b1 . o = id o proof end; uniqueness for b1, b2 being Function of the carrier of C, the carrier' of C st ( for o being Object of C holds b1 . o = id o ) & ( for o being Object of C holds b2 . o = id o ) holds b1 = b2 proof end; end; :: deftheorem defines IdMap ISOCAT_1:def 12 : for C being Category for b2 being Function of the carrier of C, the carrier' of C holds ( b2 = IdMap C iff for o being Object of C holds b2 . o = id o );
Electromagnetic spectrum

The electromagnetic spectrum is the collective term for all known frequencies and their linked wavelengths of the known photons (electromagnetic radiation). The "electromagnetic spectrum" of an object has a different meaning, and is instead the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object.
What is the inverse of y=log(x-3)?

Nov 1, 2015

$y = 10^x + 3$

Explanation:

The inverse of a logarithmic function $y = \log_a x$ is the exponential function $y = a^x$.

$[1]\quad y = \log(x - 3)$

First we must convert this to exponential form.

$[2]\quad \Leftrightarrow 10^y = x - 3$

Isolate $x$ by adding $3$ to both sides.

$[3]\quad 10^y + 3 = x - 3 + 3$

$[4]\quad x = 10^y + 3$

Finally, switch the positions of $x$ and $y$ to get the inverse function.

$[5]\quad \color{blue}{y = 10^x + 3}$
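As a quick numerical check (my addition, not part of the original answer), composing $f$ with the claimed inverse returns the input:

import math

def f(x):
    return math.log10(x - 3)        # y = log(x - 3), base 10

def f_inv(x):
    return 10**x + 3                # the inverse found above

for x in (0.5, 1.0, 2.0):
    print(round(f(f_inv(x)), 10))   # prints x back: 0.5, 1.0, 2.0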
# Figure 2

Ratio of inclusive photon invariant cross sections obtained with the PCM and EMC methods to the TCM fit, Eq. (eq:Bylinkin), of the combined inclusive photon spectrum in pp collisions at $\sqrt{s}=2.76$ TeV (left) and 8 TeV (right). Statistical uncertainties are given by vertical lines and systematic uncertainties are visualized by the height of the boxes, while the bin widths are represented by the widths of the boxes.
# How Is Area Related To A Definite Integral?

You might have seen the marketing campaign for the Super Bass-o-Matic 76. Let's look at some options for finding the increased costs when changing production of Super Bass-o-Matic blenders from 200 blenders to 800 blenders.

Suppose the marginal cost is given by

$$\displaystyle {C}'(x)=30-0.02x \text{ dollars per blender}$$

By finding the area of rectangles, we get values that indicate an increase in cost. For instance, the first term of the left hand sum below,

$$\displaystyle \left( 26\,\frac{\$}{\text{blender}} \right)\left( 100\text{ blenders} \right)=\$2600$$

This means that if we estimate the cost per blender as 26 dollars per blender for all 100 blenders from 200 to 300 blenders, the increased cost is $2600. If we continue this for more rectangles until 800, the sum of the areas gives an estimate for the increased cost from increasing production from 200 to 800. In this case the left hand sum is $12,600 and the right hand sum is $11,400 (since this marginal cost is decreasing, the left-endpoint heights are the larger ones); the difference between these estimates is $1200.

If we do the same process for 60 rectangles (each being 10 wide), we get additional estimates. In this case, the left hand sum is $12,060 and the right hand sum is $11,940. The difference between the estimates for 60 rectangles is $120. Ten times as many rectangles leads to a tenth of the difference.

We get a similar picture for 600 rectangles. The left hand sum is $12,006 and the right hand sum is $11,994. In this case the difference between the estimates is $12.

Also notice that as the number of rectangles gets larger, the area in the rectangles looks more and more like the area under the function and above the horizontal axis. Let's summarize what happens as the number of rectangles increases.

| Number of Rectangles | LHS | RHS | Difference |
|---|---|---|---|
| 6 | 12600 | 11400 | 1200 |
| 60 | 12060 | 11940 | 120 |
| 600 | 12006 | 11994 | 12 |

You might guess that 6000 rectangles would lead to a difference of $1.20, and you would be correct. The left hand sum is $12,000.60 and the right hand sum is $11,999.40. In fact, as the number of rectangles gets larger and larger (each rectangle getting narrower and narrower), each estimate gets closer and closer to $12,000. This is the exact area in the blue shaded region under the function, above the horizontal axis, and between 200 and 800.

To convince yourself that this is the case, we can compute this area by breaking the region up into a rectangle and a triangle. The area is

$$\displaystyle 14\cdot 600+\frac{1}{2}\cdot 600\cdot 12=12000$$

The exact area under the function is represented with a definite integral,

$$\displaystyle \int\limits_{200}^{800}{(30-0.02x)\,dx}=12000$$
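To see the convergence numerically, here is a short Python sketch (my addition, not from the original post) that computes both endpoint sums for several rectangle counts:

# Endpoint Riemann sums for the marginal cost C'(x) = 30 - 0.02x on [200, 800].
def marginal_cost(x):
    return 30 - 0.02 * x

def endpoint_sums(n, a=200, b=800):
    """Left- and right-endpoint Riemann sums with n rectangles."""
    dx = (b - a) / n
    left = sum(marginal_cost(a + i * dx) * dx for i in range(n))
    right = sum(marginal_cost(a + (i + 1) * dx) * dx for i in range(n))
    return left, right

for n in (6, 60, 600, 6000):
    left, right = endpoint_sums(n)
    print(n, round(left, 2), round(right, 2), round(left - right, 2))
# Both sums squeeze down to the exact value of the integral, 12000.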
# ON PARTITION CONGRUENCES FOR OVERCUBIC PARTITION PAIRS

- Kim, Byung-Chan (School of Liberal Arts, Seoul National University of Science and Technology)
- Published: 2012.07.31

#### Abstract

In this note, we investigate partition congruences for a certain type of partition function, named the overcubic partition pairs in light of the literature. Let $\overline{cp}(n)$ be the number of overcubic partition pairs. Then we will prove the following congruences:

$$\overline{cp}(8n+7) \equiv 0 \pmod{64} \quad\text{and}\quad \overline{cp}(9m+3) \equiv 0 \pmod{3}.$$
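The congruences can be checked numerically for small arguments. The sketch below (my addition) assumes the generating function for overcubic partition pairs is the square of the overcubic partition generating function, $\prod_{n\ge1}\bigl((1+q^n)(1+q^{2n})/((1-q^n)(1-q^{2n}))\bigr)^2$, rewritten in the equivalent form $\prod_{n\ge1}(1-q^{4n})^2/((1-q^n)^4(1-q^{2n})^2)$; if the paper's definition differs, the product must be adjusted.

# Truncated series coefficients of the assumed generating function above.
N = 120
c = [0] * (N + 1)
c[0] = 1

def div_by(k):
    # multiply the truncated series by 1 / (1 - q^k)
    for i in range(k, N + 1):
        c[i] += c[i - k]

def mul_by(k):
    # multiply the truncated series by (1 - q^k)
    for i in range(N, k - 1, -1):
        c[i] -= c[i - k]

for n in range(1, N + 1):
    for _ in range(4):
        div_by(n)          # 1 / (1 - q^n)^4
    for _ in range(2):
        div_by(2 * n)      # 1 / (1 - q^{2n})^2
    for _ in range(2):
        mul_by(4 * n)      # (1 - q^{4n})^2

print(all(c[m] % 64 == 0 for m in range(7, N + 1, 8)))  # cp(8n+7) = 0 (mod 64)
print(all(c[m] % 3 == 0 for m in range(3, N + 1, 9)))   # cp(9m+3) = 0 (mod 3)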
Comment 1 (Urs, Feb 4th 2014; edited Feb 4th 2014)

The entry (infinity,1)-Kan extension is still a sad stub which you shouldn't look at if you have better things to do. But I have now briefly added at least a few more specific pointers to HTT, in particular to the pointwise-ness issue. But just pointers, essentially no text for the moment. (If you feel energetic, be invited to turn the entry into something prettier!)

Comment 2 (David_Corfield, Jul 26th 2016)

At super formal smooth infinity-groupoid, how do I know what the pattern of adjoints on $(\infty, 1)$-sheaves will look like from the diagram of sites?:

$\ast \stackrel{\longleftarrow}{\hookrightarrow} CartSp \stackrel{\hookrightarrow}{\longleftarrow} CartSp\rtimes InfPoint \stackrel{\longleftarrow}{\stackrel{\hookrightarrow}{\longleftarrow}} CartSp \rtimes SuperPoint \,.$

Any arrow there induces a map in the opposite direction on sheaves, and this has left and right adjoints by (infinity,1)-Kan extension. But what more can I tell from the kinds of maps between sites? E.g., when there is an adjunction between sites, does this make one of the induced maps on sheaves coincide with one of the adjoints of the induced map from the adjoint? And does a map of sites being an inclusion make a difference to the induced maps?

Comment 3 (David_Corfield, Jul 26th 2016)

The page restriction and extension of sheaves is relevant but rather isolated.

Comment 4 (Urs, Jul 26th 2016; edited Jul 26th 2016)

> E.g., when there is an adjunction between sites, does this make one of the induced maps on sheaves coincide with one of the adjoints of the induced map from the adjoint?

Yes! If $i \dashv p$ then $i_! \dashv p_! \simeq i^\ast \dashv i_\ast \simeq p_\ast$ and so on.

Comment 5 (David_Corfield, Jul 26th 2016)

So I guess there should be a further right adjoint generated from

$CartSp\rtimes InfPoint \stackrel{\longleftarrow}{\stackrel{\hookrightarrow}{\longleftarrow}} CartSp \rtimes SuperPoint \,,$

a right Kan extension of the map induced by the lower arrow.

Comment 6 (Urs, Jul 26th 2016)

So the Kan extension is a functor not between the sites, but between the toposes over these sites. A single morphism of sites induces an adjoint triple on presheaves over the sites. An adjoint pair of morphisms between sites induces an adjoint quadruple on presheaves, by the relation in #4. Under certain conditions some of these adjoints then descend also to sheaves.

Comment 7 (David_Corfield, Jul 26th 2016)

Yes, I know that. I was just saying that each of those three arrows in #5 generates an adjoint triple on presheaves, which amounts to an adjoint quintuple between toposes. But maybe we neglect the rightmost adjoint as not descending to sheaves?

Comment 8 (Urs, Jul 26th 2016)

Oh, sorry, I didn't read properly. Yes, that's right. There might be a further adjoint.
# Infinite universe and big bang singularity

1. Jun 27, 2012

### kuartus4

Hello. Cosmologists leave open the possibility that the universe as a whole may be infinitely big. My question is, does that mean that the entire infinite universe was compressed into the initial singularity? And how can a universe go from a singularity to being infinitely big in a finite time?

2. Jun 27, 2012

### Number Nine

"Singularity" doesn't mean "point in space"; it refers to a mathematical phenomenon that signals that our models no longer work (or, at least, probably don't describe physical reality). A singularity is a quirk of the equations, not an actual, physical "thing".

3. Jun 27, 2012

### phinds

To add to what Number Nine said, the universe is known to have been MUCH smaller at one Planck time after the singularity than it is now, but it MIGHT have been infinite then (which of course implies infinite now). If it was not infinite then, it is not infinite now. Our observable universe (currently some 90+ billion light years in diameter) at that time is stated in various reports as sizes ranging from a golf ball down to much smaller. Reports that put it as a point are patently ridiculous.

4. Jun 27, 2012

### marcus

Well said. Thanks, it's an important message to get across to new members. People get fooled by the word "singularity" because it sounds like it refers to a "single point". But instead it refers to a failure of some man-made mathematics. History shows that in the past physicists have gotten rid of singularities in various other theories by improving the equations so they don't blow up. Now they are working on curing the singularity that occurs in the conventional model of the cosmos, right at the start of expansion.

An equation does not necessarily fail at a single isolated point. It can fail everywhere along a wide frontier--at infinitely many points. In which case the singularity is said to occur throughout the whole region where the model breaks down.

As phinds rightly indicated, it is logically possible that our universe began expanding with an infinite volume. It would necessarily have begun infinite if it is spatially infinite today. We do not yet know whether to consider space finite or infinite. The region we are now looking at is evidently not the whole thing.

Last edited: Jun 27, 2012

5. Jun 27, 2012

### jcsd

Describing singularities in general in GR is a problem, let alone trying to describe them physically (not that I think singularities are physical). That said, some singularities do, for example, resemble points in space. An infinite universe is always infinite until the point of the singularity; however, at the singularity all distances between objects, no matter how far apart they are later in time, go to zero.

6. Jun 27, 2012

### marcus

At the classical model's singularity, distances between objects are not defined. What "objects"? The model blows up; it no longer applies to nature, and it is meaningless to talk about distances between objects being this, that, or the other thing. That's my take on it. If you want to imagine differently, fine.

There are now rival models, waiting to be tested, which go back further in time to before the start of expansion, where the classic 1915 model blows up. I'd guess you know about them and which are currently getting the most research attention. In the new models it is NOT true that all distances are zero at the start of expansion. However, with the new models at least you might be able to talk meaningfully about that kind of stuff.
What the highest density is, that is reached at the moment the bounce happens, and so on... The idea of objects and distances between them is somewhat nebulous under the circumstances, but the energy density (how much is crowded into a unit volume) is finite and well-defined and can sort of take the place of the "distance between objects" idea.

Last edited: Jun 27, 2012

7. Jun 27, 2012

### jcsd

I said "go to zero" (well, I actually made a grammatical error due to editing and said "goes", but that's neither here nor there).

edit: just to make this clear, I am saying that as t->0, where 0 is the big bang singularity, d->0 (where d is the distance between any two objects).

Last edited: Jun 28, 2012

8. Jun 28, 2012

### DrStupid

This is valid for finite distances, but in an infinite universe there are also infinite distances.

9. Jun 28, 2012

### Number Nine

Eh... that's a touchy issue. Even on the plane (which is infinite), any two defined points have a finite distance between them. It's entirely possible that two points could never reach each other (say, due to FTL expansion), but the distance between any two points is finite.

10. Jun 29, 2012

### jcsd

No, there are no infinite distances, even in an infinite Universe - in standard big bang theory anyway.

11. Jun 29, 2012

### DrStupid

Assume we have n+1 points in a row with equal distances dr. Then the distance between point 0 and point n is r = n·dr, and the limit of r as n -> oo (this is possible in a universe with infinite size) is infinite.

12. Jun 29, 2012

### DeepSpace9

Does the observable universe make up 80% of the universe? Or 20%? I have heard different numbers..

13. Jun 29, 2012

### Calimero

Nobody can tell you that. There are some lower-bound estimates which say that the whole thing must have a considerably larger volume than the OU (hate to dig for the exact number right now), but then again it can be infinite, which means that it is infinitely larger than the OU.

14. Jun 29, 2012

### jcsd

But assuming $n \in \mathbb{N}$, then n is never equal to ∞ and hence r is never equal to ∞ either. Of course that doesn't conclusively show the non-existence of infinite proper distances in standard big bang cosmology; however, it is the case.

15. Jun 29, 2012

### DrStupid

Of course not. ∞ is not a number. But if the limit of r is ∞, then you get r' = 0·∞ when the scale factor goes to zero. This term is not necessarily zero. It could even be infinite.

16. Jun 29, 2012

### kuartus4

When I was reading Roger Penrose's book The Road to Reality, he seemed to say that general relativity should be regarded more highly than quantum mechanics, and that general relativity should not have to compromise to fit with quantum physics but the other way around. I could be mistaken. Most of the book went over my head. But if Penrose is right about relativity being correct, then doesn't that mean that, per the Hawking-Penrose singularity theorems, a primordial singularity before the hot big bang is inevitable?

17. Jun 29, 2012

### Number Nine

The problem is that you're now talking about something completely different. The distance between any two given points is finite; the fact that the distance tends towards infinity as you move the points further apart only means that distance is unbounded, which is something different entirely.

18. Jun 29, 2012

### DrStupid

I am still talking about infinite distances. Distances between given points are something completely different. The latter are finite and shrink to zero in the big bang singularity.
The former do not need to be zero in the singularity, which therefore does not need to be a point.

19. Jun 29, 2012

### Naty1

kuartus: no, he did not say that GR is predominant.... In fact on page 713 he points out that for a black hole singularity, and not the second part as well:

http://en.wikipedia.org/wiki/Singularity_theorem

This again shows an emphasis in favor of quantum theory over relativity.

20. Jun 30, 2012

### jcsd

There is no such thing as 'infinite distances', at least not in this scenario. What you are trying to say is that whilst all distances tend to zero, the length of, say, an infinitely-long curve in space will not tend to zero but will remain infinite as t->0. This is somewhat moot, as the singularity is not a point on the manifold.
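For readers following along, the point under dispute can be phrased compactly with the standard FRW proper-distance relation (my gloss, not from the thread):

$$d(t) = a(t)\,\chi$$

where $\chi$ is the fixed comoving separation of two points and $a(t)$ is the scale factor. As $t \to 0$, $a(t) \to 0$, so $d(t) \to 0$ for every finite $\chi$; but in a spatially infinite model $\chi$ is unbounded, which is why the total spatial extent can stay infinite even while every pairwise distance shrinks to zero.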
### On the local description of varieties

The degree of a curve is the degree of the defining polynomial. For example, a line is a curve of degree $1$.

Let $F$ be a curve in $\Bbb{A}^2$, given by a polynomial in $X,Y$, and let $P=(a,b)\in F$. Then $P$ will be called a simple point if $F_X(P)\neq 0$ or $F_Y(P)\neq 0$. What does this condition really imply? It means that the function doesn't locally stop changing as we move away from $P$. It is not a local maximum or minimum (it's not an extremal point).

In this case the line $F_X(P)(X-a)+F_Y(P)(Y-b)=0$ is called the tangent line. Why do we get tangent lines whose equations look like this? This is analogous to the dot product rule for writing down the equation of a plane (or any hyperplane in general). The line is the hyperplane in $\Bbb{A}^2$.

Take an equation $F$, and write it down as $F=F_m+F_{m+1}+\dots+F_n$, where $F_m\neq 0$. Here each $F_i$ is a homogeneous component. Then $m$ is the multiplicity of $F$ at $(0,0)$. What does this mean? As we approach $(0,0)$, the higher degree terms approach $0$ much faster than $F_m$. Hence, $F$ "looks like" $F_m$ near $(0,0)$. An analogy would be $0=xy+(xy)^2+(xy)^3$: near $(0,0)$, the function "looks like" $xy=0$. Hence, it makes sense to call $m$ the multiplicity of the curve at $(0,0)$.

But how do we define the multiplicity of the curve at an arbitrary point on it? Why only $(0,0)$? We can create new variables $x'=(x-a)$ and $y'=(y-b)$, and then re-write the equation. The new equation contains the point $(0,0)$. Now we can follow the same procedure.

As $F_m$ is a homogeneous polynomial in two variables, it can be factorized (over an algebraically closed field) into a product of linear polynomials, raised to various exponents. Is it weird that the function looks like a bunch of lines near $(0,0)$? No. Locally, every polynomial, and in fact every function, looks linear near a point. Here, because we're talking about varieties, and hence possibly a product of polynomials, the local description of a polynomial is a bunch of lines. Nothing weird in it at all.

Why are the factors of $F_m$ tangents at $(0,0)$? Are we taking the "local" argument too far? No. It's a revelation though. I have never seen it before. It makes complete mathematical sense. Say we take $y-x^2=0$. Then the tangent at $(0,0)$ is $y=0$, which is in fact true.
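Here is a small computational illustration of the above (a sketch assuming SymPy; the nodal cubic $F = y^2 - x^2 - x^3$ is my example, not the author's). The lowest-degree homogeneous component gives the multiplicity at the origin, and its factorization gives the tangent lines:

import sympy as sp

x, y = sp.symbols('x y')
F = y**2 - x**2 - x**3

# Split F into homogeneous components and pick the lowest-degree one, F_m.
poly = sp.Poly(F, x, y)
components = {}
for monom, coeff in poly.terms():
    d = sum(monom)
    components[d] = components.get(d, 0) + coeff * x**monom[0] * y**monom[1]
m = min(components)
Fm = components[m]

print(m)              # 2: the multiplicity of F at (0, 0)
print(sp.factor(Fm))  # factors into the two tangent lines y = x and y = -x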
# Proof that $(AA^{-1}=I) \Rightarrow (AA^{-1} = A^{-1}A)$ [duplicate]

I'm trying to prove a pretty simple problem: commutativity of the multiplication of a matrix and its inverse. But I'm not sure if my proof is correct, because I'm not very experienced. Could you, please, take a look at it?

### My proof:

- We know that $AA^{-1}=I$, where $I$ is an identity matrix and $A^{-1}$ is an inverse matrix.
- I want to prove that it implies $AA^{-1}=A^{-1}A$

\begin{align} AA^{-1}&=I\\ AA^{-1}A&=IA\\ AX&=IA \tag{$X=A^{-1}A$}\\ AX&=A \end{align}

At this point we can see that $X$ must be a multiplicative identity for matrix $A \Rightarrow X$ must be an identity matrix $I$.

\begin{align} X = A^{-1}A &= I\\ \underline{\underline{AA^{-1} = I = A^{-1}A}} \end{align}

- How do you have $A^{-1}$ defined? Is $A$ a square matrix? It is in general not true for nonsquare matrices that if $AB=I$ then $BA=I$, so the fact that $A$ is square must come into play somehow in the proof. – JMoravitz Oct 22 '16 at 22:34
- Further, the fact that $AX=A$ does not imply that $X$ is an identity matrix. (It only implies this if $A$ is invertible, but the proof would require left-multiplication by $A^{-1}$ to cancel out, and that would be circular logic since you are explicitly trying to show $A^{-1}A=I$.) – JMoravitz Oct 22 '16 at 22:36
- @JMoravitz $A^{-1}$ is supposed to be an inverse matrix. I'll add that info into the question – Eenoku Oct 22 '16 at 22:37
- @JMoravitz yes, it's usually defined by $AA^{-1}=I=A^{-1}A$. I thought that the other part of the equation ($I = A^{-1}A$) could be deduced from the first one, so that it could be omitted in the definition. – Eenoku Oct 22 '16 at 22:47

Your claim is not quite true. Consider the example

\begin{align} \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & 1\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}. \end{align}

Suppose $A, B$ are square matrices such that $AB = I$. Observe

\begin{align} BA = BIA = B(AB)A = (BA)^2 \ \ \Rightarrow \ \ BA(I-BA) = 0. \end{align}

Moreover, using the fact that $AB$ invertible implies $A$ and $B$ are invertible (which is true only in finite-dimensional vector spaces), it follows that

\begin{align} I-BA=0. \end{align}

Note: we have used the fact that $A, B$ are square matrices when we insert $I = AB$ between $B$ and $A$.
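A quick numeric illustration of why squareness matters (my addition, assuming NumPy), using the counterexample above:

import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0]])    # 2x3
B = np.array([[1, 0],
              [0, 1],
              [0, 0]])       # 3x2

print(A @ B)  # the 2x2 identity: AB = I holds
print(B @ A)  # 3x3 with a zero bottom-right entry: BA is NOT the identity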
Martin Escardo 2011.

\begin{code}
{-# OPTIONS --without-K #-}

module DecidableAndDetachable where

open import SetsAndFunctions
open import CurryHoward
open import Two
open import Equality
\end{code}

We look at decidable propositions and subsets (using the terminology "detachable" for the latter).

\begin{code}
decidable : Prp → Prp
decidable A = A ∨ ¬ A

¬¬-elim : {A : Prp} → decidable A → ¬¬ A → A
¬¬-elim (in₀ a) f = a
¬¬-elim (in₁ g) f = ⊥-elim(f g)

negation-preserves-decidability : {A : Prp} → decidable A → decidable(¬ A)
negation-preserves-decidability (in₀ a) = in₁(λ f → f a)
negation-preserves-decidability (in₁ g) = in₀ g

which-of : {A B : Prp} → A ∨ B → ∃ \(b : ₂) → (b ≡ ₀ → A) ∧ (b ≡ ₁ → B)
which-of (in₀ a) = ∃-intro ₀ (∧-intro (λ r → a) (λ ()))
which-of (in₁ b) = ∃-intro ₁ (∧-intro (λ ()) (λ r → b))
\end{code}

Notice that in Agda the term λ () is a proof of an implication that holds vacuously, by virtue of the premise being false. In the above example, the first occurrence is a proof of ₀ ≡ ₁ → B, and the second one is a proof of ₁ ≡ ₀ → A.

The following is a special case we are interested in:

\begin{code}
truth-value : {A : Prp} → decidable A → ∃ \(b : ₂) → (b ≡ ₀ → A) ∧ (b ≡ ₁ → ¬ A)
truth-value = which-of
\end{code}

Notice that this b is unique (Agda exercise) and that the converse also holds. In classical mathematics it is posited that all propositions have binary truth values, irrespective of whether they have BHK-style witnesses. And this is precisely the role of the principle of excluded middle in classical mathematics.

The following requires choice, which holds in BHK-style constructive mathematics:

\begin{code}
indicator : {X : Set} {A B : X → Prp} →
  (∀(x : X) → A x ∨ B x) → ∃ \(p : X → ₂) → ∀(x : X) → (p x ≡ ₀ → A x) ∧ (p x ≡ ₁ → B x)
indicator {X} {A} {B} h = ∃-intro (λ x → ∃-witness(lemma₁ x)) (λ x → ∃-elim(lemma₁ x))
 where
  lemma₀ : ∀(x : X) → (A x ∨ B x) → ∃ \b → (b ≡ ₀ → A x) ∧ (b ≡ ₁ → B x)
  lemma₀ x = which-of {A x} {B x}

  lemma₁ : ∀(x : X) → ∃ \b → (b ≡ ₀ → A x) ∧ (b ≡ ₁ → B x)
  lemma₁ = λ x → lemma₀ x (h x)
\end{code}

We again have a particular case of interest. Detachable subsets, defined below, are often known as decidable subsets; we use the slightly non-universal terminology "detachable" here.

\begin{code}
detachable : {X : Set} → (A : X → Prp) → Prp
detachable {X} A = ∀ x → decidable(A x)

characteristic-function : {X : Set} {A : X → Prp} →
  detachable A → ∃ \(p : X → ₂) → ∀(x : X) → (p x ≡ ₀ → A x) ∧ (p x ≡ ₁ → ¬(A x))
characteristic-function = indicator

co-characteristic-function : {X : Set} {A : X → Prp} →
  detachable A → ∃ \(p : X → ₂) → ∀(x : X) → (p x ≡ ₀ → ¬(A x)) ∧ (p x ≡ ₁ → A x)
co-characteristic-function d = indicator(λ x → ∨-commutative(d x))
\end{code}

Notice that p is unique (Agda exercise - you will need extensionality).
sarah_ron2002 | April 10, 2007, 00:40

Question about Postprocessing Using UDF (in tutorial)

Dear all,

I have a question for the tutorial provided in UDF: Postprocessing Using User-Defined Scalars (I have included it at the end of this post for your reference). My question is about setting up the equation for the UDS. In the tutorial, it says "In addition to compiling this UDF, as described in Chapter 5, you will need to enable the solution of a user-defined scalar transport equation in FLUENT.". What are the equations for the two UDS (T4, MAG_GRAD_T4)? How to determine the diffusivity, flux and source terms?

Any response will be highly welcome! Thank you very much for your help!

Sincerely,
sarah

===========================================================
Postprocessing Using User-Defined Scalars
(http://venus.imp.mx/hilario/SuperCom...ml/node107.htm)

Below is an example of a compiled UDF that computes the gradient of temperature to the fourth power, and stores its magnitude in a user-defined scalar. The computed temperature gradient can, for example, be subsequently used to plot contours. Although the practical application of this UDF is questionable, its purpose here is to show the methodology of computing gradients of arbitrary quantities that can be used for postprocessing.

/************************************************************************/
/* UDF for computing the magnitude of the gradient of T^4               */
/************************************************************************/
#include "udf.h"

/* Define which user-defined scalars to use. */
enum
{
  T4,
  MAG_GRAD_T4,
  N_REQUIRED_UDS
};

DEFINE_ADJUST(adjust_fcn, domain)
{
  Thread *t;
  cell_t c;
  face_t f;

  /* Make sure there are enough user-defined scalars. */
  if (n_uds < N_REQUIRED_UDS)
    Internal_Error("not enough user-defined scalars allocated");

  /* Fill first UDS with temperature raised to fourth power. */
  thread_loop_c (t,domain)
  {
    if (NULL != THREAD_STORAGE(t,SV_UDS_I(T4)))
    {
      begin_c_loop (c,t)
      {
        real T = C_T(c,t);
        C_UDSI(c,t,T4) = pow(T,4.);
      }
      end_c_loop (c,t)
    }
  }

  thread_loop_f (t,domain)
  {
    if (NULL != THREAD_STORAGE(t,SV_UDS_I(T4)))
    {
      begin_f_loop (f,t)
      {
        real T = 0.;
        if (NULL != THREAD_STORAGE(t,SV_T))
          T = F_T(f,t);
        else if (NULL != THREAD_STORAGE(t->t0,SV_T))
          T = C_T(F_C0(f,t),t->t0);
        F_UDSI(f,t,T4) = pow(T,4.);
      }
      end_f_loop (f,t)
    }
  }

  /* Fill second UDS with the magnitude of the gradient of the first. */
  thread_loop_c (t,domain)
  {
    if (NULL != THREAD_STORAGE(t,SV_UDS_I(MAG_GRAD_T4)) &&
        NULL != T_STORAGE_R_NV(t,SV_UDSI_G(T4)))
    {
      begin_c_loop (c,t)
      {
        C_UDSI(c,t,MAG_GRAD_T4) = NV_MAG(C_UDSI_G(c,t,T4));
      }
      end_c_loop (c,t)
    }
  }

  thread_loop_f (t,domain)
  {
    if (NULL != THREAD_STORAGE(t,SV_UDS_I(MAG_GRAD_T4)) &&
        NULL != T_STORAGE_R_NV(t->t0,SV_UDSI_G(T4)))
    {
      begin_f_loop (f,t)
      {
        F_UDSI(f,t,MAG_GRAD_T4) = NV_MAG(C_UDSI_G(F_C0(f,t),t->t0));
      }
      end_f_loop (f,t)
    }
  }
}

The conditional statement

if (NULL != THREAD_STORAGE(t,SV_UDS_I(T4)))

is used to check if the storage for the user-defined scalar with index T4 has been allocated, while

NULL != T_STORAGE_R_NV(t,SV_UDSI_G(T4))

checks whether the storage of the gradient of the user-defined scalar with index T4 has been allocated.

In addition to compiling this UDF, as described in Chapter 5, you will need to enable the solution of a user-defined scalar transport equation in FLUENT.

Define $\rightarrow$ User-Defined $\rightarrow$ Scalars...

Refer to here in the User's Guide for UDS equation theory and details on how to set up scalar equations.

sarah_ron | April 10, 2007, 16:17

Re: Question about Postprocessing Using UDF (in tutorial)

Any suggestion? Thanks!

-Sarah
# Contents

## Idea

What is called Weyl quantization is a method of quantization applicable to symplectic manifolds which are symplectic vector spaces or quotients of these by discrete groups (tori).

In Weyl quantization of the flat space $\mathbf{R}^n$, the classical observables of the form $f(x,p)$ are replaced by suitable operators which, in the case when $f$ is a polynomial, correspond to writing $f$ with $x$ and $p$ replaced by the noncommutative variables $x$ and $i h\frac{\partial}{\partial x}$ in symmetric or Weyl ordering. This means that all possible orderings between $x$ and $i h\frac{\partial}{\partial x}$ are summed with equal weight. More generally, one can extend this rule to more general functions via integral formulas due to Weyl and Wigner. This is also useful in the foundations of the theory of pseudodifferential operators.

## References

- Lars Hörmander, The Weyl calculus of pseudodifferential operators, Comm. Pure Appl. Math. 32 (1979), no. 3, 360–444. MR80j:47060, doi
- Robert F. V. Anderson, The Weyl functional calculus, J. Functional Analysis 4:240–267, 1969, MR635128; On the Weyl functional calculus, J. Functional Analysis 6:110–115, 1970, MR262857
- E. M. Stein, Harmonic analysis: real variable methods, orthogonality, and oscillatory integrals, Princeton University Press 1993
- M. W. Wong, Weyl transforms, the heat kernel and Green function of a degenerate elliptic operator, Annals Global Anal. Geom. 28 (2005) 271–283
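To make the "all orderings with equal weight" rule concrete, here is a small sketch (my addition, not part of the entry). It uses SymPy noncommutative symbols as purely formal stand-ins for the operators; no commutation relation is imposed, we only enumerate the distinct orderings of a monomial:

from itertools import permutations
import sympy as sp

x, p = sp.symbols('x p', commutative=False)

def weyl_order(m, n):
    """Average x^m p^n over all distinct orderings of its factors."""
    factors = (x,) * m + (p,) * n
    orderings = set(permutations(factors))
    return sp.Rational(1, len(orderings)) * sum(sp.Mul(*o) for o in orderings)

print(weyl_order(1, 1))  # (x*p + p*x)/2
print(weyl_order(1, 2))  # (x*p**2 + p*x*p + p**2*x)/3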
03 July 2006

I searched the web for a while to find a WPF HLS color selector like the one Interactive Designer is equipped with. Since I didn't find anything, I thought about creating my own. The most difficult task was to convert .NET RGB color values to HLS values (the difference is explained here). In my opinion the HLS color system is superior to the RGB system. After hours of searching I found a VBA sample and ported it to WPF (the code can be found at the bottom of the article). The next thing to do was to create a new slider for the Hue and a triangle for the Lightness and Saturation values.

The Slider

It was easy to create a color slider since it is possible to override the default template. I created a gradient from hue 0 over hue 0.1, hue 0.2 … to hue 1.0 and put it into the background. At last I restyled the value selection thumb using Interactive Designer and the result looked like this.

The Triangle

The triangle was a bit more complex than the slider since I had to start from scratch. I created the triangle shape using Paths and Interactive Designer. Then I assigned 3 backgrounds with different opacity masks to it:

1. Background Black: from lower left to upper right
2. Background White: from upper left to lower right
3. Background Colored: HLS(CurrentHue, 0.5, 1), from right to left

To show the currently selected Lightness and Saturation I added an ellipse. The next step was to attach MouseDown, MouseUp and MouseMove event handlers to the triangle. Whenever the mouse is moved over the triangle with the left button pressed, the ellipse should follow. To ensure that the ellipse is moved correctly when you leave the triangle with the left button pressed, I assigned Mouse.Capture(element, CaptureMode.Element) to the element on MouseDown and Mouse.Capture(element, CaptureMode.None) on MouseUp. Last but not least I had to write the code that generates the appropriate color from the slider and ellipse positions. The complete code can be found here.

The Result

In my opinion it was worth spending some time to create this control. I hope that more people will begin building their own controls and make them available to the public. My control is not perfect and could be done in other ways. This article is supposed to show only one possible way.
Rgb <-> Hls Converter Code

Imports System.Windows.Media

Public Class HlsConverter

    Public Shared Function ConvertFromHls(ByVal value As HlsValue) As Color
        Return ConvertFromHls(value.Hue, value.Lightness, value.Saturation)
    End Function

    Public Shared Function ConvertFromHls(ByVal h As Double, ByVal l As Double, ByVal s As Double) As Color
        Dim p1 As Double
        Dim p2 As Double
        Dim r As Double
        Dim g As Double
        Dim b As Double

        h *= 360

        If l <= 0.5 Then
            p2 = l * (1 + s)
        Else
            p2 = l + s - l * s
        End If

        p1 = 2 * l - p2

        If s = 0 Then
            r = l
            g = l
            b = l
        Else
            r = QqhToRgb(p1, p2, h + 120)
            g = QqhToRgb(p1, p2, h)
            b = QqhToRgb(p1, p2, h - 120)
        End If

        r *= Byte.MaxValue
        g *= Byte.MaxValue
        b *= Byte.MaxValue

        Return Color.FromRgb(CByte(r), CByte(g), CByte(b))
    End Function

    Public Shared Function QqhToRgb(ByVal q1 As Double, ByVal q2 As Double, ByVal hue As Double) As Double
        If hue > 360 Then
            hue = hue - 360
        ElseIf hue < 0 Then
            hue = hue + 360
        End If

        If hue < 60 Then
            QqhToRgb = q1 + (q2 - q1) * hue / 60
        ElseIf hue < 180 Then
            QqhToRgb = q2
        ElseIf hue < 240 Then
            QqhToRgb = q1 + (q2 - q1) * (240 - hue) / 60
        Else
            QqhToRgb = q1
        End If
    End Function

    Public Shared Function ConvertToHls(ByVal color As Color) As HlsValue
        Dim max As Double
        Dim min As Double
        Dim diff As Double
        Dim r_dist As Double
        Dim g_dist As Double
        Dim b_dist As Double
        Dim r As Double
        Dim g As Double
        Dim b As Double
        Dim h As Double
        Dim l As Double
        Dim s As Double

        r = color.R / Byte.MaxValue
        g = color.G / Byte.MaxValue
        b = color.B / Byte.MaxValue

        max = r
        If max < g Then max = g
        If max < b Then max = b

        min = r
        If min > g Then min = g
        If min > b Then min = b

        diff = max - min
        l = (max + min) / 2

        If Math.Abs(diff) < 0.00001 Then
            s = 0
            h = 0
        Else
            If l <= 0.5 Then
                s = diff / (max + min)
            Else
                s = diff / (2 - max - min)
            End If

            r_dist = (max - r) / diff
            g_dist = (max - g) / diff
            b_dist = (max - b) / diff

            If r = max Then
                h = b_dist - g_dist
            ElseIf g = max Then
                h = 2 + r_dist - b_dist
            Else
                h = 4 + g_dist - r_dist
            End If

            h = h * 60
            If h < 0 Then
                h = h + 360
            End If
        End If

        h /= 360
        Return New HlsValue(h, l, s)
    End Function
End Class

Public Structure HlsValue
    Private mHue As Double
    Private mLightness As Double
    Private mSaturation As Double

    Public Sub New(ByVal h As Double, ByVal l As Double, ByVal s As Double)
        mHue = h
        mLightness = l
        mSaturation = s
    End Sub

    Public Property Hue() As Double
        Get
            Return mHue
        End Get
        Set(ByVal value As Double)
            mHue = value
        End Set
    End Property

    Public Property Lightness() As Double
        Get
            Return mLightness
        End Get
        Set(ByVal value As Double)
            mLightness = value
        End Set
    End Property

    Public Property Saturation() As Double
        Get
            Return mSaturation
        End Get
        Set(ByVal value As Double)
            mSaturation = value
        End Set
    End Property

    Public Shared Operator <>(ByVal left As HlsValue, ByVal right As HlsValue) As Boolean
        Dim bool1 As Boolean
        bool1 = left.Lightness <> right.Lightness Or left.Saturation <> right.Saturation
        If Not bool1 Then
            Return left.Hue <> right.Hue And (left.Hue > 0 And right.Hue < 1) And (left.Hue < 1 And right.Hue > 0)
        Else
            Return bool1
        End If
    End Operator

    Public Shared Operator =(ByVal left As HlsValue, ByVal right As HlsValue) As Boolean
        Dim bool1 As Boolean
        bool1 = left.Lightness = right.Lightness And left.Saturation = right.Saturation
        If bool1 Then
            Return Math.Round(left.Hue, 2) = Math.Round(right.Hue, 2) Or (Math.Round(left.Hue, 2) = 1 And Math.Round(right.Hue, 2) = 0) Or (Math.Round(left.Hue, 2) = 0 And Math.Round(right.Hue, 2) = 1)
        Else
            Return False
        End If
    End Operator

    Public Overrides Function ToString() As String
        Return String.Format("H: {0:0.00} L: {1:0.00} S: {2:0.00}", Hue, Lightness, Saturation)
    End Function
End Structure
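For what it's worth, the same conversion is available in Python's standard library, which makes a handy cross-check of the VB logic above (my addition; the sample colour is arbitrary):

import colorsys

r, g, b = 0.2, 0.5, 0.8                 # RGB components in [0, 1]
h, l, s = colorsys.rgb_to_hls(r, g, b)
print(h, l, s)                          # hue, lightness, saturation in [0, 1]
print(colorsys.hls_to_rgb(h, l, s))     # round trip: (0.2, 0.5, 0.8) again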
# Quickstart

To explain how Scanner is used, let's walk through a simple example that reads every third frame from a video, resizes the frames, and then creates a new video from the sequence of resized frames.

Note

This Quickstart walks you through a very basic Scanner application that downsamples a video in space and time. Once you are done with this guide, check out the examples directory for more useful applications, such as using Tensorflow for detecting objects in all frames of a video and Caffe for face detection.

To run the code discussed here, install Scanner (Installation). Then from the top-level Scanner directory, run:

cd examples/apps/quickstart
python3 main.py

After main.py exits, you should now have a resized version of sample-clip.mp4 named sample-clip-resized.mp4 in the current directory. Let's see how that happened by looking inside main.py.

## Starting up Scanner

The first step in any Scanner program is to create a Database object. The Database object manages videos or other data that you may have stored from data processing you've done in the past. The Database object also provides the API to construct and execute new video processing jobs.

from scannerpy import Database, Job
db = Database()

## Ingesting a video into the Database

Scanner is designed to provide fast access to frames in videos, even under random access patterns. In order to provide this functionality, Scanner first needs to analyze the video to build an index on the video. For example, given an mp4 video named example.mp4, we can ingest this video as follows:

db.ingest_videos([('table_name', 'example.mp4')])

Scanner analyzes the file to build the index and creates a Table for that video in the Scanner database called table_name. You can see the contents of the database by running:

>>> print(db.summarize())
** TABLES **
---------------------------------------------------
ID | Name       | # rows | Columns      | Committed
---------------------------------------------------
0  | table_name | 360    | index, frame | true

By default, ingest copies the video data into the Scanner database (located at ~/.scanner/db by default). However, Scanner can also read videos without copying them using the inplace flag:

db.ingest_videos([('table_name', 'example.mp4')], inplace=True)

This still builds the index for accessing the video but avoids copying the files into the database.

## Defining a Computation Graph

Now we can tell Scanner how to process the video by constructing a computation graph. A computation graph is a graph of input nodes (Sources), function nodes (Ops), and output nodes (Sinks). Sources can read data from the Scanner database (such as the table we ingested above) or from other sources of data, such as the filesystem or a SQL database. Ops represent functions that transform their inputs into new outputs. Sinks, like Sources, write data to the database or to other forms of persistent storage.

Let's define a computation graph to read frames from the database, select every third frame, resize them to 640 x 480 resolution, and then save them back to a new database table. First, we'll create a Source that reads from a column in a table:

frame = db.sources.FrameColumn()

But wait a second, we didn't tell the Source the table and column it should read from. What's going on?
Since it's fairly typical to use the same computation graph to process a collection of videos at once, Scanner adopts a "binding model" that lets the user define a computation graph up front and then later "bind" different videos to the inputs. We'll see this in action in the Defining Jobs section.

The frame object returned by the Source represents the stream of frames that are stored in the table, and we'll use it as the input to the next operation:

sampled_frame = db.streams.Stride(input=frame, stride=3)  # Select every third frame

This is where we select only every third frame from the stream of frames we read from the Source. This comes from a special class of ops (from db.streams) that can change the size of a stream, as opposed to transforming inputs to outputs 1-to-1.

We then process the sampled frames by instantiating a Resize Op that will resize the frames in the frame stream to 640 x 480:

resized = db.ops.Resize(frame=sampled_frame, width=640, height=480)

This Op returns a new stream of frames which we call resized. The Resize Op is one of the collection of built-in Ops in the Scanner Standard Library. (You can learn how to write your own Ops by following the Tutorials.)

Finally, we write these resized frames to a column called 'frame' in a new table by passing them into a column Sink:

output_frame = db.sinks.Column(columns={'frame': resized})

Putting it all together, we have:

frame = db.sources.FrameColumn()
sampled_frame = db.streams.Stride(input=frame, stride=3)
resized = db.ops.Resize(frame=sampled_frame, width=640, height=480)
output_frame = db.sinks.Column(columns={'frame': resized})

At this point, we have defined a computation graph that describes the computation to run, but we haven't yet told Scanner to execute the graph.

## Defining Jobs

As alluded to in Defining a Computation Graph, we need to tell Scanner which table we should read and which table we should write to before executing the computation graph. We can perform this "binding" of arguments to graph nodes using a Job:

job = Job(op_args={
    frame: db.table('table_name').column('frame'),
    output_frame: 'resized_table'
})

Here, we say that the FrameColumn indicated by frame should read from the column frame in the table "table_name", and that the output table indicated by output_frame should be called "resized_table".

## Running a Job

Now we can run the computation graph over the video we ingested. This is done by simply calling run on the database object, specifying the jobs and outputs that we are interested in:

output_tables = db.run(output=output_frame, jobs=[job])

This call will block until Scanner has finished processing the job. You should see a progress bar while Scanner is executing the computation graph. Once the job is done, run returns the newly computed tables, here shown as output_tables.

## Reading the results of a Job

We can directly read the results of the job we just ran in the Python code by querying the frame column on the table resized_table:

for resized_frame in db.table('resized_table').column('frame').load():
    print(resized_frame.shape)

Video frames are returned as numpy arrays. Here we are printing out the shape of the frame, which should have a width of 640 and height of 480.
## Exporting to mp4

We can also directly save the frame column as an mp4 file by calling save_mp4 on the frame column:

db.table('resized_table').column('frame').save_mp4('resized-video')

After this call returns, an mp4 video should be saved to the current working directory called resized-video.mp4 that consists of the resized frames that we generated. That's a complete Scanner pipeline! If you'd like to learn about processing multiple jobs, keep reading! Otherwise, to learn more about the features of Scanner, either follow the Interactive Jupyter Walkthrough or go through the extended Tutorials.

## Processing multiple videos

Now let's say that we have a directory of videos we want to process, instead of just a single one as above. To see the multiple video code in action, run the following commands from the quickstart app directory:

wget https://storage.googleapis.com/scanner-data/public/sample-clip-1.mp4
python3 main-multi-video.py

After main-multi-video.py exits, you should now have a resized version of each of the downloaded videos named sample-clip-%d-resized.mp4 in the current directory, where %d is replaced with the number of the video.

There are two places in the code that need to change to process multiple videos. Let's look at those pieces of code inside main-multi-video.py now.

## Ingesting multiple videos

The first change is that we need to ingest all of our videos. This means changing our call to ingest_videos to take a list of three tuples, instead of just one:

videos_to_process = [
    ('sample-clip-1', 'sample-clip-1.mp4'),
    ('sample-clip-2', 'sample-clip-2.mp4'),
    ('sample-clip-3', 'sample-clip-3.mp4')
]

# Ingest the videos into the database
db.ingest_videos(videos_to_process)

Now we have three tables that are ready to be processed!

## Defining and executing multiple Jobs

The second change is to define multiple jobs, one for each video that we want to process.

jobs = []
for table_name, _ in videos_to_process:
    job = Job(op_args={
        frame: db.table(table_name).column('frame'),
        output_frame: 'resized-{:s}'.format(table_name)
    })
    jobs.append(job)

Now we can process these multiple jobs at the same time using run:

output_tables = db.run(output=output_frame, jobs=jobs)

Like before, this call will block until Scanner has finished processing all the jobs. You should see a progress bar while Scanner is executing the computation graph as before. Once the jobs are done, run returns the newly computed tables, here shown as output_tables.
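Since each job writes its own output table, the per-video export mirrors the single-video save_mp4 call above. A short sketch (my addition, not from the original guide), reusing the table names bound in the jobs:

# Export every resized table to an mp4 file, e.g. sample-clip-1-resized.mp4.
for table_name, _ in videos_to_process:
    out_table = db.table('resized-{:s}'.format(table_name))
    out_table.column('frame').save_mp4('{:s}-resized'.format(table_name))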
Issue No. 04 - July/August (1998, vol. 10), pp. 576-598

ABSTRACT

Integrity constraints are rules that should guarantee the integrity of a database. Provided an adequate mechanism to express them is available, the following question arises: Is there any way to populate a database which satisfies the constraints supplied by a database designer? That is, does the database schema, including constraints, admit at least a nonempty model? This work answers the above question in a complex object database environment, providing a theoretical framework, including the following ingredients: 1) two alternative formalisms, able to express a relevant set of state integrity constraints with a declarative style; 2) two specialized reasoners, based on the tableaux calculus, able to check the consistency of complex objects database schemata expressed with the two formalisms. The proposed formalisms share a common kernel, which supports complex objects and object identifiers, and which allow the expression of acyclic descriptions of: classes, nested relations and views, built up by means of the recursive use of record, quantified set, and object type constructors and by the intersection, union, and complement operators. Furthermore, the kernel formalism allows the declarative formulation of typing constraints and integrity rules. In order to improve the expressiveness and maintain the decidability of the reasoning activities, we extend the kernel formalism into two alternative directions. The first formalism, $\mathcal{OLCP}$, introduces the capability of expressing path relations. Because cyclic schemas are extremely useful, we introduce a second formalism, $\mathcal{OLCD}$, with the capability of expressing cyclic descriptions but disallowing the expression of path relations. In fact, we show that the reasoning activity in $\mathcal{OLCDP}$ (i.e., $\mathcal{OLCP}$ with cycles) is undecidable.

INDEX TERMS

Database models, database semantics, consistency checking, complex object models, description logics, integrity constraints, object-oriented database, semantic integrity, subsumption.

CITATION

Domenico Beneventano, Sonia Bergamaschi, Stefano Lodi, Claudio Sartori, "Consistency Checking in Complex Object Database Schemata with Integrity Constraints", IEEE Transactions on Knowledge & Data Engineering, vol. 10, no. 4, pp. 576-598, July/August 1998, doi:10.1109/69.706058
Is there a solution for these equations?

Could someone please help me solve this problem? I want to be able to determine if, in the following separate problems, the real numbers can be expressed as indicated:

$$2\sqrt{2}=\frac{\sqrt{a+b\sqrt{c}}}{d}\pm\frac{\sqrt{a-b\sqrt{c}}}{d}$$

and

$$7=\frac{\sqrt{a+b\sqrt{c}}}{d}\pm\frac{\sqrt{a-b\sqrt{c}}}{d}$$

where $a,b,c,d$ are integers, and if so, is there just one solution, or are there more? I have squared the LHS and RHS to see where that would lead me, but to no avail. Many thanks.

- By using the $\pm$ sign, you mean that the number can be expressed by either a sum or difference of the two fractions, but not necessarily by both, correct? – anorton Jan 25 '13 at 19:59
- If there are no restrictions on $a,b,c,d$ one can find many solutions. – André Nicolas Jan 25 '13 at 20:00
- @AndréNicolas: The restriction is where $a,b,c,d$ are integers... – anorton Jan 25 '13 at 20:03
- @anorton that is correct, the number can either be a sum or a difference of the two fractions, or both I guess if there can be more than one solution. – user59670 Jan 25 '13 at 20:03
- Yes, that was in the post, but I meant further restrictions. – André Nicolas Jan 25 '13 at 20:04

Well, if they were NOT separate problems, $$\frac{\sqrt{57+28\sqrt{2}}}{2}+\frac{\sqrt{57-28\sqrt{2}}}{2} = 7,\qquad \frac{\sqrt{57+28\sqrt{2}}}{2}-\frac{\sqrt{57-28\sqrt{2}}}{2} = 2\sqrt{2}.$$

Write $(7+2\sqrt{2})^2 = a+b\sqrt{2}$, and then $a=57$, $b=28$, $c=2$, $d=2$ is a solution for the first equation with "+" and the second equation with "-".

- I don't see how $$2\sqrt{2}=\frac{\sqrt{2+2\sqrt{2}}}{2}+\frac{\sqrt{2-2\sqrt{2}}}{2}$$ if that is what you are saying for the first equation, since the second part of the RHS is a complex number. – user59670 Jan 25 '13 at 20:54
- No, $a,b$ are computed in the first step, and are not $2$. – Thomas Andrews Jan 25 '13 at 22:08

For the first, take for example $a=4$, $b=2$, $c=4$, $d=1$. Or else take $a=3$, $b=2$, $c=2$, $d=1$. There are infinitely many other choices. In this case, one could describe them all.

hint: compute $$2\sqrt{2}+7$$ $$2\sqrt{2}-7$$ $$(2\sqrt{2})\cdot7$$

- \cdot to get $\cdot$ – Git Gud Jan 25 '13 at 20:02
- thanks Git Gud ..... – Maisam Hedyelloo Jan 25 '13 at 20:08

Here is one approach to come up with a set of solutions. $2\sqrt{2}=\sqrt{8}$, and the number $8$ can be written as a sum or difference in an infinite number of ways, for example $7+1$; $6+2$; $5+3$; $4+4$; $3+5$; $2+6$; $1+7$; $9-1$; $10-2$....

So let's say we want to compute $$\sqrt{100-92}.$$ We can construct the following quadratic equation from this: $$x^2-100x+\frac{92^2}{4}=0.$$ The solutions to this quadratic are $$50+8\sqrt{6}$$ and $$50-8\sqrt{6}.$$ Because we want $\sqrt{8}$, we take the square roots of the two solutions, giving $$\sqrt{50+8\sqrt{6}}-\sqrt{50-8\sqrt{6}}=2\sqrt{2}.$$

Likewise, with $$\sqrt{7+1}$$ we can construct the quadratic equation $$x^2-7x+\frac{1^2}{4}=0.$$ The solutions to this quadratic are $$\frac{7+4\sqrt{3}}{2}$$ and $$\frac{7-4\sqrt{3}}{2}.$$ Because we want $\sqrt{8}$, we take the square roots of the two solutions, giving $$\frac{\sqrt{14+8\sqrt{3}}}{2}+\frac{\sqrt{14-8\sqrt{3}}}{2}=2\sqrt{2}.$$

So it appears, as has been stated in other answers, that there are infinitely many solutions.
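One can verify the constructed identities symbolically (my addition, assuming SymPy):

import sympy as sp

# Denest sqrt(50 +/- 8*sqrt(6)) and check the difference equals 2*sqrt(2).
a = sp.sqrtdenest(sp.sqrt(50 + 8 * sp.sqrt(6)))   # sqrt(2) + 4*sqrt(3)
b = sp.sqrtdenest(sp.sqrt(50 - 8 * sp.sqrt(6)))   # -sqrt(2) + 4*sqrt(3)
print(sp.simplify(a - b))                          # 2*sqrt(2)
print((a - b).equals(2 * sp.sqrt(2)))              # True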
# Eclair Mobile

Latest release: 0.4.17 (28th September 2021); last analysed 22nd December 2019

Failed to build from source provided!

3.8 ★★★★★ (370 ratings) · 10 thousand downloads · 12th April 2018

# Help spread awareness for build reproducibility

Try out searching for "lost bitcoins", "stole my money" or "scammers" together with the wallet's name, even if you think the wallet is generally trustworthy. For all the bigger wallets you will find accusations. Make sure you understand why they were made and if you are comfortable with the provider's reaction.

If you find something we should include, you can create an issue or edit this analysis yourself and create a merge request for your changes.

# The Analysis ¶

This wallet has a really short description. Here it is in full:

> Eclair Mobile is a next generation, Lightning-ready Bitcoin wallet. It can be used as a regular Bitcoin wallet, and can also connect to the Lightning Network for cheap and instant payments. This software is based on eclair, and follows the Lightning Network standard.

No word on custodial or not. eclair-mobile sounds promising. There, in the description again we find no hints at it being non-custodial. But in the repository's wiki finally we find:

> Is Eclair Mobile a "real" Lightning Node? Yes it is. Eclair Mobile is a real, self-contained lightning node that runs on your phone. It does not require you to run another Lightning Node at home or in the cloud. It is not a custodial wallet either, you are in full control of your funds.

So … can we reproduce the build? The build instructions are not very plentiful on the repo's description:

> Developers
> 1. clone this project
> 2. clone eclair and checkout the android branch. Follow the steps here to build the eclair-core library.
> 3. Open the Eclair Mobile project with Android studio. You should now be able to install it on your phone/on an emulator.

This has three immediate issues:

- How do we know which version of "eclair" we should use? This should be resolved with a git submodule.
- Compiling with Android Studio is not easy to automate and should not be necessary.
- Branches are not a good way of referencing revisions of a repository. The "android branch" has 1938 revisions, and if I want to check anything but the latest revision I have little to go by to find which app would match which state of the branch.

But let's see how compiling looks once these issues are resolved, as we have little hope to verify the current apk …

$ git clone git@github.com:ACINQ/eclair-mobile.git
$ git clone git@github.com:ACINQ/eclair.git
$ cd eclair
$ git checkout android
$ docker run -it -v $PWD/eclair:/eclair -v $PWD/eclair-mobile:/eclair-mobile --workdir / electrum-android-builder-img
user@d0cf683a144a:/$ sudo su -
root@d0cf683a144a:~# apt update
root@d0cf683a144a:~# apt install maven
root@d0cf683a144a:~# mvn install -DskipTests
...
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ eclair-core_2.11 --- [INFO] Nothing to compile - all classes are up to date [INFO] [INFO] --- scala-maven-plugin:3.4.2:testCompile (scalac) @ eclair-core_2.11 --- [INFO] /eclair/eclair-core/src/test/java:-1: info: compiling [INFO] /eclair/eclair-core/src/test/scala:-1: info: compiling [INFO] Compiling 114 source files to /eclair/eclair-core/target/test-classes at 1577007350665 [ERROR] /eclair/eclair-core/src/test/scala/fr/acinq/eclair/blockchain/bitcoind/BitcoindService.scala:74: error: value writeString is not a member of object java.nio.file.Files [ERROR] ^ [ERROR] one error found [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary for eclair_2.11 0.3.4-android-SNAPSHOT: [INFO] [INFO] eclair_2.11 ........................................ SUCCESS [ 1.951 s] [INFO] eclair-core_2.11 ................................... FAILURE [ 28.245 s] [INFO] eclair-node ........................................ SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE So following the instructions we didn’t get far and for now hope for better documentation and remain with the verdict: This app is not verifiable. (lw) # Verdict Explained We encountered a build error while compiling from source code! As part of our Methodology, we ask: Can the product be built from the source provided? If not, we tag it Build Error! Published code doesn’t help much if the app fails to compile. We try to compile the published source code using the published build instructions into a binary. If that fails, we might try to work around issues but if we consistently fail to build the app, we give it this verdict and open an issue in the issue tracker of the provider to hopefully verify their app later. The product cannot be independently verified. If the provider puts your funds at risk on purpose or by accident, you will probably not know about the issue before people start losing money. If the provider is more criminally inclined he might have collected all the backups of all the wallets, ready to be emptied at the press of a button. The product might have a formidable track record but out of distress or change in management turns out to be evil from some point on, with nobody outside ever knowing before it is too late.
# Drug-utilisation profiles and COVID-19

## Abstract

Coronavirus disease 2019 (COVID-19) has substantially challenged healthcare systems worldwide. By investigating population characteristics and prescribing profiles, it is possible to generate hypotheses about the associations between specific drug-utilisation profiles and susceptibility to COVID-19 infection. A retrospective drug-utilisation study was carried out using routinely collected information from a healthcare database in Campania (Southern Italy). We aimed to discover the prevalence of drug utilisation (monotherapy and polytherapy) in COVID-19 versus non-COVID-19 patients in Campania (~ 6 million inhabitants). The study cohort comprised 1532 individuals who tested positive for COVID-19. Drugs were grouped according to the Anatomical Therapeutic Chemical (ATC) classification system. We noted higher prevalence rates of the use of drugs in the ATC categories C01, B01 and M04, which were probably linked to related comorbidities (i.e., cardiovascular and metabolic). Nevertheless, the prevalence of the use of drugs acting on the renin-angiotensin system, such as antihypertensive drugs, was not higher in COVID-19 patients than in non-COVID-19 patients after adjustments for age and sex. These results highlight the need for further case–control studies to define the effects of medications and comorbidities on susceptibility to and associated mortality from COVID-19.

## Introduction

As of 24 April 2020, there have been ~ 3,000,000 coronavirus disease 2019 (COVID-19) cases and > 200,000 associated deaths worldwide1. COVID-19 is very contagious and has a wide spectrum of presentations, ranging from no symptoms to severe illness; the disease includes three phases (i.e., the viral infection, pulmonary and hyperinflammation/systemic phases)2. Ageing and underlying diseases (e.g., heart disease, diabetes mellitus) have been reported to be risk factors for adverse outcomes, and male sex and a genetic predisposition to infection are under investigation as potential contributors3,4,5,6,7. Moreover, initial reports suggested a potential pro-infective effect of drugs. Two classes of drugs that have been implicated are angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin II receptor blockers (ARBs). These effects may be due to the interaction between the virus that causes COVID-19, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and ACE-2 receptors in the lungs, though this theory is controversial8,9,10,11,12. There is a lack of data on drug use (monotherapy and polytherapy) in COVID-19 patients. The main aims of this study were to (1) discover the prevalence of drug utilisation (monotherapy and polytherapy) in COVID-19 versus non-COVID-19 patients in Campania, southern Italy and (2) ascertain the epidemiology and profiles of affected patients in relation to drug utilisation.

## Methods

### Study design

A retrospective drug-utilisation study was carried out using routinely collected information from healthcare databases in Campania.
The Campania Region Database (CaReDB) includes information on patient demographics and the electronic records of outpatient pharmacy dispensing for ~ 6 million residents, a well-defined population comprising ~ 10% of the population of Italy. CaReDB is complete and includes data that have been validated in previous drug-utilisation studies13,14,15,16,17,18,19,20. The characteristics of CaReDB are described in Supplementary Table S1. At the beginning of the COVID-19 epidemic, a surveillance system was implemented to collect the data of all cases identified by reverse transcription-polymerase chain reaction (RT-PCR) testing for SARS-CoV-2. These archives are linked together by a unique anonymous identifier that is encrypted to protect patient privacy. Our research protocol adhered to the tenets of the Declaration of Helsinki 1975 and its later amendments. Permission to use anonymised data for this study was granted to the researchers of the Centro di Ricerca in Farmacoeconomia e Farmacoutilizzazione (CIRFF) by the governance board of Unità del Farmaco della Regione Campania. The research did not involve a clinical study, and all patients' data were fully anonymised and analysed retrospectively. For this type of study, formal consent was not required according to current national regulations established by the Italian Medicines Agency, and according to the Italian Data Protection Authority, neither ethical committee approval nor informed consent was required for our study21.

### Study population

People who had been dispensed medication according to CaReDB during 2019 were included in the study cohort. From regional surveillance system data, we obtained the information of patients with confirmed COVID-19 from the beginning of the epidemic (26 February 2020) to 30 March 2020, who were linked to the population identified in CaReDB. For the purposes of our investigation, the study population diagnosed with SARS-CoV-2 infection on or before the date of analysis was referred to as the 'COVID-19 group' (C19G). The remaining individuals were used as a comparator group in the analysis and were referred to as the 'general population group' (GPG).

### Patient characteristics

The study population was categorised by sex and subdivided into four age groups: 0–39; 40–59; 60–79; and ≥ 80 years. The number of drug prescriptions, the prevalence of drug use and polypharmacy regimens (classified as 'no polypharmacy', 'polypharmacy' and 'excessive polypharmacy') were ascertained for 2019. Drugs were grouped according to the Anatomical Therapeutic Chemical (ATC) classification system. ATC II and ATC IV codes with a prevalence ≥ 3% in the C19G were included in the analysis.

### Outcome

The drug-utilisation profile was evaluated as the prevalence of drug use. Drug use prevalence was estimated as the number of individuals dispensed ≥ 1 drug prescription per 100 inhabitants in 2019. The prevalence of drug use was evaluated in the C19G and GPG and was stratified by age group and sex. Because prevalence is influenced by the heterogeneous demographic distribution across the age groups, we conducted direct standardisation.

### Statistical methods

The baseline characteristics of the study population were analysed using descriptive statistics. Quantitative variables are described as means ± standard deviations. Categorical variables are described as counts and percentages. The chi-square test and t-test were performed to determine the differences between the C19G and GPG in terms of sex and age.
A P-value of < 0.05 was considered significant. Crude and age-adjusted prevalence rates were calculated. Differences in prevalence between the C19G and GPG are expressed as risk ratios (RRs) adjusted for sex and age with 95% confidence intervals (CIs). Standardisation was performed using a direct method whereby the Italian population as of 1 January 2019 was used as the standard population (available on the Demo Istat website22):

$$\text{Direct standardised rate} = \frac{\sum_{i=1}^{m} w_{i}\, T_{i}}{\sum_{i=1}^{m} w_{i}} \cdot k$$

where $T_i = n_i / N_i$ is the rate in stratum $i$ of the study population; $n_i$ is the number of cases in stratum $i$ of the study population; $N_i$ is the size of the study population in stratum $i$; $w_i$ is the size of stratum $i$ of the reference population; $m$ is the number of strata considered; and $k$ is a multiplicative constant. The age-adjusted RRs and 95% CIs were computed using standard methods. Data management was performed with SQL Server v2018 (Microsoft, Redmond, WA, USA). Analyses were carried out with SPSS v17.1 (IBM, Armonk, NY, USA).

## Results

### C19G characteristics

A total of 1,532 individuals in Campania who tested positive for COVID-19 by 30 March 2020 were identified. Of these, 926 (60.4%) were males, and the median age of the entire sample was 55 ± 19 years. Among the C19G patients, 20.8% were aged 0–39 years, 36.1% were aged 40–59 years, 33.6% were aged 60–79 years, and 9.5% were aged > 80 years. The percentage of males was higher than that of females in all age groups except the > 80 years age group (43.8% males). Differences in age and the sex ratio between the C19G and GPG were statistically significant (p-value < 0.001). The prevalence of drug use in the C19G was 74.5% and increased with age, reaching 93.8% in those aged > 80 years. The median number of prescriptions per patient (overall: 16 [interquartile range, IQR: 5–42]) ranged from 3 (IQR, 1–6) among people aged 0–39 years to 51 (IQR, 29–71) among individuals aged > 80 years. Half of the COVID-19 patients aged 0–39 years had no exposure to any medication, whereas 45.5% were prescribed ≤ 4 medications and 4.1% had polypharmacy regimens (5–9 drugs). The percentage of participants receiving polypharmacy increased with age, at 18.3% in those aged 40–59 years and 34.8% in those aged 60–79 years; moreover, ~ 80% of participants aged > 80 years were prescribed polypharmacy or excessive polypharmacy (≥ 10 drugs) regimens. The C19G characteristics are shown in Table 1.

### Drug-utilisation profiles of the C19G

Twenty-three pharmacological ATC II groups and 39 ATC IV groups had a prevalence > 3% in the C19G. The highest unadjusted and adjusted prevalence rates of drug use in the ATC II groups were observed for drug categories J01, A02, C09, M01, B01 and R03 in both the C19G and GPG (Fig. 1). Crude differences (at least ± 20% in the overall prevalence of drug use between the C19G and GPG) were found in all 23 pharmacological ATC II groups and in 30 of the 39 ATC IV groups included in the analysis (Fig. 1, Table 2). After adjustment, differences remained in six ATC II groups and eight ATC IV groups. With respect to drugs acting on the renin–angiotensin system (RAS) (C09), as well as beta-blockers (C07), antibacterial drugs for systemic use (J01) and anti-inflammatory and antirheumatic drugs (M01), the differences disappeared after adjustment.
The large differences in antithrombotic agents (B01), cardiac therapy drugs (C01) and anti-epileptics (N03) diminished after adjustment, although these drugs remained more common in the C19G than in the GPG.

### ATC A: drugs targeting the alimentary tract and metabolism

Drugs for acid-related disorders (ATC II: A02) had adjusted prevalence rates of 32.2% in the C19G and 28.8% in the GPG (RR, 1.12; 95% CI, 1.116–1.120) (Fig. 1). This difference was most pronounced in those aged 40–59 years (32.4% vs. 26.5%; RR, 1.22) (Fig. 2). Regarding chemical subgroups, proton pump inhibitor (ATC IV: A02BC) use had a higher prevalence in the C19G than in the GPG, mainly in those aged 0–39 years (6.8% vs. 5.2%; RR, 1.36) and 40–59 years (30.1% vs. 22.8%; RR, 1.32) (Supplementary Tables S4, S5). The difference in the prevalence of drug use for diabetes mellitus (ATC II: A10) between the C19G and GPG after adjustment was very small. With regard to ATC IV, biguanide (A10BA) use had a higher prevalence in the C19G than in the GPG, mainly in those aged ≥ 80 years (14.6% vs. 10.7%; RR, 1.36) (Supplementary Tables S4, S5).

### ATC B: drugs targeting blood and blood-forming organs

Antithrombotic agents (ATC II: B01) constituted the therapeutic group with the largest adjusted prevalence difference between the C19G and GPG (17.1% vs. 11.6%; RR, 1.47; 95% CI, 1.467–1.475) (Fig. 1). All age groups showed differences in adjusted prevalence rates between the C19G and GPG, with higher RRs observed in the younger age groups (Supplementary Tables S2, S3). An identical trend was observed for ATC IV. Heparin (B01AB) and platelet aggregation inhibitor (B01AC) use had higher adjusted prevalence rates in the C19G than in the GPG, with higher RRs in participants < 60 years of age (heparin: RR, 3.19 for 0–39 years and RR, 2.27 for 40–59 years; platelet aggregation inhibitors: RR, 1.94 for 0–39 years and RR, 1.52 for 40–59 years) (Fig. 3). Folic acid and derivative (B03BB) use had a higher prevalence in the C19G than in the GPG, mainly in those aged 0–39 years (3.3% vs. 1.5%; RR, 2.22) (Supplementary Tables S4, S5).

### ATC C: drugs targeting the cardiovascular system

Among drugs targeting the cardiovascular system, cardiac therapy (ATC II: C01) use had the largest differences in adjusted prevalence rates between the C19G and GPG overall and in each age group; the difference decreased with age (0–39 years: RR, 4.63; 40–59 years: RR, 2.09; 60–79 years: RR, 1.50) (Supplementary Table S3). The other ATC II therapeutic groups pertaining to the cardiovascular system did not show a relevant difference in the overall adjusted prevalence between the C19G and GPG (Fig. 1). Nevertheless, after stratification by age group, a higher RR (C19G/GPG) in people aged < 60 years was noted. In those older than 80 years, the differences disappeared or reversed, specifically for agents acting on the RAS (ATC II: C09) and lipid-modifying agents (ATC II: C10) (65.6% vs. 71.2% and 34.6% vs. 42.7% in the C19G vs. GPG, respectively) (Fig. 2).

### ATC J: anti-infectives for systemic use

Relevant differences were not observed in the overall adjusted prevalence between the C19G and GPG for the therapeutic groups (ATC II) included in this drug category (Fig. 1). Nevertheless, focusing on chemical subgroups (ATC IV), among people less than 40 years of age, third-generation cephalosporin (J01DD) use had a higher prevalence in the C19G than in the GPG (11.8% vs. 9.8%; RR, 1.20).
In the 40–59 years group, macrolide (J01FA) and fluoroquinolone (J01MA) use had higher prevalence rates in the C19G than in the GPG (16.2% vs. 11.9%, RR, 1.37; 13.1% vs. 10.2%, RR, 1.29, respectively). Among those aged > 80 years, third-generation cephalosporin (J01DD) use had a higher prevalence in the C19G than in the GPG (37.3% vs. 29.1%; RR, 1.28) (Fig. 3 and Supplementary Tables S4, S5). With regard to anti-mycotics for systemic use (ATC IV: J02AC), a large sex difference in the overall adjusted prevalence in the C19G was noted (male RR, 1.41) (Supplementary Table S5).

### ATC M: drugs targeting the musculoskeletal system

Regarding anti-inflammatory and antirheumatic drug (ATC II: M01) use, no significant differences were observed in the overall adjusted prevalence rates between the C19G and GPG (Fig. 1). Focusing on chemical subgroups (ATC IV), acetic acid derivative and related substance (M01AB; RR, 2.07) use and propionic acid derivative (M01AE; RR, 1.75) use had higher prevalence rates in those aged > 40 years (Fig. 3). Anti-gout preparation (ATC II: M04) use had adjusted prevalence rates of 4.5% in the C19G and 3.3% in the GPG (RR, 1.37; 95% CI, 1.36–1.37) (Fig. 1). A large sex difference in the overall adjusted prevalence in the C19G was observed (female RR, 1.55) (Supplementary Table S3). Focusing on chemical subgroups (ATC IV), use of preparations inhibiting uric acid production (M04AA) had a higher prevalence in the C19G in those aged 40–59 years (2.8% vs. 1.2%; RR, 2.36) and 60–79 years (8.5% vs. 7.1%; RR, 1.21) (Supplementary Tables S4, S5).

### ATC N: drugs targeting the nervous system

Among drugs targeting the nervous system, anti-epileptic (ATC II: N03) use had the largest prevalence difference between the C19G and GPG (5.0% vs. 3.6%; RR, 1.39) (Fig. 1). For the pertaining chemical subgroup of other anti-epileptics (ATC IV: N03AX), the RR in COVID-19 patients increased with age, reaching its highest value in those aged > 80 years (11.7% vs. 7.2%; RR, 1.62) (Supplementary Tables S4, S5). Psychoanaleptic (ATC II: N06) use had adjusted prevalence rates of 6.2% in the C19G and 5.5% in the GPG (RR, 1.12; 95% CI, 1.114–1.122) (Fig. 1). Focusing on chemical subgroups, other antidepressant (ATC IV: N06AX) use had a high prevalence in COVID-19 patients in all age groups except those aged 40–59 years (Fig. 3). Sex differences were observed for analgesic drug (N02) use (male RR, 1.41), other anti-epileptic (N03AX) use (female RR, 1.55) and selective serotonin reuptake inhibitor (N06AB) use (male RR, 0.67) (Supplementary Tables S3, S5).

### ATC R: drugs targeting the respiratory system

Marked differences in the prevalence of therapeutic group (ATC II) use were not observed between the C19G and GPG (Fig. 1). However, focusing on chemical subgroups (ATC IV), inhaled anticholinergic agent (R03BB) use had a large sex difference in the overall adjusted prevalence in the C19G (male RR, 1.44) (Supplementary Table S5). Adrenergic agent combined with corticosteroid (R03AK) use had the highest prevalence in the C19G (6.1% vs. 4.0%; RR, 1.53) among those aged 40–59 years (Supplementary Table S5). Glucocorticoid (R03BA) use had the highest prevalence in the C19G among those aged 40–59 years (10.4% vs. 7.3%; RR, 1.42) and those aged 60–79 years (14.9% vs. 11.4%; RR, 1.31) (Supplementary Table S5). Higher prevalence rates of anticholinergic (R03BB) use (11.9% vs. 9.8%; RR, 1.23) and piperazine derivative (R06AE) use (7.1% vs.
5.5%; RR, 1.30) were observed in the C19G among those aged > 80 years (Supplementary Table S5).

## Discussion

The COVID-19 pandemic has imposed great challenges on healthcare systems worldwide. Some literature has been published on the clinical aspects of, possible treatments for, and risk factors in patients with COVID-1923,24,25,26. Nevertheless, apart from a few studies, the epidemiology of and drug use profiles in patients with COVID-19 have not been studied. To our knowledge, this is the first study addressing this topic. Most of our COVID-19 population comprised middle-aged men (55 ± 19 years; 80% were > 40 years of age) and those receiving ≥ 1 drug (74.5% of patients, including 35% with a polypharmacy regimen). In general, our results revealed four profiles. The first comprises an age range of 0–39 (median age, 27 ± 9) years, male sex, no exposure to any drug in approximately half of the patients and a very low prevalence of polytherapy. The second comprises an age range of 40–59 (median age, 51 ± 5) years, male sex, use of 1–4 drugs in nearly half of the patients and a low prevalence of polytherapy (< 25%). The third comprises an age range of 60–79 (median age, 68 ± 6) years, male sex, use of ≥ 1 drug in 90% of the patients and polytherapy in more than half of the patients. The final profile comprises an age > 80 (median age, 85 ± 4) years, female sex, use of ≥ 1 drug in 94% of the patients and polytherapy in 78% of the patients.

The analyses of drug-utilisation profiles highlighted differences between the C19G and GPG in terms of the prevalence of drug exposure. The drug categories with a difference ≥ 30% were antithrombotic agents (B01), antiepileptics (N03), anti-hyperuricaemics/anti-gout agents (M04) and cardiac therapy agents (C01). The higher prevalence rates associated with drugs in categories C01, B01 and M04 indicate a frequent pattern of cardiovascular and metabolic comorbidities in COVID-19 populations, as reported in other studies4,5,8. It is of some relevance that B01 drug use had the largest difference between the COVID-19 and general populations. This therapeutic profile indicates the presence of cardiovascular complications (including venous thromboembolism), supporting the hypothesis of an increased risk associated with COVID-19 infection in these patients8. With regard to the higher prevalence of use of drugs in the M04 category, a retrospective cohort study including 131,565 patients and 252,763 controls, using data from the UK Clinical Practice Research Datalink, reported an increased risk of pneumonia (hazard ratio, 1.27; 95% CI 1.18–1.36) in patients with gout27. There is no clear association between epilepsy and the risk of developing COVID-19. Nevertheless, epilepsy may be associated with other comorbidities or be a component of congenital/inherited syndromes that may affect the immune system. Additionally, anti-epileptic agents can be used in association with other medications that can influence the immune system (e.g., adrenocorticotropic hormones, corticosteroids, everolimus, immunotherapy agents), and this may increase the infection risk28. Moreover, these patients may require frequent clinical evaluation, which may explain (at least in part) their increased risk of healthcare-associated infections. Notably, the adjusted prevalence of the use of drugs acting on the RAS (C09) was not different between the C19G and GPG (RR, 1.02; 95% CI, 1.01–1.02).
This result is in accordance with evidence from a retrospective study involving a COVID-19 cohort in Italy29 and supports the position of the European Society of Cardiology30. Furthermore, no major differences were noted for any category of antihypertensive drugs. Corroborating our results, a recent study carried out in the United States revealed no association between ACEI or ARB use and COVID-19, supporting the recommendation of continuing ACEI and ARB use in the setting of the COVID-19 pandemic31. This was further explored in a recent Brazilian study that confirmed that among patients hospitalised with mild to moderate COVID-19 who were taking ACEIs or ARBs before hospital admission, there were no significant differences in the mean number of days alive and out of the hospital between those who discontinued and continued these medications32.

Stratification by age showed a higher prevalence of use of drugs in categories B01, B03, C09 and C10 in people aged < 40 years. This evidence should be interpreted with caution because the number of such patients was very small. Nevertheless, a morbidity pattern similar to that in older patients was observed in these patients. Conversely, in COVID-19 patients aged > 60 years, there was no significant difference in the prevalence of drug use for cardiometabolic diseases compared with that in the GPG, but the prevalence rates of drug use for respiratory diseases and neurological diseases were increased in the C19G. A large number of males took analgesics (N02) and drugs for cardiac therapy (C01). A high number of females took anti-anaemia agents (B03) and anti-epileptic agents (N03).

Early descriptions of COVID-19 suggested a male preponderance23,24,33. Sex-based immunological, genetic, and lifestyle differences (e.g., tobacco smoking) have been postulated to explain the male preponderance of COVID-1934. In a population of 507 patients with COVID-19 between 13 and 31 January 2020 (including 364 from mainland China), 281 patients were male (55%), and the median age was 46 (IQR, 35–60) years35. Zhou and colleagues described 191 COVID-19 patients from Wuhan (Hubei Province, China) during the first month of the outbreak. That cohort had a median age of 56 (IQR, 46.0–67.0) years, with 62% being male and 48% with comorbidities23. Additionally, data from Italy revealed a higher prevalence of COVID-19 in males than in females36,37. However, sex- and age-disaggregated data revealed the opposite to be true for women aged > 80 years in Campania. National data from Italy revealed that in those aged 20–29 years, 56.5% of the diagnosed patients were female, and only after the age of 50 years does the male preponderance of COVID-19 increase. Thus, the male preponderance of COVID-19 should be interpreted with caution because sex-disaggregated data are incomplete, and more robust evidence is needed.

Our study was not designed to define the association between drug use, comorbidities, risk of adverse outcomes and outcomes in COVID-19 patients. The associations between the use of certain drugs and susceptibility to SARS-CoV-2 infection (e.g., predictive factors for poor outcome) must be studied in a large cohort with a control group and robust clinical data. This was a retrospective study of health records. Additional detailed patient information (mainly regarding clinical outcomes) was not available at the time of the analysis.
Despite these limitations, we delineated the drug use profiles and the epidemiological and demographic characteristics of 1532 Italian patients with COVID-19. This information provides the first evidence of the association between drug utilisation and COVID-19 risk, giving us a solid background for further analyses and interpretations using new data.

## Conclusions

In conclusion, the current data provide baseline information about the complexity of patients affected by COVID-19, showing frequencies of and differences in drug utilisation profiles in COVID-19 patients compared with the general population. The higher prevalence rates of C01, B01 and M04 use were probably linked to related comorbidities (i.e., cardiovascular, metabolic). Nevertheless, the prevalence of the use of drugs acting on the RAS, like that of other antihypertensive drugs, was not higher among COVID-19 patients than among the general population. Since these pilot data were derived from the first month of documented COVID-19 cases in the Campania region (southern Italy), our results highlight the need for further case–control studies to define the effects of medications and comorbidities on susceptibility to and associated mortality from COVID-19 infection. Finally, to better understand the global epidemiology of COVID-19, reproducible and comparable results from cohorts from multiple countries and regions are needed for further investigation and meta-analysis.

## References

1. World Health Organization. Coronavirus disease 2019 (COVID-19), Situation Report—95. www.who.int/docs/default-source/coronaviruse/situation-reports/20200424-sitrep-95-covid-19.pdf?sfvrsn=e8065831_4. Accessed on 4 April 2020.
2. Yuen, K. S., Ye, Z. W., Fung, S. Y., Chan, C. P. & Jin, D. Y. SARS-CoV-2 and COVID-19: the most important research questions. Cell Biosci. 10(1), 1–5 (2020).
3. Wenham, C., Smith, J. & Morgan, R. COVID-19: the gendered impacts of the outbreak. Lancet 395(10227), 846–848 (2020).
4. Yang, X. et al. Clinical course and outcomes of critically ill patients with SARS-CoV-2 pneumonia in Wuhan, China: a single-centered, retrospective, observational study. Lancet Respir. Med. 8, 475–481 (2020).
5. Li, B. et al. Prevalence and impact of cardiovascular metabolic diseases on COVID-19 in China. Clin. Res. Cardiol. 109(5), 531–538 (2020).
6. Tignanelli, C. J. et al. Antihypertensive drugs and risk of COVID-19?. Lancet Respir. Med. 8, e30–e31 (2020).
7. Gandhi, R. T., Lynch, J. B. & del Rio, C. Mild or moderate Covid-19. N. Engl. J. Med. https://doi.org/10.1056/NEJMcp2009249 (2020).
8. Guo, T. et al. Cardiovascular implications of fatal outcomes of patients with coronavirus disease 2019 (COVID-19). JAMA Cardiol. https://doi.org/10.1001/jamacardio.2020.1017 (2020).
9. Rossi, G. P., Sanga, V. & Barton, M. Potential harmful effects of discontinuing ACE-inhibitors and ARBs in COVID-19 patients. Elife 9, e57278 (2020).
10. South, A. M., Diz, D. & Chappell, M. C. COVID-19, ACE2 and the cardiovascular consequences. Am. J. Physiol. Heart Circ. Physiol. 318(5), H1084–H1090 (2020).
11. Aronson, J. K. & Ferner, R. E. Drugs and the renin-angiotensin system in covid-19. BMJ 369, m1313 (2020).
12. Sommerstein, R., Kochen, M. M., Messerli, F. H. & Gräni, C. Coronavirus disease 2019 (COVID-19): do angiotensin-converting enzyme inhibitors/angiotensin receptor blockers have a biphasic effect?. J. Am. Heart Assoc. 9(7), e016509 (2020).
13. Moreno-Juste, A. et al. Treatment patterns of diabetes in Italy: a population-based study. Front. Pharmacol.
10, 870 (2019).
14. Guerriero, F. et al. Biological therapy utilization, switching, and cost among patients with psoriasis: retrospective analysis of administrative databases in Southern Italy. Clinicoecon. Outcomes Res. 9, 741 (2017).
15. Putignano, D. et al. Differences in drug use between men and women: an Italian cross sectional study. BMC Women's Health 17(1), 73 (2017).
16. Iolascon, G. et al. Osteoporosis drugs in real-world clinical practice: an analysis of persistence. Aging Clin. Exp. Res. 25(1), 137–141 (2013).
17. Orlando, V. et al. Prescription patterns of antidiabetic treatment in the elderly. Results from Southern Italy. Curr. Diabetes Rev. 12(2), 100–106 (2016).
18. Menditto, E. et al. Adherence to chronic medication in older populations: application of a common protocol among three European cohorts. Patient Prefer. Adherence 12, 1975 (2018).
19. Casula, M. et al. Assessment and potential determinants of compliance and persistence to antiosteoporosis therapy in Italy. Am. J. Manag. Care 20(5), e138–e145 (2014).
20. Orlando, V. et al. Drug utilization pattern of antibiotics: the role of age, sex and municipalities in determining variation. Risk Manag. Healthc. Policy 13, 63 (2020).
21. Italian Data Protection Authority. General authorisation to process personal data for scientific research purposes—1 March 2012 [1884019]. https://doi.org/10.1094/PDIS-11-11-0999-PDN.
22. Demo-Geodemo. Mappe, Popolazione, Statistiche Demografiche dell'ISTAT. http://demo.istat.it/. Accessed on 1 March 2020.
23. Zhou, F. et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet 395(10229), 1054–1062 (2020).
24. Chen, N. et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet 395(10223), 507–513 (2020).
25. Shoenfeld, Y. Corona (COVID-19) time musings: our involvement in COVID-19 pathogenesis, diagnosis, treatment and vaccine planning. Autoimmun. Rev. https://doi.org/10.1016/j.autrev.2020.102538 (2020).
26. Vellingiri, B. et al. COVID-19: a promising cure for the global panic. Sci. Total Environ. 725, 138277 (2020).
27. Spaetgens, B. et al. Risk of infections in patients with gout: a population-based cohort study. Sci. Rep. 7(1), 1–9 (2017).
28. Beghi, E. & Shorvon, S. Antiepileptic drugs and the immune system. Epilepsia 52, 40–44 (2011).
29. Mancia, G., Rea, F., Ludergnani, M., Apolone, G. & Corrao, G. Renin-angiotensin-aldosterone system blockers and the risk of covid-19. N. Engl. J. Med. https://doi.org/10.1056/NEJMoa2006923 (2020).
30. The European Society of Cardiology. ESC Guidance for the Diagnosis and Management of CV Disease during the COVID-19 Pandemic (2020). www.escardio.org/static_file/Escardio/Education-General/Topic%20pages/Covid-19/ESC%20Guidance%20Document/ESC-Guidance-COVID-19-Pandemic.pdf. Accessed on 21 Apr 2020.
31. Mehta, N. et al. Association of use of angiotensin-converting enzyme inhibitors and angiotensin II receptor blockers with testing positive for coronavirus disease 2019 (COVID-19). JAMA Cardiol. 5(9), 1020–1026. https://doi.org/10.1001/jamacardio.2020.1855 (2020).
32. Lopes, R. D. et al. Effect of discontinuing vs continuing angiotensin-converting enzyme inhibitors and angiotensin II receptor blockers on days alive and out of the hospital in patients admitted with COVID-19: a randomized clinical trial. JAMA 325(3), 254–264. https://doi.org/10.1001/jama.2020.25864 (2021).
33. Chen, T. et al.
Clinical characteristics of 113 deceased patients with coronavirus disease 2019: retrospective study. BMJ 368, m1091 (2020).
34. Cai, H. Sex difference and smoking predisposition in patients with COVID-19. Lancet Respir. Med. 8(4), e20 (2020).
35. Sun, K., Chen, J. & Viboud, C. Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study. Lancet Digit. Health 2(4), e201–e208 (2020).
36. Riccardo, F. et al. Epidemiological characteristics of COVID-19 cases in Italy and estimates of the reproductive numbers one month into the epidemic. medRxiv https://doi.org/10.1101/2020.04.08.20056861 (2020).
37. Onder, G., Rezza, G. & Brusaferro, S. Case-fatality rate and characteristics of patients dying in relation to COVID-19 in Italy. JAMA https://doi.org/10.1001/jama.2020.4683 (2020).

## Funding

This study was supported by grants from the Italian Medicines Agency (AIFA), funding on pharmacovigilance of the Campania Region (Progetto per la valutazione e l'analisi della prescrizione farmaceutica in Regione Campania; Osservatorio sull'uso appropriato dei farmaci).

## Author information

### Contributions

V.O. and E.M. conceived the study. I.G. and S.M. conducted the study. V.O., E.M. and G.L. analysed the results and wrote the original draft. E.C., A.P. and U.T. reviewed the manuscript. All authors agreed with the final version of the manuscript.

### Corresponding authors

Correspondence to Valentina Orlando or Enrica Menditto.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Orlando, V., Coscioni, E., Guarino, I. et al. Drug-utilisation profiles and COVID-19. Sci Rep 11, 8913 (2021). https://doi.org/10.1038/s41598-021-88398-y
Tuesday, November 07, 2006

Eliminate the page number on the LaTeX title page

"LATEX will automatically number pages, but often you don't want a page number on the title page. To eliminate the page number on the first page, put \thispagestyle{empty} before the \maketitle."

This paragraph is copied from http://stuff.mit.edu/afs/sipb.mit.edu/project/www/latex/guide/node9.html. However, if you follow this guide, the page number will still be standing there; it never leaves with the wind. The correct way is to put \thispagestyle{empty} after the \maketitle. When using \chapter, the same rule should be obeyed.

By the way, don't use \thanks with \maketitle; there is a bug where LaTeX will produce an additional (mostly blank) page in the output.

1. Ah, finally! This error was bugging me so much. Thank you!
2. This was a lifesaver (or hair-saver) :-)
3. Thank you!!!
4. 3lectrologos (4/16/2010 5:03 AM): Thanks, really helped!
5. Didn't work for me :(
6. Thanks :-)
7. does not work, not sure why
--------
\begin{document}
\maketitle
\thispagestyle{empty}
%\pagenumbering{roman}
\frontmatter
\input{abstract.tex}
\input{acknowledgements.tex}
.
.
.
--------
8. "No commands to provide the information for the title page." Read the error message carefully.
9. I had to put it in front of \frontmatter:
--------
\thispagestyle{empty}
\frontmatter
--------
Thanks :)
10. This is just amazing! All the info online gave commands but not a precise way of actually deleting the annoying number. Appreciated!
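For concreteness, here is a minimal, compilable sketch of the fix described above (the title and author are placeholders):

```latex
\documentclass{article}
\title{Sample Title}
\author{A. Author}
\begin{document}
\maketitle
\thispagestyle{empty} % must come AFTER \maketitle, or the number survives
Body text starts here; page 2 onwards is numbered as usual.
\end{document}
```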
## Universal approximation power of deep residual neural networks via nonlinear control theory

### Paulo Tabuada · Bahman Gharesifard

Keywords: nonlinear control theory, deep residual neural networks, universal approximation

Abstract: In this paper, we explain the universal approximation capabilities of deep residual neural networks through geometric nonlinear control. Inspired by recent work establishing links between residual networks and control systems, we provide a general sufficient condition for a residual network to have the power of universal approximation by asking the activation function, or one of its derivatives, to satisfy a quadratic differential equation. Many activation functions used in practice satisfy this assumption, exactly or approximately, and we show this property to be sufficient for an adequately deep neural network with $n+1$ neurons per layer to approximate arbitrarily well, on a compact set and with respect to the supremum norm, any continuous function from $\mathbb{R}^n$ to $\mathbb{R}^n$. We further show this result to hold for very simple architectures for which the weights only need to assume two values. The first key technical contribution consists of relating the universal approximation problem to controllability of an ensemble of control systems corresponding to a residual network and to leverage classical Lie algebraic techniques to characterize controllability. The second technical contribution is to identify monotonicity as the bridge between controllability of finite ensembles and uniform approximability on compact sets.
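As a loose numerical illustration of the residual-network-as-control-system viewpoint (my sketch, not the paper's $n+1$-neuron construction; the width, depth, step size and tanh activation are arbitrary choices), each residual layer can be read as one explicit Euler step of $\dot{x} = \sigma(W(t)x + b(t))$:

```python
import numpy as np

def resnet_flow(x, weights, biases, h=0.1):
    """Iterate x <- x + h * tanh(W x + b): each residual layer is one
    Euler step of the control system x' = tanh(W(t) x + b(t))."""
    for W, b in zip(weights, biases):
        x = x + h * np.tanh(W @ x + b)
    return x

# Toy check in R^2 with random (untrained) layer parameters.
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((2, 2)) for _ in range(50)]
bs = [rng.standard_normal(2) for _ in range(50)]
print(resnet_flow(np.ones(2), Ws, bs))
```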
Problem on Ages: Method and examples

1. At present, ratio of Meena's age and Kamla's age is 3:5. At present, sum of Meena's age and Kamla's age is 80. 10 years hence, ratio of Meena's age and Kamla's age will be ?
2. At present, ratio of Meena's age and Kamla's age is 3:5. 5 years hence, sum of Meena's age and Kamla's age will be 90. 10 years hence, ratio of Meena's age and Kamla's age will be ?
3. At present, ratio of Father's age and Son's age is 6:1. 5 years hence, ratio of Father's age and Son's age will be 7:2. At present, Father's age and Son's age is ?
4. 5 years ago, ratio of A's age and B's age was 3:2. 5 years hence, ratio of A's age and B's age will be 5:4. At present, A's age and B's age is ?
5. 1 years ago, ratio of A's age and B's age was 4:5. 1 years hence, ratio of A's age and B's age will be 5:6. At present, A's age and B's age is ?
6. At present, Ram is 2 times as old as Shyam. 10 years ago, Ram was 4 times as old as Shyam. At present, Ram's age and Shyam's age is ?
7. 2 years ago, Father was 6 times as old as Son. 18 years hence, Father will be 2 times as old as Son. At present, Father's age and Son's age is ?
8. 10 years ago, Father was 3 times as old as Son. 10 years hence, Father will be 2 times as old as Son. At present, Father's age and Son's age is ?
9. 10 years ago, A was 4 times as old as B. 10 years hence, A will be 2 times as old as B. At present, A's age and B's age is ?
10. At present, A is 4 times as old as B. 5 years ago, A was 9 times as old as B. At present, A's age and B's age is ?
11. At present, difference of A's age and B's age is 20. 5 years ago, A was 5 times as old as B. At present, A's age and B's age is ?
12. At present, difference of A's age and B's age is 20. 5 years ago, ratio of A's age and B's age was 5:1. At present, A's age and B's age is ?
13. 1 years ago, difference of A's age and B's age was 20. 5 years ago, A was 5 times as old as B. At present, A's age and B's age is ?
14. At present, sum of Father's age and Son's age is 56. 4 years hence, Father will be 3 times as old as Son. At present, Father's age and Son's age is ?
15. At present, sum of Father's age and Son's age is 56. 4 years hence, ratio of Father's age and Son's age will be 3:1. At present, Father's age and Son's age is ?
16. At present, sum of A's age and B's age is 50. 5 years ago, A was 7 times as old as B. At present, A's age and B's age is ?
17. 1 years ago, sum of A's age and B's age was 48. 5 years ago, A was 7 times as old as B. At present, A's age and B's age is ?
18. At present, ratio of A's age and B's age is 3:11. At present, difference of A's age and B's age is 24. 3 years hence, ratio of A's age and B's age will be ?
19. At present, ratio of A's age and B's age is 3:11. 2 years hence, difference of A's age and B's age will be 24. 3 years hence, ratio of A's age and B's age will be ?
20. 10 years ago, A was 1/2 times as old as B. At present, ratio of A's age and B's age is 3:4. At present, A's age and B's age is ?
21. At present, ratio of A's age and B's age is 3:4. 10 years ago, A was 1/2 times as old as B. At present, A's age and B's age is ?
22. At present, ratio of A's age and B's age is 3:5. At present, sum of A's age and B's age is 80. 10 years hence, ratio of A's age and B's age will be ?
23. At present, A is 3/5 times as old as B. At present, sum of A's age and B's age is 80. 10 years hence, ratio of A's age and B's age will be ?
24. At present, ratio of A's age and B's age is 3:11.
At present, difference of A's age and B's age is 24. 3 years hence, ratio of A's age and B's age will be ?
25. At present, A is 3/11 times as old as B. At present, difference of A's age and B's age is 24. 3 years hence, ratio of A's age and B's age will be ?
26. At present, A is 3/11 times as old as B. 2 years hence, difference of A's age and B's age will be 24. 3 years hence, ratio of A's age and B's age will be ?
27. At present, A is 3/11 times as old as B. 2 years hence, sum of A's age and B's age will be 46. 3 years hence, ratio of A's age and B's age will be ?
28. 1 years hence, sum of A's age and B's age will be 25. 1 years ago, difference of A's age and B's age was 5. At present, A's age and B's age is ?
29. 1 years ago, difference of A's age and B's age was 5. 1 years hence, sum of A's age and B's age will be 25. 6 years hence, ratio of A's age and B's age will be ?
30. At present, Father is 4 times as old as Ronit. 8 years hence, Father will be 5/2 times as old as Ronit. 16 years hence, Father will be how many times as old as Ronit?
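Each of these setups reduces to one or two linear equations. As a sketch (not part of the calculator), problem 1 can be checked with sympy like so:

```python
from sympy import symbols, solve

# Problem 1: Meena : Kamla = 3 : 5 at present, and their ages sum to 80.
k = symbols('k', positive=True)
mult = solve(3*k + 5*k - 80, k)[0]   # common ratio multiplier -> 10
meena, kamla = 3*mult, 5*mult        # present ages: 30 and 50
print(meena, kamla, (meena + 10) / (kamla + 10))  # ratio 10 years hence: 2/3
```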
# Difference of Two Proportions Hypothesis Test with Weighted Sample Data

I have survey data which contain respondents' answers to several questions. As the survey contained a disproportionate number of people from certain demographic groups, the survey results are weighted by race, sex and age. I have responses to the same questions for two years (e.g. 2016 and 2017), and am trying to find out if the proportion of people who responded "yes" to a particular question has fallen. That is, I have calculated the weighted agreement rate to the question in 2016 (p1) and the weighted agreement rate to the question in 2017 (p2), and am trying to see if p1 - p2 = 0.

The simple hypothesis test for a difference between two proportions is well known (described at https://onlinecourses.science.psu.edu/stat414/node/268). However, I am wondering:

1) Whether weighting the samples by demographic variables has changed the standard error, so that a more complicated hypothesis-test formula is required. If so, what is this formula?

2) Whether there are other methods, other than applying this possibly more complicated formula, to rigorously test for a difference between the two proportions. For instance, are there non-parametric hypothesis tests that can be used?

Of the existing packages, @ThomasLumley's library(survey) in R does the best job of accounting for calibration to demographics. Some combination of design declaration survey::svydesign(), estimation survey::svymean() and producing a difference survey::svycontrast() should get you there.
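If you want a quick sanity check outside R, one rough approach is to deflate each year's sample to its Kish effective size and run the usual two-proportion z-test on the weighted proportions. This is a back-of-the-envelope approximation (it ignores the stratification and calibration structure that svydesign models properly), and all the data below are simulated placeholders:

```python
import numpy as np

def weighted_prop(y, w):
    """Weighted proportion and an approximate SE using the Kish
    effective sample size n_eff = (sum w)^2 / sum(w^2)."""
    p = np.sum(w * y) / np.sum(w)
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)
    return p, np.sqrt(p * (1 - p) / n_eff)

rng = np.random.default_rng(0)
y16, w16 = rng.integers(0, 2, 500), rng.uniform(0.5, 2.0, 500)  # 2016: yes/no, weights
y17, w17 = rng.integers(0, 2, 500), rng.uniform(0.5, 2.0, 500)  # 2017: yes/no, weights

p1, se1 = weighted_prop(y16, w16)
p2, se2 = weighted_prop(y17, w17)
z = (p1 - p2) / np.hypot(se1, se2)
print(p1, p2, z)  # compare |z| with 1.96 for a two-sided 5% test
```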
# Maxwell's equations in curved space-time 1. Apr 1, 2013 ### ash64449 what changes will take place in maxwell's equations if the space-time was curved? 2. Apr 1, 2013 ### Staff: Mentor The local equations stay the same, as far as I know (locally, space always looks like special relativity). The integrated versions can get problematic. 3. Apr 1, 2013 ### ash64449 Ok. What change can arrive is space was like dynamical,like in General Theory of relativity?.. 4. Apr 1, 2013 ### Staff: Mentor I don't understand that question. The integrated versions require a global time where the integrals are evaluated, that is not given in GR. Actually, I am not even sure if they are true in special relativity, as local changes need some time to propagate. 5. Apr 1, 2013 ### Staff: Mentor All laws of physics (expressed in their differential form) have basically the same changes. Express them in terms of tensors and replace any derivatives with covariant derivatives. That is it. EDIT: this is incorrect, see WannabeNewton's correction below. Last edited: Apr 1, 2013 6. Apr 1, 2013 ### ash64449 Well,yes friend. But i have heard that by adding 5th dimension,Electromagnetic phenomena and gravity was unified. But it said that this theory had some inconsistencies because According to GR,Space is dynamical.i.e It can curve. But Maxwell's equation is only consistent if space is flat.So i heard that they have to freeze the tensor to make it consistent. That is the reason why i asked. Why maxwell's equation break out if space is curved? i wanted to know that. That is why i started the discussion 7. Apr 1, 2013 ### WannabeNewton Let's be careful here. Maxwell's equations are a perfect example of why naively replacing $\partial _{a}\rightarrow \nabla_{a}$ doesn't always work. Consider the following form of maxwell's equations in flat spacetime: $\partial ^{a}F_{ab} = -4\pi j_{b},\partial _{[a}F_{bc]} = 0$. If we make the replacement of partials with covariant derivatives in order to make the equations covariant in curved space-time then we simply get $\nabla ^{a}F_{ab} = -4\pi j_{b},\nabla _{[a}F_{bc]} = 0$. One can show (if you want me to I can show you the calculation right here on this thread) that the inhomogenous Maxwell equations do indeed give us local charge conservation i.e. $\nabla ^{a}F_{ab} = -4\pi j_{b}\Rightarrow \nabla^{a}j_{a} = 0$. Consider now the fact that the poincare lemma allows us to claim that locally there exists a one-form $A_{a}$ such that $F_{ab} = \partial _{a}A_{b} - \partial _{b}A_{a}$ (of course $A_{a}$ is just the 4-potential). Writing the inhomogenous Maxwell equations in flat space-time in terms of the 4-potential, after fixing the Lorenz gauge, we simply have $\partial ^{a}\partial _{a}A_{b} = -4\pi j_{b}$. If we now naively replace the partials with covariant derivatives then we find that $\nabla^{a}\nabla_{a}A_{b} = -4\pi j_{b}$ but this will not give us local charge conservation. By simply replacing partials with covariant derivatives we overlooked the facts that covariant derivatives do NOT commute and that their commutator gives us a curvature term. In fact, the correct form should be $\nabla^{a}\nabla_{a}A_{b} - R^{c}{}{}_{b}A_{c} = -4\pi j_{b}$. Again you can show with a calculation that this does in fact imply $\nabla^{a}j_{a} = 0$. 8. Apr 1, 2013 ### Staff: Mentor Interesting. I was not aware of your good counter-example. I will have to not say that in the future. 9. 
Apr 1, 2013 ### WannabeNewton As far as I know, it almost always works, but there are examples where it doesn't. It is sort of sad that it doesn't work for the 4-potential form of Maxwell's equations, in the Lorenz gauge, when you think about it, considering they are the most awesome equations known to man (yes, I am a Maxwell fanboy). 10. Apr 1, 2013 ### Staff: Mentor I guess that you could take the approach that the examples where it doesn't work are not "laws of nature", but in the counter-example you provided I don't like that approach. 11. Apr 1, 2013 ### dextercioby Actually, field equations always follow from variational principles. So write the variational principle for electromagnetism in curved space-time and find what you're supposed to find. See the 75-page brochure by Dirac. 12. Apr 2, 2013 ### TrickyDicky I remember a thread where I was trying to make a distinction between equations in tensorial form (tensor field equations like the EFE, for instance) and equations in differential form (covector field equations like the Maxwell equations discussed in this thread). I was told by Dalespam and others that since vectors are tensors there was no distinction wrt general coordinate transformation invariance. (And a quite silly exchange about this followed, IIRC.) My point was precisely that one can't in general replace partial derivatives with covariant derivatives, but I didn't address it that way nor come up with any counter-example, so I'm not trying to vindicate my position here, just trying to see if I can understand this issue better. So my understanding from WBN's post is that to handle curvature and keep invariance under general (not only inertial) coordinate transformations, one needs a tensor field equation, not just a covector field equation that would only be invariant under inertial transformations; would this be right? I'm actually not completely sure this translates well to the pseudo-Riemannian case, since the minimal coupling principle states that "No terms explicitly containing the curvature tensor should be added in making the transition from SR to GR", as stated in D'Inverno (page 131). It also says this principle is vague and should be used with care; maybe it is referring to cases like the one WBN showed. 13. Apr 2, 2013 ### Bill_K Well, I think that's right. Often the same equation can be written in more than one way, e.g. by permuting derivatives, and this will cause the curvature to appear explicitly. In the example given by WannabeNewton, we're led to a wave equation for Aμ in the form ◻Aμ - Aν;μν = 0, and it's only because we want to impose the Lorenz condition that we need to permute the indices, bringing in the Ricci tensor term. 14. Apr 2, 2013 ### vanhees71 I think there is a subtle problem with the derivation of Maxwell's equations within GR, and the solution is to refer to Hamilton's principle at the end. I have to check it this evening when I'm at home and have my GR books at hand, but I think this is discussed in both Landau/Lifshitz Vol. II and Misner/Thorne/Wheeler. 15. Apr 2, 2013 ### WannabeNewton If we instead look at the differential forms version, $dF = 0, d^{\star }F = 4\pi^{\star }j$, and use Stokes' theorem $\int_{\Omega } d\omega = \int_{\partial \Omega} \omega$ on the second equation, we can show e.g. that Gauss's law still holds in GR. See the remarks under part (b) of problem 2 in Chapter 4 of Wald.
EDIT: if you wanna see how to get the differential forms version from the usual version, I did it on the homework section here a while back: https://www.physicsforums.com/showthread.php?t=674049 (it's just the exercise I referred you to in Wald). Last edited: Apr 2, 2013 16. Apr 2, 2013 ### atyy I think it's viable to say the examples where it doesn't work aren't "laws of nature". From one point of view, the EP applies only locally, so it fails for non-local laws, such as those that have double gradients. (If we like this version of the EP, then isn't it good that it fails for Maxwell's equations in Lorenz gauge?) OTOH, if you write the known fields of special relativity (say the standard model of particle physics) using an action, then only first derivatives of the fields appear, and no derivatives of the metric appear, so you can use the minimal coupling prescription. If we say EP = minimal coupling, then the EP is exact and never fails. Last edited: Apr 2, 2013 17. Apr 2, 2013 ### WannabeNewton If you wanted to be as natural as possible then going along with dexter, the field equations would invariably come out correctly when you vary the appropriate action for the field theory. If we are talking about a Klein Gordon scalar field then $\mathcal{L}_{KG} = -\frac{1}{2}\sqrt{-g}(g^{ab}\nabla_{a}\varphi \nabla_{b} \varphi + m^{2}\varphi^{2})$ will naturally gives us $\nabla^{a}\nabla_{a}\varphi - m^2\varphi = 0$. Similarly $\mathcal{L}_{EM} = -\sqrt{-g}g^{ac}g^{bd}\nabla_{[a}A_{b]}\nabla_{[c}A_{d]}$ will give us the correct field equations of electromagnetism on curved space-time. We can simply use a variational principle for the respective field theory on curved space-time and avoid the whole minimal coupling thing. 18. Apr 2, 2013 ### Ben Niehoff What I think you really have is an example of how index notation is cumbersome and can lead the unwary astray... In differential forms notation, one has $$dF = 0, \qquad d \star F = \star J, \qquad F = dA$$ and hence $$\star d \star dA = -J$$ Now, the Lorentz gauge condition in curved space is $$d \star A = 0$$ Noting that the Laplace-Beltrami operator is $$\Delta = \star d \star d + d \star d \star$$ we can, after imposing the Lorentz condition, write the wave equation $$\Delta A = - J$$ But for comparison with index notation, we should write out what the gauge condition is: $$(d \star A)_{abcd} = 4 \nabla_{[a} (\sqrt{-g} \frac{1}{3!} \varepsilon_{bcd]e} g^{ef} A_f) = \sqrt{-g} \varepsilon_{abcd} \nabla_e A^e = 0$$ And then in this gauge, the wave operator is $$- (\star \Delta A)_{abc} = (d \star dA)_{abc} = 3 \nabla_{[a} (\sqrt{-g} \frac{1}{2! 2!} \varepsilon_{bc]de} g^{dm} g^{en} \nabla_{[m} A_{n]}) = \sqrt{-g} \varepsilon_{abcd} \nabla_e \nabla^{[d} A^{e]}$$ Up to factors of -1 and maybe other constants because I'm being sloppy. We have already imposed the gauge condition, but if you were to write all of this out, there would be yet another term that you can cancel using $\nabla_a A^a = 0$. However, you have to commute two covariant derivatives to get there, hence the appearance of the Ricci tensor. But what's really happening is this: It is still true that imposing the Lorentz gauge condition leaves you with the wave equation for A. The problem is assuming that the wave operator for vector fields is just $\nabla_a \nabla^a$. It clearly isn't. 
In differential form notation, the wave operator is always $\star d \star d + d \star d \star$ (up to an overall factor of -1 which depends on both the number of dimensions and the signature of the metric). The catch is that when this operator acts on fields with indices, you will get factors of the Ricci tensor. So the problem is nothing specifically to do with electromagnetic fields at all. It's in making unwarranted assumptions about the Laplace operator (i.e. wave operator). The substitution $\partial_a \partial^a \rightarrow \nabla_a \nabla^a$ only works on scalar fields. 19. Apr 2, 2013 ### WannabeNewton Sure, I agree there is nothing remotely deep here. I was just using the EM field as a specific example because making a mistake as simple as assuming $\nabla^{a}\nabla_{a}$ acts on $A_{b}$ like $\partial ^{a}\partial _{a}$ does in the flat space-time case can obstruct one from showing $\nabla^{a}j_{a} = 0$. With $\nabla^{a}F_{ab} = -4\pi j_{b}$ one manages to dodge that bullet whether one knew about the pitfalls of the notation or not, and $\nabla^{a}j_{a} = 0$ successfully comes out of the calculation. Ideally, one could just stick to $dF = 0, d^{\star }F = 4\pi ^{\star} j$ because physical results regarding the EM field can be much more elegantly derived in this form and one can avoid the cumbersome index business to boot. 20. Apr 2, 2013 ### atyy Wouldn't one fail to get covariant conservation of energy without minimal coupling? I've seen a claim like that in http://arxiv.org/abs/gr-qc/0505128 (Eq 11) and in http://arxiv.org/abs/0704.1733 .
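For reference, here is a sketch of the charge-conservation calculation WannabeNewton offers to show in post 7, reconstructed with Wald-style conventions (the signs of the curvature terms depend on convention). Starting from the inhomogeneous equation,

$$4\pi \nabla^{b} j_{b} = -\nabla^{b}\nabla^{a}F_{ab} = -\tfrac{1}{2}\,[\nabla^{b},\nabla^{a}]F_{ab},$$

where the second equality uses the antisymmetry of $F_{ab}$ (relabelling the dummy indices and swapping them on $F$ shows the symmetric part of the double derivative drops out). The commutator acting on a rank-2 tensor gives

$$[\nabla_{c},\nabla_{d}]F_{ab} = R_{cda}{}^{e}F_{eb} + R_{cdb}{}^{e}F_{ae},$$

and contracting $c$ with $b$ and $d$ with $a$ turns both terms into the Ricci tensor traced against $F$. Each such trace vanishes because $R_{ab} = R_{ba}$ while $F_{ab} = -F_{ba}$, so $\nabla^{a}j_{a} = 0$ follows identically.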
# Non-adiabatic Evolution of Primordial Perturbations and non-Gaussianity in Hybrid Approach of Loop Quantum Cosmology

Preprint

### Abstract

While loop quantum cosmology (LQC) predicts a robust quantum bounce of the background evolution of a Friedmann-Robertson-Walker (FRW) spacetime prior to the standard slow-roll inflation, whereby the big bang singularity is resolved, there are several different quantization procedures for cosmological perturbations, for instance the deformed algebra, dressed metric, and hybrid quantizations. This paper is devoted to studying the quantum bounce effects on primordial perturbations in the hybrid approach. The key difference in this approach is that the effective mass at the quantum bounce is positive when the evolution of the background is dominated by the kinetic energy of the inflaton field at the bounce, while this mass is always non-positive in the dressed metric approach. It is this positivity of the effective mass that violates the adiabatic evolution of primordial perturbations at the initial moments of the quantum bounce. Under the assumption that the evolution of the background is dominated by the kinetic energy of the inflaton at the bounce, we find that the effective potentials for both scalar and tensor perturbations can be well approximated by a Pöschl-Teller (PT) potential, which allows us to find analytical solutions of the perturbations, and from these analytical expressions we are able to study the non-adiabatic evolution of primordial perturbations in detail. In particular, we derive their quantum bounce effects and investigate their observational constraints. In addition, the impact of quantum bounce effects on non-Gaussianity and its implications for explanations of the observed power asymmetry in the CMB have also been explored.

### Most cited references

#### Loop Quantum Cosmology: A Status Report (2011)

The goal of this article is to provide an overview of the current state of the art in loop quantum cosmology for three sets of audiences: young researchers interested in entering this area; the quantum gravity community in general; and cosmologists who wish to apply loop quantum cosmology to probe modifications in the standard paradigm of the early universe. An effort has been made to streamline the material so that, as described at the end of section I, each of these communities can read only the sections they are most interested in, without a loss of continuity.

#### The pre-inflationary dynamics of loop quantum cosmology: Confronting quantum gravity with observations (2013)

Using techniques from loop quantum gravity, the standard theory of cosmological perturbations was recently generalized to encompass the Planck era. We now apply this framework to explore pre-inflationary dynamics. The framework enables us to isolate and resolve the true trans-Planckian difficulties, with interesting lessons both for theory and observations.
Specifically, for a large class of initial conditions at the bounce, we are led to a self-consistent extension of the inflationary paradigm over the 11 orders of magnitude in density and curvature, from the big bounce to the onset of slow roll. In addition, for a narrow window of initial conditions, there are departures from the standard paradigm, with novel effects ---such as a modification of the consistency relation between the ratio of the tensor to scalar power spectrum and the tensor spectral index, as well as a new source for non-Gaussianities--- which could extend the reach of cosmological observations to the deep Planck regime of the early universe.

#### Probability of Inflation in Loop Quantum Cosmology (2011)

Inflationary models of the early universe provide a natural mechanism for the formation of large scale structure. This success brings to the forefront the question of naturalness: Does a sufficiently long slow roll inflation occur generically or does it require a careful fine tuning of initial parameters? In recent years there has been considerable controversy on this issue. In particular, for a quadratic potential, Kofman, Linde and Mukhanov have argued that the probability of inflation with at least 65 e-foldings is close to one, while Gibbons and Turok have argued that this probability is suppressed by a factor of ~ $10^{-85}$. We first clarify that such dramatically different predictions can arise because the required measure on the space of solutions is intrinsically ambiguous in general relativity. We then show that this ambiguity can be naturally resolved in loop quantum cosmology (LQC) because the big bang is replaced by a big bounce and the bounce surface can be used to introduce the structure necessary to specify a satisfactory measure. The second goal of the paper is to present a detailed analysis of the inflationary dynamics of LQC using analytical and numerical methods. By combining this information with the measure on the space of solutions, we address a sharper question than those investigated in the literature: What is the probability of a sufficiently long slow roll inflation WHICH IS COMPATIBLE WITH THE SEVEN YEAR WMAP DATA? We show that the probability is very close to 1. The material is so organized that cosmologists who may be more interested in the inflationary dynamics in LQC than in the subtleties associated with measures can skip that material without loss of continuity.

### Author and article information

10 September 2018, arXiv:1809.03172
## CaitAlbe 2 years ago

Show the work to solve |5x – 2| ≥ 8.

1. ilikephysics2: i did this in middle school, hang on let me think
2. CaitAlbe: This is algebra 2.
3. CaitAlbe: Absolute value inequalities
4. ilikephysics2: x < or equal to 6/5 and then x > or equal to 10
5. ilikephysics2: i know i did this years ago
6. ilikephysics2: try what i just gave you im sure thats correct
7. ilikephysics2: -6/5 > or less than x
8. CaitAlbe:
   5x - 2 > 8    5x - 2 > -8
     +2  +2        +2  +2
   5x > 10       5x > -10
9. CaitAlbe: I dont know what to do after the last line
10. Desert_lover12: $\left| 5x-2 \right| \ge 8$ splits into two cases:
    5x - 2 ≥ 8    or    5x - 2 ≤ -8
      +2  +2              +2  +2
    5x ≥ 10             5x ≤ -6
    /5  /5              /5  /5
    x ≥ 2        or     x ≤ -6/5 (i.e. -1 1/5)
11. Desert_lover12: @CaitAlbe, -8 + 2 = -6. You forgot the negative.
12. Desert_lover12: Sorry, also note that for the "≤ -8" case the inequality flips, which is why the answer is x ≤ -6/5 or x ≥ 2, not an equality.
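To double-check the corrected solution set, here is a quick numeric spot-check in Python (a sketch; the test points are chosen just around the boundary values -6/5 and 2):

```python
# |5x - 2| >= 8 should hold exactly when x <= -6/5 or x >= 2.
def satisfies(x):
    return abs(5 * x - 2) >= 8

for x in [-2, -1.2, -1.19, 0, 1.99, 2, 3]:
    assert satisfies(x) == (x <= -6/5 or x >= 2)
```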
# How Implication or Material/Concrete Conditional works when the antecedent is false and the consequent is true

When we talk about formal logic, we generally agree that $$P$$ and $$Q$$ are deductively valid propositions. This is supported by the reference from Rutgers Uni., page no. 2 (PDF doc).

And further, when we talk about the implication or material/concrete conditional $$P\to Q$$, we assume: "if $$P$$ is true, then $$Q$$ is also true", or more generally in formal logic we assume: if premises are true then conclusions are also true, because we are dealing with deductively valid propositions.

But when we draw the truth-table of $$P\to Q$$ we come across one paradox. That is: when $$P$$ is false and $$Q$$ is true, $$P\to Q$$ is still true! Why so? How can the premise (deductively valid proposition or antecedent) be false, the conclusion (deductively valid proposition or consequent) be true, and their implication still be TRUE?

Reading through the explanations given on the question "In classical logic, why is $$(p\Rightarrow q)$$ True if $$p$$ is False and $$q$$ is True?", it becomes a bit clearer from the given examples that: when $$P$$ is false and $$Q$$ is true, $$P\to Q$$ is true in some situations. Out of the given examples I am selecting the example given by Jai [3rd top answer on that page], which in fact comes from this webpage:

"If you get an A, then I'll give you a dollar." What if it's false that you get an A? Whether or not I give you a dollar, I haven't broken my promise.

Does this mean that if I get an A+ or B-, or I was ill for the exam, I will still get a dollar as promised? Because I thought propositions are rigid and their use in a conditional should be considered as-is, without changing their meaning, so what is intended by the author of the given example? Here, it is very clear that the author of the example suggests that "I'll give you a dollar" is a necessary condition for "Get an A", and "Get an A" is a sufficient condition for "I'll give you a dollar".

Is there any further explanation? Does the premise (antecedent) require further support, or can it be overlooked or ignored, in order for the conclusion (consequent) to be true when the antecedent is false and the consequent is true?

• This has a long history (Aristotle): in the Middle Ages students had to learn: "ex falso quodlibet sequitur": out of a false premissa you can deduce whatever you want... – Jean Marie Feb 3 '19 at 15:38
• Sorry. A copy/paste error; the similar question was math.stackexchange.com/q/1583209 – Jean Marie Feb 3 '19 at 18:17
• @JeanMarie "ex falso quodlibet sequitur" is not valid for this situation. Here, P->Q are just propositions; they are not premise and conclusion. Hence, they are not proving something. – Ubi hatt Feb 4 '19 at 5:49
• You are right. Reference to Aristotelian logic is important for didactic purposes, but can be a little misleading when we go into the details. – Jean Marie Feb 4 '19 at 7:46
• I erase my part of the last comments that aren't useful for others! – Jean Marie Feb 4 '19 at 11:06

You seem to be rather confused .... You write:

When we talk about formal logic, we generally agree that $$P$$ and $$Q$$ are deductively valid arguments (propositions).

No, we don't agree on that at all. $$P$$ and $$Q$$ are statements, or propositions, or claims. But they are not arguments!
You also write:

And further when we talk about implication or material/concrete conditional $$P\to Q$$ we assume: "if $$P$$ is true, then $$Q$$ is also true" or more generally in formal logic we assume: if premises are true then conclusions are also true because we are dealing with deductively valid propositions.

In a conditional $$P \to Q$$, the antecedent is $$P$$ and the consequent is $$Q$$. But the antecedent is not a premise, and the consequent is not a conclusion. $$P \to Q$$ is just a statement, not an argument. And if there is no argument, then there are no premises and no conclusion either.

But when we draw truth-table of $$P\to Q$$ we come across one paradox. That is, When $$P$$ is false, and $$Q$$ is true, $$P\to Q$$ is still true! Why so? How can premise (deductively valid proposition or antecedent) be false with a conclusion (deductively valid proposition or consequent) be true, and their implication be still TRUE!

Again, we are not dealing with any premises or conclusions here, just propositions. And the propositions themselves need not be valid at all.

But, in answer to your question why $$P \to Q$$ is true when $$P$$ is false and $$Q$$ is true: I always like this explanation for why $$F \to T$$ should be set to True:

Consider the statement $$(P \land Q) \to Q$$. Now, that of course should always be true, since $$Q$$ logically follows from $$P \land Q$$. OK, so now plug in $$P=F$$ and $$Q=T$$. Then you get:

$$(P \land Q) \to Q = (F \land T) \to T = F \to T$$

But like we said, $$(P \land Q) \to Q$$ should always be true, and so $$F \to T=T$$

"If you get an A, then I'll give you a dollar." What if it's false that you get an A? Whether or not I give you a dollar, I haven't broken my promise. Does this mean that if I get A+ or B- or I was ill for the exam, I will still get a dollar as promised?

No, it does not mean that. If you get an A+ or B-, you may still get a dollar, but you may not. What is true, however, is that whether you get a dollar or not, the promise has not been broken, and hence "if you get an A, then you get a dollar" is considered true. In short, the whole conditional is true as soon as its antecedent is false; in particular, if the antecedent is false and the consequent is true, the conditional is true. But given the truth of the conditional, if its antecedent is false, that does not mean its consequent is true.

The premise (antecedent) requires further support or can be overlooked or ignored in order for the conclusion (consequent) to be true when the antecedent is false and the consequent is true?

Again, antecedent $$\not =$$ premise, and consequent $$\not =$$ conclusion! Also, if you know the antecedent is false, in what sense would it be overlooked? And if the consequent is true, then why would we need to seek further support for it to be true?

I think you are asking one of two questions here (or maybe both):

Why is $$P \to Q$$ true as soon as $$P$$ is false, regardless of the value of $$Q$$?

Why is $$P \to Q$$ true as soon as $$Q$$ is true, regardless of the value of $$P$$?

Well, the previous example can be extended to cover both the $$P=Q=T$$ and $$P=Q=F$$ cases as well.
When $$P$$ and $$Q$$ are both true:

$$(P \land Q) \to Q = (T \land T) \to T = T \to T$$

And again, since $$(P \land Q) \to Q$$ should always be true, we get $$T \to T=T$$

Likewise, when they are both false:

$$(P \land Q) \to Q = (F \land F) \to F = F \to F$$

And yet again, since $$(P \land Q) \to Q$$ should always be true, we get $$F \to F=T$$

OK, to sum up: we get $$F \to T=T$$, $$T \to T=T$$, and $$F \to F=T$$

The first two show that $$P \to Q$$ should be true whenever $$Q$$ is true, no matter the value of $$P$$. The first and the third show that $$P \to Q$$ should be true whenever $$P$$ is false, no matter the value of $$Q$$.

• I appreciate your effort, but the previous answers as well as yours fail to explain why the antecedent is so insignificant in an implication that the truth value in this scenario depends only on the consequent. – Ubi hatt Feb 3 '19 at 17:55
• The scenario about getting an A and getting a dollar is supposed to illustrate that if the antecedent is false, the conditional is true, regardless of the value of the consequent. You are now asking a totally different question: why the conditional is true when the consequent is true, regardless of the value of the antecedent. So ... what question is it you want an answer to? Please make that clear in your original post. – Bram28 Feb 3 '19 at 18:03
• @EVG I extended my answer to show that the conditional should be true whenever the consequent is true, regardless of the value of the antecedent. – Bram28 Feb 3 '19 at 18:08
• I have updated my answer a little bit more. – Ubi hatt Feb 3 '19 at 18:21
• @EVG I see two questions in your post: 1) why is $P \to Q$ true when $P$ is false and $Q$ is true? 2) why is $P \to Q$ true when $P$ is false, regardless of the value of $Q$? Neither of these is the question you asked in your previous comment: 3) why is $P \to Q$ true when $Q$ is true, regardless of the value of $P$? If you want that question answered as well, you'll need to add it to your post. In the meanwhile, I'll update my answer to show how my method can answer all three questions ... – Bram28 Feb 3 '19 at 19:12

Classical logic can only be applied to propositions that are unambiguously either true or false at the moment. It has nothing to do with cause and effect, or the passage of time. It deals with things that are true, not things that will be true. I don't think your example fits into that mold.

In any case, for propositions $$A$$ and $$B$$, we can prove that $$A \implies B$$ follows from $$\neg A$$. We can prove that $$\neg A \implies (A \implies B)$$ is a tautology using a truth table. The truth table, however, is based on the standard definition:

$$A\implies B \space\equiv\space \neg (A \land \neg B)$$

Both, however, can be derived using only the following rules of natural deduction:

1. Conditional proof ($$\implies$$ intro)
2. Proof by contradiction ($$\neg$$ intro)
3. Joining a pair of statements using conjunction ($$\land$$ intro)
4. Splitting up a conjunction into a pair of statements ($$\land$$ elim)
5. Detachment ($$\implies$$ elim)
6. Removing double negation ($$\neg\neg$$ elim)

EDIT: See my blog posting on this topic.
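For readers who prefer to see the claim mechanically, here is a short Python sketch that enumerates the truth table; `implies` is the material conditional "not P or Q" from the definition above:

```python
def implies(p, q):
    # Material conditional: "P -> Q" is defined as "not P or Q".
    return (not p) or q

print("P      Q      (P and Q) -> Q   P -> Q")
for P in (True, False):
    for Q in (True, False):
        print(P, Q, implies(P and Q, Q), implies(P, Q))
# (P and Q) -> Q prints True in every row, which is the fact the
# argument above leans on when reading off F -> T = T, etc.
```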
This is one of the 100+ free recipes of the IPython Cookbook, Second Edition, by Cyrille Rossant, a guide to numerical computing and data science in the Jupyter Notebook. The ebook and printed book are available for purchase at Packt Publishing.

▶  Text on GitHub with a CC-BY-NC-ND license
▶  Code on GitHub with a MIT license

Ordinary Differential Equations (ODEs) describe the evolution of a system subject to internal and external dynamics. Specifically, an ODE links a quantity depending on a single independent variable (time, for example) to its derivatives. In addition, the system can be under the influence of external factors. A first-order ODE can typically be written as:

$$y'(t)=f(t,y(t))$$

More generally, an $$n$$-th order ODE involves successive derivatives of $$y$$ up to the order $$n$$. The ODE is said to be linear or nonlinear depending on whether $$f$$ is linear in $$y$$ or not.

ODEs naturally appear when the rate of change of a quantity depends on its value. Therefore, ODEs are found in many scientific disciplines such as mechanics (evolution of a body subject to dynamic forces), chemistry (concentration of reacting products), biology (spread of an epidemic), ecology (growth of a population), economics, and finance, among others.

Whereas simple ODEs can be solved analytically, many ODEs require a numerical treatment. In this recipe, we will simulate a simple linear second-order autonomous ODE, describing the evolution of a particle in the air subject to gravity and viscous resistance. Although this equation could be solved analytically, here we will use SciPy to simulate it numerically.

## How to do it...

1.  Let's import NumPy, SciPy (the integrate package), and matplotlib:

```python
import numpy as np
import scipy.integrate as spi
import matplotlib.pyplot as plt
%matplotlib inline
```

2.  We define a few parameters appearing in our model:

```python
m = 1.  # particle's mass
k = 1.  # drag coefficient
g = 9.81  # gravity acceleration
```

3.  We have two variables: $$x$$ and $$y$$ (two dimensions). We write $$u=(x,y)$$. The ODE that we are going to simulate is:

$$u'' = -\frac{k}{m} u' + g$$

Here, $$g$$ is the gravity acceleration vector. In order to simulate this second-order ODE with SciPy, we can convert it to a first-order ODE (another option would be to solve $$u'$$ first before integrating the solution). To do this, we consider two 2D variables: $$u$$ and $$u'$$. We write $$v = (u, u')$$. We can express $$v'$$ as a function of $$v$$. Now, we create the initial vector $$v_0$$ at time $$t=0$$: it has four components.

```python
# The initial position is (0, 0).
v0 = np.zeros(4)
# The initial speed vector is oriented
# to the top right.
v0[2] = 4.
v0[3] = 10.
```

4.  Let's create a Python function $$f$$ that takes the current vector $$v(t_0)$$ and a time $$t_0$$ as arguments (with optional parameters) and that returns the derivative $$v'(t_0)$$:

```python
def f(v, t0, k):
    # v has four components: v=[u, u'].
    u, udot = v[:2], v[2:]
    # We compute the second derivative u'' of u.
    udotdot = -k / m * udot
    udotdot[1] -= g
    # We return v'=[u', u''].
    return np.r_[udot, udotdot]
```

5.  Now, we simulate the system for different values of $$k$$. We use the SciPy odeint() function, defined in the scipy.integrate package. Starting with SciPy 1.0, the generic scipy.integrate.solve_ivp() function can be used instead of the older odeint():

```python
fig, ax = plt.subplots(1, 1, figsize=(8, 4))

# We want to evaluate the system on 30 linearly
# spaced times between t=0 and t=3.
t = np.linspace(0., 3., 30)

# We simulate the system for different values of k.
for k in np.linspace(0., 1., 5):
    # We simulate the system and evaluate $v$ on the
    # given times.
    v = spi.odeint(f, v0, t, args=(k,))
    # We plot the particle's trajectory.
    ax.plot(v[:, 0], v[:, 1], 'o-', mew=1, ms=8,
            mec='w', label=f'k={k:.1f}')
ax.legend()
ax.set_xlim(0, 12)
ax.set_ylim(0, 6)
```

In the preceding figure, the outermost trajectory (blue) corresponds to drag-free motion (without air resistance). It is a parabola. In the other trajectories, we can observe the increasing effect of air resistance, parameterized with $$k$$.

## How it works...

Let's explain how we obtained the differential equation from our model. Let $$u = (x,y)$$ encode the 2D position of our particle with mass $$m$$. This particle is subject to two forces: gravity $$mg = (0, -9.81 \cdot m)$$ and air drag $$F = -ku'$$. This last term depends on the particle's speed and is only valid at low speed. At higher speeds, we need to use more complex nonlinear expressions.

Now, we use Newton's second law of motion in classical mechanics. This law states that, in an inertial reference frame, the mass multiplied by the acceleration of the particle is equal to the sum of all forces applied to that particle. Here, we obtain:

$$m \cdot u'' = F + mg$$

We immediately obtain our second-order ODE:

$$u'' = -\frac{k}{m} u' + g$$

We transform it into a first-order system of ODEs, with $$v=(u, u')$$:

$$v' = (u', u'') = (u', -\frac{k}{m} u' + g)$$

The last term can be expressed as a function of $$v$$ only.

The SciPy odeint() function is a black-box solver; we simply specify the function that describes the system, and SciPy solves it automatically. This function leverages the FORTRAN library ODEPACK, which contains well-tested code that has been used for decades by many scientists and engineers. The newer solve_ivp() function offers a common API for Python implementations of various ODE solvers.

An example of a simple numerical solver is the Euler method. To numerically solve the autonomous ODE $$y'=f(y)$$, the method consists of discretizing time with a time step $$dt$$ and replacing $$y'$$ with a first-order approximation:

$$y'(t) \simeq \frac{y(t+dt)-y(t)}{dt}$$

Then, starting from an initial condition $$y_0 = y(t_0)$$, the method evaluates $$y$$ successively with the following recurrence relation:

$$y_{n+1} = y_n + dt \cdot f(y_n) \qquad \textrm{with} \quad t = n \cdot dt, \quad y_n = y(n \cdot dt)$$
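As a complement, here is a minimal sketch of the Euler method applied to the same system, reusing the `f` function defined above (the step size `dt = 0.01` and the value `k = 0.5` are arbitrary choices for illustration):

```python
def euler(f, v0, t_max, dt, *args):
    """Integrate v' = f(v, t, *args) with fixed-step forward Euler."""
    t, v = 0., v0.copy()
    vs = [v.copy()]
    while t < t_max:
        v = v + dt * f(v, t, *args)  # the recurrence y_{n+1} = y_n + dt*f(y_n)
        t += dt
        vs.append(v)
    return np.array(vs)

# Trajectory for k = 0.5; shrinking dt brings it closer to odeint's result.
v_euler = euler(f, v0, 3., 0.01, 0.5)
```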
## Python for NLP: Creating Bag of Words Model from Scratch

This is the 13th article in my series of articles on Python for NLP. In the previous article, we saw how to create a simple rule-based chatbot that uses cosine similarity between the TF-IDF vectors of the words in the corpus and the user input, to generate a response. The TF-IDF model was basically used to convert words to numbers.

In this article, we will study another very useful model that converts text to numbers, i.e. the Bag of Words (BOW). Since most statistical algorithms, e.g. machine learning and deep learning techniques, work with numeric data, we have to convert text into numbers. Several approaches exist in this regard. However, the most famous ones are Bag of Words, TF-IDF, and word2vec. Though several libraries exist, such as Scikit-Learn and NLTK, which can implement these techniques in one line of code, it is important to understand the working principle behind these word embedding techniques. The best way to do so is to implement these techniques from scratch in Python and this is what we are going to do today.

In this article, we will see how to implement the Bag of Words approach from scratch in Python. In the next article, we will see how to implement the TF-IDF approach from scratch in Python.

Before coding, let's first see the theory behind the bag of words approach.

### Theory Behind Bag of Words Approach

To understand the bag of words approach, let's first start with the help of an example. Suppose we have a corpus with three sentences:

• "I like to play football"
• "Did you go outside to play tennis"
• "John and I play tennis"

Now if we have to perform text classification, or any other task, on the above data using statistical techniques, we cannot do so, since statistical techniques work only with numbers. Therefore we need to convert these sentences into numbers.

#### Step 1: Tokenize the Sentences

The first step in this regard is to convert the sentences in our corpus into tokens or individual words. Look at the table below:

| Sentence 1 | Sentence 2 | Sentence 3 |
|------------|------------|------------|
| I          | Did        | John       |
| like       | you        | and        |
| to         | go         | I          |
| play       | outside    | play       |
| football   | to         | tennis     |
|            | play       |            |
|            | tennis     |            |

#### Step 2: Create a Dictionary of Word Frequency

The next step is to create a dictionary that contains all the words in our corpus as keys and the frequency of the occurrence of the words as values. In other words, we need to create a histogram of the words in our corpus. Look at the following table:

| Word     | Frequency |
|----------|-----------|
| I        | 2         |
| like     | 1         |
| to       | 2         |
| play     | 3         |
| football | 1         |
| Did      | 1         |
| you      | 1         |
| go       | 1         |
| outside  | 1         |
| tennis   | 2         |
| John     | 1         |
| and      | 1         |

In the table above, you can see each word in our corpus along with its frequency of occurrence. For instance, you can see that since the word play occurs three times in the corpus (once in each sentence) its frequency is 3.

In our corpus, we only had three sentences, therefore it is easy for us to create a dictionary that contains all the words. In real-world scenarios, there will be millions of words in the dictionary. Some of the words will have a very small frequency. The words with very small frequency are not very useful, hence such words are removed. One way to remove the words with less frequency is to sort the word frequency dictionary in decreasing order of frequency and then filter the words having a frequency higher than a certain threshold.
Let's sort our word frequency dictionary:

| Word     | Frequency |
|----------|-----------|
| play     | 3         |
| tennis   | 2         |
| to       | 2         |
| I        | 2         |
| football | 1         |
| Did      | 1         |
| you      | 1         |
| go       | 1         |
| outside  | 1         |
| like     | 1         |
| John     | 1         |
| and      | 1         |

#### Step 3: Creating the Bag of Words Model

To create the bag of words model, we need to create a matrix where the columns correspond to the most frequent words in our dictionary and the rows correspond to the documents or sentences. Suppose we filter the 8 most frequently occurring words from our dictionary. Then the document frequency matrix will look like this:

|            | Play | Tennis | To | I | Football | Did | You | go |
|------------|------|--------|----|---|----------|-----|-----|----|
| Sentence 1 | 1    | 0      | 1  | 1 | 1        | 0   | 0   | 0  |
| Sentence 2 | 1    | 1      | 1  | 0 | 0        | 1   | 1   | 1  |
| Sentence 3 | 1    | 1      | 0  | 1 | 0        | 0   | 0   | 0  |

It is important to understand how the above matrix is created. In the above matrix, the first row corresponds to the first sentence. In the first sentence, the word "play" occurs once, therefore we added 1 in the first column. The word in the second column is "Tennis"; it doesn't occur in the first sentence, therefore we added a 0 in the second column for sentence 1. Similarly, in the second sentence, both the words "Play" and "Tennis" occur once, therefore we added 1 in the first two columns. However, in the fifth column, we add a 0, since the word "Football" doesn't occur in the second sentence. In this way, all the cells in the above matrix are filled with either 0 or 1, depending upon the occurrence of the word. The final matrix corresponds to the bag of words model.

In each row, you can see the numeric representation of the corresponding sentence. For instance, the first row shows the numeric representation of Sentence 1. This numeric representation can now be used as input to the statistical models.

Enough of the theory, let's implement our very own bag of words model from scratch.

### Bag of Words Model in Python

The first thing we need to create our Bag of Words model is a dataset. In the previous section, we manually created a bag of words model with three sentences. However, real-world datasets are huge, with millions of words. The best way to find a random corpus is Wikipedia.

In the first step, we will scrape the Wikipedia article on Natural Language Processing. But first, let's import the required libraries:

```python
import nltk
import numpy as np
import random
import string

import bs4 as bs
import urllib.request
import re
```

As we did in the previous article, we will be using the Beautifulsoup4 library to parse the data from Wikipedia. Furthermore, Python's regex library, re, will be used for some preprocessing tasks on the text.

Next, we need to scrape the Wikipedia article on natural language processing.

```python
raw_html = urllib.request.urlopen('https://en.wikipedia.org/wiki/Natural_language_processing')
raw_html = raw_html.read()

article_html = bs.BeautifulSoup(raw_html, 'lxml')

article_paragraphs = article_html.find_all('p')

article_text = ''

for para in article_paragraphs:
    article_text += para.text
```

In the script above, we import the raw HTML for the Wikipedia article. From the raw HTML, we filter the text within the paragraph tags. Finally, we create a complete corpus by concatenating all the paragraphs.

The next step is to split the corpus into individual sentences. To do so, we will use the sent_tokenize function from the NLTK library.

```python
corpus = nltk.sent_tokenize(article_text)
```

Our text contains punctuation. We don't want punctuation to be part of our word frequency dictionary. In the following script, we first convert our text into lower case and then remove the punctuation from our text. Removing punctuation can result in multiple empty spaces.
We will remove the empty spaces from the text using regex. Look at the following script:

```python
for i in range(len(corpus)):
    corpus[i] = corpus[i].lower()
    corpus[i] = re.sub(r'\W', ' ', corpus[i])
    corpus[i] = re.sub(r'\s+', ' ', corpus[i])
```

In the script above, we iterate through each sentence in the corpus, convert the sentence to lower case, and then remove the punctuation and empty spaces from the text.

Let's find out the number of sentences in our corpus.

```python
print(len(corpus))
```

The output shows 49.

Let's print one sentence from our corpus:

```python
print(corpus[30])
```

Output:

in the 2010s representation learning and deep neural network style machine learning methods became widespread in natural language processing due in part to a flurry of results showing that such techniques 4 5 can achieve state of the art results in many natural language tasks for example in language modeling 6 parsing 7 8 and many others

You can see that the text doesn't contain any special characters or multiple empty spaces.

Now we have our own corpus. The next step is to tokenize the sentences in the corpus and create a dictionary that contains words and their corresponding frequencies in the corpus. Look at the following script:

```python
wordfreq = {}
for sentence in corpus:
    tokens = nltk.word_tokenize(sentence)
    for token in tokens:
        if token not in wordfreq.keys():
            wordfreq[token] = 1
        else:
            wordfreq[token] += 1
```

In the script above we created a dictionary called wordfreq. Next, we iterate through each sentence in the corpus. The sentence is tokenized into words. Next, we iterate through each word in the sentence. If the word doesn't exist in the wordfreq dictionary, we add the word as a key and set its value to 1. Otherwise, if the word already exists in the dictionary, we simply increment its count by 1.

If you are executing the above in the Spyder editor like me, you can go to the variable explorer on the right and click the wordfreq variable. You should see a dictionary like this:

You can see words in the "Key" column and their frequency of occurrences in the "Value" column.

As I said in the theory section, depending upon the task at hand, not all of the words are useful. In huge corpora, you can have millions of words. We can filter the most frequently occurring words. Our corpus has 535 words in total. Let us filter down to the 200 most frequently occurring words. To do so, we can make use of Python's heap library. Look at the following script:

```python
import heapq
most_freq = heapq.nlargest(200, wordfreq, key=wordfreq.get)
```

Now our most_freq list contains the 200 most frequently occurring words.

The final step is to convert the sentences in our corpus into their corresponding vector representation. The idea is straightforward: for each word in the most_freq list, if the word exists in the sentence, a 1 will be added for the word, else 0 will be added.

```python
sentence_vectors = []
for sentence in corpus:
    sentence_tokens = nltk.word_tokenize(sentence)
    sent_vec = []
    for token in most_freq:
        if token in sentence_tokens:
            sent_vec.append(1)
        else:
            sent_vec.append(0)
    sentence_vectors.append(sent_vec)
```

In the script above we create an empty list sentence_vectors which will store vectors for all the sentences in the corpus. Next, we iterate through each sentence in the corpus and create an empty list sent_vec for the individual sentence. Similarly, we also tokenize the sentence. Next, we iterate through each word in the most_freq list and check if the word exists in the tokens for the sentence.
If the word is a part of the sentence, 1 is appended to the individual sentence vector sent_vec, else 0 is appended. Finally, the sentence vector is added to the list sentence_vectors, which contains vectors for all the sentences. Basically, this sentence_vectors is our bag of words model.

However, the bag of words model that we saw in the theory section was in the form of a matrix. Our model is in the form of a list of lists. We can convert our model into matrix form using this script:

```python
sentence_vectors = np.asarray(sentence_vectors)
```

Basically, in the preceding script, we converted our list into a two-dimensional numpy array using the asarray function. Now if you open the sentence_vectors variable in the variable explorer of the Spyder editor, you should see the following matrix:

You can see the Bag of Words model containing 0 and 1.

### Conclusion

Bag of Words model is one of the three most commonly used word embedding approaches, with TF-IDF and Word2Vec being the other two. In this article, we saw how to implement the Bag of Words approach from scratch in Python. The theory of the approach was explained along with the hands-on code to implement it. In the next article, we will see how to implement the TF-IDF approach from scratch in Python.
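As mentioned at the start, libraries such as Scikit-Learn can do the same job in a few lines. For comparison, here is a minimal sketch with CountVectorizer; `binary=True` mimics our 0/1 presence vectors and `max_features=200` mirrors our most_freq filtering (the exact vocabulary may differ slightly because of tokenization details):

```python
from sklearn.feature_extraction.text import CountVectorizer

# binary=True records 0/1 presence like our sent_vec lists;
# max_features=200 keeps the 200 most frequent words, like most_freq.
vectorizer = CountVectorizer(max_features=200, binary=True)
X = vectorizer.fit_transform(corpus)
print(X.shape)  # (number of sentences, vocabulary size)
```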
# Repost of unanswered question

MIRB16, Mar 12, 2018

How many ordered triples (x,y,z) of integers are there such that $x^{2}+y^{2}+z^{2}=49$? Does the question have a geometric interpretation?

#1 Guest, Mar 12, 2018

$$\displaystyle x^{2}+y^{2}+z^{2}=49$$ is the equation of a sphere of radius 7, centered at the origin, so what you are looking for are points on its surface such that all three coordinates are integers. I don't see any way through it other than by trying each integer value of one of the variables in turn. Choosing z, its largest value is 7, and that gets us $$\displaystyle x^{2}+y^{2}=0, \text{ from which we have } x=0\text{ and }y=0.$$ If z = 6, $$\displaystyle x^{2}+y^{2}=13,\text{ leading to }x=\pm2, y=\pm3,\text{ or }x=\pm3,y=\pm2.$$ If z = 5, $$\displaystyle x^{2}+y^{2}=24,\text{ and there are no integer solutions for that.}$$ So on through to z = -7.
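Since the guest's case-by-case enumeration was left unfinished, here is a quick brute-force check in Python (a sketch; the bound ±7 follows from x² ≤ 49):

```python
# Count integer triples on the sphere x^2 + y^2 + z^2 = 49.
count = 0
for x in range(-7, 8):
    for y in range(-7, 8):
        for z in range(-7, 8):
            if x * x + y * y + z * z == 49:
                count += 1
print(count)  # 54: the 6 axis points like (0, 0, +/-7)
              # plus the 48 signed permutations of (2, 3, 6)
```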
## Cryptology ePrint Archive: Report 2021/102

A Note on Advanced Encryption Standard with Galois/Counter Mode Algorithm Improvements and S-Box Customization

Madalina Chirita and Alexandru-Mihai Stroie and Andrei-Daniel Safta and Emil Simion

Abstract: The Advanced Encryption Standard used with the Galois/Counter Mode of operation is one of the most secure ways to use AES. This paper presents an overview of the AES modes, focusing on the AES-GCM mode and its particularities. Moreover, after a detailed analysis of possible enhancements for the encryption and authentication phases, a method of generating custom encryption schemes based on GF($2^8$) irreducible polynomials different from the standard polynomial used by the AES-GCM mode is provided. Besides the polynomial customization, the solution proposed in this paper offers the possibility to determine, for each polynomial, the constants that can be used in order to keep all the security properties of the algorithm. Using this customization method allows changing the encryption schemes over a period of time without interfering with the process, bringing a major improvement from the security point of view by avoiding pattern creation. Furthermore, this paper sets the grounds for implementing authentication enhancement using a similar method to determine the polynomials that can be used instead of the default authentication polynomial, without changing the algorithm's strength at all.

Category / Keywords: AES-GCM, S-box, irreducible polynomials, custom encryption schemes
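For reference, here is a minimal sketch of the standard (non-customized) AES-GCM scheme that the paper builds on, using the Python `cryptography` package; note that high-level APIs like this fix the GF(2^8) polynomial internally, so the paper's customization cannot be expressed at this level:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce, the recommended size for GCM

# GCM provides both confidentiality and authentication (over the
# ciphertext and the additional associated data).
ciphertext = aesgcm.encrypt(nonce, b"secret data", b"associated data")
assert aesgcm.decrypt(nonce, ciphertext, b"associated data") == b"secret data"
```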
# How do I solve for n in this permutation question?

I have the following question: Solve for n: $$_nP_3 = 6 \cdot {}_{n-1}P_2$$ I don't know how I should begin to tackle this problem. Any tips/help would be appreciated.

-

Note that we can "expand" with how $_nP_m$ is defined: $$\dfrac{n!}{(n-m)!}= \dfrac{n(n-1)(n - 2)\cdots}{(n-m)(n-m-1)\cdots} = n(n-1) \cdots (n-m + 1)$$ \begin{align} _nP_3 &= n(n-1)(n-2) \\ \\ 6(_{n-1}P_2) & = 6(n - 1)(n-2)\end{align} Now, since both expressions are equal to one another, we can cancel the common factor $(n-1)(n-2)$, which is nonzero because $n \ge 3$ for $_nP_3$ to make sense, and we are left with $n = 6$.

-

You must convert both sides of the equation into the equivalent factorial form of a permutation. So, $_nP_3$ becomes $n!/(n-3)!$ and the other side becomes $6\left((n-1)!/(n-3)!\right)$. Now you just rearrange the equation and solve for n: the $(n-3)!$ cancel on both sides, so you are left with $n! = 6(n-1)!$. Since $n! = n(n-1)!$, this gives $n(n-1)! = 6(n-1)!$, and thus $n = 6$.

-

\begin{eqnarray*} _{n}{\rm P}_{3} & = & 6\left(_{n-1}{\rm P}_{2}\right)\\ \frac{n!}{\left(n-3\right)!} & = & 6\frac{\left(n-1\right)!}{\left(\left(n-1\right)-2\right)!}\\ \frac{n\times\left(n-1\right)!}{\left(n-3\right)!} & = & 6\frac{\left(n-1\right)!}{\left(n-3\right)!}\\ n & = & 6 \end{eqnarray*}
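As a quick numerical sanity check, Python's math.perm (available since Python 3.8) confirms that n = 6 is the only solution in a reasonable range:

```python
import math

# nP3 == 6 * (n-1)P2 should hold only for n = 6.
for n in range(3, 20):
    if math.perm(n, 3) == 6 * math.perm(n - 1, 2):
        print(n)  # prints only 6
```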
E. Chess Championship

time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output

Ostap is preparing to play chess again, and this time he is preparing thoroughly. Thus, he was closely monitoring one recent chess tournament. There were m players participating, and each pair of players played exactly one game. A victory gives 2 points, a draw 1 point, and a loss 0 points.

Ostap is lazy, so he never tries to remember the outcome of each game. Instead, he computes the total number of points earned by each of the players (the sum of his points in all games which he took part in), sorts these values in non-ascending order, and then remembers the first n integers in this list.

Now the Great Strategist Ostap wonders whether he remembers everything correctly. He considers that he is correct if there exists at least one tournament results table that will produce the given integers. That means, if we count the sum of points for each player, sort them, and take the first n elements, the result will coincide with what Ostap remembers. Can you check if such a table exists?

Input

The first line of the input contains two integers m and n (1 ≤ n ≤ m ≤ 3000) — the number of participants of the tournament and the number of top results Ostap remembers.

The second line contains n integers, provided in non-ascending order — the number of points earned by the top participants as Ostap remembers them. It's guaranteed that these integers are non-negative and do not exceed 2·m.

Output

If there is no tournament such that Ostap can obtain the given set of integers using the procedure described in the statement, then print "no" in the only line of the output.

Otherwise, the first line of the output should contain the word "yes". The next m lines should provide the description of any valid tournament. Each of these lines must contain m characters 'X', 'W', 'D' and 'L'. Character 'X' should always be located on the main diagonal (and only there), that is, on the i-th position of the i-th string. Character 'W' on the j-th position of the i-th string means that the i-th player won the game against the j-th. In the same way, character 'L' means a loss and 'D' means a draw.

The table you print must be consistent, and the points earned by the best n participants should match Ostap's memory. If there are many possible answers, print any of them.

Examples

Input
5 5
8 6 4 2 0

Output
yes
XWWWW
LXWWW
LLXWW
LLLXW
LLLLX

Input
5 1
9

Output
no
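To make the output format concrete, here is a small Python sketch (a checker, not a solution to the problem) that recomputes the scores from a printed table and compares the top values against the remembered ones:

```python
def top_scores_match(table, remembered):
    # W = 2 points, D = 1 point; 'X' and 'L' contribute nothing.
    points = [2 * row.count('W') + row.count('D') for row in table]
    return sorted(points, reverse=True)[:len(remembered)] == remembered

table = ["XWWWW",
         "LXWWW",
         "LLXWW",
         "LLLXW",
         "LLLLX"]
print(top_scores_match(table, [8, 6, 4, 2, 0]))  # True
```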
# cupyx.scipy.linalg.lu

cupyx.scipy.linalg.lu(a, permute_l=False, overwrite_a=False, check_finite=True)

LU decomposition.

Decomposes a given two-dimensional matrix into P @ L @ U, where P is a permutation matrix, L is a lower triangular or trapezoidal matrix with unit diagonal, and U is an upper triangular or trapezoidal matrix.

Parameters

• a (cupy.ndarray) – The input matrix with dimension (M, N).
• permute_l (bool) – If True, perform the multiplication P @ L.
• overwrite_a (bool) – Allow overwriting data in a (may enhance performance).
• check_finite (bool) – Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.

Returns

(P, L, U) if permute_l == False, otherwise (PL, U). P is a cupy.ndarray storing the permutation matrix with dimension (M, M). L is a cupy.ndarray storing the lower triangular or trapezoidal matrix with unit diagonal, with dimension (M, K) where K = min(M, N). U is a cupy.ndarray storing the upper triangular or trapezoidal matrix with dimension (K, N). PL is a cupy.ndarray storing the permuted L matrix with dimension (M, K).

Return type: tuple
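A usage sketch based on the documented signature (assumes a CUDA-capable GPU with CuPy installed):

```python
import cupy
from cupyx.scipy.linalg import lu

a = cupy.random.rand(4, 3)

p, l, u = lu(a)                 # permute_l=False: three factors
assert cupy.allclose(p @ l @ u, a)

pl, u2 = lu(a, permute_l=True)  # two factors, with P @ L premultiplied
assert cupy.allclose(pl @ u2, a)
```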
# zbMATH — the first resource for mathematics

Robust stability for a class of linear systems with time-varying delay and nonlinear perturbations. (English) Zbl 1154.93408

The author investigates the robust stability of uncertain systems with a single time-varying discrete delay $x'(t)=Ax(t)+Bx(t-h(t))+f(x(t),t)+g(x(t-h(t)),t),\tag{1}$ where $f$ and $g$ represent unknown perturbations, and obtains a stability criterion. Numerical examples show significant improvements over some existing results.

Reviewer's remarks:

1) When describing the system (1), the author should indicate the interval on which the system is defined, i.e., the semi-axis $[0,\infty)$.
2) The second line after formula (1) is formally correct, but usually one writes $f(u,t)$, where $u\in{\mathbb R}^n$, $t\geq 0$, instead of $f(x(t),t)$, which is a superposition.
3) In "Definition 1" it is not clear what is meant by ${\mathbb Z}=\{z\}$.
4) The conditions (6a), (6b) are written not in terms of solutions but in terms of the functions $f$, $g$, for instance in the form: $\|f(u,t)\|\leq\alpha\|u\|, \qquad \|g(u,t)\|\leq\beta\|u\|. \tag{1'}$
5) The same remark applies to inequalities (7a), (7b), to the line before "Corollary 1" after formula (16), and also to "Example 1".
6) Inequality (8): perhaps the author means a negative definite matrix?
7) I suggest that the author indicate the dimensions of the matrices introduced in "Theorem 1".
8) The formulation of "Lemma 1" is not precise because the phrase "there exists" is missing after $\gamma$ or after $\omega$. In (9), instead of the matrix $M$, the matrix $Q$ is written, which is not defined above; also $r_M$ and $r(t)$ were not defined before.
9) On page 1204, line 7 from the bottom, instead of "are the solution of (8)" I think it is better to write "satisfy (8)".
10) I did not understand in what sense, in "Remark 3", the constant $h_M$ is linear.
11) Regarding the second phrase from the beginning of Section 4: I did not understand why formula (17) follows from the norm bounds on $f$, $g$ (the nonlinear equation became linear).
12) Next, in (18) it is not explained what the symbol $\sigma_{\max}$ means; perhaps it is the maximal eigenvalue. In the next paragraph, "System (15)", the citation number is wrong.
13) The phrase after (20) is not adequate: "rewrite (17), (18) as (21a), (21b)". Indeed, (18), as I understood, means that the spectral radius of $F(t)$ is not more than 1, and (21b) means that $\|F(t)y\|\leq \|y\|$, i.e., $\|F(t)\|\leq 1$. But this condition is stronger than (18): (18)$\Leftarrow$(21b), however (18)$\not\Rightarrow$(21b). Thus, the word "rewrite" is not exact.

##### MSC:

93D09 Robust stability
93C23 Control/observation systems governed by functional-differential equations
Discover Elixir & Phoenix

Lesson 10

# Forms

In the previous lesson, we learned how to create changesets from a struct and use them to add a record to the database. The last piece of that puzzle is that we want the changeset to be generated based on what the user sends from the browser. In other words, we want the "params" sent to the create_user/1 function to come from a form that the user fills in.

## Creating an empty changeset

Let's check out the form that we currently have at the /signup path:

What we want to do now is initialise this signup page with an empty changeset that the user can populate. In order to easily create this empty changeset, we're going to slightly tweak create_user in our Accounts module, by extracting the changeset-related functions into their own register_changeset/1 function:

```elixir
# lib/messengyr/accounts/accounts.ex

defmodule Messengyr.Accounts do
  # ...

  # Update this function:
  def create_user(params) do
    register_changeset(params)
    |> Repo.insert
  end

  # ... and add this function:
  def register_changeset(params \\ %{}) do
    %User{}
    # Cast the params onto the struct so we get a changeset back.
    # (This assumes Ecto.Changeset's cast/3 is imported, as in the
    # previous lesson; the field list matches the form fields below.)
    |> cast(params, [:username, :email, :password])
  end
end
```

Next, we go to the controller function for the signup page:

```elixir
# lib/messengyr_web/controllers/page_controller.ex

defmodule MessengyrWeb.PageController do
  # ...

  def signup(conn, _params) do
    render conn
  end
end
```

Instead of just having a boring render conn here, we'll now create the empty changeset and pass it to our render function in a map, under the key user_changeset, so that we can use it in the template.

```elixir
# lib/messengyr_web/controllers/page_controller.ex

defmodule MessengyrWeb.PageController do
  # ...

  # Remember to alias the module first:
  alias Messengyr.Accounts

  # ...

  def signup(conn, _params) do
    # Create the empty changeset:
    changeset = Accounts.register_changeset()

    # Pass it to "render":
    render conn, user_changeset: changeset
  end
end
```

## Generating forms

Okay, so now that we can access the user_changeset from the template, here's how we're going to rewrite it (don't worry if it looks complicated at first):

```elixir
# lib/messengyr_web/templates/page/signup.html.eex

<%= form_for @user_changeset, page_path(@conn, :create_user), fn f -> %>
  <%= text_input f, :email, placeholder: "Email" %>
<% end %>
```

After making these changes, you'll get an error if you go to /signup in your browser:

We'll take care of this soon. So first -- what's all this about? Why has our form tag been replaced with Elixir's form_for, and why are our inputs now text_input?

It turns out that Phoenix comes bundled with some helpers to make our form handling much more maintainable! As you already know, <%= essentially just means "run this code as normal Elixir code, and output the result in the template". Let's go through the helpers we're using one by one:

• form_for replaces our form tag. In it, we specify what changeset we want to use for the form (user_changeset), and then we tell Elixir what controller function should be called when the user sends the form. In this case it's PageController.create_user (which doesn't exist yet -- that's why we see the error message).
• text_input works exactly like an input tag, except that we also specify the keys of the map that will be sent when we send the form (:email, :username, and :password).
• submit just generates a button tag with type="submit". When you click on it, it will trigger the form action.

## Handling form requests

Alright, now let's fix that error message we've got. For that, we need to create a route for the PageController action :create_user.
We'll use the endpoint POST /signup for that:

```elixir
# lib/messengyr_web/router.ex

defmodule MessengyrWeb.Router do
  # ...

  scope "/", MessengyrWeb do
    # ...
    post "/signup", PageController, :create_user
  end
end
```

```elixir
# lib/messengyr_web/controllers/page_controller.ex

defmodule MessengyrWeb.PageController do
  # ...

  def create_user(conn, params) do
    IO.puts "Create user!"
  end
end
```

Now, whenever the user clicks "Sign up", a POST request will be sent to /signup, and you'll see the following error page:

The error is due to the fact that we're not returning a conn, but if you check your Elixir console, you'll see that the request at least worked as expected:

There's our "IO.puts"! All we have to do now is build this create_user/2 function properly! We want it to read the parameters that the user sends in and build a new changeset from those. If the changeset is valid, we insert it into the database and return a success message to the browser. If not, we simply return an error message.

The first thing we'll do is apply some of our previous Elixir knowledge and use pattern matching! If you log the params argument, you'll see that we get all sorts of info from the POST request (like _csrf_token and _utf8...). However, we're only interested in the user parameter. To make it easy for us, we can extract that part into a user_params variable, right in the function definition:

```elixir
# lib/messengyr_web/controllers/page_controller.ex

defmodule MessengyrWeb.PageController do
  # ...

  def create_user(conn, %{"user" => user_params}) do
    IO.puts "Create user!"
    IO.inspect user_params
  end
end
```

Next, we simply call Accounts.create_user/1 using these params, and that function will build the new changeset for us and attempt to insert it into the database. We'll inspect the results to see what happens:

```elixir
# lib/messengyr_web/controllers/page_controller.ex

defmodule MessengyrWeb.PageController do
  # ...

  def create_user(conn, %{"user" => user_params}) do
    IO.puts "Create user!"
    IO.inspect user_params

    Accounts.create_user(user_params)
    |> IO.inspect
  end
end
```

Seems like the changeset is invalid. Since we didn't fill in our username or email before clicking "Submit", the function returns a tuple with an :error atom and the invalid changeset. We need to catch this error and show it to the user so that they know what went wrong!

## Error flashes

So we know that if Repo.insert fails (because of an invalid changeset), we get a tuple containing an :error atom and the changeset with its errors. In the previous lesson, we also learned that if the function succeeds, we will instead get a tuple with an :ok atom and a **struct** representation of the row we just inserted into the database.

The two cases that we need to handle

Let's handle these two cases in a case statement through pattern matching:

```elixir
# lib/messengyr_web/controllers/page_controller.ex

defmodule MessengyrWeb.PageController do
  # ...

  def create_user(conn, %{"user" => user_params}) do
    case Accounts.create_user(user_params) do
      {:ok, _user} ->
        IO.puts "It worked!"

      {:error, user_changeset} ->
        IO.puts "It failed!"
    end
  end
end
```

If we send the same form again, the console should now print "It failed!". Our pattern matching works!

These logs are handy for us, but they're still invisible to the user, so finally, we need to show these messages to the user in the browser. Phoenix has a built-in solution, called flashes, for showing short one-time messages to the user, like errors or success messages. If everything goes well, we want to redirect the user to the landing page and show an info flash.
If something goes wrong, however, we want to go back to the signup page and show an error flash. Let's change our case statement to handle this using the put_flash function:

```elixir
# lib/messengyr_web/controllers/page_controller.ex

defmodule MessengyrWeb.PageController do
  # ...

  def create_user(conn, %{"user" => user_params}) do
    case Accounts.create_user(user_params) do
      {:ok, _user} ->
        conn
        |> put_flash(:info, "User created successfully!")
        |> redirect(to: "/")

      {:error, user_changeset} ->
        conn
        |> put_flash(:error, "Unable to create account!")
        |> render("signup.html", user_changeset: user_changeset)
    end
  end
end
```

Try signing up without filling in any field again, and you'll see this:

## Error tags

We're almost there now! What we have is already pretty good, but ideally, we'd want to show the user exactly which fields are invalid and why.

Remember, the returned invalid changeset already has all the information that we need about which fields are invalid (thanks to its errors key). Moreover, we're already passing this invalid changeset back into the template, so all we need to do is use some magic Phoenix tags to render these errors. Open the signup page template again and add the following error tags:

```elixir
# lib/messengyr_web/templates/page/signup.html.eex

<!-- ... -->

<%= form_for @user_changeset, page_path(@conn, :create_user), fn f -> %>

  <%= text_input f, :email, placeholder: "Email" %>
  <%= error_tag f, :email %> <!-- Add this! -->

  <%= error_tag f, :username %> <!-- ...and this! -->

<!-- ... -->
```

Awesome, we now have pretty good error messages!

It's worth noting that the PageController's create_user function is not entirely done yet, since we still need to hash the user's password before we insert the record into the database. We'll look into that in the next lesson!
Categories: Optics, Physics, Quantum mechanics. # Electric dipole approximation Suppose that an electromagnetic wave is travelling through an atom, and affecting the electrons. The general Hamiltonian of an electron in such a wave is given by: \begin{aligned} \hat{H} &= \frac{(\vu{P} - q \vb{A})^2}{2 m} + q \varphi \\ &= \frac{\vu{P}{}^2}{2 m} - \frac{q}{2 m} (\vb{A} \cdot \vu{P} + \vu{P} \cdot \vb{A}) + \frac{q^2 \vb{A}^2}{2m} + q \varphi \end{aligned} With charge $$q = - e$$, canonical momentum operator $$\vu{P} = - i \hbar \nabla$$, and magnetic vector potential $$\vb{A}(\vb{x}, t)$$. We reduce this by fixing the Coulomb gauge $$\nabla \cdot \vb{A} = 0$$, so that $$\vb{A} \cdot \vu{P} = \vu{P} \cdot \vb{A}$$: \begin{aligned} \comm*{\vb{A}}{\vu{P}} \psi &= -i \hbar \vb{A} \cdot (\nabla \psi) + i \hbar \nabla \cdot (\vb{A} \psi) \\ &= i \hbar (\nabla \cdot \vb{A}) \psi = 0 \end{aligned} Where $$\psi$$ is an arbitrary test function. Assuming $$\vb{A}$$ is so small that $$\vb{A}{}^2$$ is negligible, we split $$\hat{H}$$ as follows, where $$\hat{H}_1$$ can be regarded as a perturbation to $$\hat{H}_0$$: \begin{aligned} \hat{H} = \hat{H}_0 + \hat{H}_1 \qquad \quad \hat{H}_0 \equiv \frac{\vu{P}{}^2}{2 m} + q \varphi \qquad \quad \hat{H}_1 \equiv - \frac{q}{m} \vu{P} \cdot \vb{A} \end{aligned} In an electromagnetic wave, $$\vb{A}$$ is oscillating sinusoidally in time and space: \begin{aligned} \vb{A}(\vb{x}, t) = \vb{A}_0 \sin\!(\vb{k} \cdot \vb{x} - \omega t) \end{aligned} Mathematically, it is more convenient to represent this with a complex exponential, whose real part should be taken at the end of the calculation: \begin{aligned} \vb{A}(\vb{x}, t) = - i \vb{A}_0 \exp\!(i \vb{k} \cdot \vb{x} - i \omega t) \end{aligned} The corresponding perturbative electric field $$\vb{E}$$ is then given by: \begin{aligned} \vb{E}(\vb{x}, t) = - \pdv{\vb{A}}{t} = \vb{E}_0 \exp\!(i \vb{k} \cdot \vb{x} - i \omega t) \end{aligned} Where $$\vb{E}_0 = \omega \vb{A}_0$$. Let us restrict ourselves to visible light, whose wavelength $$2 \pi / |\vb{k}| \sim 10^{-6} \:\mathrm{m}$$. Meanwhile, an atomic orbital is several Bohr $$\sim 10^{-10} \:\mathrm{m}$$, so $$\vb{k} \cdot \vb{x}$$ is negligible: \begin{aligned} \boxed{ \vb{E}(\vb{x}, t) \approx \vb{E}_0 \exp\!(- i \omega t) } \end{aligned} This is the electric dipole approximation: we ignore all spatial variation of $$\vb{E}$$, and only consider its temporal oscillation. Also, since we have not used the word “photon”, we are implicitly treating the radiation classically, and the electron quantum-mechanically. Next, we want to rewrite $$\hat{H}_1$$ to use the electric field $$\vb{E}$$ instead of the potential $$\vb{A}$$. To do so, we use that $$\vu{P} = m \: \dv*{\vu{x}}{t}$$ and evaluate this in the interaction picture: \begin{aligned} \vu{P} = m \dv*{\vu{x}}{t} = m \frac{i}{\hbar} \comm*{\hat{H}_0}{\vu{x}} = m \frac{i}{\hbar} (\hat{H}_0 \vu{x} - \vu{x} \hat{H}_0) \end{aligned} Taking the off-diagonal inner product with the two-level system’s states $$\ket{1}$$ and $$\ket{2}$$ gives: \begin{aligned} \matrixel{2}{\vu{P}}{1} = m \frac{i}{\hbar} \matrixel{2}{\hat{H}_0 \vu{x} - \vu{x} \hat{H}_0}{1} = m i \omega_0 \matrixel{2}{\vu{x}}{1} \end{aligned} Therefore, $$\vu{P} / m = i \omega_0 \vu{x}$$, where $$\omega_0 \equiv (E_2 \!-\! E_1) / \hbar$$ is the resonance of the energy gap, close to which we assume that $$\vb{A}$$ and $$\vb{E}$$ are oscillating, i.e. $$\omega \approx \omega_0$$. 
We thus get:

\begin{aligned} \hat{H}_1(t) &= - \frac{q}{m} \vu{P} \cdot \vb{A} = - (- i i) q \omega_0 \vu{x} \cdot \vb{A}_0 \exp\!(- i \omega t) \\ &\approx - q \vu{x} \cdot \vb{E}_0 \exp\!(- i \omega t) = - \vu{d} \cdot \vb{E}_0 \exp\!(- i \omega t) \end{aligned}

Where $$\vu{d} \equiv q \vu{x} = - e \vu{x}$$ is the transition dipole moment operator of the electron, hence the name electric dipole approximation. Finally, we take the real part, yielding:

\begin{aligned} \boxed{ \hat{H}_1(t) = - \vu{d} \cdot \vb{E}(t) = - q \vu{x} \cdot \vb{E}_0 \cos\!(\omega t) } \end{aligned}

If this approximation is too rough, $$\vb{E}$$ can always be Taylor-expanded in $$(i \vb{k} \cdot \vb{x})$$:

\begin{aligned} \vb{E}(\vb{x}, t) = \vb{E}_0 \Big( 1 + (i \vb{k} \cdot \vb{x}) + \frac{1}{2} (i \vb{k} \cdot \vb{x})^2 + \: ... \Big) \exp\!(- i \omega t) \end{aligned}

Taking the real part then yields the following series of higher-order correction terms:

\begin{aligned} \vb{E}(\vb{x}, t) = \vb{E}_0 \Big( \cos\!(\omega t) + (\vb{k} \cdot \vb{x}) \sin\!(\omega t) - \frac{1}{2} (\vb{k} \cdot \vb{x})^2 \cos\!(\omega t) + \: ... \Big) \end{aligned}

© Marcus R.A. Newman, a.k.a. "Prefetch". Available under CC BY-SA 4.0.
# Figure position when using baposter [duplicate]

I am using the baposter environment to create a scientific poster. I am attaching images of different sizes with

\begin{center} \includegraphics[scale=0.18]{fig1.pdf} \includegraphics[scale=0.22]{fig1a.pdf} \includegraphics[scale=0.22]{fig2a.pdf} \includegraphics[scale=0.22]{fig2b.pdf} \includegraphics[scale=0.22]{fig2c.pdf} \captionof{figure}{\tiny{}} \end{center}

and the issue is that I would like to place the fig2x.pdf figures a little higher than where they are (so that the space between the text and this set of figures is smaller than for the other two figures).

## marked as duplicate by Torbjørn T., Mensch, TeXnician, diabonas, Stefan Pinnow Oct 13 '17 at 13:25

• It would be helpful if you composed a fully compilable minimal working example (MWE), including \documentclass and the appropriate packages, that sets up the problem. While solving problems can be fun, setting them up is not. Then, those trying to help can simply cut and paste your MWE and get started on solving the problem. – user36296 Sep 25 '17 at 16:17
• Add \usepackage[export]{adjustbox} and use \includegraphics[valign=m,scale=0.22]{file} instead. – Torbjørn T. Sep 27 '17 at 16:13
# C6x assembly programming (Page 7/11)

In many cases, depending on the result of previous operations, you execute the branch instruction conditionally. For example, to implement a loop, you decrement the loop counter by 1 each time you run a set of instructions, and whenever the loop counter is not zero, you need to branch to the beginning of the code block to iterate the loop operations. In the C6x CPU, this conditional branching is implemented using the conditional operations. Although B may be the instruction most often implemented using conditional operations, all instructions in C6x can be conditional.

Conditional instructions are represented in code by using square brackets, [ ], surrounding the condition register name. For example, the following B instruction is executed only if B0 is nonzero:

[B0] B .L1 A0

To execute an instruction conditionally when the condition register is zero, we use ! in front of the register. For example, the following B instruction is executed when B0 is zero:

[!B0] B .L1 A0

Not all registers can be used as the condition registers. In the C62x and C67x devices, the registers that can be tested in conditional operations are B0, B1, B2, A1, A2.

(Simple loop): Write an assembly program computing the summation $\sum_{n=1}^{100} n$ by implementing a simple loop. (A solution sketch is given after the instruction tables below.)

## Logical operations and bit manipulation

The logical operations and bit manipulations are accomplished by the AND, OR, XOR, CLR, SET, SHL, and SHR instructions.

## Other assembly instructions

Other useful instructions include IDLE and compare instructions such as CMPEQ.

## C62x instruction set summary

The set of instructions that can be performed in each functional unit is summarized in the tables below. Please refer to the TMS320C62x/C67x CPU and Instruction Set Reference Guide for a detailed description of each instruction.
### .s unit

| Instruction | Description |
| --- | --- |
| ADD(U) | signed or unsigned integer addition without saturation |
| ADDK | integer addition using signed 16-bit constant |
| ADD2 | two 16-bit integer adds on upper and lower register halves |
| B | branch using a register |
| CLR | clear a bit field |
| EXT | extract and sign-extend a bit field |
| MV | move from register to register |
| MVC | move between the control file and the register file |
| MVK | move a 16-bit constant into a register and sign extend |
| MVKH | move a 16-bit constant into the upper bits of a register |
| NEG | negate (pseudo-operation) |
| NOT | bitwise NOT |
| OR | bitwise OR |
| SET | set a bit field |
| SHL | arithmetic shift left |
| SHR | arithmetic shift right |
| SSHL | shift left with saturation |
| SUB(U) | signed or unsigned integer subtraction without saturation |
| SUB2 | two 16-bit integer subtractions on upper and lower register halves |
| XOR | exclusive OR |
| ZERO | zero a register (pseudo-operation) |

### .l unit

| Instruction | Description |
| --- | --- |
| ABS | integer absolute value with saturation |
| ADD(U) | signed or unsigned integer addition without saturation |
| AND | bitwise AND |
| CMPEQ | integer compare for equality |
| CMPGT(U) | signed or unsigned integer compare for greater than |
| CMPLT(U) | signed or unsigned integer compare for less than |
| LMBD | leftmost bit detection |
| MV | move from register to register |
| NEG | negate (pseudo-operation) |
| NORM | normalize integer |
| NOT | bitwise NOT |
| OR | bitwise OR |
| SADD | integer addition with saturation to result size |
| SAT | saturate a 40-bit integer to a 32-bit integer |
| SSUB | integer subtraction with saturation to result size |
| SUBC | conditional integer subtraction and shift - used for division |
| XOR | exclusive OR |
| ZERO | zero a register (pseudo-operation) |
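A possible solution sketch for the simple-loop exercise above, using only instructions from the tables. The register assignments, the functional-unit choices, and the NOP delay-slot count are illustrative assumptions, and the code is untested:

```
        MVK   .S1   100, A1     ; A1 = loop counter n, starts at 100
        ZERO  .L1   A4          ; A4 = running sum, starts at 0
loop:   ADD   .L1   A1, A4, A4  ; sum += n
        SUB   .S1   A1, 1, A1   ; n--
 [A1]   B     .S2   loop        ; branch back while A1 is nonzero
        NOP   5                 ; fill the branch delay slots
                                ; when the loop exits, A4 = 5050
```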
# MDL Valence-Mageddon The one guarantee in science is that the facts will eventually change. When they do, scientific information tools need to adapt. Information exchange formats occupy an especially delicate position in this regard. Ideally, changes at this foundational level of tooling will always be well-communicated and backward-compatible. But this isn't always true in practice. A case in point is a set of backward-incompatible changes made to the molfile specification that has received little attention. This article delves into what happened and why it matters. ## In a Nutshell In 2014 BIOVIA, corporate custodian of the molfile format, changed the rules for counting implicit hydrogens in molfiles. Although the molfile format supports versioning, the rule change was not accompanied by a change in molfile version. The effect was to retroactively change the meaning of countless existing molfiles (or not). At the very least the change has sowed confusion about which algorithm should be used for the mundane but crucial task of hydrogen counting. This largely undocumented incident has been dubbed "MDL Valence-Mageddon." ## Hydrogen Suppression in Molfiles MDL Valence-Mageddon relates to an obscure yet crucial feature of the molfile format, namely hydrogen suppression. Before diving into what happened in 2014, a few words on hydrogen suppression as it relates to molfiles are in order. A previous article detailed hydrogen suppression in molfiles. To recap: "Hydrogen suppression is the practice of eliding hydrogens as nodes in a molecular graph." Molfiles support two forms of hydrogen suppression: implicit hydrogens and virtual hydrogens. Virtual hydrogens are hydrogen atoms that are compiled to an atomic attribute counter on an atom. Monovalent 1H atoms attached to a root node are virtualized through deletion with an accompanying increment of the counter. This process is more complicated in molfiles because a valence attribute is used instead of a virtual hydrogen count. But the outcome is largely the same. A molfile lets a writer encode the hydrogen atoms attached to a root atom as a count attribute, increasing efficiency of storage, marshalling, and display. The other form of hydrogen suppression, implicit hydrogens, is the topic of today's article. Like virtual hydrogens, implicit hydrogens are deleted as nodes in the molecular graph. But unlike virtual hydrogens, there is no corresponding count attribute on the parent atom. Instead, implicit hydrogen count is determined algorithmically. The algorithm works as follows. Some elements are assigned one or more default valences. Valence is the sum of bond orders, including bonds to implicit hydrogens, at an atom. Alternatively, valence can be thought of as the bonding electron count at an atom. For example, the valence the carbon atom of methane is four. The valence of the oxygen atom of methanol is two, and so on. A default valence is a value that may be associated with an element that represents a common valence for atoms. To compute the implicit hydrogen count (${h}$) at an atom, subtract the bonding electron count (${b}$) from the default valence (${v}$): ${h}={v}-{b}$ There are some special cases to consider. First, hydrogen bonds and coordination bonds (supported in V3000) do not add to an atom's computed valence. Second, in those elements with multiple default valences, the smallest one that does not exceed the computed bond order sum (${b}$) is used for ${v}$. Third, a radical subtracts one from an atom's computed valence. 
Fourth, atoms with non-zero charge attributes use the default valence(s) of the isoelectronic neutral element, with some provisions for further special cases. An atom will have a zero implicit hydrogen count in three cases: (1) it has a non-default valence attribute; (2) no default valence is defined for its element; and (3) its element has a default valence equal to the computed valence. Surprisingly, this algorithm never appears in the document most commonly cited as "the specification". Instead, it appears in something that has gone by various names over the years, and for simplicity I'll just call "The Guide". Both V2000 and V3000 molfiles use the same implicit hydrogen algorithm defined in The Guide. ## Something's Wrong The Guide occupies a central yet shadowy position in cheminformatics in that it defines the implicit hydrogen algorithm for every molfile-encoded molecular graph. Given the ubiquity of the molfile format, that's a lot of molecules in a lot of databases maintained by a lot of organizations. If The Guide were to change the implicit hydrogen algorithm, even a little, the effects could be wide-ranging. The only way to determine the implicit hydrogen count of an atom is to run the algorithm. The count itself is completely absent from the molfile format. It turns out that The Guide didn't just change the implicit hydrogen algorithm a little. It changed the algorithm a lot. And this appears to have happened without warning (except perhaps for Accelrys/BIOVIA customers) and most certainly without a version change. V2000 and V3000 are still the only two versions of the molfile format. ## The Change The May 30, 2014 edition of The Guide introduced changes to the default valences of several elements. In most cases, default valences were removed entirely. In one case (Br), three of four default valences were removed. In another case (Al), default valences were removed for the neutral atom but added back for the -1 oxidation state. The name given to this change, "MDL Valence-Mageddon", is a tongue-in-cheek reference to its sweeping and undesirable consequences. The term was coined at NextMove Software in a presentation titled Building on Sand. That presentation incorrectly identifies the year of the change as 2017. Another document, CTfile Reading, gives the correct year as 2014. The following table summarizes the differences between the default valences prior to 2014 and from 2014 onward.

Default Molfile Elemental Valence Differences

| Element | Pre-2014 | 2014 and Later |
| --- | --- | --- |
| Li | (1) | - |
| Na | (1) | - |
| K | (1) | - |
| Rb | (1) | - |
| Cs | (1) | - |
| Fr | (1) | - |
| Be | (2) | - |
| Mg | (2) | - |
| Ca | (2) | - |
| Sr | (2) | - |
| Ba | (2) | - |
| Ra | (2) | - |
| Al | (3) | (4), -1 only |
| Ga | (3) | - |
| In | (3) | - |
| Tl | (1, 3) | - |
| Ge | (4) | - |
| Sn | (2, 4) | - |
| Pb | (2, 4) | - |
| Sb | (3, 5) | - |
| Bi | (3, 5) | - |
| Po | (2, 4, 6) | - |
| Br | (1, 3, 5, 7) | (1) |

These differences were compiled from the corresponding tables in the 2011 and 2014 editions of The Guide. ## Consequences What's the effect of completely removing default valences from an element? Consider an isolated, neutral sodium atom. Under the 2011 version of The Guide, sodium has a single default valence of one. Therefore, the isolated, neutral atom has a hydrogen count of 1 (${h}$ = 1 - 0 = 1). Under the 2014 version of The Guide, sodium has no default valences and therefore no implicit hydrogens. The net effect is to remove a hydrogen, retroactively. It's worth noting that the implicit hydrogen count for Na+1 is zero in both cases, but for different reasons. Another element whose default valences were stripped entirely is tin. Under the 2011 version of the Guide, tin had two default valences (2 and 4).
Under the 2014 version of the Guide, tin has no default valences. Consider the implicit hydrogen count of the tin atom in tributyltin. Under the 2011 rules, the count is one (${h}$ = 4 - 3 = 1). Under the 2014 rules, the count is zero. The net effect is deletion of an implicit hydrogen. Notice, however, that the implicit hydrogen count of tetrabutyltin remains zero under either regime, but again for different reasons. In other words, removing a default valence has the effect of under-counting implicit hydrogens in some cases. There are two problems here. The first is that as of 2014, two sets of rules could be used to compute implicit hydrogen count. This problem would, however, be manageable if not for the second problem. There is no way to specify which implicit hydrogen algorithm applies to a given molfile. As mentioned previously, V2000 and V3000 are the only valid versions of the molfile format. There is no way to capture the valence model being used. Various heuristics might be used to solve the problem. For example, we could examine a molfile's timestamp field. If the timestamp preceded May 30, 2014, then the semantics of the 2011 Guide would be used. If the timestamp falls after that, then post-2014 semantics would be used. But there are many problems with this approach, the most serious being that copying a molfile can change the timestamp. So the mere act of opening a molfile and saving it again, without editing by the user, could change the meaning of the file. Ironically, divergent implementation of the molfile spec could mitigate the situation somewhat. It's not exactly obvious that The Guide goes with the specification. It could even be argued that The Guide merely describes how one software vendor implements implicit hydrogen counting. So it's understandable that some implementations have invented their own rules. Those rules would be divorced from any incarnation of The Guide and therefore not subject to change through its revision. All of this raises the question of how various cheminformatics tools handle the situation. Are they using pre-2014 rules or post-2014 rules to compute implicit hydrogens? Do they invent their own rules? Is some combination of approaches being used? The sheer number of molfile implementations makes answering this question a daunting task. I may take up the issue in a future post. But either way, the "right" way to address the problem of two incompatible sets of rules is far from clear. ## Timeline Given its wide-ranging and likely unintended consequences, how does something like MDL Valence-Mageddon happen in the first place? I suspect that factors related to business could explain a lot of it. Molecular Design Limited (MDL), founded in 1978, was an early entrant into the chemistry software market. In 1991 MDL described a family of file formats, of which molfile (then written as "MOLfile") was a member. Both before and after this publication, MDL went through various mergers and acquisitions, eventually becoming the property of Accelrys through a merger with Symyx Technologies in 2010. On November 8, 2011, Accelrys published the last edition of The Guide to assign default valences to metals including sodium and tin. On January 29, 2014, Dassault Systèmes announced its intent to acquire Accelrys. On April 29, 2014, Dassault announced completion of the deal. MDL Valence-Mageddon occurred just one month later, on May 30, 2014, with the publication of the revised Guide.
My identification of 2014 as the year in which Valence-Mageddon occurred is based on documents available from the old Accelrys website. The URL for the Guide bearing the date "November 8, 2011" (linked above) contains the string .../insight/2.1/Content/... whereas the URL for the Guide bearing the date "May 30, 2014" (also linked above) contains the string .../insight/2.2/Content/.... These appear to be the directories containing documentation for neighboring minor releases of the Insight product line. And it is these two documents in which the differences in implicit hydrogen counting algorithms first appear. It's nevertheless possible that one or more Guides were published between the years 2011 and 2014. If so, I haven't seen them. My information about this incident is drawn exclusively from public sources. But it does appear that MDL Valence-Mageddon developed simultaneously with the corporate shakeup happening at what had once been MDL. ## Other Changes Given the changes to implicit hydrogen counting introduced in the 2014 Guide, it's reasonable to ask what other changes might affect the semantics of molfiles. The lengths of the documents in question run into the hundreds of pages, so this question is not easy to answer. One other difference I've spotted is the elements allowed to form tetrahedral stereocenters within substructure matches. The 2014 edition appears to expand this list. Nor is this the only difference. The 2011 Guide supported something called "Accelrys line notation," used for salt definitions. The 2014 Guide removed all reference to Accelrys line notation. I would be surprised, however, if these were the only differences. ## Mitigation Given the confusion caused by MDL Valence-Mageddon, how should maintainers of chemical databases respond? As noted previously, atoms in both V2000 and V3000 formats support an attribute called "valence," which indirectly sets a virtual hydrogen count while at the same time disabling implicit hydrogen counting. So one option would be to re-write all molfiles to ensure that every atom, regardless of its element, has a non-default valence attribute. The only problem with this approach is that it's not clear how widely-supported the valence attribute is. Even those tools supporting it now may not have done so in the past. So there's likely to be a spectrum of tools in the wild supporting the valence property to varying degrees. ## Conclusion Implicit hydrogen counting is central to cheminformatics because of its incorporation into one of the most widely-used file formats, molfile. But all implicit hydrogen schemes suffer from a single point of failure, namely that any change whatsoever to the counting algorithm has the potential to cause large-scale data integrity issues. MDL Valence-Mageddon illustrates the problem quite clearly and hopefully serves as a warning to those who would venture to create scientific information exchange formats.
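To make the two counting regimes concrete, here is a minimal sketch of the default-valence algorithm described earlier. The valence tables are illustrative excerpts only (not the full MDL lists), and charge, radical, and query handling are omitted:

```python
# Minimal sketch of molfile implicit hydrogen counting: h = v - b,
# where v is the smallest default valence not exceeded by b.
VALENCES_2011 = {"C": (4,), "O": (2,), "Na": (1,), "Sn": (2, 4)}
VALENCES_2014 = {"C": (4,), "O": (2,)}   # Na and Sn entries removed

def implicit_hydrogens(element, bond_order_sum, table):
    valences = table.get(element)
    if not valences:                      # no default valences defined
        return 0
    for v in sorted(valences):            # smallest valence >= bond order sum
        if v >= bond_order_sum:
            return v - bond_order_sum
    return 0                              # default valences already exceeded

# Tributyltin: three single bonds to Sn, so b = 3.
print(implicit_hydrogens("Sn", 3, VALENCES_2011))  # 1 (v = 4, h = 4 - 3)
print(implicit_hydrogens("Sn", 3, VALENCES_2014))  # 0: the hydrogen vanishes

# Isolated neutral sodium: b = 0.
print(implicit_hydrogens("Na", 0, VALENCES_2011))  # 1
print(implicit_hydrogens("Na", 0, VALENCES_2014))  # 0
```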
keywords: Taylor, Geometric, Binomial Power Series (Used in ECE301 and ECE438)

Taylor Series Formulas

Series in symbolic form

$\text{Taylor series in one variable: } \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} \, (x-a)^{n}$

$\text{Taylor series in } d \text{ variables: } \sum_{n_1=0}^{\infty} \cdots \sum_{n_d=0}^{\infty} \frac{(x_1-a_1)^{n_1}\cdots (x_d-a_d)^{n_d}}{n_1!\cdots n_d!}\,\left(\frac{\partial^{n_1 + \cdots + n_d}f}{\partial x_1^{n_1}\cdots \partial x_d^{n_d}}\right)(a_1,\dots,a_d)$

Taylor series to remember

$\text{Exponential: } e^x = \sum_{n=0}^\infty \frac{x^n}{n!}, \text{ for all } x\in {\mathbb C}$

$\text{Logarithm: } \ln (1+x) = \sum^{\infty}_{n=1} (-1)^{n+1}\frac{x^n}{n}, \text{ when } -1<x\leq 1$

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots, \quad \text{ for } -\infty < x < \infty$

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots, \quad \text{ for } -\infty < x < \infty$

Geometric series and related series

$\text{Finite geometric series: } \sum_{k=0}^n x^k = \left\{ \begin{array}{ll} \frac{1-x^{n+1}}{1-x}&, \text{ if } x\neq 1\\ n+1 &, \text{ else}\end{array}\right.$

$\text{Infinite geometric series: } \sum_{k=0}^\infty x^k = \left\{ \begin{array}{ll} \frac{1}{1-x}&, \text{ if } |x| < 1\\ \text{diverges} &, \text{ else }\end{array}\right.$

$\frac{x^m}{1-x} = \sum^{\infty}_{n=m} x^n, \quad\text{ for }|x| < 1 \text{ and } m\in\mathbb{N}_0$

$\frac{x}{(1-x)^2} = \sum^{\infty}_{n=1}n x^n, \quad\text{ for }|x| < 1$

Taylor series of single-variable functions

$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)(x-a)^2}{2!} + \cdots + \frac{f^{(n-1)}(a)(x-a)^{n-1}}{(n-1)!} + R_n$

$\text{Lagrange remainder: } R_n = \frac {f^{(n)}(\zeta)(x-a)^n}{n!}$

$\text{Cauchy remainder: } R_n = \frac {f^{(n)}(\zeta)(x-\zeta)^{n-1}(x-a)}{(n-1)!}$

Binomial series

For any positive integer n:

$(a+x)^n = \sum_{k=0}^n \binom{n}{k} x^k a^{n-k} = a^n + \binom{n}{1} a^{n-1}x + \binom{n}{2} a^{n-2}x^2 + \binom{n}{3} a^{n-3}x^3 + \ldots + x^n$

For any complex number z:

$(a+x)^z = a^z + za^{z-1}x + \frac {z(z-1)}{2!} a^{z-2}x^2 + \frac {z(z-1)(z-2)}{3!} a^{z-3}x^3 + \ldots = a^z + \binom{z}{1} a^{z-1}x + \binom{z}{2} a^{z-2}x^2 + \binom{z}{3} a^{z-3}x^3 + \ldots$

Some particular cases:

$(a+x)^2 = a^2 + 2ax + x^2$

$(a+x)^3 = a^3 + 3a^2x + 3ax^2 + x^3$

$(a+x)^4 = a^4 + 4a^3x + 6a^2x^2 + 4ax^3 + x^4$

$(1+x)^{-1} = 1 - x + x^2 - x^3 + x^4 - \cdots \qquad -1 < x < 1$

$(1+x)^{-2} = 1 - 2x + 3x^2 - 4x^3 + 5x^4 - \cdots \qquad -1 < x < 1$

$(1+x)^{-3} = 1 - 3x + 6x^2 - 10x^3 + 15x^4 - \cdots \qquad -1 < x < 1$

$(1+x)^{-1/2} = 1 - \frac{1}{2}x + \frac{1 \cdot 3}{2 \cdot 4}x^2 - \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6} x^3 + \cdots \qquad -1 < x \leq 1$

$(1+x)^{1/2} = 1 + \frac{1}{2}x - \frac{1}{2 \cdot 4}x^2 + \frac{1 \cdot 3}{2 \cdot 4 \cdot 6} x^3 - \cdots \qquad -1 < x \leq 1$

$(1+x)^{-1/3} = 1 - \frac{1}{3}x + \frac{1 \cdot 4}{3 \cdot 6}x^2 - \frac{1 \cdot 4 \cdot 7}{3 \cdot 6 \cdot 9} x^3 + \cdots \qquad -1 < x \leq 1$

$(1+x)^{1/3} = 1 + \frac{1}{3}x - \frac{2}{3 \cdot 6}x^2 + \frac{2 \cdot 5}{3 \cdot 6 \cdot 9} x^3 - \cdots \qquad -1 < x \leq 1$
Series expansion of exponential functions and logarithms

$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \qquad -\infty < x < \infty$

$a^x = e^{x \ln a} = 1 + x \ln a + \frac{(x \ln a)^2}{2!} + \frac{(x \ln a)^3}{3!} + \cdots \qquad -\infty < x < \infty$

$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots \qquad -1 < x \leq 1$

$\frac{1}{2} \ln \left( \frac{1+x}{1-x} \right) = x + \frac{x^3}{3} + \frac{x^5}{5} + \frac{x^7}{7} + \cdots \qquad -1 < x < 1$

$\ln x = 2 \left\{ \left( \frac{x-1}{x+1} \right) + \frac{1}{3} \left( \frac{x-1}{x+1} \right)^3 + \frac{1}{5} \left( \frac{x-1}{x+1} \right)^5 + \cdots \right\} \qquad x > 0$

$\ln x = \left( \frac{x-1}{x} \right) + \frac{1}{2} \left( \frac{x-1}{x} \right)^2 + \frac{1}{3} \left( \frac{x-1}{x} \right)^3 + \cdots \qquad x \geq \frac{1}{2}$

Series expansion of circular functions

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \qquad -\infty < x < \infty$

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \qquad -\infty < x < \infty$

$\cot x = \frac{1}{x} - \frac{x}{3} - \frac{x^3}{45} - \frac{2x^5}{945} - \cdots - \frac{2^{2n}B_n x^{2n-1}}{(2n)!} - \cdots \qquad 0 < |x| < \pi$

$\frac{1}{\cos x} = 1 + \frac{x^2}{2} + \frac{5x^4}{24} + \frac{61x^6}{720} + \cdots - \frac{E_n x^{2n}}{(2n)!} + \cdots \qquad |x| < \frac{\pi}{2}$

$\frac{1}{\sin x} = \frac{1}{x} + \frac{x}{6} + \frac{7x^3}{360} + \frac{31x^5}{15120} + \cdots + \frac{2(2^{2n-1}-1)B_n x^{2n-1}}{(2n)!} + \cdots \qquad 0 < |x| < \pi$

$\arcsin x = x + \frac{1}{2}\frac{x^3}{3} + \frac{1 \cdot 3}{2 \cdot 4} \frac{x^5}{5} + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}\frac{x^7}{7} + \cdots \qquad |x| < 1$

$\arccos x = \frac{\pi}{2} - \arcsin x = \frac{\pi}{2} - \left( x + \frac{1}{2}\frac{x^3}{3} + \frac{1 \cdot 3}{2 \cdot 4} \frac{x^5}{5} + \cdots \right) \qquad |x| < 1$

$\arctan x = \begin{cases} x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots, & |x| < 1 \\ \frac{\pi}{2} - \frac{1}{x} + \frac{1}{3x^3} - \frac{1}{5x^5} + \cdots, & x \geq 1 \\ -\frac{\pi}{2} - \frac{1}{x} + \frac{1}{3x^3} - \frac{1}{5x^5} + \cdots, & x \leq -1 \end{cases}$

$\operatorname{arccot} x = \frac{\pi}{2} - \arctan x = \begin{cases} \frac{\pi}{2} - \left( x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots \right), & |x| < 1 \\ \frac{1}{x} - \frac{1}{3x^3} + \frac{1}{5x^5} - \cdots, & x > 1 \\ \pi + \frac{1}{x} - \frac{1}{3x^3} + \frac{1}{5x^5} - \cdots, & x < -1 \end{cases}$

$\arccos\left(\frac{1}{x}\right) = \frac{\pi}{2} - \left( \frac{1}{x} + \frac{1}{2 \cdot 3 x^3} + \frac{1 \cdot 3}{2 \cdot 4 \cdot 5 x^5} + \cdots \right) \qquad |x| > 1$

$\arcsin\left(\frac{1}{x}\right) = \frac{1}{x} + \frac{1}{2 \cdot 3 x^3} + \frac{1 \cdot 3}{2 \cdot 4 \cdot 5 x^5} + \cdots \qquad |x| > 1$

Series expansion of hyperbolic functions

$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!} + \cdots \qquad -\infty < x < \infty$

$\cosh x = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \cdots \qquad -\infty < x < \infty$
$\tanh x = x - \frac{x^3}{3} + \frac{2x^5}{15} - \frac{17x^7}{315} + \cdots + \frac{(-1)^{n-1}2^{2n}(2^{2n}-1)B_n x^{2n-1}}{(2n)!} + \cdots \qquad |x| < \frac{\pi}{2}$

$\coth x = \frac{1}{x} + \frac{x}{3} - \frac{x^3}{45} + \frac{2x^5}{945} + \cdots + \frac{(-1)^{n-1}2^{2n}B_n x^{2n-1}}{(2n)!} + \cdots \qquad 0 < |x| < \pi$

$\frac{1}{\cosh x} = 1 - \frac{x^2}{2} + \frac{5x^4}{24} - \frac{61x^6}{720} + \cdots + \frac{(-1)^n E_n x^{2n}}{(2n)!} + \cdots \qquad |x| < \frac{\pi}{2}$

$\frac{1}{\sinh x} = \frac{1}{x} - \frac{x}{6} + \frac{7x^3}{360} - \frac{31x^5}{15120} + \cdots + \frac{(-1)^n 2(2^{2n-1}-1)B_n x^{2n-1}}{(2n)!} + \cdots \qquad 0 < |x| < \pi$

$\operatorname{arsinh} x = \begin{cases} x - \frac{x^3}{2 \cdot 3} + \frac{1 \cdot 3\, x^5}{2 \cdot 4 \cdot 5} - \frac{1 \cdot 3 \cdot 5\, x^7}{2 \cdot 4 \cdot 6 \cdot 7} + \cdots, & |x| < 1 \\ \ln |2x| + \frac{1}{2 \cdot 2 x^2} - \frac{1 \cdot 3}{2 \cdot 4 \cdot 4x^4} + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6 \cdot 6x^6} - \cdots, & x \geq 1 \\ -\left( \ln |2x| + \frac{1}{2 \cdot 2 x^2} - \frac{1 \cdot 3}{2 \cdot 4 \cdot 4x^4} + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6 \cdot 6x^6} - \cdots \right), & x \leq -1 \end{cases}$

$\operatorname{arcosh} x = \begin{cases} \ln (2x) - \left( \frac{1}{2 \cdot 2x^2} + \frac{1 \cdot 3}{2 \cdot 4 \cdot 4x^4} + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6 \cdot 6x^6} + \cdots \right), & \operatorname{arcosh} x > 0,\ x \geq 1 \\ -\left[ \ln (2x) - \left( \frac{1}{2 \cdot 2x^2} + \frac{1 \cdot 3}{2 \cdot 4 \cdot 4x^4} + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6 \cdot 6x^6} + \cdots \right) \right], & \operatorname{arcosh} x < 0,\ x \geq 1 \end{cases}$

$\operatorname{artanh} x = x + \frac{x^3}{3} + \frac{x^5}{5} + \frac{x^7}{7} + \cdots \qquad |x| < 1$

$\operatorname{arcoth} x = \frac{1}{x} + \frac{1}{3x^3} + \frac{1}{5x^5} + \frac{1}{7x^7} + \cdots \qquad |x| > 1$

Various series

$e^{\sin x} = 1 + x + \frac{x^2}{2} - \frac{x^4}{8} - \frac{x^5}{15} + \cdots \qquad -\infty < x < \infty$

$e^{\cos x} = e \left( 1 - \frac{x^2}{2} + \frac{x^4}{6} - \frac{31x^6}{720} + \cdots \right) \qquad -\infty < x < \infty$

$e^{\tan x} = 1 + x + \frac{x^2}{2} + \frac{x^3}{2} + \frac{3x^4}{8} + \cdots \qquad |x| < \frac{\pi}{2}$

$e^x \sin x = x + x^2 + \frac{2x^3}{3!} - \frac{x^5}{30} - \frac{x^6}{90} + \cdots + \frac{2^{n/2} \sin (n \pi /4)\, x^n}{n!} + \cdots \qquad -\infty < x < \infty$

$e^x \cos x = 1 + x - \frac{x^3}{3} - \frac{x^4}{6} + \cdots + \frac{2^{n/2} \cos (n \pi /4)\, x^n}{n!} + \cdots \qquad -\infty < x < \infty$

$\ln |\sin x| = \ln |x| - \frac{x^2}{6} - \frac{x^4}{180} - \frac{x^6}{2835} - \cdots - \frac{2^{2n-1}B_n x^{2n}}{n(2n)!} + \cdots \qquad 0 < |x| < \pi$

$\ln |\cos x| = -\frac{x^2}{2} - \frac{x^4}{12} - \frac{x^6}{45} - \frac{17x^8}{2520} - \cdots - \frac{2^{2n-1}(2^{2n}-1)B_n x^{2n}}{n(2n)!} + \cdots \qquad |x| < \frac{\pi}{2}$

$\ln |\tan x| = \ln |x| + \frac{x^2}{3} + \frac{7x^4}{90} + \frac{62x^6}{2835} + \cdots + \frac{2^{2n}(2^{2n-1}-1)B_n x^{2n}}{n(2n)!} + \cdots \qquad 0 < |x| < \frac{\pi}{2}$

$\frac{\ln (1+x)}{1+x} = x - \left(1 + \frac{1}{2}\right) x^2 + \left(1 + \frac{1}{2} + \frac{1}{3}\right) x^3 - \cdots \qquad |x| < 1$

Series of Reciprocal Power Series (reversion of series)

$\text{If }\ y = c_1x + c_2x^2 + c_3x^3 + c_4x^4 + c_5x^5 + c_6x^6 + \cdots \qquad \text{then }\ x = C_1y + C_2y^2 + C_3y^3 + C_4y^4 + C_5y^5 + C_6y^6 + \cdots$
$\text{where }\ c_1C_1 = 1, \qquad c_1^3C_2 = -c_2, \qquad c_1^5C_3 = 2c_2^2 - c_1c_3$

$c_1^7C_4 = 5c_1c_2c_3 - 5c_2^3 - c_1^2c_4, \qquad c_1^9C_5 = 6c_1^2c_2c_4 + 3c_1^2c_3^2 + 14c_2^4 - c_1^3c_5 - 21c_1c_2^2c_3$

$c_1^{11}C_6 = 7c_1^3c_2c_5 + 7c_1^3c_3c_4 + 84c_1c_2^3c_3 - 28c_1^2c_2c_3^2 - c_1^4c_6 - 28c_1^2c_2^2c_4 - 42c_2^5$

Taylor series of a two-variable function

$f(x,y) = f(a,b) + (x-a)f_x(a,b) + (y-b)f_y(a,b) + \frac{1}{2!} \left\{ (x-a)^2f_{xx}(a,b) + 2(x-a)(y-b)f_{xy}(a,b) + (y-b)^2f_{yy}(a,b) \right\} + \cdots$

$f_x(a,b), f_y(a,b), \ldots \text{ denote the partial derivatives with respect to } x, y, \ldots$
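As a quick sanity check, a few of the expansions above can be compared against sympy (an illustrative snippet, not part of the original formula sheet):

```python
# Spot-check a few series expansions with sympy.
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.sin(x), x, 0, 8))    # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)
print(sp.series(sp.sec(x), x, 0, 8))    # 1 + x**2/2 + 5*x**4/24 + 61*x**6/720 + O(x**8)
print(sp.series(sp.atanh(x), x, 0, 8))  # x + x**3/3 + x**5/5 + x**7/7 + O(x**8)
```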
The Rare Decay of the Neutral Pion into a Dielectron

Uppsala University, Disciplinary Domain of Science and Technology, Physics, Department of Physics and Astronomy, Nuclear Physics. 2013 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.

##### Abstract [en]
We give a rather self-contained introduction to the rare pion to dielectron decay, which in nontrivial leading order is given by a QED triangle loop. We work within the dispersive framework, where the imaginary part of the amplitude is obtained via the Cutkosky rules. We derive these rules in detail. Using the twofold Mellin-Barnes representation for the pion transition form factor, we derive a simple expression for the branching ratio B(π⁰ → e⁺e⁻), which we then test for various models, in particular a more recent form factor derived from a Lagrangian for light pseudoscalars and vector mesons inspired by effective field theories. Comparison with the KTeV experiment at Fermilab is made, and we find that we are more than 3σ below the KTeV result for some of the form factors. This is in agreement with other theoretical models, such as the Vector Meson Dominance model and the quark-loop model within the constituent-quark framework. But we also find that we can be in agreement with KTeV if we explore some freedom of the form factor not fixed by the low-energy Lagrangian.

2013. 178 p.

##### Series
FYSAST, FYSMAS1013

##### Keywords [en]
rare pion decay, Mellin transform, Mellin-Barnes representation, dispersive representation, dispersion relation, Kramers-Kronig relations, Cutkosky cutting rules, Feynman loop diagram calculations

##### National Category
Subatomic Physics

##### Identifiers
OAI: oai:DiVA.org:uu-211683; DiVA: diva2:668048

##### Educational program
Master Programme in Physics

##### Presentation
2013-11-07, Å11167, Uppsala, 15:15 (English)

Available from: 2013-11-29. Created: 2013-11-28. Last updated: 2014-06-16. Bibliographically approved. Full text: FULLTEXT01.pdf, 1550 kB (application/pdf).
# Is a language with only a stack of fixed-size integers Turing-complete? I encountered the brainfuck programming language which I know is turing complete. However I then decided to create a high level language that gets compiled to brainfuck code. There is only one data type in it (integer, since that's the only data type brainfuck supports). It supports functions and subroutines (to which you can pass integers by value, but not arrays, though you can access an array in the global scope from within a function/subroutine) but it does not support recursion (all function and subroutine calls are inlined). It supports statically allocated arrays (size known at compile time) and has just one unbounded stack. You can store as many items as you like in the stack, unlike with arrays, but you only have one stack for your entire program. I have these limitations to achieve a balance between ease of use, and fast generated code. However, I never studied any of this stuff and I actually started this project to learn about compilers (by making one), therefore my question is: From the above description, is this language turing complete? • Are you describing your language or brainfuck? And, why? – Dave Clarke Apr 18 '13 at 13:25 • I am describing my language. Brainfuck is a very simple language with only 8 commands operating on a "tape" of cells. Shouldn't take more than 2 minutes to learn the entirety of the language off of wikipedia. I would like to know whether the language I am creating is turing complete with that weird set of available tools. – Cedric Mamo Apr 18 '13 at 13:30 If your language supports an arbitrarily large array of integers (even of numbers in $\{0,1\}$), then you can simulate the tape of a TM. Then, clearly your language is Turing complete. EDIT: based on your comment, you don't have an arbitrary array. In this case, you can do the following: If you can keep two counters (integers), and increase or decrease them, and check whether they are 0, then you can simulate a two counter machine (Minsky Machine), which is Turing complete (with a caveat). If you also bound the maximal value that an integer can take, then I believe that your machine is no more powerful than a PDA. • I asked the question simply because of the weird combination of arrays and stack. Array sizes have to be declared at compile time, meaning you can't create an array of arbitrary size at runtime, but you have one stack which is unbounded. – Cedric Mamo Apr 18 '13 at 13:28 • Edited the answer. – Shaull Apr 18 '13 at 13:54 • Thanks for your answer. The range of the values depends on the interpreter running the code generated by my compiler so it's not really my problem :D. – Cedric Mamo Apr 18 '13 at 14:37 • By the way, if you are really talking about a practical machine, then it is a finite model, and is therefore no more expressive than a DFA. – Shaull Apr 18 '13 at 16:02 If you want to build a compiler, brainfuck is near the worst language to start with... it was designed to be a total mess. A practical way to understand a compiler is Fraser and Hanson's "A Retargetable C Compiler: Design and Implementation", the book contains the full source code for a simple (but complete) ANSI C compiler. From the link you can get the source code for the (much improved) last version. 
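To make the two-counter construction mentioned in the accepted answer concrete, here is a minimal illustrative interpreter for a Minsky-style machine (the instruction encoding is my own, not from the thread):

```python
# Minimal two-counter (Minsky) machine interpreter.
# Instructions: ('inc', counter, next_pc) increments a counter;
# ('test', counter, pc_if_zero, pc_else) jumps if zero, else decrements.
def run(program, c0=0, c1=0):
    counters, pc = [c0, c1], 0
    while 0 <= pc < len(program):
        op = program[pc]
        if op[0] == 'inc':
            counters[op[1]] += 1
            pc = op[2]
        else:  # 'test'
            _, c, if_zero, otherwise = op
            if counters[c] == 0:
                pc = if_zero
            else:
                counters[c] -= 1
                pc = otherwise
    return counters

# Example program: add counter 0 into counter 1.
prog = [
    ('test', 0, 2, 1),  # 0: halt (pc=2 is past the end) when c0 == 0
    ('inc', 1, 0),      # 1: c1 += 1, then loop back
]
print(run(prog, c0=5, c1=2))  # [0, 7]
```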
But note that real compilers are much more complex: LCC has carefully selected schemas for conditionals and loops, does careful instruction selection, and a bit of on-the-fly expression simplification, but little else in terms of "code optimization". If you'd like to mess with a real compiler, take a peek at LLVM, the official compiler for MacOS. It is nicely structured from the ground up, and thus much more understandable than the organically grown GCC. Perhaps tcc, the Tiny C Compiler (outgrowth of an International Obfuscated C Code Contest entry for 2002, where they asked for the smallest C compiler able to compile itself) might be of amusement value...
• those compilers output machine code at the end, mine outputs bf. Doesn't really matter to me, as I still learned about the parsing, some optimizations etc., and that was kind of the whole point – Cedric Mamo Apr 19 '13 at 5:38
# Energy Equation of an ideal gas in expanding space-time Short question: If the energy equation of an ideal plasma is written as follows: $$\frac{\mathrm{d}p}{\mathrm{d}t}=-\Gamma p\nabla\cdot v-\left(\Gamma-1\right)\left( \nabla\cdot q-\Pi\left(\nabla v\right) \right),$$ with p the pressure, $$\Gamma$$ the adiabatic index, v the velocity, q the heat flux through the boundary, and $$\Pi$$ the viscous stress tensor ($$\Pi\left(\nabla v\right)$$ the volumetric viscous heating rate), how would this look in an expanding universe (preferably in 3+1 notation)? $$\\$$ $$\\$$ Long derivation and explanation of the above formula: If we are in the Lagrangian frame (a volume element co-moving with the fluid), the energy flow across the surface $$-\nabla\cdot q$$ plus the viscous heating rate $$\Pi\left(\nabla v\right)$$ must equal the heat input rate per unit volume $$\rho\frac{\mathrm{d}Q}{\mathrm{d}t}$$: $$\rho\frac{\mathrm{d}Q}{\mathrm{d}t}=-\nabla\cdot q+\Pi\left(\nabla v\right),$$ with $$Q$$ the heat per unit mass and $$q$$ the heat flux through the boundary. The viscous heating rate $$\Pi\left(\nabla v\right)$$ comes from the viscous stress tensor $$\Pi$$, which is of the form $$\Pi= \begin{pmatrix} 0 & a & b \\ a & 0 & c\\ b & c & 0 \end{pmatrix},$$ with $$a$$, $$b$$, and $$c$$ scalars. Let's assume there is no heat loss through the surface, but that volume elements can interchange heat, so that the first law of thermodynamics $$\mathrm{d}Q=p\mathrm{d}\left(\frac{1}{\rho}\right)+\mathrm{d}e,$$ with $$p\mathrm{d}\left(\frac{1}{\rho}\right)$$ the PV-work per unit mass and $$\mathrm{d}e$$ the change in energy per unit mass, can be substituted into the above equation. We find $$p\rho\frac{\mathrm{d}\left(\frac{1}{\rho}\right)}{\mathrm{d}t} +\rho\frac{\mathrm{d}e}{\mathrm{d}t}=-\nabla\cdot q+\Pi\left(\nabla v\right).$$ The factor $$\frac{\mathrm{d}\left(\frac{1}{\rho}\right)}{\mathrm{d}t}$$ can be rewritten using the continuity equation: \begin{align} \frac{\mathrm{d}\left(\frac{1}{\rho}\right)}{\mathrm{d}t}&=\frac{-1}{\rho^2}\frac{\mathrm{d}\rho}{\mathrm{d}t}\\ &=\frac{-1}{\rho^2}\left(-\rho\nabla\cdot v\right)\\ &=\frac{1}{\rho}\nabla\cdot v. \end{align} With this we can express the rate of change of energy per unit volume as $$\rho\frac{\mathrm{d}e}{\mathrm{d}t}=-p\nabla\cdot v-\nabla\cdot q+\Pi\left(\nabla v\right).$$ Again, we can use the continuity equation, but this time to rewrite $$\rho\frac{\mathrm{d}e}{\mathrm{d}t}=\frac{\mathrm{d}}{\mathrm{d}t}\left(\rho e\right)+\rho e\nabla\cdot v,$$ which leads to $$\frac{\mathrm{d}}{\mathrm{d}t}\left(\rho e\right)=-\rho e\nabla\cdot v-p\nabla\cdot v-\nabla\cdot q+\Pi\left(\nabla v\right).$$ For an ideal gas we have $$\rho e= \frac{p}{\Gamma-1},$$ which leads to the equation at the top. $$\frac{dQ}{dt} \rightarrow \frac{1}{a^3(t)}\frac{d}{dt}(a^3(t) Q)$$ where $$a(t)$$ is the scale factor? $$\dot a/a = H$$, the Hubble parameter. The reason is that the first equation looks like a 4-divergence, and a divergence with metric $$g$$ is $$\frac{1}{\sqrt{g}} \partial_\mu( \sqrt{g} f^\mu)$$. Not sure about my answer, especially because there is no time derivative of $$\rho$$ and because I am confused about Lagrangian coordinates.
# Linear Algebra

A matrix is a 2-D array of numbers. A tensor is an n-D array of numbers.

Matrix multiplication is associative but not commutative.

Not all square matrices have an inverse. A matrix that is not invertible is called singular (or degenerate). A matrix is singular if any of the following is true:
- Any row or column contains all zeros.
- Any two rows or columns are identical.
- Any row or column is a linear combination of other rows or columns.

A square matrix A is singular if and only if determinant(A) = 0.

Inner product of two vectors $$\overrightarrow{u}$$ and $$\overrightarrow{v}$$: $$\overrightarrow{u} \cdot \overrightarrow{v} = p \, ||\overrightarrow{u}|| = p \, \sqrt{\sum_{i=1}^{n}u_{i}^{2}} = u^{T} v = \sum_{i=1}^{n}u_{i} v_{i}$$ where p is the projection of $$\overrightarrow{v}$$ on $$\overrightarrow{u}$$ and $$||\overrightarrow{u}||$$ is the Euclidean norm of $$\overrightarrow{u}$$.

$$A = \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix}$$ Determinant(A) = |A| = a(ei-hf) - b(di-gf) + c(dh-ge)

If $$f: R^{n*m} \mapsto R$$: $$\frac{\partial}{\partial A} f(A) = \begin{bmatrix} \frac{\partial}{\partial a_{11}}f(A) & … & \frac{\partial}{\partial a_{1m}} f(A)\\ … & … & …\\ \frac{\partial}{\partial a_{n1}}f(A) & … & \frac{\partial}{\partial a_{nm}}f(A)\\ \end{bmatrix}$$

Example: $$f(A) = a_{11} + … + a_{nm}$$, so $$\frac{\partial}{\partial A} f(A) = \begin{bmatrix} 1 & … & 1\\ … & … & …\\ 1 & … & 1 \\ \end{bmatrix}$$

If A is a square matrix: trace(A) = $$\sum_{i=1}^n A_{ii}$$; trace(AB) = trace(BA); trace(ABC) = trace(CAB) = trace(BCA); trace(B) = trace($$B^T$$); trace(a) = a for a scalar a.

$$\frac{\partial}{\partial A} trace(AB) = B^T$$, $$\frac{\partial}{\partial A} trace(ABA^TC) = CAB + C^TAB^T$$

Eigenvector: Given a matrix A, if a vector μ satisfies the equation A μ = λ μ, then μ is called an eigenvector of A, and λ is the corresponding eigenvalue. The principal eigenvector of A is the eigenvector with the largest eigenvalue. Eigenvectors are vectors that the matrix only rescales, without changing their direction.

Example: The normalized eigenvectors of $$\begin{bmatrix}0 & 1 \\1 & 0 \end{bmatrix}$$ are $$\begin{bmatrix}\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}$$ and $$\begin{bmatrix}-\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}$$; the eigenvalues are 1 and -1.

Eigendecomposition: Given a symmetric matrix A∈$$R^{n*n}$$, ∃ Q∈$$R^{n*n}$$ and a diagonal Λ∈$$R^{n*n}$$ such that $$A=QΛQ^T$$ (for a general diagonalizable matrix, $$A=QΛQ^{-1}$$). Q's columns are the eigenvectors of $$A$$, and Λ is the diagonal matrix whose diagonal elements are the eigenvalues.

Example: The eigendecomposition of $$\begin{bmatrix}0 & 1 \\1 & 0 \end{bmatrix}$$ is Q=$$\begin{bmatrix}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{bmatrix}$$, Λ=$$\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}$$

Singular Value Decomposition (SVD): Given a matrix A∈$$R^{n*m}$$, ∃ U∈$$R^{n*m}$$, a diagonal D∈$$R^{m*m}$$, and V∈$$R^{m*m}$$ such that $$A=UDV^T$$. U's columns are the eigenvectors of $$AA^T$$, and V's columns are the eigenvectors of $$A^TA$$.

Example: An SVD of $$\begin{bmatrix} 0 & 1 \\1 & 0 \end{bmatrix}$$ is U=$$\begin{bmatrix}0 & -1 \\ -1 & 0\end{bmatrix}$$, D=$$\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}$$, V=$$\begin{bmatrix}-1 & 0 \\ 0 & -1\end{bmatrix}$$
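The eigendecomposition and SVD examples above are easy to verify numerically; a small illustrative check with numpy (not part of the original notes):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])

w, Q = np.linalg.eig(A)        # eigenvalues [1, -1] (order may differ)
print(w)
print(Q @ np.diag(w) @ Q.T)    # reconstructs A, since A is symmetric

U, s, Vt = np.linalg.svd(A)    # singular values are [1, 1]
print(U @ np.diag(s) @ Vt)     # also reconstructs A
```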
The Moore-Penrose Pseudoinverse: The Moore-Penrose pseudoinverse is a matrix that can act as a partial replacement for the matrix inverse in cases where the inverse does not exist (e.g. non-square matrices). It is defined as: $$pinv(A) = \lim_{α \rightarrow 0} (A^TA + αI)^{-1} A^T$$

# Analysis

0! = 1; exp(1) = 2.718; exp(0) = 1; ln(1) = 0

$$ln(x) = log_e(x), \qquad log_b (b^a) = a$$

exp(a + b) = exp(a) * exp(b); ln(a * b) = ln(a) + ln(b)

$$cos(x)^2 + sin(x)^2 = 1$$

Euler's formula: exp(iθ) = cos(θ) + i sin(θ)

Complex numbers: rectangular form z = a + ib (real part + imaginary part, with i an imaginary unit satisfying $$i^2 = −1$$); polar form z = r (cos(θ) + i sin(θ)); exponential form z = r exp(iθ).

Multivariate equations: The solution set of a system of linear equations with 3 variables is the intersection of the hyperplanes defined by each linear equation.

Derivatives: $$\frac{\partial f(x)}{\partial x} = \lim_{h \rightarrow 0} \frac{f(x+h) – f(x)}{h}$$

| Function | Derivative |
| --- | --- |
| $$x^n$$ | $$n \, x^{n-1}$$ |
| $$exp(x)$$ | $$exp(x)$$ |
| $$f \circ g (x)$$ | $$g'(x) \cdot f' \circ g(x)$$ |
| $$ln(x)$$ | $$1/x$$ |
| $$sin(x)$$ | $$cos(x)$$ |
| $$cos(x)$$ | $$-sin(x)$$ |

Integration by parts: $$\int_{a}^{b} (f(x) g(x))' dx = \int_{a}^{b} f'(x) g(x) dx + \int_{a}^{b} f(x) g'(x) dx$$

Binomial theorem: $$(x + y)^n = \sum_{k=0}^{n} C_n^k x^k y^{n-k}$$

Chain rule: Z = f(x(u,v), y(u,v)), so $$\frac{\partial Z}{\partial u} = \frac{\partial Z}{\partial x} \frac{\partial x}{\partial u} + \frac{\partial Z}{\partial y} \frac{\partial y}{\partial u}$$

Entropy measures the uncertainty associated with a random variable: $$H(X) = -\sum_{i=1}^n p(x^{(i)}) log(p(x^{(i)}))$$

Example: Entropy({1,1,1,1}) = 0; Entropy({0,1,0,1}) = $$-(\tfrac{1}{2} log(\tfrac{1}{2}) + \tfrac{1}{2} log(\tfrac{1}{2})) = log(2)$$

Hessian: $$H = \begin{bmatrix}\frac{\partial^2 f(θ)}{\partial θ_1\partial θ_1} & \frac{\partial^2 f(θ)}{\partial θ_1 \partial θ_2} \\ \frac{\partial^2 f(θ)}{\partial θ_2\partial θ_1} & \frac{\partial^2 f(θ)}{\partial θ_2\partial θ_2} \end{bmatrix}$$

Example: $$f(θ) = θ_1^2 + θ_2^2, \quad H(f) = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$$

A function f(θ) is convex if its Hessian matrix is positive semidefinite ($$x^T H(θ) x \geq 0$$ for every $$x∈R^2$$): $$x^T H(θ) x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = 2 x_1^2 + 2 x_2^2 \geq 0$$

Method of Lagrange Multipliers: To maximize/minimize f(x) subject to the constraints $$h_r(x) = 0$$ for r in {1,..,l}, define the Lagrangian: $$L(x, α) = f(x) – \sum_{r=1}^l α_r h_r(x)$$ and find x, α by solving the following equations: $$\frac{\partial L}{\partial x} = 0$$; $$\frac{\partial L}{\partial α_r} = 0$$ for all r; $$h_r(x) = 0$$ for all r. Calculate the Hessian matrix (f''(x)) to know whether the solution is a minimum or a maximum.

Method of Lagrange Multipliers with inequality constraints: To minimize f(x) subject to $$g_i(x) \geq 0$$ for i in {1,..,k} and $$h_r(x) = 0$$ for r in {1,..,l}, define the Lagrangian: $$L(x, α, β) = f(x) – \sum_{i=1}^k α_i g_i(x) – \sum_{r=1}^l β_r h_r(x)$$ and find x, α, β by solving the following equations: $$\frac{\partial L}{\partial x} = 0$$; $$\frac{\partial L}{\partial α_i} = 0$$ for all i; $$\frac{\partial L}{\partial β_r} = 0$$ for all r; $$h_r(x) = 0$$ for all r; $$g_i(x) \geq 0$$ for all i; $$α_i g_i(x) = 0$$ for all i (Karush–Kuhn–Tucker conditions); $$α_i \geq 0$$ for all i (KKT conditions).

Lagrange strong duality – hard to understand 🙁

Lagrange dual function: $$d(α, β) = \underset{x}{min}\ L(x, α, β)$$, where x satisfies the equality and inequality constraints.
We define $$d^* = \underset{α \geq 0, β}{max}\ d(α, β)$$ and $$p^* = \underset{x}{min}\ f(x)$$ (where x satisfies the equality and inequality constraints). Under certain conditions (Slater conditions: f convex, …), $$p^* = d^*$$.

Jensen's inequality: If f is a convex function and X a random variable, then f(E[X]) <= E[f(X)]. If f is a concave function and X a random variable, then f(E[X]) >= E[f(X)]. If f is strictly convex (f''(x) > 0), then f(E[X]) = E[f(X)] holds only if X = E[X] (X is a constant).

# Probability

Below are the main probability theorems.

Law of total probability: If A is an arbitrary event, and $$B_1, …, B_n$$ are mutually exclusive events such that $$\sum_{i=1}^{n} P(B_{i}) = 1$$, then: $$P(A) = \sum_{i=1}^{n} P(A|B_{i}) P(B_{i}) = \sum_{i=1}^{n} P(A,B_{i})$$

Example: Suppose that 15% of the population of your country was exposed to a dangerous chemical Z, and that exposure to Z quadruples the risk of lung cancer, from .0001 to .0004. What's the probability that you will get lung cancer? P(cancer) = .15 * .0004 + .85 * .0001 = .000145

Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), where A and B are events.

Example: Same setup as above. If you have lung cancer, what's the probability that you were exposed to Z? P(Z|Cancer) = P(Cancer|Z) * P(Z) / P(Cancer). We can calculate P(Cancer) using the law of total probability: P(Cancer) = P(Cancer|Z) * P(Z) + P(Cancer|~Z) * P(~Z). So P(Z|Cancer) = .0004 * 0.15 / (.0004 * 0.15 + .0001 * .85) = 0.41

Chain rule: $$P(A_1,A_2,…,A_n) = P(A_1) P(A_2|A_1) … P(A_n|A_{n-1},…,A_1)$$

P(Y,X1,X2,X3,X4) = P(Y,X4,X3,X2,X1) = P(Y|X4,X3,X2,X1) * P(X4|X3,X2,X1) * P(X3|X2,X1) * P(X2|X1) * P(X1) = P(Y|X4,X3) * P(X4) * P(X3|X2,X1) * P(X2) * P(X1) (the last equality holds only under additional conditional independence assumptions, e.g. those encoded by a Bayesian network).

The union bound: $$P(A_1 \cup A_2 … \cup A_n) \leq P(A_1) + P(A_2) + … + P(A_n)$$

Number of permutations with replacement = $${n^r}$$, with r the number of draws and n the number of elements. Probability of one particular outcome = $$\frac{1}{n^r}$$

Example: Probability of getting three sixes in three rolls of a die = 1/6 * 1/6 * 1/6. Probability of getting three heads in three coin flips = 1/2 * 1/2 * 1/2.

Number of permutations without replacement and with ordering = $$\frac{n!}{(n-r)!}$$, with r the number of draws and n the number of elements. Probability of one particular ordered outcome = $$\frac{(n-r)!}{n!}$$

Example: Probability of drawing 1 red ball and then 1 green ball from an urn that contains 4 balls (1 red, 1 green, 1 black and 1 blue) = 1/4 * 1/3

Number of combinations without replacement and without ordering = $$\frac{n!}{(n-r)! \, r!}$$, with r the number of draws and n the number of elements. Probability of one particular unordered outcome = $$\frac{(n-r)! \, r!}{n!}$$

Example: Probability of drawing 1 red ball and 1 green ball (in either order) from the same urn = 1/4 * 1/3 + 1/4 * 1/3
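The worked chemical-Z example above is easy to check numerically; here is a small illustrative script (the variable names are mine, not from the notes):

```python
# Reproducing the chemical-Z example: total probability, then Bayes' rule.
p_z = 0.15                        # P(exposed to Z)
p_cancer_given_z = 0.0004         # P(cancer | Z)
p_cancer_given_not_z = 0.0001     # P(cancer | ~Z)

# Law of total probability
p_cancer = p_cancer_given_z * p_z + p_cancer_given_not_z * (1 - p_z)
print(p_cancer)                   # 0.000145

# Bayes' rule
p_z_given_cancer = p_cancer_given_z * p_z / p_cancer
print(round(p_z_given_cancer, 2)) # 0.41
```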
# zbMATH — the first resource for mathematics ## Ji, Lizhen Compute Distance To: Author ID: ji.lizhen Published as: Ji, L.; Ji, Lizhen Homepage: http://www.math.lsa.umich.edu/~lji/ External Links: MGP · Wikidata Documents Indexed: 146 Publications since 1992, including 42 Books Reviewing Activity: 94 Reviews all top 5 #### Co-Authors 61 single-authored 34 Yau, Shing-Tung 13 Papadopoulos, Athanase 9 Liu, Kefeng 7 Yang, Lo 5 A’Campo, Norbert 5 Borel, Armand 5 Poon, Yat Sun 5 Weber, Andreas 4 Li, Peter 3 Anker, Jean-Philippe 3 Cheng, Shiu-Yuen 3 Schoen, Richard Melvin 3 Simon, Leon Melvin 2 Guivarc’h, Yves 2 Leuzinger, Enrico 2 Mazzeo, Rafe R. 2 Oort, Frans 2 Taylor, John C. 2 Wolpert, Scott A. 2 Xu, Hao 2 Zworski, Maciej 1 A’Campo-Neuen, Annette 1 Atiyah, Michael Francis 1 Chiou, Wen-Lin 1 Cohen, Daniel C. 1 do Carmo, Manfredo Perdigão 1 Farb, Benson 1 Goresky, Robert Mark 1 Greene, Robert Everist 1 Hirzebruch, Friedrich 1 Huang, Jie (Jenny) 1 Huang, Wenling 1 Ivanov, Nikolai V. 1 Jost, Jürgen 1 Kuh, Ernest Shiu-jen 1 Kunstmann, Peer Christian 1 Li, Jian-Shu 1 Li, Jun 1 Lin, Zongzhu 1 Looijenga, Eduard J. N. 1 Lu, Jiang-Hua 1 Lyubich, Mikhail Yur’evich 1 MacPherson, Robert Duncan 1 McMullen, Curtis Tracy 1 Müller, Werner G. 1 Müller, Werner 1 Murty, Vijaya Kumar 1 Reich, Karin 1 Saper, Leslie 1 Scherk, John 1 Schilling, Anna-Sofie 1 Sesum, Natasa 1 Shen, Y. Ron 1 Singer, Isadore M. 1 Van der Geer, Gerard 1 Vasy, András 1 Wang, Chang 1 Wang, Jiaping 1 Wang, Liping 1 Weinstein, Alan David 1 Wentworth, Richard A. 1 Wolf, Joseph Albert 1 Wu, Baosen 1 Xiao, Jie 1 Xu, Huawei 1 Yamada, Sumio 1 Yang, Xiao-Kui 1 Zelditch, Steve 1 Zhang, Shou-Wu 1 Zheng, Zhujun 1 Zucker, Steven Mark all top 5 #### Serials 24 Advanced Lectures in Mathematics (ALM) 9 Pure and Applied Mathematics Quarterly 7 Journal of Differential Geometry 6 ICCM Notices 5 AMS/IP Studies in Advanced Mathematics 3 Geometric and Functional Analysis. GAFA 3 L’Enseignement Mathématique. 2e Série 3 Mathematical Research Letters 3 The Asian Journal of Mathematics 2 Archive for History of Exact Sciences 2 Journal of Functional Analysis 2 Proceedings of the American Mathematical Society 2 Transactions of the American Mathematical Society 2 Ergodic Theory and Dynamical Systems 2 Notices of the American Mathematical Society 2 Surveys of Modern Mathematics 1 Letters in Mathematical Physics 1 Journal of Geometry and Physics 1 Annales de l’Institut Fourier 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Commentarii Mathematici Helvetici 1 Duke Mathematical Journal 1 Journal of Pure and Applied Algebra 1 Journal für die Reine und Angewandte Mathematik 1 Mathematische Annalen 1 Mathematische Zeitschrift 1 Annals of Global Analysis and Geometry 1 $$K$$-Theory 1 International Journal of Mathematics 1 Comptes Rendus de l’Académie des Sciences. Série I 1 Communications in Analysis and Geometry 1 Bulletin des Sciences Mathématiques 1 Comptes Rendus de l’Académie des Sciences. Série I. Mathématique 1 Oberwolfach Reports 1 Innovations in Incidence Geometry 1 Progress in Mathematics 1 IRMA Lectures in Mathematics and Theoretical Physics 1 Surveys in Differential Geometry 1 Journal of Topology 1 Science China. 
Mathematics all top 5 #### Fields 46 Differential geometry (53-XX) 42 General and overarching topics; collections (00-XX) 38 Topological groups, Lie groups (22-XX) 37 History and biography (01-XX) 34 Several complex variables and analytic spaces (32-XX) 32 Functions of a complex variable (30-XX) 28 Algebraic geometry (14-XX) 27 Number theory (11-XX) 25 Group theory and generalizations (20-XX) 24 Global analysis, analysis on manifolds (58-XX) 22 Manifolds and cell complexes (57-XX) 10 Partial differential equations (35-XX) 8 Algebraic topology (55-XX) 7 Potential theory (31-XX) 6 $$K$$-theory (19-XX) 6 Quantum theory (81-XX) 5 Abstract harmonic analysis (43-XX) 4 Commutative algebra (13-XX) 4 General topology (54-XX) 3 Operator theory (47-XX) 3 Geometry (51-XX) 3 Probability theory and stochastic processes (60-XX) 3 Relativity and gravitational theory (83-XX) 2 Nonassociative rings and algebras (17-XX) 2 Category theory; homological algebra (18-XX) 2 Dynamical systems and ergodic theory (37-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Measure and integration (28-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Numerical analysis (65-XX) 1 Systems theory; control (93-XX) #### Citations contained in zbMATH Open 60 Publications have been cited 328 times in 270 Documents Cited by Year Compactifications of symmetric and locally symmetric spaces. Zbl 1100.22001 Borel, Armand; Ji, Lizhen 2006 Heat kernel and Green function estimates on noncompact symmetric spaces. Zbl 0942.43005 Anker, J.-P.; Ji, L. 1999 Dynamics of the heat semigroup on symmetric spaces. Zbl 1185.37077 Ji, Lizhen; Weber, Andreas 2010 Compactifications of symmetric spaces. Zbl 1053.31006 Guivarc’h, Yves; Ji, Lizhen; Taylor, J. C. 1998 Spectral degeneration of hyperboloc Riemann surfaces. Zbl 0793.53051 Ji, Lizhen 1993 Geometry of compactifications of locally symmetric spaces. Zbl 1017.53039 Ji, Lizhen; MacPherson, Robert 2002 Ricci flow on surfaces with cusps. Zbl 1176.53067 Ji, Lizhen; Mazzeo, Rafe; Sesum, Natasa 2009 Asymptotic dimension and the integral $$K$$-theoretic Novikov conjecture for arithmetic groups. Zbl 1079.55012 Ji, Lizhen 2004 The asymptotic behavior of Green’s functions for degenerating hyperbolic surfaces. Zbl 0792.53040 Ji, Lizhen 1993 On the Künneth formula for intersection cohomology. Zbl 0765.57014 Cohen, Daniel C.; Goresky, Mark; Ji, Lizhen 1992 Integral Novikov conjectures and arithmetic groups containing torsion elements. Zbl 1225.22009 Ji, Lizhen 2007 Scattering matrices and scattering geodesics of locally symmetric spaces. Zbl 1026.53026 Ji, Lizhen; Zworski, Maciej 2001 The remainder estimate in spectral accumulation for degenerating hyperbolic surfaces. Zbl 0783.58078 Ji, Lizhen; Zworski, Maciej 1993 A cofinite universal space for proper actions for mapping class groups. Zbl 1205.57004 Ji, Lizhen; Wolpert, Scott A. 2010 Compactifications of locally symmetric spaces. Zbl 1122.22005 Borel, Armand; Ji, Lizhen 2006 Well-rounded equivariant deformation retracts of Teichmüller spaces. Zbl 1303.32007 Ji, Lizhen 2014 $$L^p$$ spectral theory and heat dynamics of locally symmetric spaces. Zbl 1190.58024 Ji, Lizhen; Weber, Andreas 2010 Buildings and their applications in geometry and topology. Zbl 1163.22010 Ji, Lizhen 2006 Metric compactifications of locally symmetric spaces. Zbl 0929.32017 Ji, Lizhen 1998 Riesz transform on locally symmetric spaces and Riemannian manifolds with a spectral gap. 
Zbl 1187.58029 Ji, Lizhen; Kunstmann, Peer; Weber, Andreas 2010 Pointwise bounds for $$L^{2}$$ eigenfunctions on locally symmetric spaces. Zbl 1154.58003 Ji, Lizhen; Weber, Andreas 2008 A summary of the work of Gregory Margulis. Zbl 1279.01039 Ji, Lizhen 2008 The integral Novikov conjectures for linear groups containing torsion elements. Zbl 1162.57022 Ji, Lizhen 2008 Heat kernel and Green function estimates on noncompact symmetric spaces. II. Zbl 0988.22006 Anker, Jean-Philippe; Ji, Lizhen 2001 The Weyl upper bound on the discrete spectrum of locally symmetric spaces. Zbl 1036.58028 Ji, Lizhen 1999 Exact behavior of the heat kernel and of the Green function on noncompact symmetric spaces. Zbl 0907.43010 Anker, Jean-Philippe; Ji, Lizhen 1998 The trace class conjecture for arithmetic groups. Zbl 0926.11034 Ji, Lizhen 1998 Spectral convergence on degenerating surfaces. Zbl 0774.58041 Ji, Lizhen; Wentworth, Richard 1992 Ends of locally symmetric spaces with maximal bottom spectrum. Zbl 1270.58018 Ji, Lizhen; Li, Peter; Wang, Jiaping 2009 Infinite topology of curve complexes and non-Poincaré duality of Teichmüller modular groups. Zbl 1162.57014 Ivanov, Nikolai; Ji, Lizhen 2008 The integral Novikov conjectures for $$S$$-arithmetic groups. I. Zbl 1130.22005 Ji, Lizhen 2007 Compactifications of symmetric spaces. Zbl 1110.53036 Borel, Armand; Ji, Lizhen 2007 Compactifications of symmetric and locally symmetric spaces. Zbl 1088.53034 Borel, Armand; Ji, Lizhen 2005 Convergence of heat kernels for degenerating hyperbolic surfaces. Zbl 0813.58057 Ji, Lizhen 1995 Hyperbolic cusp forms and spectral simplicity on compact hyperbolic surfaces. Zbl 0814.58039 Ji, Lizhen; Zelditch, Steven 1994 Metric Schottky problem and Satake compactifications of moduli spaces. Zbl 1427.53065 Ji, Lizhen 2019 Toric varieties vs. horofunction compactifications of polyhedral norms. Zbl 1402.14065 Ji, Lizhen; Schilling, Anna-Sofie 2017 Universal moduli spaces of Riemann surfaces. Zbl 1361.32018 Ji, Lizhen; Jost, Jürgen 2017 On Grothendieck’s tame topology. Zbl 1361.32017 A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Athanase 2016 On Grothendieck’s construction of Teichmüller space. Zbl 1345.30051 A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Athanase 2016 On the early history of moduli and Teichmüller spaces. Zbl 1361.30071 A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Arthanase 2015 The $$L^{p}$$ spectrum and heat dynamics of locally symmetric spaces of higher rank. Zbl 1401.58014 Ji, Lizhen; Weber, Andreas 2015 The fundamental group of reductive Borel-Serre and Satake compactifications. Zbl 1329.20063 Ji, Lizhen; Murty, V. Kumar; Saper, Leslie; Scherk, John 2015 A commentary on Teichmüller’s paper “Veränderliche Riemannsche Flächen”. Zbl 1314.30078 A’Campo-Neuen, Annette; A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Athanase 2014 Spectral theory for the Weil-Petersson Laplacian on the Riemann moduli space. Zbl 1323.35119 Ji, Lizhen; Mazzeo, Rafe; Müller, Werner; Vasy, Andras 2014 Historical development of Teichmüller theory. Zbl 1266.01022 2013 Buildings and their applications in geometry and topology. Zbl 1275.20025 Ji, Lizhen 2012 Arithmetic groups vs. mapping class groups: similarities, analogies and differences. Abstracts from the workshop held June 5–11, 2011. Zbl 1334.00092 Farb, Benson (ed.); Ji, Lizhen (ed.); Leuzinger, Enrico (ed.); Müller, Werner (ed.) 2011 Arithmetic groups, mapping class groups, related groups, and their associated spaces. Zbl 1210.22007 Ji, Lizhen 2010 Automorphic forms and the Langlands program. 
Selected papers of the conference on Langlands and geometric Langlands program, Guangzhou, China, June 18–21, 2007. Zbl 1185.11005 Ji, Lizhen (ed.); Liu, Kefeng (ed.); Yau, Shing-Tung (ed.); Zheng, Zhu-Jun (ed.) 2010 The asymptotic Schottky problem. Zbl 1193.14044 Ji, Lizhen; Leuzinger, Enrico 2010 From symmetric spaces to buildings, curve complexes and outer spaces. Zbl 1264.20031 Ji, Lizhen 2009 Large scale geometry, compactifications and the integral Novikov conjectures for arithmetic groups. Zbl 1179.19002 Ji, Lizhen 2008 Handbook of geometric analysis. No. 1. Zbl 1144.53004 Ji, Lizhen (ed.); Li, Peter (ed.); Schoen, Richard (ed.); Simon, Leon (ed.) 2008 Arithmetic groups and their generalizations. What, why and how. Zbl 1148.11019 Ji, Lizhen 2008 Armand Borel as a mentor. Zbl 1072.01536 Ji, Lizhen 2004 Satake and Martin compactifications of symmetric spaces are topological balls. Zbl 0883.53048 Ji, Lizhen 1997 Compactifications of symmetric spaces and locally symmetric spaces. Zbl 0936.53033 Ji, Lizhen 1996 Degeneration of pseudo-Laplace operators for hyperbolic Riemann surfaces. Zbl 0802.58061 Ji, Lizhen 1994 Compactifications of symmetric spaces. Zbl 0814.53038 Guivarc’h, Yves; Ji, Lizhen; Taylor, John 1993 Metric Schottky problem and Satake compactifications of moduli spaces. Zbl 1427.53065 Ji, Lizhen 2019 Toric varieties vs. horofunction compactifications of polyhedral norms. Zbl 1402.14065 Ji, Lizhen; Schilling, Anna-Sofie 2017 Universal moduli spaces of Riemann surfaces. Zbl 1361.32018 Ji, Lizhen; Jost, Jürgen 2017 On Grothendieck’s tame topology. Zbl 1361.32017 A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Athanase 2016 On Grothendieck’s construction of Teichmüller space. Zbl 1345.30051 A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Athanase 2016 On the early history of moduli and Teichmüller spaces. Zbl 1361.30071 A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Arthanase 2015 The $$L^{p}$$ spectrum and heat dynamics of locally symmetric spaces of higher rank. Zbl 1401.58014 Ji, Lizhen; Weber, Andreas 2015 The fundamental group of reductive Borel-Serre and Satake compactifications. Zbl 1329.20063 Ji, Lizhen; Murty, V. Kumar; Saper, Leslie; Scherk, John 2015 Well-rounded equivariant deformation retracts of Teichmüller spaces. Zbl 1303.32007 Ji, Lizhen 2014 A commentary on Teichmüller’s paper “Veränderliche Riemannsche Flächen”. Zbl 1314.30078 A’Campo-Neuen, Annette; A’Campo, Norbert; Ji, Lizhen; Papadopoulos, Athanase 2014 Spectral theory for the Weil-Petersson Laplacian on the Riemann moduli space. Zbl 1323.35119 Ji, Lizhen; Mazzeo, Rafe; Müller, Werner; Vasy, Andras 2014 Historical development of Teichmüller theory. Zbl 1266.01022 2013 Buildings and their applications in geometry and topology. Zbl 1275.20025 Ji, Lizhen 2012 Arithmetic groups vs. mapping class groups: similarities, analogies and differences. Abstracts from the workshop held June 5–11, 2011. Zbl 1334.00092 Farb, Benson (ed.); Ji, Lizhen (ed.); Leuzinger, Enrico (ed.); Müller, Werner (ed.) 2011 Dynamics of the heat semigroup on symmetric spaces. Zbl 1185.37077 Ji, Lizhen; Weber, Andreas 2010 A cofinite universal space for proper actions for mapping class groups. Zbl 1205.57004 Ji, Lizhen; Wolpert, Scott A. 2010 $$L^p$$ spectral theory and heat dynamics of locally symmetric spaces. Zbl 1190.58024 Ji, Lizhen; Weber, Andreas 2010 Riesz transform on locally symmetric spaces and Riemannian manifolds with a spectral gap. 
Zbl 1187.58029 Ji, Lizhen; Kunstmann, Peer; Weber, Andreas 2010 Arithmetic groups, mapping class groups, related groups, and their associated spaces. Zbl 1210.22007 Ji, Lizhen 2010 Automorphic forms and the Langlands program. Selected papers of the conference on Langlands and geometric Langlands program, Guangzhou, China, June 18–21, 2007. Zbl 1185.11005 Ji, Lizhen (ed.); Liu, Kefeng (ed.); Yau, Shing-Tung (ed.); Zheng, Zhu-Jun (ed.) 2010 The asymptotic Schottky problem. Zbl 1193.14044 Ji, Lizhen; Leuzinger, Enrico 2010 Ricci flow on surfaces with cusps. Zbl 1176.53067 Ji, Lizhen; Mazzeo, Rafe; Sesum, Natasa 2009 Ends of locally symmetric spaces with maximal bottom spectrum. Zbl 1270.58018 Ji, Lizhen; Li, Peter; Wang, Jiaping 2009 From symmetric spaces to buildings, curve complexes and outer spaces. Zbl 1264.20031 Ji, Lizhen 2009 Pointwise bounds for $$L^{2}$$ eigenfunctions on locally symmetric spaces. Zbl 1154.58003 Ji, Lizhen; Weber, Andreas 2008 A summary of the work of Gregory Margulis. Zbl 1279.01039 Ji, Lizhen 2008 The integral Novikov conjectures for linear groups containing torsion elements. Zbl 1162.57022 Ji, Lizhen 2008 Infinite topology of curve complexes and non-Poincaré duality of Teichmüller modular groups. Zbl 1162.57014 Ivanov, Nikolai; Ji, Lizhen 2008 Large scale geometry, compactifications and the integral Novikov conjectures for arithmetic groups. Zbl 1179.19002 Ji, Lizhen 2008 Handbook of geometric analysis. No. 1. Zbl 1144.53004 Ji, Lizhen (ed.); Li, Peter (ed.); Schoen, Richard (ed.); Simon, Leon (ed.) 2008 Arithmetic groups and their generalizations. What, why and how. Zbl 1148.11019 Ji, Lizhen 2008 Integral Novikov conjectures and arithmetic groups containing torsion elements. Zbl 1225.22009 Ji, Lizhen 2007 The integral Novikov conjectures for $$S$$-arithmetic groups. I. Zbl 1130.22005 Ji, Lizhen 2007 Compactifications of symmetric spaces. Zbl 1110.53036 Borel, Armand; Ji, Lizhen 2007 Compactifications of symmetric and locally symmetric spaces. Zbl 1100.22001 Borel, Armand; Ji, Lizhen 2006 Compactifications of locally symmetric spaces. Zbl 1122.22005 Borel, Armand; Ji, Lizhen 2006 Buildings and their applications in geometry and topology. Zbl 1163.22010 Ji, Lizhen 2006 Compactifications of symmetric and locally symmetric spaces. Zbl 1088.53034 Borel, Armand; Ji, Lizhen 2005 Asymptotic dimension and the integral $$K$$-theoretic Novikov conjecture for arithmetic groups. Zbl 1079.55012 Ji, Lizhen 2004 Armand Borel as a mentor. Zbl 1072.01536 Ji, Lizhen 2004 Geometry of compactifications of locally symmetric spaces. Zbl 1017.53039 Ji, Lizhen; MacPherson, Robert 2002 Scattering matrices and scattering geodesics of locally symmetric spaces. Zbl 1026.53026 Ji, Lizhen; Zworski, Maciej 2001 Heat kernel and Green function estimates on noncompact symmetric spaces. II. Zbl 0988.22006 Anker, Jean-Philippe; Ji, Lizhen 2001 Heat kernel and Green function estimates on noncompact symmetric spaces. Zbl 0942.43005 Anker, J.-P.; Ji, L. 1999 The Weyl upper bound on the discrete spectrum of locally symmetric spaces. Zbl 1036.58028 Ji, Lizhen 1999 Compactifications of symmetric spaces. Zbl 1053.31006 Guivarc’h, Yves; Ji, Lizhen; Taylor, J. C. 1998 Metric compactifications of locally symmetric spaces. Zbl 0929.32017 Ji, Lizhen 1998 Exact behavior of the heat kernel and of the Green function on noncompact symmetric spaces. Zbl 0907.43010 Anker, Jean-Philippe; Ji, Lizhen 1998 The trace class conjecture for arithmetic groups. 
Zbl 0926.11034 Ji, Lizhen 1998 Satake and Martin compactifications of symmetric spaces are topological balls. Zbl 0883.53048 Ji, Lizhen 1997 Compactifications of symmetric spaces and locally symmetric spaces. Zbl 0936.53033 Ji, Lizhen 1996 Convergence of heat kernels for degenerating hyperbolic surfaces. Zbl 0813.58057 Ji, Lizhen 1995 Hyperbolic cusp forms and spectral simplicity on compact hyperbolic surfaces. Zbl 0814.58039 Ji, Lizhen; Zelditch, Steven 1994 Degeneration of pseudo-Laplace operators for hyperbolic Riemann surfaces. Zbl 0802.58061 Ji, Lizhen 1994 Spectral degeneration of hyperboloc Riemann surfaces. Zbl 0793.53051 Ji, Lizhen 1993 The asymptotic behavior of Green’s functions for degenerating hyperbolic surfaces. Zbl 0792.53040 Ji, Lizhen 1993 The remainder estimate in spectral accumulation for degenerating hyperbolic surfaces. Zbl 0783.58078 Ji, Lizhen; Zworski, Maciej 1993 Compactifications of symmetric spaces. Zbl 0814.53038 Guivarc’h, Yves; Ji, Lizhen; Taylor, John 1993 On the Künneth formula for intersection cohomology. Zbl 0765.57014 Cohen, Daniel C.; Goresky, Mark; Ji, Lizhen 1992 Spectral convergence on degenerating surfaces. Zbl 0774.58041 Ji, Lizhen; Wentworth, Richard 1992 all top 5 #### Cited by 343 Authors 20 Ji, Lizhen 8 Kostić, Marko 7 Weber, Andreas 5 Leuzinger, Enrico 5 Lohoué, Noël 4 Conejero, José Alberto 4 Dranishnikov, Alexander Nikolaevich 4 Jorgenson, Jay Alan 4 Lu, Guozhen 4 Meda, Stefano 4 Murillo-Arcila, Marina 4 Sarkar, Rudra P. 4 Vallarino, Maria 4 Yang, Qiaohua 3 Anker, Jean-Philippe 3 Biliotti, Leonardo 3 Friedman, Greg 3 Kaizuka, Koichi 3 Kim, Inkang 3 Kim, Sungwoon 3 Ledrappier, François 3 Li, Hongquan 3 Lundelius, Rolf E. 3 Murata, Minoru 3 Peris, Alfredo 3 Thangavelu, Sundaram 2 Aramayona, Javier 2 Attwell-Duval, Dylan 2 Banica, Valeria 2 Brasselet, Jean-Paul 2 Caprace, Pierre-Emmanuel 2 Chang, Stanley S. 2 Chen, Chung-Chuan 2 Cupit-Foutou, Stéphanie 2 Di Cerbo, Gabriele 2 Di Cerbo, Luca Fabrizio 2 Falliero, Thérèse 2 Finis, Tobias 2 Funke, Jens 2 Ghigi, Alessandro 2 Goldfarb, Boris 2 González, María del Mar 2 Gorodnik, Alexander 2 Ionescu, Alexandru D. 2 Itoh, Mitsuhiro 2 Jost, Jürgen 2 Judge, Christopher M. 2 Lacoste, Cyril 2 Lapid, Erez Moshe 2 Li, Jun-Gang 2 Mauceri, Giancarlo 2 Mazzeo, Rafe R. 2 McClure, James E. 2 Millson, John J. 2 Mondal, Sugata 2 Müller, Werner 2 Naik, Muna 2 Papadopoulos, Athanase 2 Parthasarathy, Aprameyan 2 Pierfelice, Vittoria 2 Punzo, Fabio 2 Ramacher, Pablo 2 Rochon, Frédéric 2 Sá Barreto, Antônio 2 Sáez, Mariel 2 Sarig, Omri M. 2 Topping, Peter Miles 2 Vasy, András 2 Wang, Yiran 2 Weinberger, Shmuel 2 Yu, Guoliang 1 Albin, Pierre 1 Aldana, Clara L. 1 Alldridge, Alexander 1 Álvarez López, Jesús A. 1 Ammann, Bernd Eberhard 1 Antonakoudis, Stergios M. 1 Aroza, Javier 1 Astengo, Francesca 1 Avdispahić, Muharem 1 Avramidi, Grigori 1 Ayoub, Joseph 1 Bader, Uri 1 Bahuaud, Eric 1 Bainbridge, Matt 1 Bakker, Benjamin 1 Bamler, Richard H. 1 Banagl, Markus 1 Bartels, Arthur C. 1 Behrstock, Jason A. 1 Bell, Gregory C. 
1 Ben Farah, Slaïm 1 Benoist, Yves 1 Berger, Franz 1 Blackman, Terrence Richard 1 Boggarapu, Pradeep 1 Bohm, Christoph 1 Borthwick, David 1 Bougerol, Philippe 1 Broaddus, Nathan ...and 243 more Authors all top 5 #### Cited in 118 Serials 19 Journal of Functional Analysis 13 Transactions of the American Mathematical Society 8 Duke Mathematical Journal 8 Mathematische Annalen 8 Mathematische Zeitschrift 7 Advances in Mathematics 6 Annales de l’Institut Fourier 6 Geometriae Dedicata 6 Differential Geometry and its Applications 5 Inventiones Mathematicae 5 The Journal of Geometric Analysis 5 Communications in Partial Differential Equations 4 Israel Journal of Mathematics 4 Journal of Mathematical Analysis and Applications 4 Topology and its Applications 4 Calculus of Variations and Partial Differential Equations 4 Geometry & Topology 4 Comptes Rendus. Mathématique. Académie des Sciences, Paris 4 Groups, Geometry, and Dynamics 3 Annali di Matematica Pura ed Applicata. Serie Quarta 3 Manuscripta Mathematica 3 Proceedings of the American Mathematical Society 3 Bulletin des Sciences Mathématiques 3 Annals of Mathematics. Second Series 3 Journal of Topology and Analysis 2 Archive for History of Exact Sciences 2 Communications in Mathematical Physics 2 Communications on Pure and Applied Mathematics 2 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 2 Journal of Differential Equations 2 Journal of Number Theory 2 Journal of Pure and Applied Algebra 2 Publications of the Research Institute for Mathematical Sciences, Kyoto University 2 Ergodic Theory and Dynamical Systems 2 Annals of Global Analysis and Geometry 2 Revista Matemática Iberoamericana 2 Journal of the American Mathematical Society 2 Geometric and Functional Analysis. GAFA 2 Bulletin of the American Mathematical Society. New Series 2 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 2 Potential Analysis 2 Journal of Lie Theory 2 Selecta Mathematica. New Series 2 Transformation Groups 2 Abstract and Applied Analysis 2 Acta Mathematica Sinica. English Series 2 Algebraic & Geometric Topology 2 International Journal of Number Theory 2 Annales Mathématiques du Québec 2 Open Mathematics 1 Bulletin of the Australian Mathematical Society 1 Discrete Mathematics 1 Journal d’Analyse Mathématique 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Russian Mathematical Surveys 1 Arkiv för Matematik 1 Journal of Geometry and Physics 1 Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 1 Acta Mathematica 1 The Annals of Probability 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 1 Archiv der Mathematik 1 Commentarii Mathematici Helvetici 1 Functional Analysis and its Applications 1 Illinois Journal of Mathematics 1 Publications Mathématiques 1 Integral Equations and Operator Theory 1 Journal of Algebra 1 Journal of the London Mathematical Society. Second Series 1 Journal of the Mathematical Society of Japan 1 Journal für die Reine und Angewandte Mathematik 1 Kodai Mathematical Journal 1 Mathematische Nachrichten 1 Monatshefte für Mathematik 1 Nagoya Mathematical Journal 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Osaka Journal of Mathematics 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 Proceedings of the London Mathematical Society. Third Series 1 Quaestiones Mathematicae 1 Results in Mathematics 1 Tohoku Mathematical Journal. 
Second Series 1 Tokyo Journal of Mathematics 1 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 1 Probability Theory and Related Fields 1 $$K$$-Theory 1 International Journal of Algebra and Computation 1 IMRN. International Mathematics Research Notices 1 L’Enseignement Mathématique. 2e Série 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 Proceedings of the National Academy of Sciences of the United States of America 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Indagationes Mathematicae. New Series 1 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 1 Russian Mathematics 1 Electronic Research Announcements of the American Mathematical Society 1 Differential Equations and Dynamical Systems 1 Mathematical Physics, Analysis and Geometry 1 Communications of the Korean Mathematical Society 1 Journal of Evolution Equations ...and 18 more Serials all top 5 #### Cited in 38 Fields 77 Global analysis, analysis on manifolds (58-XX) 74 Differential geometry (53-XX) 57 Topological groups, Lie groups (22-XX) 47 Partial differential equations (35-XX) 43 Number theory (11-XX) 37 Algebraic geometry (14-XX) 34 Group theory and generalizations (20-XX) 34 Several complex variables and analytic spaces (32-XX) 33 Manifolds and cell complexes (57-XX) 30 Abstract harmonic analysis (43-XX) 28 Operator theory (47-XX) 20 Functions of a complex variable (30-XX) 17 Dynamical systems and ergodic theory (37-XX) 15 Algebraic topology (55-XX) 13 Functional analysis (46-XX) 12 Probability theory and stochastic processes (60-XX) 9 Harmonic analysis on Euclidean spaces (42-XX) 8 $$K$$-theory (19-XX) 8 General topology (54-XX) 7 Ordinary differential equations (34-XX) 6 Potential theory (31-XX) 5 Category theory; homological algebra (18-XX) 5 Geometry (51-XX) 5 Quantum theory (81-XX) 4 Combinatorics (05-XX) 3 History and biography (01-XX) 3 Real functions (26-XX) 3 Measure and integration (28-XX) 3 Integral transforms, operational calculus (44-XX) 3 Convex and discrete geometry (52-XX) 2 General and overarching topics; collections (00-XX) 2 Special functions (33-XX) 2 Statistics (62-XX) 1 Mathematical logic and foundations (03-XX) 1 Commutative algebra (13-XX) 1 Associative rings and algebras (16-XX) 1 Nonassociative rings and algebras (17-XX) 1 Calculus of variations and optimal control; optimization (49-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
# The series $\sum \frac{1}{n_k}$ where $n_k$ are the natural numbers with no $6$ in their decimal expansion is convergent. Let $\{n_1,n_2,\ldots\}$ be the set of natural numbers that do not use the digit $6$ in their decimal expansion. Then the series $$\sum_{k=1}^\infty \frac{1}{n_k}$$ converges to a number less than $80$. How can we prove that? It certainly sounds reasonable, since the density of $n_k$ in $\mathbb{N}$ drops exponentially with $\log_{10} n_k$. To count such $n_k<N$ for some upper limit $N$, I suggest trying inclusion-exclusion on the sets of $n\leq N$ that contain a single $6$, two $6$'s, three $6$'s, and so forth in their decimal expansions. – Douglas B. Staple Apr 17 '13 at 17:05 This is a classic (and was already asked on the site...). The idea is to group the integers by their number of digits $i\geqslant1$ and to estimate each group's contribution crudely. More precisely, there are $8\cdot9^{i-1}$ integers with exactly $i$ digits and no digit $6$ (8 choices for the leading digit, 9 for each of the others), and each of them is at least $10^{i-1}$, hence the group contributes at most $8\cdot9^{i-1}/10^{i-1}$. Summing over $i\geqslant1$, the resulting geometric series, which is an upper bound for the series you are interested in, equals $8/(1-9/10)=80$.
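As a quick numerical sanity check of this bound (not part of the proof), here is a short Python sketch; the digit filter via `str(n)` and the cutoff of $10^6$ are arbitrary illustrative choices:

```python
# Partial sum of 1/n over integers below 10^6 with no digit 6,
# plus the geometric upper bound 8 * (9/10)^(i-1) summed over digit counts i.
partial = sum(1.0 / n for n in range(1, 10**6) if '6' not in str(n))
bound = sum(8 * (9 / 10) ** (i - 1) for i in range(1, 200))
print(partial)  # grows extremely slowly with the cutoff
print(bound)    # approaches 80, the bound from the answer above
```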
Plausibility of an Ecosystem Based on Giant Diatoms? In this wildly divergent Earth timeline, a group of diatoms evolved to be up to a few inches across (rather than 2 mm or less), presumably developing some traits similar to those of macroscopic unicellular algae and slime molds. The diatoms in this group all share the following traits: • Frustules are impermeable glass, with small openings that can be sealed shut with a trapdoor-like mechanism. This means a giant diatom should theoretically be able to survive passing intact through the digestive tract of a whale (since glass can resist hydrochloric acid). • They have anaerobic internal chambers which let them efficiently fix nitrogen from the air or water. • Most of the diatom's volume is made of vacuoles for storage and for regulating buoyancy. Diatoms can fix nitrogen, and get other nutrients by regularly descending into deeper waters. The combination of traits described means giant diatoms aren't limited by the availability of nutrients in the surface water like other pelagic photosynthesizers, so they should have a massive pelagic biomass. Am I missing some obvious reason why large floating glass photosynthesizers like this are implausible? Given the surface-area issue, any living parts of the cell will be no more than a few mm thick. Many will grow into connected clonal colonies, like real-life diatoms. If giant diatoms are feasible, then could fish, marine mammals, and reptiles plausibly evolve to prey on them? Probably, but you might have to make some compromises. Macroscopic unicellular life has been well documented, especially in marine environments, with examples including the foraminiferan Xenophyophora and the giant amoeba Gromia, as well as the algal Caulerpa you mentioned. However, these all kinda cheat by being coenocytic organisms, consisting of multiple nuclei enclosed in a single cell membrane. They get around the issue of nutrient absorption either by having a highly wrinkled and porous surface, or by having a massive central vacuole that takes over the function of a coelom in multicellular organisms. To my knowledge multinucleate diatoms have not been observed, but given that this property has arisen multiple different times, I don't think it's a stretch that this could happen in your alternate history. They might look quite different from the diatoms we're used to, though, and require completely different cellular architecture. I imagine they might look something like a sphere or ovoid surrounded by a glass shell, with a large central cavity and multiple pores to allow movement of fluid into the interior. I think you will face the problem of surface area growing more slowly than volume as size increases. The surface grows with $$R^2$$ and is what you have to exchange with the environment, while the volume, which grows with $$R^3$$, is what you need both to supply and to clean by means of those exchanges. While multicellular organisms can use tricks to increase the exchange area, a unicellular organism is pretty much limited in what it can achieve. I guess there is a limit after which increasing the size will make the organism less efficient at interacting with the environment. • Unicellular Caulerpa already gets big enough here, so does that mean you can get enough surface area if you ensure no living part of the organism is more than a few mm from a pore? Apr 5 at 19:25 • I don't know what a diatom is.
But presumably if they have little hatches they can stick little cilia out of the hatches to increase surface area. How much bigger this gets you is a mystery. Apr 5 at 20:17 Frame challenge (minor): you don't get crystalline silica or silica glass from biological sources; both require high temperatures, and there are enormous kinetic barriers to forming glass and/or quartz. That said, silica gel and even plain old amorphous silica also resist virtually all acids. The latter is what diatom frustules basically are, so they could just grow a slightly more acid-impervious outer shell. • I suppose if glass sponges are anything to go by, the composite silica structure will probably perform better on every metric than inorganically occurring glass. Apr 7 at 14:57 • Glass sponges don't actually use glass; they use biogenic silica, a structured composite of amorphous silica. Yes, it negates the need for actual glass. No, it won't be more chemically resistant than actual glass; caustic soda will dissolve any amorphous silica relatively quickly, while silica glass and quartz will only dissolve very slowly. Apr 7 at 20:20
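To put a rough number on the surface-to-volume argument in the thread above, here is a minimal Python sketch (the radii are arbitrary illustrative values, not measurements of any real diatom):

```python
import numpy as np

# Surface-to-volume ratio of a sphere scales as 3/R, so a tenfold size
# increase cuts the exchange area per unit of cell volume tenfold.
R = np.array([0.001, 0.01, 0.05])        # radii in metres: 1 mm, 1 cm, 5 cm
ratio = (4 * np.pi * R**2) / ((4 / 3) * np.pi * R**3)
print(ratio)                              # [3000. 300. 60.] per metre
```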
rabbit population • February 12th 2009, 03:31 PM razorfever rabbit population dN/dt = cN^1.01 (i) Show that, if there are N(0) = 2 rabbits initially, and 16 rabbits after 3 months, then the population N(t) becomes unbounded at a finite time t. (ii) Does the population become unbounded for any initial condition N(0) > 0 and any c > 0? Explain your answer. I solved the DE and got N(t) = [(0.68t - 99.31) / -100]^(-100). How do I show that the population becomes unbounded at a finite time t? Can someone give me a detailed explanation of both parts? • February 12th 2009, 11:28 PM Rincewind Quote: Originally Posted by razorfever [question quoted above] Another way to write your solution is $N(t) = \left(\frac{100}{D - ct}\right)^{100}$ where you have determined $D \approx 99.31$ and $c \approx 0.68$. Now you can see from this that when the denominator is zero then N becomes unbounded (division by zero), so this will happen at the critical time $t_{critical} = \frac{D}{c}$ For part one the critical time is $t_{critical} \approx \frac{99.31}{0.68} \approx 146$ For part 2, if N(0) > 0 then $D = \frac{100}{N(0)^{0.01}}$ which is positive, and if c is positive then $t_{critical}$ will also be positive. Hope this helps
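A short Python sketch recovering the constants and the blow-up time from the two data points (the names D and c follow the reply above; months as the time unit is the thread's assumption):

```python
# N(t) = (100 / (D - c*t))**100 with N(0) = 2 and N(3) = 16.
D = 100 / 2**0.01               # from N(0) = 2   -> about 99.31
c = (D - 100 / 16**0.01) / 3    # from N(3) = 16  -> about 0.68
print(D, c, D / c)              # blow-up time D/c is about 146 months
```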
Complex Quadratic Optimization and Semidefinite Programming
Shuzhong Zhang (zhang@se.cuhk.edu.hk) Yongwei Huang (ywhuang@se.cuhk.edu.hk)
Abstract: In this paper we study approximation algorithms for a class of discrete quadratic optimization problems in the Hermitian complex form. A special case of the problem that we study corresponds to the max-3-cut model used in a recent paper of Goemans and Williamson. We first develop a closed-form formula to compute the probability that a complex-valued normally distributed bivariate random vector lies in a given angular region. This formula allows us to compute the expected value of a randomized solution (with a specific rounding rule) based on the optimal solution of the complex SDP relaxation problem. In particular, we study the limit of that model, in which the problem remains NP-hard. We show that if the objective is to maximize a positive semidefinite Hermitian form, then the randomization-rounding procedure guarantees a worst-case performance ratio of $\pi/4 \approx 0.7854$, which is better than the ratio of $2/\pi \approx 0.6366$ for its counterpart in the real case due to Nesterov. Furthermore, if the objective matrix is real-valued positive semidefinite with non-positive off-diagonal elements, then the performance ratio improves to 0.9349.
Keywords: Hermitian quadratic functions, approximation ratio, randomized algorithms, complex SDP relaxation.
Category 1: Combinatorial Optimization (Approximation Algorithms)
Category 2: Linear, Cone and Semidefinite Programming (Semi-definite Programming)
Citation: Technical Report SEEM2004-3, Department of Systems Engineering & Engineering Management, The Chinese University of Hong Kong.
Download: [PDF] | Entry Submitted: 08/31/2004 | Entry Accepted: 08/31/2004 | Entry Last Modified: 08/31/2004
# Integral of shifted distribution function Assume the integral exists. Show that for any distribution function $M$ whose density function $m$ exists, and any $y\geq 0$, $\int_{-\infty}^{\infty} [M(x+y) - M(x)] dx = y$. My attempt: I see that $M(x+y) - M(x) = P(x< X \leq x+y)$, where $X$ is a random variable with distribution function $M$. But this means the integral is just a "sum" of these probabilities over all $x$, so why is it $y$, and not $1$? I think I made a serious mistake somewhere in this argument. Could someone give me some help on this problem? Would really appreciate any of your help. • I think you need to assume the expectation of $X$ exists and is finite. Then, you might try integration by parts. – Michael Sep 15 '16 at 3:10 • A more clever way (with no assumptions on the expectation of $X$ being finite) is to find an expression for $\int_{x=-\infty}^{\infty} 1_{\{X \in (x, x+y]\}}dx$ and then use a Fubini-Tonelli argument on $E[\int_{x=-\infty}^{\infty} 1_{\{X \in (x, x+y]\}}dx]$. – Michael Sep 15 '16 at 3:19
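The Fubini-Tonelli route works because, for a fixed outcome of $X$, the set of $x$ with $X \in (x, x+y]$ is an interval of length exactly $y$. As a numerical sanity check (not a proof), here is a minimal Python sketch using the standard normal CDF as $M$; the integration window and grid are arbitrary choices:

```python
import numpy as np
from scipy.stats import norm

# Check int_{-inf}^{inf} [M(x+y) - M(x)] dx = y for M = standard normal CDF.
y = 2.5
x = np.linspace(-40, 40, 400_001)          # wide enough that the tails vanish
integrand = norm.cdf(x + y) - norm.cdf(x)
print(np.trapz(integrand, x))              # ~2.5
```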
# University of Florida/Egm4313/s12.team5.R3 Report 3 ## R 3.1 ### Question Consider the following L2-ODE-CC: ${\displaystyle \displaystyle y^{''}-10y^{'}+25y=7e^{5x}-2x^{2}}$ With initial conditions y(0)=4 , y'(0)=-5 Find the solution. Plot this solution and the solution in the example on p.7-3 ### Solution General solution of our ODE: ${\displaystyle \displaystyle y^{''}-10y^{'}+25y=0}$ ${\displaystyle \displaystyle a^{2}-4b=10^{2}-4(25)=0\Rightarrow }$ double real roots ${\displaystyle \displaystyle \lambda =\lambda _{1}=\lambda _{2}=-{\frac {a}{2}}=-{\frac {-10}{2}}=5}$ Giving us a general solution of: ${\displaystyle \displaystyle y_{g}=(c_{1}+c_{2}x)e^{5x}}$ Particular solution of our ODE by method of undetermined coefficients: Since our excitation r(x) is of the form ${\displaystyle \displaystyle r(x)=ke^{\gamma x}-kx^{n}}$ Our two particular solutions will be of the form: ${\displaystyle \displaystyle y_{p1}=Cx^{2}e^{\gamma x},y_{p2}=K_{2}x^{2}+K_{1}x+K_{0}}$ • Have to multiply ${\displaystyle \displaystyle y_{p1}}$ by ${\displaystyle \displaystyle x^{2}}$ because the ${\displaystyle \displaystyle e^{5x}}$ term already appears in the general solution. Taking derivatives of ${\displaystyle \displaystyle y_{p1}}$: ${\displaystyle \displaystyle y_{p1}=Cx^{2}e^{5x}}$ ${\displaystyle \displaystyle y_{p1}^{'}=2Cxe^{5x}+5Cx^{2}e^{5x}}$ ${\displaystyle \displaystyle y_{p1}^{''}=2Ce^{5x}+10Cxe^{5x}+10Cxe^{5x}+25Cx^{2}e^{5x}}$ Substituting derivatives into the original ODE and collecting like terms: ${\displaystyle \displaystyle C[2+10x+10x+25x^{2}-20x-50x^{2}+25x^{2}]e^{5x}=7e^{5x}}$ ${\displaystyle \displaystyle 2C=7\Rightarrow C={\frac {7}{2}}}$ Giving us our first particular solution: ${\displaystyle \displaystyle y_{p1}={\frac {7}{2}}x^{2}e^{5x}}$ Taking derivatives of ${\displaystyle \displaystyle y_{p2}}$: ${\displaystyle \displaystyle y_{p2}=K_{2}x^{2}+K_{1}x+K_{0}}$ ${\displaystyle \displaystyle y_{p2}^{'}=2K_{2}x+K_{1}}$ ${\displaystyle \displaystyle y_{p2}^{''}=2K_{2}}$ Substituting derivatives into original ODE and collecting like terms: ${\displaystyle \displaystyle 25K_{2}=-2\Rightarrow K_{2}=-{\frac {2}{25}}}$ ${\displaystyle \displaystyle -20K_{2}+25K_{1}=0\Rightarrow K_{1}=-{\frac {8}{125}}}$ ${\displaystyle \displaystyle 2K_{2}-10K_{1}+25K_{0}=0\Rightarrow K_{0}=-{\frac {12}{625}}}$ Giving us our second particular solution: ${\displaystyle \displaystyle y_{p2}=-{\frac {2}{25}}x^{2}-{\frac {8}{125}}x-{\frac {12}{625}}}$ With a final solution being the sum of the general and particular solutions: ${\displaystyle \displaystyle y=y_{g}+y_{p1}+y_{p2}}$ ${\displaystyle \displaystyle y=(c1+c2x)e^{5x}+{\frac {7}{2}}x^{2}e^{5x}-{\frac {2}{25}}x^{2}-{\frac {8}{125}}x-{\frac {12}{625}}}$ Using initial conditions to solve for c1 and c2: ${\displaystyle \displaystyle y(0)=4\Rightarrow 4=c_{1}-{\frac {12}{625}}\Rightarrow c_{1}={\frac {2512}{625}}}$ ${\displaystyle \displaystyle y^{'}(0)=-5}$ where ${\displaystyle \displaystyle y^{'}=e^{5x}[5c_{1}+c_{2}+5c_{2}x+7x+{\frac {35}{2}}x^{2}]-{\frac {4}{25}}x-{\frac {8}{125}}}$ ${\displaystyle \displaystyle -5=5c_{1}+c_{2}-{\frac {8}{125}}\Rightarrow c_{2}=-{\frac {3129}{125}}}$ Final solution: ${\displaystyle \displaystyle y=({\frac {2512}{625}}-{\frac {3129}{125}}x)e^{5x}+{\frac {7}{2}}x^{2}e^{5x}-{\frac {2}{25}}x^{2}-{\frac {8}{125}}x-{\frac {12}{625}}}$ Matlab Code: x=0:0.001:10; y=((2512/625)-(3129/125).*x).*exp(5*x)+(7/2).*x.^2.*exp(5*x)-(2/25).*x.^2-(8/125).*x-(12/625); plot(x,y),xlabel('x'),ylabel('y(x)') Solution in example on P.7-3 
${\displaystyle \displaystyle y=4e^{5x}-25xe^{5x}+{\frac {7}{2}}x^{2}e^{5x}}$ Matlab code: x=0:0.001:10; y=4.*exp(5*x)-25.*x.*exp(5*x)+(7/2).*x.^2.*exp(5*x); plot(x,y),xlabel('x'),ylabel('y(x)') ### Author This problem was solved and uploaded by: Joshua House This problem was proofread by: David Herrick ## R 3.2 ### Question #### Part 1 Find the homogeneous L2-ODE-CC having the following roots: ${\displaystyle \displaystyle \lambda _{1}=\lambda ,\lambda _{2}=\lambda +\varepsilon }$ #### Part 2 Show that the following is a homogeneous solution: ${\displaystyle \displaystyle {\frac {e^{(\lambda +\varepsilon )x}-e^{\lambda x}}{\varepsilon }}}$ #### Part 3 Find the limit of the homogeneous solution as ${\displaystyle \displaystyle \varepsilon \to 0}$ #### Part 4 Take the derivative of ${\displaystyle \displaystyle e^{\lambda x}}$ with respect to ${\displaystyle \displaystyle \lambda }$ #### Part 5 Compare the results in parts (3) and (4), and relate to the result by using variation of parameters #### Part 6 Compute (2) using ${\displaystyle \displaystyle \lambda =5}$ and ${\displaystyle \displaystyle \varepsilon =0.001}$ and compare to the value obtained from the exact 2nd homogeneous solution ### Solution #### Part 1 Letting ${\displaystyle \displaystyle \lambda _{1}=x,\lambda _{2}=x+\epsilon }$ ${\displaystyle \displaystyle (\lambda -x)(\lambda -[x+\varepsilon ])=0}$ ${\displaystyle \displaystyle \lambda ^{2}-\lambda x-\lambda \varepsilon -\lambda x+x^{2}+x\varepsilon =0}$ ${\displaystyle \displaystyle \lambda ^{2}-\lambda [2x+\varepsilon ]+x[x+\varepsilon ]=0}$ Replacing the x's with lambda's: ${\displaystyle \displaystyle y^{''}-y^{'}[2\lambda +\varepsilon ]+y\lambda [\lambda +\varepsilon ]=0}$ #### Part 2 ${\displaystyle \displaystyle y={\frac {e^{(\lambda +\varepsilon )x}-e^{\lambda x}}{\varepsilon }},y'={\frac {(\lambda +\varepsilon )e^{(\lambda +\varepsilon )x}-\lambda e^{\lambda x}}{\varepsilon }},y''={\frac {(\lambda +\varepsilon )^{2}e^{(\lambda +\varepsilon )x}-\lambda ^{2}e^{\lambda x}}{\varepsilon }}}$ Plugging this into the original ODE ${\displaystyle \displaystyle y^{''}-y^{'}[2\lambda +\varepsilon ]+y\lambda [\lambda +\varepsilon ]=0}$ and collecting like terms: ${\displaystyle \displaystyle e^{(\lambda +\varepsilon )x}[\lambda ^{2}+2\lambda \varepsilon +\varepsilon ^{2}-2\lambda ^{2}-2\lambda \varepsilon -\lambda \varepsilon -\varepsilon ^{2}+\lambda ^{2}+\varepsilon \lambda ]+e^{\lambda x}[-\lambda ^{2}+2\lambda ^{2}+\lambda \varepsilon -\lambda ^{2}-\lambda \varepsilon ]=0}$ • Much of the trivial algebra was left out in order to greatly reduce coding time. These intermediate steps, which were done on paper, can be presented if necessary. From inspection, all of the bracketed terms add up to zero, thus verifying we were given a correct homogeneous solution. 
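A quick symbolic cross-check of this cancellation (a sympy sanity check, not part of the assigned work; the symbol names are arbitrary):

```python
import sympy as sp

# Verify that y = (e^{(lam+eps)x} - e^{lam x})/eps satisfies
# y'' - (2*lam + eps) y' + lam*(lam + eps) y = 0, as shown in Part 2.
x, lam, eps = sp.symbols('x lam eps')
y = (sp.exp((lam + eps) * x) - sp.exp(lam * x)) / eps
lhs = y.diff(x, 2) - (2 * lam + eps) * y.diff(x) + lam * (lam + eps) * y
print(sp.simplify(lhs))  # prints 0
```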
#### Part 3 ${\displaystyle \displaystyle \lim _{\varepsilon \rightarrow 0}{\frac {e^{(\lambda +\varepsilon )x}-e^{\lambda x}}{\varepsilon }}}$ Since the limit of the numerator and the limit of the denominator are both 0 as epsilon approaches 0, we can use L'Hopital's rule to find the limit of the entire function. L'Hopital's Rule: If: ${\displaystyle \displaystyle \lim _{\varepsilon \rightarrow 0}{\frac {f'(\varepsilon )}{g'(\varepsilon )}}}$ exists And: ${\displaystyle \displaystyle \lim _{\varepsilon \rightarrow 0}f(\varepsilon )=\lim _{\varepsilon \rightarrow 0}g(\varepsilon )=0}$ Then: ${\displaystyle \displaystyle \lim _{\varepsilon \rightarrow 0}{\frac {f(\varepsilon )}{g(\varepsilon )}}=\lim _{\varepsilon \rightarrow 0}{\frac {f'(\varepsilon )}{g'(\varepsilon )}}}$ Taking derivatives of the numerator and denominator with respect to epsilon, and finding the limit as epsilon approaches 0: ${\displaystyle \displaystyle \lim _{\varepsilon \rightarrow 0}{\frac {xe^{(\lambda +\varepsilon )x}}{1}}=xe^{\lambda x}}$
#### Part 4 ${\displaystyle \displaystyle {\frac {d[e^{\lambda x}]}{d\lambda }}=xe^{\lambda x}}$
#### Part 5 The results from (3) and (4) show that ${\displaystyle \displaystyle {\frac {d[e^{\lambda x}]}{d\lambda }}=\lim _{\varepsilon \rightarrow 0}{\frac {e^{(\lambda +\varepsilon )x}-e^{\lambda x}}{\varepsilon }}=xe^{\lambda x}}$ This is precisely the second linearly independent homogeneous solution ${\displaystyle \displaystyle xe^{\lambda x}}$ that variation of parameters (reduction of order) produces for a double real root.
#### Part 6 From ${\displaystyle \displaystyle {\frac {e^{(\lambda +\varepsilon )x}-e^{\lambda x}}{\varepsilon }}}$ and letting ${\displaystyle \displaystyle \lambda =5,\varepsilon =0.001}$ ${\displaystyle \displaystyle {\frac {e^{5.001x}-e^{5x}}{0.001}}}$ This closely approximates the exact second homogeneous solution ${\displaystyle \displaystyle xe^{5x}}$. Plugging these values into the expression in Part 2 confirms that the ODE is satisfied: ${\displaystyle \displaystyle e^{5.001x}[5^{2}+2(5)(0.001)+0.001^{2}-2(5^{2})-2(5)(0.001)-5(0.001)-(0.001^{2})+5^{2}+0.001(5)]+e^{5x}[-5^{2}+2(5^{2})+5(0.001)-5^{2}-5(0.001)]=0}$
### Author This problem was solved and uploaded by: Joshua House This problem was proofread by: Michael Wallace
## R 3.3
### Question Find the complete solution for equation 5 from pg. 7.7 in the notes: ${\displaystyle \displaystyle \ y''-3y'+2y=4x^{2}\ }$ with initial conditions given as: ${\displaystyle \displaystyle \ y(0)=1,y'(0)=0\ }$ Plot the solution.
### Solution First, find the homogeneous solution to this differential equation, i.e. where r(x) = 0.
${\displaystyle \displaystyle \ y''_{h}-3y'_{h}+2y_{h}=0\ }$ ${\displaystyle \displaystyle \ (\lambda -1)(\lambda -2)=0\ }$ ${\displaystyle \displaystyle \ \lambda =1,2\ }$ Thus, the homogenous solution is of the form ${\displaystyle \displaystyle \ y_{h}=C_{1}e^{\lambda _{1}x}+C_{2}e^{\lambda _{2}x}\ }$ Plugging in the answers for lambda yields: ${\displaystyle \displaystyle \ y_{h}=C_{1}e^{x}+C_{2}e^{2x}\ }$ Now to solve for the particular solution, we must look at the excitation: ${\displaystyle \displaystyle \ r(x)=4x^{2}\ }$ A particular solution to an excitation of the form ${\displaystyle \displaystyle \ Cx^{n}\ }$ is defined as ${\displaystyle \displaystyle \ \sum _{j=0}^{n}c_{j}x^{j}\ }$ Thus, ${\displaystyle \displaystyle \ y_{p}=c_{0}+c_{1}x+c_{2}x^{2}\ }$ To solve for the 3 unknown constants, plug the particular solution into the differential equation: ${\displaystyle \displaystyle \ y_{p}''-3y_{p}'+2y_{p}=4x^{2}\ }$ ${\displaystyle \displaystyle \ (c_{0}+c_{1}x+c_{2}x^{2})''-3(c_{0}+c_{1}x+c_{2}x^{2})'+2(c_{0}+c_{1}x+c_{2}x^{2})=4x^{2}\ }$ ${\displaystyle \displaystyle \ y_{p}'=2c_{2}x+c_{1}\ }$ ${\displaystyle \displaystyle \ y_{p}''=2c_{2}\ }$ Substituting these relations into the equation we get: ${\displaystyle \displaystyle \ 2c_{2}-3(2c_{2}x+c_{1})+2(c_{2}x^{2}+c_{1}x+c_{0})=4x^{2}\ }$ Grouping like terms we get: ${\displaystyle \displaystyle \ 2c_{2}x^{2}+(2c_{1}-6c_{2})x+(2c_{2}-3c_{1}+2c_{0})=4x^{2}\ }$ Matching up the coefficients for the ${\displaystyle \displaystyle \ x^{2}\ }$ term on the left and right sides we get: ${\displaystyle \displaystyle \ 2c_{2}=4\ }$ Therefore ${\displaystyle \displaystyle \ c_{2}=2\ }$ Matching up the coefficients for the ${\displaystyle \displaystyle \ x\ }$ term on the left and right sides we get: ${\displaystyle \displaystyle \ 2c_{1}-6c_{2}=0\ }$ Therefore ${\displaystyle \displaystyle \ c_{1}=6c_{2}/2=6(2)/2=6\ }$ Finally, matching up the coefficients for the constant terms on the left and right sides we get: ${\displaystyle \displaystyle \ 2c_{2}-3c_{1}+2c_{0}=0\ }$ Therefore ${\displaystyle \displaystyle \ c_{0}=(3c_{1}-2c_{2})/2=(3(6)-2(2))/2=(18-4)/2=7\ }$ The particular solution is then: ${\displaystyle \displaystyle \ y_{p}=2x^{2}+6x+7\ }$ The general solution is the sum of the homogenous solution and the particular solution, therefore: ${\displaystyle \displaystyle \ y(x)=y_{h}+y_{p}=C_{1}e^{x}+C_{2}e^{2x}+2x^{2}+6x+7\ }$ To solve for the 2 remaining unknown constants, we use our initial conditions from the problem statement. ${\displaystyle \displaystyle \ y(0)=1=C_{1}+C_{2}+7\ }$ To use the second initial condition, we must take a derivative of our general solution. 
${\displaystyle \displaystyle \ y'(x)=C_{1}e^{x}+2C_{2}e^{2x}+4x+6\ }$ ${\displaystyle \displaystyle \ y'(0)=0=C_{1}+2C_{2}+6\ }$ Rewriting the equations we get: ${\displaystyle \displaystyle \ C_{1}+2C_{2}+6=0\ }$ ${\displaystyle \displaystyle \ C_{1}+C_{2}+6=0\ }$ Subtracting these equations we get: ${\displaystyle \displaystyle \ C_{2}=0\ }$ Therefore: ${\displaystyle \displaystyle \ C_{1}=-6-C_{2}=-6\ }$ Thus the complete solution is: ${\displaystyle \displaystyle \ y(x)=2x^{2}+6x+7-6e^{x}\ }$
MATLAB code:
x = 0:0.001:10;
y = 2*x.^2 + 6*x + 7 - 6*exp(x);
plot(x,y)
xlabel('x')
ylabel('y(x)')
### Author This problem was solved and uploaded by: David Herrick This problem was proofread by: Michael Wallace
## R 3.4
### Question Use the Basic Rule (1) and the Sum Rule (3) on p.7-2 of the notes to show that the appropriate particular solution for: ${\displaystyle \ y''-3y'+2y=4x^{2}-6x^{5}\ }$ is of the form: ${\displaystyle \ y_{p}(x)=\sum _{j=0}^{5}c_{j}x^{j}\ }$ with n = 5, i.e. (1) on p.7-12.
### Solution From the Basic Rule: ${\displaystyle \ r_{1}(x)=4x^{2}\ }$ ${\displaystyle \ r_{2}(x)=-6x^{5}\ }$ Therefore, from Table 2.1: ${\displaystyle y_{p_{1}}(x)=K_{2}x^{2}+K_{1}x+K_{0}}$ since k = 4 and n = 2, and ${\displaystyle y_{p_{2}}(x)=K_{5}x^{5}+K_{4}x^{4}+K_{3}x^{3}+K_{2}x^{2}+K_{1}x+K_{0}}$ since k = -6 and n = 5. For ${\displaystyle y_{p_{1}}(x)}$: ${\displaystyle \ y_{p_{1}}'(x)=2K_{2}x+K_{1}\ }$ ${\displaystyle \ y_{p_{1}}''(x)=2K_{2}\ }$ Substituting ${\displaystyle y_{p_{1}}(x)}$ into ${\displaystyle \ y''-3y'+2y=r_{1}(x)\ }$ gives: ${\displaystyle \ (2K_{2})-3(2K_{2}x+K_{1})+2(K_{2}x^{2}+K_{1}x+K_{0})=4x^{2}\ }$ ${\displaystyle \ x^{2}(2K_{2})+x(2K_{1}-6K_{2})+(2K_{2}-3K_{1}+2K_{0})=4x^{2}\ }$ Comparing the ${\displaystyle x^{2}}$, ${\displaystyle x}$, and ${\displaystyle x^{0}}$ coefficients gives: For ${\displaystyle x^{2}}$: ${\displaystyle \ 2K_{2}=4\ }$ Therefore ${\displaystyle K_{2}=2}$. For ${\displaystyle x}$: ${\displaystyle \ 2K_{1}-6K_{2}=0\ }$ Therefore ${\displaystyle K_{1}=6}$. For ${\displaystyle x^{0}}$: ${\displaystyle \ 2K_{2}-3K_{1}+2K_{0}=0\ }$ Therefore ${\displaystyle K_{0}=7}$. Therefore, ${\displaystyle y_{p_{1}}(x)=2x^{2}+6x+7}$, which agrees with R3.3. For ${\displaystyle y_{p_{2}}(x)}$: ${\displaystyle \ y_{p_{2}}'(x)=5K_{5}x^{4}+4K_{4}x^{3}+3K_{3}x^{2}+2K_{2}x+K_{1}\ }$ ${\displaystyle \ y_{p_{2}}''(x)=20K_{5}x^{3}+12K_{4}x^{2}+6K_{3}x+2K_{2}\ }$ Substituting ${\displaystyle y_{p_{2}}(x)}$ into ${\displaystyle \ y''-3y'+2y=r_{2}(x)\ }$ gives: ${\displaystyle \ (20K_{5}x^{3}+12K_{4}x^{2}+6K_{3}x+2K_{2})-3(5K_{5}x^{4}+4K_{4}x^{3}+3K_{3}x^{2}+2K_{2}x+K_{1})+2(K_{5}x^{5}+K_{4}x^{4}+K_{3}x^{3}+K_{2}x^{2}+K_{1}x+K_{0})=-6x^{5}\ }$ Simplifying yields: ${\displaystyle \ x^{5}(2K_{5})+x^{4}(2K_{4}-15K_{5})+x^{3}(2K_{3}-12K_{4}+20K_{5})+x^{2}(2K_{2}-9K_{3}+12K_{4})+x(2K_{1}-6K_{2}+6K_{3})+(2K_{0}-3K_{1}+2K_{2})=-6x^{5}\ }$ Comparing coefficients gives: For ${\displaystyle x^{5}}$: ${\displaystyle \ 2K_{5}=-6\ }$ Therefore ${\displaystyle K_{5}=-3}$. For ${\displaystyle x^{4}}$: ${\displaystyle \ 2K_{4}-15K_{5}=0\ }$ Therefore ${\displaystyle K_{4}=-{\frac {45}{2}}}$. For ${\displaystyle x^{3}}$: ${\displaystyle \ 2K_{3}-12K_{4}+20K_{5}=0\ }$ Therefore ${\displaystyle K_{3}=-105}$. For ${\displaystyle x^{2}}$: ${\displaystyle \ 2K_{2}-9K_{3}+12K_{4}=0\ }$ Therefore ${\displaystyle K_{2}=-{\frac {675}{2}}}$. For ${\displaystyle x}$: ${\displaystyle \ 2K_{1}-6K_{2}+6K_{3}=0\ }$ Therefore ${\displaystyle K_{1}=-{\frac {1395}{2}}}$. For ${\displaystyle x^{0}}$: ${\displaystyle \ 2K_{0}-3K_{1}+2K_{2}=0\ }$ Therefore ${\displaystyle K_{0}=-{\frac {2835}{4}}}$. Therefore, ${\displaystyle y_{p_{2}}(x)=-3x^{5}-{\frac {45}{2}}x^{4}-105x^{3}-{\frac {675}{2}}x^{2}-{\frac {1395}{2}}x-{\frac {2835}{4}}}$ By the Sum Rule: ${\displaystyle \ y_{p}(x)=y_{p_{1}}(x)+y_{p_{2}}(x)\ }$ ${\displaystyle \ y_{p}(x)=(2x^{2}+6x+7)+(-3x^{5}-{\frac {45}{2}}x^{4}-105x^{3}-{\frac {675}{2}}x^{2}-{\frac {1395}{2}}x-{\frac {2835}{4}})\ }$ Simplifying gives: ${\displaystyle \ y_{p}(x)=-3x^{5}-{\frac {45}{2}}x^{4}-105x^{3}-{\frac {671}{2}}x^{2}-{\frac {1383}{2}}x-{\frac {2807}{4}}\ }$ Choose: ${\displaystyle \ c_{5}=-3\ }$ ${\displaystyle \ c_{4}=-{\frac {45}{2}}\ }$ ${\displaystyle \ c_{3}=-105\ }$ ${\displaystyle \ c_{2}=-{\frac {671}{2}}\ }$ ${\displaystyle \ c_{1}=-{\frac {1383}{2}}\ }$ ${\displaystyle \ c_{0}=-{\frac {2807}{4}}\ }$ Then the particular solution is of the form: ${\displaystyle \ y_{p}(x)=\sum _{j=0}^{n}(c_{j}x^{j})\ }$ where n = 5, in agreement with the coefficients found in R3.5 and R3.6.
### Author This problem was solved and uploaded by John North. This problem was proofread by Michael Wallace
## R 3.5
### Question Complete the solution for ${\displaystyle \ y''-3y'+2y=4x^{2}-6x^{5}\ }$ ((2) p. 7-11) as follows:
#### Part 1 1) Obtain equations (2) - (4) and (6) p. 7-14 (2) Coefficients of ${\displaystyle \ x\ }$: (3) Coefficients of ${\displaystyle \ x^{2}\ }$: (4) Coefficients of ${\displaystyle \ x^{3}\ }$: (6) Coefficients of ${\displaystyle \ x^{5}\ }$:
#### Part 2 Verify all equations by long-hand expansion of the series in (4) p. 7-12, instead of using the series in (2) p. 7-13. ${\displaystyle \ \sum _{j=2}^{5}c_{j}\cdot j\cdot (j-1)\cdot x^{j-2}-3\sum _{j=1}^{5}c_{j}\cdot j\cdot x^{j-1}+2\sum _{j=0}^{5}c_{j}x^{j}=4x^{2}-6x^{5}\ }$
#### Part 3 Put the system of equations for ${\displaystyle \ \left\{c_{0},...,c_{5}\right\}\ }$ in matrix form.
#### Part 4 Solve for the coefficients ${\displaystyle \ \left\{c_{0},...,c_{5}\right\}\ }$ by back substitution.
#### Part 5 Consider the initial conditions: ${\displaystyle \ y(0)=1,y'(0)=0\ }$. Find the solution y(x) and plot it.
### Solution
#### Part 1 For all the coefficient equations, we will use the series from (2) p. 7-13 ${\displaystyle \ \sum _{j=0}^{3}\left[c_{j+2}(j+2)(j+1)-3c_{j+1}(j+1)+2c_{j}\right]x^{j}-3c_{5}(5)x^{4}+2\left[c_{4}x^{4}+c_{5}x^{5}\right]=4x^{2}-6x^{5}\ }$ Solving for the coefficients of x, we use j = 1. ${\displaystyle \ (c_{3}(3)(2)-3c_{2}(2)+2c_{1})x\ }$. These are the coefficients for x on the left hand side of the equation, and the coefficients for x on the right hand side = 0. Coefficients of x: ${\displaystyle \ 6c_{3}-6c_{2}+2c_{1}=0\ }$ Solving for the coefficients of ${\displaystyle \ x^{2}\ }$ we use j = 2. ${\displaystyle \ (c_{4}(4)(3)-3c_{3}(3)+2c_{2})x^{2}\ }$ These are the coefficients for ${\displaystyle \ x^{2}\ }$ on the left hand side of the equation, and the coefficients for ${\displaystyle \ x^{2}\ }$ on the right hand side = 4 Coefficients of ${\displaystyle \ x^{2}\ }$: ${\displaystyle \ 12c_{4}-9c_{3}+2c_{2}=4\ }$ Solving for the coefficients of ${\displaystyle \ x^{3}\ }$ we use j = 3.
${\displaystyle \ (c_{5}(5)(4)-3c_{4}(4)+2c_{3})x^{3}\ }$ These are the coefficients for ${\displaystyle \ x^{3}\ }$ on the left hand side of the equation, and the coefficients for ${\displaystyle \ x^{3}\ }$ on the right hand side = 0 Coefficients of ${\displaystyle \ x^{3}\ }$: ${\displaystyle \ 20c_{5}-12c_{4}+2c_{3}=0\ }$ Solving for the coefficients of ${\displaystyle \ x^{5}\ }$, all we have is ${\displaystyle \ 2c_{5}x^{5}\ }$ This is the only coefficient of ${\displaystyle \ x^{5}\ }$ on the left hand side of the equation, and the coefficients for ${\displaystyle \ x^{5}\ }$ on the right hand side = -6 Coefficients of ${\displaystyle \ x^{5}\ }$: ${\displaystyle \ 2c_{5}=-6\ }$ #### Part 2 Expanding the 3 series using j = 0 to 5, we get: ${\displaystyle \ 2c_{0}+2c_{1}x-3c_{1}+2c_{2}x^{2}-3c_{2}(2)x+2c_{2}+2c_{3}x^{3}-3(3)c_{3}x^{2}+3(2)c_{3}x+2c_{4}x^{4}-3(4)c_{4}x^{3}+4(3)c_{4}x^{2}+2c_{5}x^{5}-3(5)c_{5}x^{4}+5(4)c_{5}x^{3}=4x^{2}-6x^{5}\ }$ Simplifying and grouping like terms we get: ${\displaystyle \ (2c_{0}-3c_{1}+2c_{2})+(2c_{1}-6c_{2}+6c_{3})x+(2c_{2}-9c_{3}+12c_{4})x^{2}+(2c_{3}-12c_{4}+20c_{5})x^{3}+(2c_{4}-15c_{5})x^{4}+2c_{5}x^{5}=4x^{2}-6x^{5}\ }$ Compared to the answers of part 1, it can be seen that all the coefficients are equivalent, verifying that the two series are equivalent. #### Part 3 Matrix form of the solution: ${\displaystyle \ {\begin{bmatrix}2&-3&2&0&0&0\\0&2&-6&6&0&0\\0&0&2&-9&12&0\\0&0&0&2&-12&20\\0&0&0&0&2&-15\\0&0&0&0&0&2\end{bmatrix}}{\begin{bmatrix}c_{0}\\c_{1}\\c_{2}\\c_{3}\\c_{4}\\c_{5}\end{bmatrix}}={\begin{bmatrix}0\\0\\4\\0\\0\\-6\end{bmatrix}}\ }$ #### Part 4 Solving by back substitution, the first constant we solve for is c5 ${\displaystyle \ 2c_{5}=-6\ }$ ${\displaystyle \ c_{5}=-3\ }$ ${\displaystyle \ -15c_{5}+2c_{4}=0\ }$ ${\displaystyle \ -15(-3)+2c_{4}=0\ }$ ${\displaystyle \ c_{4}=-22.5\ }$ ${\displaystyle \ 20c_{5}-12c_{4}+2c_{3}=0\ }$ ${\displaystyle \ 20(-3)-12(-22.5)+2c_{3}=0\ }$ ${\displaystyle \ c_{3}=-105\ }$ ${\displaystyle \ 12c_{4}-9c_{3}+2c_{2}=4\ }$ ${\displaystyle \ 12(-22.5)-9(-105)+2c_{2}=4\ }$ ${\displaystyle \ c_{2}=-335.5\ }$ ${\displaystyle \ 6c_{3}-6c_{2}+2c_{1}=0\ }$ ${\displaystyle \ 6(-105)-6(-335.5)+2c_{1}=0\ }$ ${\displaystyle \ c_{1}=-691.5\ }$ ${\displaystyle \ 2c_{2}-3c_{1}+2c_{0}=0\ }$ ${\displaystyle \ 2(-335.5)-3(-691.5)+2c_{0}=0\ }$ ${\displaystyle c_{0}=-701.75\ }$ #### Part 5 The particular solution is of the form ${\displaystyle \ y_{p}=c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+c_{5}x^{5}\ }$ Therefore, the particular solution to the differential equation = ${\displaystyle \ y_{p}=-701.75-691.5x-335.5x^{2}-105x^{3}-22.5x^{4}-3x^{5}\ }$ To find the general solution, we need to add the particular solution to the homogenous solution. The homogenous solution is given by: ${\displaystyle \ (\lambda _{1}-1)(\lambda _{2}-2)=0\ }$ ${\displaystyle \ y_{h}=C_{1}e^{\lambda _{1}x}+C_{2}e^{\lambda _{2}}x=C_{1}e^{x}+C_{2}e^{2x}\ }$ Therefore the general solution is: ${\displaystyle \ y_{h}+y_{p}=C_{1}e^{x}+C_{2}e^{2x}-701.75-691.5x-335.5x^{2}-105x^{3}-22.5x^{4}-3x^{5}\ }$ Now using the initial conditions to solve for the remaining constants we need the general solution and the derivative of the general solution. 
${\displaystyle \ y'(x)=C_{1}e^{x}+2C_{2}e^{2x}-691.5-671x-315x^{2}-90x^{3}-15x^{4}\ }$ ${\displaystyle \ y(0)=1=C_{1}+C_{2}-701.75\ }$ ${\displaystyle \ y'(0)=0=C_{1}+2C_{2}-691.5\ }$ We can move the 1 over, and then subtract the equations to solve for ${\displaystyle \ C_{2}\ }$ ${\displaystyle \ -C_{2}-11.25=0\Rightarrow C_{2}=-11.25\ }$ ${\displaystyle \ C_{1}+C_{2}-701.75=1\ }$ ${\displaystyle \ C_{1}=1+701.75--11.25=714\ }$ Therefore the final general solution y(x) is: ${\displaystyle \ y(x)=714e^{x}-11.25e^{2x}-701.75-691.5x-335.5x^{2}-105x^{3}-22.5x^{4}-3x^{5}\ }$ Matlab code: x = 0:0.001:100; y = 714*exp(x) - 11.25*exp(2*x) - 701.75 - 691.5*x - 335.5*x.^2 - 105*x.^3 - 22.5*x.^4 - 3*x.^5; plot(x,y) xlabel('x') ylabel('y(x)') ### Author This problem was solved and uploaded by: David Herrick This problem was proofread by Michael Wallace ## R 3.6 ### Question Solve the L2-ODE-CC (2) p.7-11 with initial conditions (2b) p.3-6 differently as follows. Consider the following two L2-ODE-CC (see p. 7-2b) ${\displaystyle y_{p,1}''-3y_{p,1}'+2y_{p,1}=r_{1}(x)=4x^{2}\!}$ ${\displaystyle y_{p,2}''-3y_{p,2}'+2y_{p,2}=r_{2}(x)=-6x^{5}\!}$ The particular solution to ${\displaystyle y_{p,1}\!}$ had been found in R3.3 p.7-11. Find the particular solution ${\displaystyle y_{p,2}\!}$ , and then obtain the solution ${\displaystyle y\!}$ for the L2-ODE-CC (2) p.7-11 with initial conditions (2b) p.3-6. ### Given First particular solution: ${\displaystyle y_{p,1}=2x^{2}+6x+7\!}$ Initial Conditions: ${\displaystyle y(0)=1\!}$ ${\displaystyle y'(0)=0\!}$ ### Solution ##### Particular Solution Because of the specific excitation ${\displaystyle r_{2}(x)=-6x^{5}\!}$ , using table 2.1 from K 2011 p.82, the correct form for the particular solution is ${\displaystyle y_{p}(x)=\sum _{j=0}^{n}K_{j}x^{j}\!}$ Then the following is presented: ${\displaystyle y_{p,2}=c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+c_{5}x^{5}\!}$ ${\displaystyle y_{p,2}'=c_{1}+2c_{2}x+3c_{3}x^{2}+4c_{4}x^{3}+5c_{5}x^{4}\!}$ ${\displaystyle y_{p,2}''=2c_{2}+6c_{3}x+12c_{4}x^{2}+20c_{5}x^{3}\!}$ Plug these equations back into the original ${\displaystyle y_{p,2}''-3y_{p,2}'+2y_{p,2}=-6x^{5}\!}$ ${\displaystyle (2c_{2}+6c_{3}x+12c_{4}x^{2}+20c_{5}x^{3})-3(c_{1}+2c_{2}x+3c_{3}x^{2}+4c_{4}x^{3}+5c_{5}x^{4})+2(c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+c_{5}x^{5})=-6x^{5}\!}$ Use coefficient matching, the following equations are a result: ${\displaystyle c:2c_{2}-3c_{1}+2c_{0}=0\!}$ ${\displaystyle x:6c_{3}-6c_{2}+2c_{1}=0\!}$ ${\displaystyle x^{2}:12c_{4}-9c_{3}+2c_{2}=0\!}$ ${\displaystyle x^{3}:20c_{5}-12c_{4}+2c_{3}=0\!}$ ${\displaystyle x^{4}:-15c_{5}+2c_{4}=0\!}$ ${\displaystyle x^{5}:2c_{5}=-6\!}$ Use back substitution method to solve for every coefficient, starting with ${\displaystyle c_{5}\!}$ ${\displaystyle 2c_{5}=-6\rightarrow c_{5}=-3\!}$ ${\displaystyle 2c_{4}-15(-3)=0\rightarrow c_{4}=-{\frac {45}{2}}\!}$ ${\displaystyle 2c_{3}-12(-{\frac {45}{2}})+20(-3)=0\rightarrow c_{3}=-105\!}$ ${\displaystyle 2c_{2}-9(-105)+12(-{\frac {45}{2}})=0\rightarrow c_{2}=-{\frac {675}{2}}\!}$ ${\displaystyle 2c_{1}-6(-{\frac {675}{2}})+6(-105)=0\rightarrow c_{1}=-{\frac {1395}{2}}\!}$ ${\displaystyle 2c_{0}-3(-{\frac {1395}{2}})+2(-{\frac {675}{2}})=0\rightarrow c_{0}=-{\frac {2835}{4}}\!}$ Plug these values back into ${\displaystyle y_{p,2}=c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+c_{5}x^{5}\!}$ : ${\displaystyle y_{p,2}=-{\frac {2835}{4}}-{\frac {1395}{2}}x-{\frac {675}{2}}x^{2}-105x^{3}-{\frac {45}{2}}x^{4}-3x^{5}\!}$ Superposition 
principle applies (L2-ODE-CC), so ${\displaystyle y_{p}=y_{p,1}+y_{p,2}\!}$ gives the general particular solution: ${\displaystyle y_{p}(x)=-{\frac {2807}{4}}-{\frac {1383}{2}}x-{\frac {671}{2}}x^{2}-105x^{3}-{\frac {45}{2}}x^{4}-3x^{5}\!}$
##### Homogeneous Solution ${\displaystyle y_{h}''-3y_{h}'+2y_{h}=0\!}$ Because of the linearity, a combination of two linearly independent solutions is also a solution to this homogeneous equation: ${\displaystyle y_{h,1}''-3y_{h,1}'+2y_{h,1}=0\!}$ ${\displaystyle y_{h,2}''-3y_{h,2}'+2y_{h,2}=0\!}$ With the solutions as: ${\displaystyle y_{h,1}=e^{\lambda _{1}x}\!}$ ${\displaystyle y_{h,2}=e^{\lambda _{2}x}\!}$ To determine the value of ${\displaystyle \lambda _{1,2}\!}$, the characteristic equation must be determined from the homogeneous equation ${\displaystyle \lambda ^{2}-3\lambda +2=0\!}$ ${\displaystyle \lambda _{1,2}={\frac {-a\pm {\sqrt {a^{2}-4b}}}{2}}\!}$ ${\displaystyle \lambda _{1,2}={\frac {-(-3)\pm {\sqrt {(-3)^{2}-4(2)}}}{2}}\!}$ ${\displaystyle \lambda _{1}=2\;\;\;\;\;\;\;\lambda _{2}=1\!}$ The solutions for each distinct linearly independent homogeneous equation become: ${\displaystyle y_{h,1}=e^{2x}\!}$ ${\displaystyle y_{h,2}=e^{x}\!}$ The combination of the previous two equations, multiplied by two constants that satisfy two initial conditions, is also a solution: ${\displaystyle y_{h}(x)=C_{1}e^{2x}+C_{2}e^{x}\!}$
#### General Solution The general or overall solution for the L2-ODE-CC: ${\displaystyle y(x)=y_{h}+y_{p}\!}$ ${\displaystyle y(x)=C_{1}e^{2x}+C_{2}e^{x}-{\frac {2807}{4}}-{\frac {1383}{2}}x-{\frac {671}{2}}x^{2}-105x^{3}-{\frac {45}{2}}x^{4}-3x^{5}\!}$ ${\displaystyle y'(x)=2C_{1}e^{2x}+C_{2}e^{x}-{\frac {1383}{2}}-671x-315x^{2}-90x^{3}-15x^{4}\!}$ Use initial conditions to solve for ${\displaystyle C_{1}\!}$ and ${\displaystyle C_{2}\!}$: ${\displaystyle 1=C_{1}+C_{2}-{\frac {2807}{4}}\!}$ ${\displaystyle 0=2C_{1}+C_{2}-{\frac {1383}{2}}\!}$ ${\displaystyle C_{1}=-{\frac {45}{4}}\;\;\;C_{2}=714\!}$ The general solution: ${\displaystyle y(x)=-{\frac {45}{4}}e^{2x}+714e^{x}-{\frac {2807}{4}}-{\frac {1383}{2}}x-{\frac {671}{2}}x^{2}-105x^{3}-{\frac {45}{2}}x^{4}-3x^{5}\!}$
### Author This problem was solved and uploaded by Derik Bell This problem was proofread by Michael Wallace
## R 3.7
### Question Expand the series on both sides of (1)-(2) p.7-12b to verify these equalities. Equation (1) ${\displaystyle \displaystyle \sum _{j=2}^{5}c_{j}j(j-1)x^{j-2}=\sum _{j=0}^{3}c_{j+2}(j+2)(j+1)x^{j}}$ Equation (2) ${\displaystyle \displaystyle \sum _{j=1}^{5}c_{j}jx^{j-1}=\sum _{j=0}^{4}c_{j+1}(j+1)x^{j}}$
### Solution Expanding both sides of Equation (1) yields the following expression: ${\displaystyle \displaystyle c_{2}(2)(1)x^{0}+c_{3}(3)(2)x+c_{4}(4)(3)x^{2}+c_{5}(5)(4)x^{3}=2c_{2}+6c_{3}x+12c_{4}x^{2}+20c_{5}x^{3}}$ Therefore, they are equivalent. Expanding both sides of Equation (2) yields the following expression: ${\displaystyle \displaystyle c_{1}(1)x^{0}+c_{2}(2)x^{1}+c_{3}(3)x^{2}+c_{4}(4)x^{3}+c_{5}(5)x^{4}=c_{1}+2c_{2}x+3c_{3}x^{2}+4c_{4}x^{3}+5c_{5}x^{4}}$ Therefore, they are equivalent.
### Author This problem was solved and uploaded by: William Knapper This problem was proofread by: Joshua House
## R 3.8
### Question K 2011 p84 pbs. 5,6 Find a (real) general solution. State which rule you are using. Show each step of your work.
### Problem 5 Solution

${\displaystyle \ y''+4y'+4y=e^{-x}\cos(x)\ }$

Step 1: General solution of the homogeneous ODE.

Characteristic equation:

${\displaystyle \ \lambda ^{2}+4\lambda +4=0\ }$

Solve for the discriminant:

${\displaystyle \ a^{2}-4b=16-16=0\ }$

This indicates a double real root, therefore:

${\displaystyle \ y_{h}=e^{{\frac {-a}{2}}x}(C_{1}+C_{2}x)=e^{{\frac {-4}{2}}x}(C_{1}+C_{2}x)=e^{-2x}(C_{1}+C_{2}x)\ }$

Step 2: Particular solution ${\displaystyle \ y_{p}\ }$ of the non-homogeneous ODE.

Using the Basic Rule and Table 2.1:

${\displaystyle \ r(x)=e^{-x}\cos(x)\ }$

${\displaystyle \ y_{p}(x)=e^{-x}(K\cos(x)+M\sin(x))\ }$

since k = 1, alpha = -1, and omega = 1.

${\displaystyle \ y_{p}'(x)=e^{-x}(-K\sin(x)+M\cos(x))-e^{-x}(K\cos(x)+M\sin(x))=e^{-x}(\cos(x)(M-K)-\sin(x)(K+M))\ }$

${\displaystyle \ y_{p}''(x)=e^{-x}(-\sin(x)(M-K)-\cos(x)(K+M))-e^{-x}(\cos(x)(M-K)-\sin(x)(K+M))=e^{-x}(\sin(x)(2K)-\cos(x)(2M))\ }$

Plugging the particular solution into the original equation:

${\displaystyle \ e^{-x}(\sin(x)(2K)-\cos(x)(2M))+4e^{-x}(\cos(x)(M-K)-\sin(x)(K+M))+4e^{-x}(K\cos(x)+M\sin(x))=e^{-x}\cos(x)\ }$

Simplifying and cancelling gives:

${\displaystyle \ \sin(x)(2K)-\cos(x)(2M)+\cos(x)(4M-4K)-\sin(x)(4K+4M)+\cos(x)(4K)+\sin(x)(4M)=\cos(x)\ }$

Further simplification gives:

${\displaystyle \ \sin(x)(-2K)+\cos(x)(2M)=\cos(x)\ }$

Comparing sin(x) and cos(x) terms gives:

${\displaystyle \ -2K=0\ }$ Therefore K = 0.

${\displaystyle \ 2M=1\ }$ Therefore M = ${\displaystyle \ {\frac {1}{2}}\ }$.

Therefore the general solution is:

${\displaystyle \ y_{g}(x)=y_{h}(x)+y_{p}(x)=e^{-2x}(C_{1}+C_{2}x)+{\frac {1}{2}}e^{-x}\sin(x)\ }$

### Problem 6 Solution

${\displaystyle \ y''+y'+(\pi ^{2}+{\frac {1}{4}})y=e^{-{\frac {x}{2}}}\sin(\pi x)\ }$

Step 1: Find the homogeneous solution to the ODE.

Characteristic equation:

${\displaystyle \ \lambda ^{2}+\lambda +\pi ^{2}+{\frac {1}{4}}=0\ }$

Find the discriminant:

${\displaystyle a^{2}-4b=1-4({\frac {1}{4}}+\pi ^{2})=-4\pi ^{2}}$

This indicates a complex conjugate pair of roots, ${\displaystyle \lambda _{1,2}=-{\frac {1}{2}}\pm i\pi }$. Therefore, the real homogeneous solution is:

${\displaystyle \ y_{h}=e^{-{\frac {x}{2}}}(A\cos(wx)+B\sin(wx))\ }$

where:

${\displaystyle \ w=(b-{\frac {1}{4}}a^{2})^{\frac {1}{2}}=\pi \ }$

${\displaystyle \ y_{h}=e^{-{\frac {x}{2}}}(A\cos(\pi x)+B\sin(\pi x))\ }$

Step 2: Solution ${\displaystyle \ y_{p}\ }$ of the non-homogeneous ODE.

The excitation ${\displaystyle \ r(x)=e^{-{\frac {x}{2}}}\sin(\pi x)\ }$ is itself a solution of the homogeneous ODE (it corresponds to the roots ${\displaystyle -{\frac {1}{2}}\pm i\pi }$), so the Modification Rule of Table 2.1 applies and the choice from the table must be multiplied by x:

${\displaystyle \ y_{p}(x)=xe^{-{\frac {x}{2}}}(K\cos(\pi x)+M\sin(\pi x))\ }$

since alpha = ${\displaystyle \ {\frac {-1}{2}}\ }$, w = ${\displaystyle \ \pi \ }$, and k = 1.
The differentiation is shortened considerably by substituting ${\displaystyle \ y=e^{-{\frac {x}{2}}}u\ }$, which reduces the ODE to:

${\displaystyle \ y''+y'+(\pi ^{2}+{\frac {1}{4}})y=e^{-{\frac {x}{2}}}(u''+\pi ^{2}u)=e^{-{\frac {x}{2}}}\sin(\pi x)\Rightarrow u''+\pi ^{2}u=\sin(\pi x)\ }$

With ${\displaystyle \ u=x(K\cos(\pi x)+M\sin(\pi x))=xv\ }$ and noting that ${\displaystyle \ v''+\pi ^{2}v=0\ }$:

${\displaystyle \ u''+\pi ^{2}u=2v'+x(v''+\pi ^{2}v)=2v'=-2K\pi \sin(\pi x)+2M\pi \cos(\pi x)\ }$

Comparing ${\displaystyle \ \sin(\pi x)\ }$ and ${\displaystyle \ \cos(\pi x)\ }$ terms gives:

${\displaystyle \ -2K\pi =1\Rightarrow K=-{\frac {1}{2\pi }}\ }$

${\displaystyle \ 2M\pi =0\Rightarrow M=0\ }$

Therefore:

${\displaystyle \ y_{p}(x)=-{\frac {x}{2\pi }}e^{-{\frac {x}{2}}}\cos(\pi x)\ }$

${\displaystyle \ y_{g}(x)=y_{h}(x)+y_{p}(x)=e^{-{\frac {x}{2}}}(A\cos(\pi x)+B\sin(\pi x))-{\frac {x}{2\pi }}e^{-{\frac {x}{2}}}\cos(\pi x)\ }$

### Author

This problem was solved and uploaded by: John North

This problem was proofread by: Michael Wallace

## R 3.9

### Question

Solve the initial value problem. State which rule you are using. Show each step of your calculation in detail.

13. ${\displaystyle \displaystyle 8y''-6y'+y=6\cosh x,\ y(0)=0.2,\ y'(0)=0.05}$

14. ${\displaystyle \displaystyle y''+4y'+4y=e^{-2x}\sin 2x,\ y(0)=1,\ y'(0)=-1.5}$

### Solution

##### Problem 13

First, we find the homogeneous ODE solution:

${\displaystyle \displaystyle 8\lambda ^{2}-6\lambda +1=0}$

${\displaystyle \displaystyle (4\lambda -1)(2\lambda -1)=0}$

${\displaystyle \displaystyle \lambda ={\frac {1}{4}},\ \lambda ={\frac {1}{2}}}$

which gives the general solution:

${\displaystyle \displaystyle y_{h}=c_{1}e^{{\frac {1}{4}}x}+c_{2}e^{{\frac {1}{2}}x}}$

Second, we get the particular solution of the non-homogeneous ODE. We can substitute ${\displaystyle \cosh x={\frac {e^{x}+e^{-x}}{2}}}$. Using the sum rule we know that ${\displaystyle y_{p}=y_{p1}+y_{p2}}$.
Then using Table 2.1 we can solve: ${\displaystyle \displaystyle y_{p1}=c_{3}e^{x},y_{p2}=c_{4}e^{-x}}$ Then we get ${\displaystyle \displaystyle y_{p}=c_{3}e^{x}+c_{4}e^{-x}}$ ${\displaystyle \displaystyle y_{p}'=c_{3}e^{x}-c_{4}e^{-x}}$ ${\displaystyle \displaystyle y_{p}''=c_{3}e^{x}+c_{4}e^{-x}}$ The original ODE becomes: ${\displaystyle \displaystyle 8y''-6y'+y=3e^{x}+3e^{-x}}$ Substituting ${\displaystyle y_{p}}$ into the original ODE and separating into the two components we get: ${\displaystyle \displaystyle 8c_{3}-6c_{3}+c_{3}=3}$ and ${\displaystyle \displaystyle 8c_{4}+6c_{4}+c_{4}=3}$ Solving these two equations we get: ${\displaystyle \displaystyle c_{3}=1}$ ${\displaystyle \displaystyle c_{4}={\frac {1}{5}}}$ Thus, ${\displaystyle \displaystyle y_{p}=e^{x}+{\frac {1}{5}}e^{-x}}$ We know that ${\displaystyle y=y_{h}+y_{p}}$. so, ${\displaystyle \displaystyle y=c_{1}e^{{\frac {1}{4}}x}+c_{2}e^{{\frac {1}{2}}x}+e^{x}+{\frac {1}{5}}e^{-x}}$ Finding the solution to the initial value problem: ${\displaystyle \displaystyle y(0)=0.2=c_{1}(1)+c_{2}(1)+1+{\frac {1}{5}}}$ ${\displaystyle \displaystyle -1=c_{1}+c_{2}}$ ${\displaystyle \displaystyle c_{2}=-c_{1}-1}$ ${\displaystyle \displaystyle y'={\frac {1}{4}}c_{1}e^{{\frac {1}{4}}x}+{\frac {1}{2}}c_{2}e^{{\frac {1}{2}}x}+e^{x}-{\frac {1}{5}}e^{-x}}$ ${\displaystyle \displaystyle y'(0)=0.05={\frac {1}{4}}c_{1}(1)+{\frac {1}{2}}c_{2}(1)+1-{\frac {1}{5}}}$ ${\displaystyle \displaystyle -0.75={\frac {1}{4}}c_{1}+{\frac {1}{2}}c_{2}}$ Substituting the known value for ${\displaystyle c_{2}}$ ${\displaystyle \displaystyle -0.75={\frac {1}{4}}c_{1}+{\frac {1}{2}}(-c_{1}-1)}$ ${\displaystyle \displaystyle -0.75={\frac {1}{4}}c_{1}-{\frac {1}{2}}c_{1}-{\frac {1}{2}}}$ ${\displaystyle \displaystyle -0.25=-{\frac {1}{4}}c_{1}}$ Solving this we find that: ${\displaystyle \displaystyle c_{1}=1}$ ${\displaystyle \displaystyle c_{2}=-2}$ The final solution is: ${\displaystyle \displaystyle y=e^{{\frac {1}{4}}x}-2e^{{\frac {1}{2}}x}+e^{x}+{\frac {1}{5}}e^{-x}}$ ##### Problem 14 First, we find the homogeneous ODE solution ${\displaystyle \displaystyle \lambda ^{2}+4\lambda +4=0}$ ${\displaystyle \displaystyle (\lambda +2)(\lambda +2)=0}$ ${\displaystyle \displaystyle \lambda =-2,\lambda =-2}$ which gives the general solution: ${\displaystyle \displaystyle y_{h}=c_{1}e^{-2x}+c_{2}xe^{-2x}}$ Second, we get the particular solution of the non homogeneous ODE. 
Using Table 2.1 we choose:

${\displaystyle \displaystyle y_{p}=e^{-2x}(K\cos 2x+M\sin 2x)}$

Note that neither ${\displaystyle e^{-2x}\cos 2x}$ nor ${\displaystyle e^{-2x}\sin 2x}$ solves the homogeneous ODE (the homogeneous solutions are ${\displaystyle e^{-2x}}$ and ${\displaystyle xe^{-2x}}$), so the Modification Rule does not apply here.

The differentiation is shortened by substituting ${\displaystyle y=e^{-2x}u}$, which reduces the ODE to:

${\displaystyle \displaystyle y''+4y'+4y=e^{-2x}u''=e^{-2x}\sin 2x\Rightarrow u''=\sin 2x}$

With ${\displaystyle u=K\cos 2x+M\sin 2x}$ we have ${\displaystyle u''=-4K\cos 2x-4M\sin 2x}$. Separating into components we get:

${\displaystyle \displaystyle -4K=0}$

${\displaystyle \displaystyle -4M=1}$

Solving the two equations we get:

${\displaystyle \displaystyle K=0}$

${\displaystyle \displaystyle M=-{\frac {1}{4}}}$

Thus,

${\displaystyle \displaystyle y_{p}=-{\frac {1}{4}}e^{-2x}\sin 2x}$

We know that ${\displaystyle y=y_{h}+y_{p}}$, so

${\displaystyle \displaystyle y=c_{1}e^{-2x}+c_{2}xe^{-2x}-{\frac {1}{4}}e^{-2x}\sin 2x}$

Finding the solution to the initial value problem:

${\displaystyle \displaystyle y(0)=1=c_{1}}$

so ${\displaystyle c_{1}=1}$.

Also,

${\displaystyle \displaystyle y'=-2c_{1}e^{-2x}+c_{2}e^{-2x}-2c_{2}xe^{-2x}+{\frac {1}{2}}e^{-2x}\sin 2x-{\frac {1}{2}}e^{-2x}\cos 2x}$

${\displaystyle \displaystyle y'(0)=-1.5=-2c_{1}+c_{2}-{\frac {1}{2}}}$

We solve:

${\displaystyle \displaystyle c_{2}=-1.5+2+0.5=1}$

The final solution is:

${\displaystyle \displaystyle y=(1+x)e^{-2x}-{\frac {1}{4}}e^{-2x}\sin 2x}$

### Author

This problem was proofread by: David Herrick

## Contribution Summary

Problems 3 and 5 were solved and Problems 1 and 9 were proofread by David Herrick

Problems 1 and 2 were solved and Problem 7 was proofread by Joshua House

Problem 9 was solved by Radina Dikova

Problems 4 and 8 were solved by John North

Problem 7 was solved by William Knapper

Problem 6 was solved by Derik Bell

Problems 2, 3, 4, 5, 6, and 8 were proofread by Michael Wallace
# Price and Efficiency Variance

7-41 Price and efficiency variances, problems in standard-setting, and benchmarking.

Stuckey, Inc., manufactures industrial 55-gallon drums for storing chemicals used in the mining industry. The body of the drums is made from aluminum and the lid is made of chemical-resistant plastic. Andy Jorgenson, the controller, is becoming increasingly disenchanted with Stuckey's standard costing system. The budgeted information for direct materials and direct manufacturing labor for June 2011 was as follows:

| Item | Budget |
| --- | --- |
| Drums and lids produced | 5,200 |
| Direct materials price per sq. ft. – Aluminum | $3.00 |
| Direct materials price per sq. ft. – Plastic | $1.50 |
| Direct materials per unit – Aluminum (sq. ft.) | 20 |
| Direct materials per unit – Plastic (sq. ft.) | 7 |
| Direct labor-hours per unit | 2.3 |
| Direct labor cost per hour | $12.00 |

The actual number of drums and lids produced was 4,920. The actual cost of aluminum and plastic was $283,023 (95,940 sq. ft.) and $50,184 (33,456 sq. ft.), respectively. The actual direct labor cost incurred was $118,572 (9,840 hours). There were no beginning or ending inventories of materials. Standard costs are based on a study of the operations conducted by an independent consultant six months earlier. Jorgenson observes that since that study he has rarely seen an unfavorable variance of any magnitude. He notes that even at their current output levels, the workers seem to have a lot of time for sitting around and gossiping. Jorgenson is concerned that the production manager, Charlie Fenton, is aware of this but does not want to tighten up the standards because the lax standards make his performance look good.

1. Compute the price and efficiency variances of Stuckey, Inc., for each direct material and direct manufacturing labor in June 2011.
2. Describe the types of actions the employees at Stuckey, Inc., may have taken to reduce the accuracy of the standards set by the independent consultant. Why would employees take those actions? Is this behavior ethical?
3. If Jorgenson does nothing about the standard costs, will his behavior violate any of the Standards of Ethical Conduct for Management Accountants described in Exhibit 1-7 on page 16?
4. What actions should Jorgenson take?
5. Jorgenson can obtain benchmarking information about the estimated costs of Stuckey's major competitors from Benchmarking Clearing House (BCH). Discuss the pros and cons of using the BCH information to compute the variances in requirement 1.
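As a worked sketch of requirement 1, the variances follow from the two standard formulas: price variance = (actual price − standard price) × actual quantity, and efficiency variance = (actual quantity − standard quantity allowed for actual output) × standard price. With the June 2011 figures (F = favorable, U = unfavorable):

- Aluminum: actual price = $283,023 / 95,940 sq. ft. = $2.95. Price variance = ($2.95 − $3.00) × 95,940 = $4,797 F. Standard quantity allowed = 4,920 × 20 = 98,400 sq. ft., so the efficiency variance = (95,940 − 98,400) × $3.00 = $7,380 F.
- Plastic: actual price = $50,184 / 33,456 sq. ft. = $1.50, so the price variance is $0. Standard quantity allowed = 4,920 × 7 = 34,440 sq. ft., so the efficiency variance = (33,456 − 34,440) × $1.50 = $1,476 F.
- Direct labor: actual rate = $118,572 / 9,840 hours = $12.05. Price (rate) variance = ($12.05 − $12.00) × 9,840 = $492 U. Standard hours allowed = 4,920 × 2.3 = 11,316, so the efficiency variance = (9,840 − 11,316) × $12.00 = $17,712 F.

The string of large favorable efficiency variances is exactly the pattern Jorgenson describes, which is what motivates requirements 2-5.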
# 108 (number)

108 is the natural number following 107 and preceding 109.

| Cardinal | one hundred [and] eight |
| --- | --- |
| Ordinal | 108th |
| Factorization | ${\displaystyle 2^{2}\cdot 3^{3}}$ |
| Divisors | 1, 2, 3, 4, 6, 9, 12, 18, 27, 36, 54, 108 |
| Roman numeral | CVIII |
| Binary | 1101100 |
| Hexadecimal | 6C |

## In mathematics

One hundred eight is an abundant number, a tetranacci number, a Harshad number and a self number. It is the hyperfactorial of 3. 108 is a number that is divisible by the value of its φ function, which is 36. In Euclidean space, the interior angles of a regular pentagon measure 108 degrees each. There are 108 free polyominoes of order 7. It is a split double prime of 2 and 3.

## In other fields

One hundred eight is also:

- The number of beads on a mala (Sanskrit word for a rosary of beads), which usually has beads for 108 repetitions of a mantra.
- The atomic number of hassium.
- In Homer's Odyssey, the number of suitors coveting Penelope, wife of Odysseus.
- Chinese astrology holds that there are 108 sacred stars. (This legend is the basis of the Suikoden series of video games: each of the 108 playable characters represents a star in Chinese astrology.)
- Several different tai chi long forms consist of 108 moves.
- A sacred number in Hinduism, used and referred to in a variety of manners:
  - A symbol of Siva among the Saivites, for Siva Nataraja dances his cosmic dance in 108 poses.
  - Hindu deities have 108 names. Recital of these names, often accompanied by counting on a 108-beaded mala, is considered sacred and is often done during religious ceremonies. The recital is called namajapa.
- Apart from Hinduism, it is also held sacred or otherwise significant by various Buddhist, Sikh, and Jain traditions.
- The number of surat al-Kawthar in the Qur'an, the shortest sura of the Book.
- In Islam, used to refer to Allah.
- The year CE 108 or 108 BCE.
- A significant number in the TV series Lost (also: 4+8+15+16+23+42 = 108).
- The number of stitches on a baseball.
## Beta Spectrum Generator: High precision allowed $\beta$ spectrum shapes

Several searches for Beyond Standard Model physics rely on an accurate and highly precise theoretical description of the allowed beta spectrum. Following recent theoretical advances, a C++ implementation of an analytical description of the allowed beta spectrum shape was constructed. It implements all known corrections required to give a theoretical description accurate to a few parts in 10^4. The remaining nuclear structure-sensitive input can optionally be calculated in an extreme single-particle approximation with a variety of nuclear potentials, or obtained through an interface with more state-of-the-art computations. Due to its relevance in modern neutrino physics, the corresponding (anti)neutrino spectra are readily available with appropriate radiative corrections. In the interest of user-friendliness, a graphical interface was developed in Python with a coupling to a variety of nuclear databases. We present several test cases and illustrate potential usage of the code. Our work can be used as the foundation for current and future high-precision experiments related to the beta decay process.

CC BY 4.0
# How many liters of a 3.0 M H3PO4 solution are required to react with 4.5 g of zinc?

__ H3PO4 + __ Zn → __ Zn3(PO4)2 + __ H2

The volume is 0.0153 L, or 15.3 mL.

2H_3PO_4 + 3Zn -> Zn_3(PO_4)_2 + 3H_2

As you can see, we have a 2:3 mole ratio between H_3PO_4 and Zn. Knowing that the molar mass of Zn is 65.4 g/mol, we can determine the number of Zn moles to be

n_(Zn) = m/(molar mass) = (4.5 g)/(65.4 g/mol) = 0.0688 mol

This means that the number of H_3PO_4 moles is equal to

n_(H_3PO_4) = 0.0688 * 2/3 = 0.0459 mol

Therefore, the volume required is

V = n_(H_3PO_4)/C = (0.0459 mol)/(3.0 mol/L) = 0.0153 L = 15.3 mL
# Homework Help: Ballistic cylinder

1. Nov 27, 2005

### Punchlinegirl

A 14.0 g bullet is fired at 396.1 m/s into a solid cylinder of mass 18.1 kg and a radius 0.45 m. The cylinder is initially at rest and is mounted on a fixed vertical axis that runs through its center of mass. The line of motion of the bullet is perpendicular to the axle and at a distance 9.00 cm from the center. Find the angular velocity of the system after the bullet strikes and adheres to the surface of the cylinder.

First I converted the velocity to angular velocity by dividing by the radius. I used conservation of angular momentum:

$$(MR^2) \omega = (MR^2 + (1/2)MR^2) \omega$$

$$(.14)(.09)(880.2) = ((.14)(.09^2) + (1/2)(18.1)(.45^2)) \omega$$

Solving for omega gave me .545, which wasn't right. Can someone tell me what I'm doing wrong? Thanks

2. Nov 27, 2005

### stuplato

Conservation of momentum (mv = (M+m)v') will get a linear speed, which can be converted to rotational speed if I am remembering right...

3. Nov 27, 2005

### stuplato

You lost a square on the .09 on the left side...

4. Nov 27, 2005

### lightgrav

initial L should be .5 [m.kg.m/s], (easiest by r x p) ... your omega_i is off. Also, 14 gram = 0.014 kg.

Last edited: Nov 27, 2005

5. Nov 28, 2005

### Punchlinegirl

Can you explain to me how you got the .5 for the initial L?
# Overview

The page is dedicated to how to control a brushless DC (BLDC) motor in an embedded system. This includes control methods for both trapezoidal and sinusoidal wound BLDC motors. There are many different ways to control a BLDC motor, from simple hall-effect based switching, to complex encoder-based field-orientated control with space-vector modulation (if you have no idea what these mean, don't worry, read on).

# Acronyms And Terminology

| Subscript | Parameter | Units (metric) | Units (imperial) |
| --- | --- | --- | --- |
| $$T_C$$ | Continuous Torque | Nm | oz-in |
| $$T_{PK}$$ | Peak Torque | Nm | oz-in |
| $$T_{CS}$$ | Continuous Stall Torque | Nm | oz-in |
| $$T_F$$ | Friction Torque | Nm | oz-in |
| $$I_C$$ | Continuous Current | A | A |
| $$I_{PK}$$ | Peak Current | A | A |
| $$N_{nl}$$ | No-Load Speed | rad/s | rpm |
| $$P_T$$ | Rated Power | W | W |
| $$V_T$$ | Terminal Voltage | V | V |
| $$P_I$$ | Input Power | W | W |
| $$P_O$$ | Output Power | W | W |
| $$K_T$$ | Torque Constant | Nm/A | oz-in/A |
| $$K_E$$ | Back EMF Constant | V/rad/s | V/krpm |
| $$K_M$$ | Motor Constant | Nm/sqrt(W) | oz-in/sqrt(W) |
| $$J_R$$ | Rotational Inertia | gram-cm^2 | oz-in-s^2 |

# Motors

Important Parameters:

- Torque Constant: An important parameter which relates the torque to the DC current draw. All other things being equal, higher-quality motors will have a higher torque constant.
- Rated Speed: Most are rated in the range of 3000-6000rpm. The motor can be driven at slightly higher speeds than those rated by a technique called 'flux weakening'.
- Rated Current
- Cogging Torque: This is the maximum torque the motor has when it is not being driven. This causes the rotation of the axle to feel 'bumpy' if rotated by hand when it's not being driven. For smooth operation at low speeds (< 100rpm), the cogging torque has to be small (or nothing at all). You shouldn't be able to feel this 'bumpy' phenomenon on motors with small cogging torques.
- Mechanical Time Constant: The time for the motor to spin up to full speed at rated voltage. Normally between 4-10ms. High-performance motors can have a time constant as low as 1ms to speed up to 30,000rpm!
- Electrical Time Constant
- Winding Inductance: High-precision motors typically have lower inductance. The winding inductance has the useful property of limiting the rate of change of current through the motor, and is what allows you to use PWM to produce smooth current. The lower the inductance, the higher the PWM frequency needed for the same current ripple.
- Number of pole pairs: The number of pole pairs is equal to the number of electrical revolutions per mechanical revolution. Typical BLDC motors have a number of pole pairs varying from 1 to 5. If the motor's datasheet instead states the number of poles, this is just 2x the num. pole pairs. Note that if the motor has more than 1 pole pair, you cannot orientate the motor to a known mechanical position without some type of feedback (hall-effect, encoder).

The 3 phases of a BLDC motor are usually labelled either A, B, C or U, V and W.

Cogging torque is due to the variation in airgap permeance or reluctance of the stator teeth and slots above the magnets as the motor rotates. Ripple torque is the torque produced by the interaction between the stator and rotor MMF. Ripple torque is mainly due to fluctuations in the field distribution.

The motor windings can be wound to give either a trapezoidal or a sinusoidal back-EMF, which relates to the control methods mentioned further down on this page.
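Since the number of pole pairs sets the ratio between electrical and mechanical revolutions, commutation code usually works in electrical angle. A minimal sketch in C (the function name and degree-based units are illustrative choices, not from any particular library):

```c
#include <stdint.h>

// Convert a mechanical shaft angle (degrees, 0-359) into the corresponding
// electrical angle for a motor with the given number of pole pairs.
// One mechanical revolution contains 'polePairs' electrical revolutions.
static inline uint16_t MechToElecAngleDeg(uint16_t mechAngleDeg, uint8_t polePairs)
{
    return (uint16_t)(((uint32_t)mechAngleDeg * polePairs) % 360u);
}
```

For example, a motor with 2 pole pairs at a mechanical angle of 270° sits at an electrical angle of 180°.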
As quoted by Shiyoung Lee, Ph.D. (A Comparison Study of the Commutation Methods for the Three-Phase Permanent Magnet Brushless DC Motor):

The simulation results verify that mismatch of the back-EMF waveform and commutation method produces ripple-rich torque. Therefore, the BLDC motor and trapezoidal (six-step) commutation and the PMSM and sinusoidal commutation are the most desirable combination to produce minimum torque ripple.

So it pays to get the right motor for the right job!

# Trapezoidal vs. Sinusoidal

There are two standard ways of winding the coils. One is to produce a trapezoidal-shaped BEMF, the other produces a sinusoidal-shaped BEMF. Sinusoidal motors have lower torque ripple (less vibration, mechanical stress etc.), but suffer from higher switching losses and greater drive complexity than a trapezoidal motor. For this reason trapezoidal motors are very common. In a sinusoidal motor, current travels through all three windings at any point, while in a trapezoidal motor, current only flows through 2 of the 3 windings.

Sinusoidal motors come under a number of names, including PMSM (permanent magnet synchronous motor), an AC servo motor, or BLAC (brushless AC, a term used by Atmel). Trapezoidal designed motors go under the name BLDC (brushless DC).

# Positional Sensing

Because there are no brushes to switch the current in the windings (commutation), the position of the rotor relative to the stator needs to be known so that the current can be switched externally. There are three main methods for detecting where the rotor is:

1. Hall-Effect Sensors
2. An Encoder
3. Zero-crossing (aka sensor-less)

Hall-effect sensors output a voltage relative to the magnetic field strength (or more technically, the magnetic flux density). Three of them are spaced 120° apart from each other and designed so that their output voltage changes rapidly as the phase coils require switching. This can make the switching electronics easy to implement. Be careful, some hall-effect sensors can be open-drain, even though the motor's datasheet suggests that the output is fully driven!

The encoder method uses, well, an encoder attached to the axle to determine rotor position. This is more complex than the hall-effect method as the encoder output requires decoding. The encoder is typically of the incremental quadrature type, which requires a counter to count the pulses and phase-detection logic to determine the direction. This feedback method can also suffer from glitches which cause the encoder count to drift from the correct value. The PSoC microcontroller has a very nice quadrature-decoding component with built-in glitch filtering.

Zero-crossing has become popular in recent years due to the fact it requires no sensors, making it cheap to implement. It is the method of measuring the voltage of the floating winding during operation (1 winding is always undriven) to determine the position of the rotor. One disadvantage of this method is that it does not work below a minimum speed (because the voltage is too small).

# Trapezoidal Control

A common example of a trapezoidal (or block) commutation cycle of a BLDC motor with hall-sensor feedback is shown in the below table. This will drive the motor in one particular direction. This can be used to form a LUT for quick hardware/software commutation (a code sketch follows the table).

Commutation Table

| HS1 | HS2 | HS3 | A | B | C |
| --- | --- | --- | --- | --- | --- |
| 0 | 0 | 1 | X | HI | LO |
| 0 | 1 | 1 | HI | X | LO |
| 0 | 1 | 0 | HI | LO | X |
| 1 | 1 | 0 | X | LO | HI |
| 1 | 0 | 0 | LO | X | HI |
| 1 | 0 | 1 | LO | HI | X |

where HS1, HS2, and HS3 are the hall-effect sensors, and A, B, and C are the motor phases of a 3-phase star-connected BLDC motor. Note that either the high-side, low-side, or both driver inputs can be modulated with a PWM signal to provide speed control.
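As a rough sketch of how the table above can become a software LUT in C, indexed directly by the 3-bit hall code (HS1 as the most significant bit). The type names are invented for illustration, and the binary literals (0b...) assume a compiler extension such as GCC's:

```c
#include <stdint.h>

// Phase drive states: HI = high-side switch on, LO = low-side switch on,
// X = phase left floating (undriven).
typedef enum { PHASE_X, PHASE_HI, PHASE_LO } phaseDrive_t;
typedef struct { phaseDrive_t a, b, c; } commStep_t;

// Indexed by the hall code (HS1 HS2 HS3 packed as bits 2..0).
static const commStep_t commTable[8] = {
    [0b001] = { PHASE_X,  PHASE_HI, PHASE_LO },
    [0b011] = { PHASE_HI, PHASE_X,  PHASE_LO },
    [0b010] = { PHASE_HI, PHASE_LO, PHASE_X  },
    [0b110] = { PHASE_X,  PHASE_LO, PHASE_HI },
    [0b100] = { PHASE_LO, PHASE_X,  PHASE_HI },
    [0b101] = { PHASE_LO, PHASE_HI, PHASE_X  },
    // 0b000 and 0b111 never occur with healthy 120deg-spaced sensors; they
    // are left zero-initialised (all phases floating), a safe fault state.
};

// Called from the hall-sensor edge interrupt: read the sensors, look up
// the new drive pattern, and hand it to the bridge driver.
static inline commStep_t NextCommStep(uint8_t hallCode)
{
    return commTable[hallCode & 0x07u];
}
```

Reversing the direction is then typically done with a second table holding the complementary drive patterns.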
The schematic below shows the hardware used in a PSoC 5 microcontroller to perform hall-effect based trapezoidal commutation. With the correct mux selects, this control method can run completely from hardware and be completely processor-independent (when running at a constant duty-cycle).

A PSoC schematic containing LUT's and multiplexors for trapezoidal control of a BLDC motor.

# Sinusoidal Control

The benefits:

- Smoother operation/less torque ripple than trapezoidal
- Greater efficiency/less heat dissipation
- Able to run the motor at slower speeds

The downsides:

- Complex control
- Embedded firmware uses more flash/ROM space
- More processing power used
- Slightly lower maximum torque (although third-harmonic injection can reduce this)
- Hall-effect feedback becomes insufficient at low speed with varying loads, and optical encoders are preferred
- Back EMF feedback is not possible

Sinusoidal control (also known as voltage-over-frequency control) is more complex than trapezoidal techniques, but offers smoother operation and better control at slow speeds. Look-up tables (LUT's) are recommended over using the sin() function due to speed issues. The sin() function in C is computationally intensive and can easily create delays that affect the performance of the control algorithm. The implementation of sin() is platform dependent, but for example, using the GCC compiler on a PSoC 5 Cortex-M3 processor, calculating the sin() function three times (once for each phase) took approximately 24,000 clock cycles. With a processor running at 48MHz, this is about a 500us delay. Considering a 4-pole BLDC motor spinning at 6000rpm takes about 830us to move between commutation states, and in each commutation cycle you want at least 6-bit resolution (64 PWM changes), you can see that the delay in calculating sin() is far too large.

With a LUT that stores floats, and a small amount of float multiplication (no divide) on an embedded processor that does not have floating-point hardware support (such as the ARM Cortex-M3), you could expect the look-up and assign process to take around 500-1000 clock cycles (maybe 20us at 48MHz). This is a big reduction over using the $$sin()$$ function!

Sinusoidal control requires three PWM signals with 120° phase displacement, preferably dual-output with adjustable dead-time for synchronous switching.

## Example LUT Code

The following code is an example of how to create a LUT in the C programming language, calculating the LUT values at run-time using the $$sin()$$ function. In space-constrained applications it may pay to pre-calculate the values and save them directly into flash. More help with the C language can be found here.
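A sketch of what such an initialisation routine can look like (table size, scaling and names are illustrative choices; values are scaled 0.0-1.0 so they can be multiplied straight into a PWM compare value):

```c
#include <math.h>
#include <stdint.h>

// 240 entries divides evenly by 3, so the 120-degree phase offset is an
// exact whole number of table steps (80).
#define SINE_LUT_SIZE 240u

static float sineLut[SINE_LUT_SIZE];

// Fill the table once at start-up so the control loop never calls sin().
// 0.5 corresponds to 50% duty, i.e. zero average phase-neutral voltage.
void SineLut_Init(void)
{
    for (uint32_t i = 0; i < SINE_LUT_SIZE; i++) {
        sineLut[i] = 0.5f + 0.5f * sinf(2.0f * 3.14159265f * (float)i / SINE_LUT_SIZE);
    }
}

// Look up the duty for one phase; 'index' is the electrical angle in LUT
// steps. The other two phases use the same table, offset by 120 degrees,
// i.e. by SINE_LUT_SIZE/3 entries.
float SineLut_GetDuty(uint32_t index)
{
    return sineLut[index % SINE_LUT_SIZE];
}
```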
## Third-Harmonic Injection

With a pure phase-neutral sinusoidal drive, the maximum phase-to-phase voltage is only roughly 0.86Vbus. This can be improved by 'injecting' the sine wave with the third harmonic. The first thing you may think is: won't this disrupt my nice and smooth sine wave control? Well, no, because as it happens, the third harmonic is in-phase on every winding (which are 120° apart), and since it is applied to every winding, the phase-to-phase waveform does not change. It does however flatten the phase-neutral waveform, making the PWM become under-modulated. You can then scale this back up to full modulation, which gives approximately a 16% phase-to-phase voltage increase.

To implement third-harmonic injection, all you have to do is add the third harmonic to the sine-wave LUT. The third harmonic has an amplitude that is 1/6 of that of the fundamental (your original sine wave). You'll notice that the maximum value in the LUT has decreased. At this point, scale up all the values in the LUT so they use the full range again.

# Sensor Field Orientated Control (FOC)

The benefits:

- You can control motor parameters which relate directly to its physical behaviour (the variables make sense to a human)
- Angle can be determined from phase currents (usually using a sliding state estimator), no encoder needed

The downsides:

- Greater control complexity than trapezoidal or sinusoidal
- Requires a fast processor to execute the necessary maths
- Requires phase current to be measured (usually with low-side current-sense resistors and an ADC)
- Requires the tuning of three PID loops (usually)

Sensor field orientated control (also called vector control) is a permanent magnet motor control method that allows one to regulate both the speed and torque independently from each other. However, this form of control is more computationally intensive than trapezoidal or sinusoidal. Part of the complexity is due to the need to measure the current going through the windings, while both trapezoidal and sinusoidal only require positional feedback. Clark (aka alpha-beta) and Park (aka d-q) transforms have to be calculated with the phase currents.

## The Clark Transformation (alpha-beta)

The Clark transformation (also called the $\alpha \beta$ transformation, and occasionally called the Concordia transformation, but I don't know why!) is the projection of three separate sinusoidal phase values onto a stationary 2D axis. The Clark transformation equation is shown below:

$$I_{\alpha\beta\gamma} = TI_{abc} = \frac{2}{3} \begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} I_a \\ I_b \\ I_c \end{bmatrix} \text{(unsimplified Clark transform)}$$

where:

$$I_a$$ = current in motor winding A ($$A$$)
$$I_b$$ = current in motor winding B ($$A$$)
$$I_c$$ = current in motor winding C ($$A$$)
$$I_{\alpha\beta\gamma}$$ = the Clark transformed currents ($$A$$)

We are fortunate that when using a star-connected BLDC motor (most are!), $I_{\gamma}$ is 0, so that we can simplify the equation to:

$$I_{\alpha\beta} = TI_{ab} = \begin{bmatrix} 1 & 0 \\ \frac{1}{\sqrt{3}} & \frac{2}{\sqrt{3}} \end{bmatrix} \begin{bmatrix} I_a \\ I_b \end{bmatrix} \text{(simplified Clark transform for star-connected BLDC)}$$

If you want the code to do the Clark Transformation (written in C++, and designed for embedded applications), check out the GitHub repository Cpp-Maths-ClarkTransformation.
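For reference, the simplified star-connected form above reduces to just two multiplies in C (a sketch only, not the linked C++ repository's actual API):

```c
typedef struct { float alpha, beta; } clarke_t;

// Simplified Clark transform, valid when Ia + Ib + Ic = 0 (star-connected
// motor): alpha = Ia, beta = (Ia + 2*Ib)/sqrt(3), matching the 2x2 matrix.
static inline clarke_t ClarkTransform(float ia, float ib)
{
    clarke_t out;
    out.alpha = ia;
    out.beta  = (ia + 2.0f * ib) * 0.5773503f; // 1/sqrt(3)
    return out;
}
```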
## The Park Transformation (dq)

The Park transformation is a projection of three separate sinusoidal phase values onto a rotating 2D axis. The rotating axis (d, q) of the Park transformation rotates at the same speed as the rotor. When the projection occurs, the currents $I_d$ and $I_q$ remain constant (when the motor is at steady-state). It just so happens that $I_d$ controls the magnetizing flux, while $I_q$ controls the torque, and since both parameters are separate, we can control each individually! The Park transformation equation is shown below:

$$I_{dqo} = TI_{abc} = \sqrt{\frac{2}{3}} \begin{bmatrix} \cos(\theta) & \cos(\theta - \frac{2\pi}{3}) & \cos(\theta + \frac{2\pi}{3}) \\ -\sin(\theta) & -\sin(\theta - \frac{2\pi}{3}) & -\sin(\theta + \frac{2\pi}{3}) \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix} \begin{bmatrix} I_a \\ I_b \\ I_c \end{bmatrix} \text{(forward Park transformation)}$$

where:

$$I_a$$ = current in motor winding A (A)
$$I_b$$ = current in motor winding B (A)
$$I_c$$ = current in motor winding C (A)
$$I_{dqo}$$ = the Park transformed currents (A)

If you want the code to do the Park Transformation (written in C++, and designed for embedded applications), check out the GitHub repository Cpp-Maths-ParkTransformation.

## The Control Loop

The following picture shows the control architecture for a PMSM motor controlled with a PSoC microcontroller.

It is standard practice to set $$I_q$$ to some value depending on the torque/speed required, while keeping $$I_d$$ zero. This is because $$I_d$$ does nothing to help make the motor spin, and just wastes electrical energy as heat. However, there is a technique called flux weakening, which is done by making $$I_d$$ negative. It will allow the motor to spin faster than its rated speed, in a zone called 'constant power'. I have had good experiences using this to squeeze more RPM out of BLDC motors that weren't requiring much torque. A good method is to make $$I_d$$ proportional to $$I_q$$ (but always negative, no matter what sign $$I_q$$ is), which essentially gives you a fixed drive angle which is over 90°. You can use this equation to work out the proportion of $$I_q$$ that $$I_d$$ has to be for a certain angle:

$$r = \tan (\theta)$$

where:
$$r$$ = ratio ($$I_d = rI_q$$)
$$\theta$$ = drive angle

FOC typically requires three PID loops: one medium-speed loop for controlling the velocity and two high-speed loops for controlling $$I_d$$ and $$I_q$$.

There are two methods to measure the phase currents. The first involves just one current-sense resistor on the DC return path from the motor, while the second requires three current-sense resistors, one on each leg of the three-phase bridge controlling the motor.

The dynamic equations for FOC linking voltages, currents and torques are:

$$v_{q} = r_{s}i_{q} + L_{q}\frac{di_{q}}{dt} + w_{e}L_{d}i_{d} + w_{e}\lambda_{f}$$

$$v_{d} = r_{s}i_{d} + L_{d}\frac{di_{d}}{dt} - w_{e}L_{q}i_{q}$$

$$T_{m} = \frac{3}{2}\frac{P}{2}[\lambda_{f}i_{q} + (L_{d} - L_{q})i_{d}i_{q}]$$

During constant-flux operation (which is normal operation; all the flux is created by the permanent magnets), $$I_d$$ is 0. This simplifies the torque equation into one similar to a brushed DC motor:

$$T_{m} = \frac{3}{2}\frac{P}{2}\lambda_{f}i_{q} = K_{T}i_{q}$$

## Equations

Field Orientated Control:

$$|V_{abc}| = R|I_{abc}| + \frac{d}{dt}|\Phi_{abc}| \quad \text{(Lenz-Faraday model)}$$

where:

$$R$$ = statoric resistance
$$|V_{abc}| = \begin{bmatrix}v_a & v_b & v_c \end{bmatrix}^T$$ = statoric voltages
$$|I_{abc}| = \begin{bmatrix} i_a & i_b & i_c \end{bmatrix}^T$$ = statoric currents
$$|\Phi_{abc}| = \begin{bmatrix} \Phi_a & \Phi_b & \Phi_c \end{bmatrix}^T$$ = global fields
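In code, the Park step of the control loop above is usually implemented as a plain 2D rotation of the Clark outputs by the electrical rotor angle, rather than the full 3x3 matrix shown earlier (note this two-input form uses the amplitude-invariant convention, not the power-invariant scaling of the 3x3 matrix). A sketch, with names invented for illustration:

```c
#include <math.h>

typedef struct { float d, q; } park_t;

// Rotate the stationary (alpha, beta) currents by the electrical rotor
// angle 'theta' (radians) so they become the DC-like Id/Iq quantities.
// In a real control loop, sinf/cosf would be LUT look-ups (see above).
static inline park_t ParkTransform(float ialpha, float ibeta, float theta)
{
    float c = cosf(theta);
    float s = sinf(theta);
    park_t out = { .d =  c * ialpha + s * ibeta,
                   .q = -s * ialpha + c * ibeta };
    return out;
}
```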
# PWM Frequency

So what PWM frequency do I choose? The PWM frequency is a trade-off between torque ripple and switching losses. Most controllers use a frequency between 10-20kHz. If you want to reduce the audible noise from the motor, try using a frequency between 17-20kHz (outside the audible range of most adults).

Dead-time is only important if you are doing synchronous rectification (i.e. switching on the MOSFET rather than letting the body diode conduct when current is flowing through it in the reverse direction). In this case, during the off-time of the PWM signal, the complementary MOSFET(s) (on the same leg of the bridge) to those that are being switched are turned on. This allows the reverse current that would usually be flowing through the inherent body diodes to instead flow through the MOSFET, resulting in lower heat losses (MOSFETs have a lower on 'resistance' than diodes). Dead-time between turning one leg MOSFET off and the other on is needed to prevent shoot-through. Note that if the duty cycle of the PWM causes either the on or off-time to be less than the dead-time, it will appear as if the PWM stops working. This will happen at its two duty-cycle extremes (at the top and bottom of the sine wave). Don't be alarmed, this is normal and totally acceptable behaviour.

# Open-Loop Feedback

Fully open-loop control (no hall-effect, encoder, or BEMF feedback) can be used in applications where torque ripple and efficiency are not important. The load also has to be relatively constant for this to work.

# Closed-Loop Voltage Feedback

Closed-loop voltage control is when feedback is used to commutate the currents, and the PWM duty cycle is varied from 0-100% to control the speed of the motor (but with no velocity control). With this method, you have full control of the motor's speed from 0-100%, but the motor's speed may vary as the load changes. As the load is increased, the current will increase, and the rpm of the motor will drop. It is called 'voltage' control because you set the PWM duty cycle, which determines the voltage across the windings.

# Closed-Loop Velocity Feedback

Closed-loop velocity control involves controlling the rpm of the motor to a desired speed. The maximum currents must be taken into consideration when doing constant-velocity control, as they may get very large if the motor stalls.

Feedback Can Come From:

- Hall-effect sensors (o.k.)
- Optical encoder (better, especially at slow speeds)
- Back E.M.F (good, but not at slow speeds)

Control Methods:

- PID PWM Control (soft or hard-chopping)

# Closed-Loop Current Feedback

Closed-loop current control will give you a constant output torque. This means that the motor will slow down when the load is increased, and speed up when it is decreased.

Feedback Can Come From:

Control Methods:

- PID PWM Control

# Sliding Mode Observer

Used to provide estimates of the position and speed of the rotor when not using encoder or hall-based feedback.

# Controller Bandwidth

Controller bandwidth is an important term which is used to define the update rate or speed of a particular control section for a motor. It is usually referred to when talking about the "Current Controller Bandwidth", a control loop that measures the phase currents of a brushless motor and updates the control variable accordingly (usually with a PID loop). This is commonly used with FOC control and SVM. Slow current-controller bandwidths are around 300Hz, while faster bandwidths are in the 10-50kHz range (which updates the duty at the same speed as the PWM itself!).
Higher current-controller bandwidths are needed for lower torque ripple and slower motion (1-50rpm).

# Code Execution Time

The execution time of the code which controls the PWM duty cycle is critical, especially when using sinusoidal or sensor field orientated control. Here are the best ways of making sure the code runs quickly:

- Do not use the sin() function; instead use a LUT
- Do not use any division operators (instead, multiply by the inverse, e.g. number1*(1/number2), precalculating the inverse where possible)
- Use fixed-point arithmetic rather than floats or doubles. I have an open-source fixed-point library designed for running on embedded systems.
- Try to minimise function calls. Use the inline keyword if possible
- Precalculate any maths that can be done beforehand
- Make sure the compiler optimises for speed, not space

See the C Programming page for more help on this subject.

# BLDC Maths

Some useful equations…

$$\text{Basic Motor Control:}$$

$$v_{rot} (Hz) = \frac{rpm}{60}$$

$$\text{Commutation Cycles Per Second} = \frac{v_{rot}}{6}$$

$$\text{num. electrical cycles per mechanical cycle} = \text{num. pole pairs} = \frac{\text{num. poles}}{2}$$

$$\text{Sinusoidal Control:}$$

$$V_{phase-neutral} = \frac{V_{bus}}{2}$$

$$V_{phase-phase} = \frac{\sqrt{3}}{2}V_{bus}$$

$$V_{third-harmonic} = \frac{1}{6}V_{fundamental}$$

Flux = magnetic field lines per unit area

# Related IC's

FSB50825US – Fairchild Smart Power Module (SPM)

Package: SPM-23
Shoot-through Protection: No
Onboard Logic: None (microcontroller required)

# Standard Enclosures

The NEMA17 and NEMA23 are two common sizes that BLDC motors come in. This designation defines the mounting-hole pattern for the motor.

The NEMA17 mounting-hole dimensions are shown below. The dimensions are in millimeters.

The NEMA23 mounting-hole dimensions are shown below. The dimensions are in inches.

# External Resources

One of the best documents that I've seen about BLDC motor theory is James Robert Mevey's "Sensorless Field Orientated Control Of Brushless Permanent Magnet Synchronous Motors". It's so good, I even decided to host it on this site in case the original link above goes down. Click below to download locally.

[wpfilebase tag=file id=17 /]

The second best is Microchip's AN885 – Brushless DC (BLDC) Motor Fundamentals. And then once you've read those, check out AN857 – Brushless DC Motor Control Made Easy. This links the theory with a practical implementation of the hardware and firmware, supported with examples. A competing document is NXP's AN10661 – Brushless DC Motor Control Using The LPC2141. For the visual learners, this one has more colourful pictures than Microchip's, but it has less depth.

RoboTeq (www.roboteq.com) have some very cool BLDC motor controllers. They are highly configurable, offer a huge number of features, and can drive very powerful motors (+100A).

If you're a fan of animations (who isn't, visual learning rocks!), then check out Nanotec's Stepper/BLDC animations (http://en.nanotec.com/support/tutorials/stepper-motor-and-bldc-motors-animation/). A flash program which allows you to visually watch the motor's operation as you step through commutation cycles. It also lets you change the type of motor!

If you want to delve into the arena of Sensor Field Orientated control, then Atmel's AVR32723 – Sensor Field Orientated Control For Brushless DC Motors with AT32UC3B0256 should be the first port of call.
"Brushless Permanent Magnet Servomotors" is a good article explaining the differences between BLDC motors and PMSMs, as well as equations for FOC.

Posted: August 3rd, 2012 at 9:09 am
Last Updated on: March 27th, 2017 at 7:38 am

- user99: Good article. Why wasn't BLDC in the acronym list? I had to read half the article to find out what it meant.
- Ha! I've added the term into the Overview section. 🙂
- Jerry Morrow: great article, thanks!
# How to find the axis with minimum moment of inertia?

If a system of particles is given, in a 2D plane, with particles having masses $M_1$, $M_2$, $M_3, \ldots M_n$ and coordinates $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3), \ldots (x_n,y_n)$, then how can one find the axis about which the system has the minimum moment of inertia? I know that among parallel axes, the one that passes through the center of mass has minimum inertia, but which axis among those that pass through the COM has least inertia about it?

Bonus: if possible please explain how one can find this axis for a system of particles in 3D space.

- You can find a general equation for the moment of inertia based on the angles the axis of rotation makes with the z-axis and the x-y plane. Then it becomes an optimization problem. Of course this is easier said than done. This would work in both 2D and 3D. Also, I think in 2D the axis needs to lie in the plane of the particles, so you would only have to optimize with respect to one angle rather than two. Sep 2, 2018 at 13:16
- Related meta post: physics.meta.stackexchange.com/q/10793/2451 Sep 2, 2018 at 17:02

This is actually a nice example of tensors and minimization using Lagrange multipliers. For rotation about the COM, the inertia tensor $\mathbf{I}$ is defined as a symmetric $3\times3$ matrix with elements such as

$$I_{xx} = \sum_k m_k (y_k^2+z_k^2), \quad I_{xy} = I_{yx} = -\sum_k m_k x_k y_k, \quad I_{xz} = I_{zx} = -\sum_k m_k x_k z_k, \quad \ldots$$

where the position vectors $(x_k,y_k,z_k)$ are relative to the COM. Even a 2D arrangement of particles will, in general, have a $3\times3$ inertia tensor: you can rotate them about any axis in 3D space. Because it is a tensor, the moment of inertia associated with rotation about any axis through the COM, represented by a unit vector $\mathbf{n}$, will have a value

$$\mathbf{n}\cdot\mathbf{I}\cdot\mathbf{n}$$

So we can seek the vector $\mathbf{n}$ that minimizes this quadratic form. However, we must remember the constraint that $\mathbf{n}$ is a unit vector, i.e. satisfies $\mathbf{n}\cdot\mathbf{n}=1$. So we can apply the method of Lagrange undetermined multipliers, and minimize without constraints the function

$$\Phi(\mathbf{n}) = \mathbf{n}\cdot\mathbf{I}\cdot\mathbf{n} - \lambda \mathbf{n}\cdot\mathbf{n}$$

This minimum (or maximum) occurs when the gradient of the function with respect to $\mathbf{n}$ vanishes, and this will happen when

$$\mathbf{I}\cdot\mathbf{n} = \lambda \mathbf{n}$$

This is an eigenvalue problem. So the answer to your question is:

1. Diagonalize the inertia tensor, to give its three principal eigenvalues $I_1$, $I_2$, $I_3$.
2. Pick the smallest of these.
3. The corresponding eigenvector is the axis you want.

As mentioned above, provided you calculate the inertia tensor as a $3\times3$ matrix, it makes no difference whether the arrangement of masses is in 2D or 3D. If the particles are all in the $xy$ plane, though, it is easy to show that the $z$ axis is an eigenvector of the inertia tensor, and also (because of the perpendicular axis theorem) that the moment of inertia about the $z$ axis is larger than about any of the axes that lie in the $xy$ plane. Essentially, the problem becomes a $2\times2$ matrix eigenvalue problem.

- Because of the symmetry of the inertia tensor, all eigenvalues are real! – Eli Sep 2, 2018 at 21:19
- I am still in school and haven't studied many of those topics yet (eigenvalues, tensors etc) so I don't fully understand your solution.
Maybe I will understand more once I study those topics. Anyway, thanks for your answer. Sep 4, 2018 at 15:14

- I understand; sorry about that. In the UK, these would typically be covered in the second year of a university physics course. The $\mathbf{n}\cdot\mathbf{I}\cdot\mathbf{n}$ expression is calculating the moment of inertia along any desired axis; then we want to minimize this quantity by finding where its derivative (with respect to all possible rotations of $\mathbf{n}$) is zero. The Lagrange multipliers help us do that. Finding the eigenvalues/eigenvectors is equivalent to finding the "special" orientations of the axes that make the inertia tensor diagonal. One of these is the one you seek. – user197851 Sep 4, 2018 at 15:28
- Anyway, each of these topics (inertia tensor, Lagrange multipliers, eigenvalues) can be studied separately, and hopefully the links given in the answer will be suitable jumping-off points. Good luck! – user197851 Sep 4, 2018 at 15:30

I am solving this for particles distributed in a 2D space. First we know that the moment of inertia of a particle about an axis is given by

$$I=Mr^2$$

and we know that the axes passing through the COM have the minimum moment of inertia. Our job is to find the slope of the axis with minimum inertia; then we can use the slope-point form to find the equation of the required axis:

$$y - y_0 = m ( x - x_0 )$$

(where $m$ is the slope of the line and $(x_0, y_0)$ is the point through which it passes).

Now let us consider the distribution of masses. The coordinates of the COM are given by:

$$x_{cm} = \frac{1}{M} \sum_i M_i x_i$$

$$y_{cm} = \frac{1}{M} \sum_i M_i y_i$$

And now let us shift the origin to the COM to make our calculations easy. If the original coordinates of mass $M_i$ were $(x_i, y_i)$ then the new shifted coordinates are:

\begin{align} x_i' & = x_i - x_{cm} \\ y_i' & = y_i - y_{cm} \end{align}

Hence the axis that we seek now passes through the new shifted origin. Now let the equation of the line be

$$y = mx$$

(as this line passes through the origin, $c = 0$). Then the distance $r_i$ of the $i$th particle from this line is given by:

$$r_i = \frac{|m x_i' -y_i'|}{\sqrt{1+m^2}}$$

Hence the total moment of inertia of the system is:

$$I = \sum_i M_i r_i^2$$

Now we can differentiate it with respect to $m$ (the slope) and equate it to zero to find the minimum:

$$\frac{dI}{dm} = \sum_i \frac{2M_i(mx_i'-y_i')(x_i'+my_i')}{(1+m^2)^2}$$

Equating it to zero gives:

$$\sum_i M_i(mx_i'-y_i')(x_i'+my_i')=0$$

Solving for $m$ gives the following quadratic equation:

$$\left(\sum_i M_i x_i'y_i'\right)m^2 + \left(\sum_i M_i (x_i'^2-y_i'^2)\right)m - \left(\sum_i M_i x_i'y_i'\right) = 0$$

Note that the product of the roots (that is, the slopes) is $-1$, i.e. this gives two axes perpendicular to each other; one of them has the minimum moment of inertia about it and the other has the maximum (they can be told apart by checking the second derivative, or simply by evaluating $I$ for both slopes).

Thus the axis we seek, in the original coordinate system, may be written as:

$$y-y_{cm} = m(x-x_{cm})$$

where $m$ is the slope which corresponds to minimum inertia.
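For anyone who wants to compute this numerically, the 2D recipe above fits in a few lines of C (a sketch; the function name and interface are my own, not from any library):

```c
#include <math.h>

// Slope of the minimum-inertia axis through the COM for point masses m[i]
// at (x[i], y[i]), using the quadratic (Sxy) s^2 + (Sxx - Syy) s - Sxy = 0
// where Sxy = sum(Mi xi' yi') etc. in COM-shifted coordinates.
// Returns INFINITY when the axis is vertical.
double minInertiaSlope(const double *m, const double *x, const double *y, int n)
{
    double M = 0.0, xc = 0.0, yc = 0.0;
    for (int k = 0; k < n; k++) { M += m[k]; xc += m[k] * x[k]; yc += m[k] * y[k]; }
    xc /= M; yc /= M;

    double Sxx = 0.0, Syy = 0.0, Sxy = 0.0;
    for (int k = 0; k < n; k++) {
        double dx = x[k] - xc, dy = y[k] - yc;
        Sxx += m[k] * dx * dx; Syy += m[k] * dy * dy; Sxy += m[k] * dx * dy;
    }

    if (fabs(Sxy) < 1e-12)  // already aligned: I(horizontal) = Syy, I(vertical) = Sxx
        return (Syy <= Sxx) ? 0.0 : INFINITY;

    // Two perpendicular roots of the quadratic; keep the one with smaller
    // inertia I(s) = (s^2*Sxx - 2*s*Sxy + Syy) / (1 + s^2).
    double b = Sxx - Syy;
    double s1 = (-b + sqrt(b * b + 4.0 * Sxy * Sxy)) / (2.0 * Sxy);
    double s2 = -1.0 / s1;
    double I1 = (s1 * s1 * Sxx - 2.0 * s1 * Sxy + Syy) / (1.0 + s1 * s1);
    double I2 = (s2 * s2 * Sxx - 2.0 * s2 * Sxy + Syy) / (1.0 + s2 * s2);
    return (I1 <= I2) ? s1 : s2;
}
```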
Afiles and Avars commands

08-23-2015, 04:00 PM Post: #1

Giancarlo Member Posts: 228 Joined: Dec 2013

Afiles and Avars commands

Hello, I took my Prime on vacation since this year I was busy with other stuff (my job). I think that I lost a lot of things and now I am trying to catch up. The first issue is related to the very useful commands AFiles and AVars. I understood that these commands allow the storage in a folder, and then usage, of files and variables which are part of an app. Apart from the theory, I don't know where I can start from. What I would need is an example of code showing how these commands are used and work. I think these commands are very important for the development of apps with their own variables and files, icons and so on. Do you have an example of working code to share? I searched in the previous threads but I didn't find a lot of information. Thanks, Giancarlo

08-23-2015, 06:59 PM Post: #2

Marcel Member Posts: 156 Joined: Mar 2014

RE: Afiles and Avars commands

Hi! Marcel

08-23-2015, 08:25 PM Post: #3

Giancarlo Member Posts: 228 Joined: Dec 2013

RE: Afiles and Avars commands

Hello Marcel, thank you very much. Unfortunately I cannot open the files since I don't have the laptop with me. As soon as I am back I am going to dig into that code. As a general approach, do I have to save an existing app with a different name? Does it suffice to create a new app with its own folder where I can store all the different files like AVars, AFiles and icon.png? Thank you very much, Giancarlo

08-23-2015, 08:44 PM Post: #4

Marcel Member Posts: 156 Joined: Mar 2014

RE: Afiles and Avars commands

Hi! I don't know about AFILES but AVARS is simple to use. In AstroLab4, many DATA are associated with the program. If you open (double click) the Variables in AstroLab4 in the ConnKit, you will see all the variables that I use for the program. When the program is in use, these variables are available. Marcel.

08-23-2015, 09:01 PM (This post was last modified: 08-23-2015 09:06 PM by Tim Wessman.) Post: #5

Tim Wessman Senior Member Posts: 2,231 Joined: Dec 2013

RE: Afiles and Avars commands

(08-23-2015 08:25 PM)Giancarlo Wrote: As a general approach, do I have to save an existing app with a different name? Does it suffice to create a new app with its own folder where I can store all the different files like AVars, AFiles and icon.png?

AFiles and AVars create or access things within the current application. It doesn't matter if that is a copy of the base application, or a base application.

AFiles will store things to disk. You cannot directly use those items by name. Nor can they be directly accessed. They do not remain in RAM taking up space when not in use. They must be recalled into RAM before use (similar to an item saved in a port on the 48/50 series calcs).

Example:

Code:
AFiles("MyFile"):={1,2,3,4,5}  //stores to a file named "MyFile" inside the current app folder
L1:="MyFile" //would store into L1 the string "MyFile" - not recall your file
L1:=AFiles("MyFile") //recalls content of file, and then stores it into L1

AVars on the other hand creates a variable by name that can hold any object in the current application. They always remain in RAM like any other variable. This is like a normal variable accessed by the menu key in the 48/50 series.

Code:
AVars("MyVar"):={1,2,3,4,5}  //creates a variable named "MyVar" inside the current app
L1:=MyVar; //would store into L1

Note that the normal name resolution rules still apply.
Unless MyVar has already been created, the program parser won't know what to do with it should you send that piece of code to someone.

Code:
MyVar; //I've declared this to be a variable
EXPORT Function()
BEGIN
  AVars("MyVar"):={1,2,3,4,5}  //creates a variable named "MyVar" inside the current app
  L1:=MyVar; //will always work
END;

Whereas:

Code:
EXPORT Function()
BEGIN
  AVars("MyVar"):={1,2,3,4,5}  //creates a variable named "MyVar" inside the current app
  L1:=MyVar; //will only work if you have *already* created this before, else on compile will error here
END;

Recommended uses for AFiles: archiving things - especially if they are large! You don't want your app to take up 5MB of space when not in use, do you? Recommended uses for AVars: variables in common use - especially those that will be smaller or used frequently. If your variable starts taking up lots of space, store it to disk when not in use and clear that memory out to be courteous.

A store into AFiles will happen immediately - e.g. on call it writes at that instant to disk. AVars will only be saved to disk when the application in memory is saved. Usually, this will be on the next off.

TW

Although I work for the HP calculator group, the views and opinions I post here are my own.

08-23-2015, 09:40 PM Post: #6

Giancarlo Member Posts: 228 Joined: Dec 2013

RE: Afiles and Avars commands

Hello Tim, thank you very much for taking the time to read and write this answer. Things seem more clear now. I am going to play with these commands and in case I am lost I am going to use (and abuse) the support of the forum. Thanks, Giancarlo

P.S. Thanks Marcel for AstroLab4. As soon as I am back I am going to look at your code.

08-24-2015, 05:19 AM Post: #7

cyrille de brébisson Senior Member Posts: 976 Joined: Dec 2013

RE: Afiles and Avars commands

Hello, files can also be accessed in binary form (AFilesB), which makes them usable for data storage. You can also store pictures (png) in files, which can be automatically assigned to/from graphics (G0-G9)...

AVars was created as an alternative to user app program exported functions for people who do not want to program, AND/OR for programmers who need variables that 'stay' between 2 compilations of a program.

Cyrille

08-29-2015, 10:26 PM Post: #8

bobkrohn Member Posts: 140 Joined: Dec 2014

RE: Afiles and Avars commands

Oh my. I don't see any entry in the Help or any reference in the Toolbox for these commands. When I do the following from Home:

AFiles("MyFile"):={1,2,3,4,5} //stores to a file named "MyFile" inside the current app folder

It works and I can recall the variable MyFile (apparently a List). But I don't see it in Mem/UserVariables. How can you clear it? And I still don't know where the "App Folders" that everyone is talking about exist. All I see are Apps.

08-30-2015, 01:43 AM (This post was last modified: 08-30-2015 01:45 AM by Tim Wessman.) Post: #9

Tim Wessman Senior Member Posts: 2,231 Joined: Dec 2013

RE: Afiles and Avars commands

(08-29-2015 10:26 PM)bobkrohn Wrote: But I don't see it in Mem/UserVariables.

A file would not be a variable and wouldn't show up. Think of it more like an archival location. However, you are correct that at the moment neither files nor app vars appear in the mem browser. That is something we'd planned to get resolved later in a more cohesive way when the memory screen gets its planned overhaul. AFiles() with no input returns a list of all the files for the app.

Quote: How can you clear it?
DelAFiles() (or DelAVars() for variables)
Quote: And I still don't know where the "App Folders" that everyone is talking about exist. All I see are Apps.
That is the "folder". Each application is represented internally by a folder that contains several files. If you'd like to see what they are exactly, you can open a folder either in your %APPDATA%/HPPrime directory (where the emulator stores stuff), or else in the MyDocuments/HP Connectivity Kit/Calcs/<calc_name> folder.
However, you don't really need to understand the underlying structure or details in order to use things. Just know that you can have files associated with an application. Variables can also be associated with the application (but the contents of those variables remain in RAM even when not in active use at the instant).
TW
Although I work for the HP calculator group, the views and opinions I post here are my own.
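Pulling the thread together, here is a minimal PPL sketch using only the commands named above. The string-argument form of DelAFiles/DelAVars is an assumption (the posts confirm the commands exist but do not show their signatures):
Code: EXPORT Demo() BEGIN   AFiles("Big"):={1,2,3,4,5};  //archived to the app folder on disk   AVars("Small"):=42;          //stays in RAM with the app   L1:=AFiles("Big");           //recall the file's contents into RAM   DelAFiles("Big");            //assumed signature: remove the file   DelAVars("Small");           //assumed signature: remove the app variable END;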
# struct.node.disp.global
Syntax
m := struct.node.disp.global(sn<,i>)
Get the displacement of a structure node, expressed in the global system.
Returns: m - a matrix of displacements covering all degrees of freedom if the argument i is not given, or the displacement at the ith degree of freedom if i is given.
sn - pointer to the structure node
i - degree of freedom, i ∈ {1, 2, … , 6}. The default is 1.
# Proving the special property of diagonal matrix?
1. Jan 8, 2014
### Seydlitz
Is it possible to prove the fact that any function of a diagonal matrix is just the function applied to its diagonal elements? I don't know how I could express the proof. I can prove that a product of diagonal matrices is just the product of their elements using summation notation, or that a diagonal matrix is automatically a symmetric matrix, but for the more general case I'm at a loss. The book that I'm reading also assumes directly that this is the case. Indeed it is the case, but is it just a definition that is unprovable?
I'm asking this because I want to prove that $e^{iD}$ of a diagonal matrix is unitary using a method other than series expansion, and because diagonal matrices are very important for eigenvalues. If I use the fact that the exponential of a diagonal matrix is just the exponential of its elements, then the proof is straightforward. But then you might think this is difficult to justify without proof: $$(e^{iD})_{ij}=e^{iD_{ij}}$$ Thanks
2. Jan 8, 2014
### maajdl
One approach is to use a Taylor expansion. Another is to assume that - in the end - everything ends up in expressions involving the four elementary operations. Taking it as a definition is not that bad, as this is very close to the two other approaches I mentioned.
3. Jan 8, 2014
### Seydlitz
That actually makes sense to me, thanks! :)
4. Jan 8, 2014
### Office_Shredder
Staff Emeritus
It's not true that any function of a diagonal matrix will give a diagonal matrix; for example the function which sends every matrix to $$\left( \begin{array}{cc} 0 & 1\\ 0 & 0 \end{array} \right)$$ clearly outputs non-diagonal matrices. It's a special property that if you define your function with a Taylor series, and the constant term is diagonal, every diagonal matrix will get mapped to a diagonal matrix (whose entries are the appropriate Taylor series).
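For the record, maajdl's Taylor-series argument can be written out compactly. Since powers of a diagonal matrix are entrywise powers,
$$(D^n)_{ij}=(D_{ii})^n\,\delta_{ij}\quad\Longrightarrow\quad (e^{iD})_{ij}=\sum_{n=0}^{\infty}\frac{i^n (D^n)_{ij}}{n!}=\delta_{ij}\sum_{n=0}^{\infty}\frac{(iD_{ii})^n}{n!}=\delta_{ij}\,e^{iD_{ii}}.$$
Note the off-diagonal entries are $0$, not $e^{iD_{ij}}$, so the formula in the question holds only on the diagonal. For real diagonal $D$, $(e^{iD})^{\dagger}e^{iD}$ is then diagonal with entries $e^{-iD_{ii}}e^{iD_{ii}}=1$, which is the unitarity the question is after.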
# Proper way to use LaTeX fonts in XeLaTeX What is the proper way to use LaTeX fonts, for example calligra or other fonts from the Font Catalogue in XeLaTeX. Something like \usepackage{fontspec} \setromanfont{Calligra} doesn't work. However using it just like in pdflatex via \usepackage{calligra} and \calligra works. So what's the best way to use LaTeX fonts from the Fonts Catalogue in XeLaTeX? - \setromanfont is obsolete and \setmainfont should be used. Apart from this, fonts declared via fontspec's features should be OpenType or TrueType, which Calligra isn't. –  egreg May 20 '13 at 21:40 So I should just use it as I described it? What type of font is Calligra? –  student May 21 '13 at 8:18 Calligra is a traditional TeX font designed with Metafont; there is a Type1 version. It's not difficult to turn a Type1 font into an OpenType one, just use Fontforge; it will have no OpenType features, of course. –  egreg May 21 '13 at 8:36 The reason why this is necessary is that Type 1 fonts in pdftex are typically used with a package that provides metrics and encodings. If no equivalent package has been prepared for XeTeX or LuaTeX, the Type 1 font cannot be directly used, as it does not have the right mappings. By contrast, the libertine package for example provides mappings for the Type 1 version for use with pdftex and separate mappings for the OpenType version for use with XeTeX and LuaTeX. Furthermore, the inputenc and fontenc packages are not meant for XeTeX, so one may not load them in order to use packaged Type 1 fonts with this engine. In other words, one may not arbitrarily mix packaged Type 1 and user-provided OpenType fonts in XeTeX. With LuaTeX, as I understand it, it is possible to use a modified fontenc to use packaged Type 1 fonts with something that works like the traditional T1 encoding. Lastly, it is important to note that the T1 (Text 1) font encoding and the Type 1 font format are two different things that happen to have confusingly similar names, even though they are often used together in the context of pdftex. Type 1 is Adobe’s font format, while T1 is the so-called Cork encoding that was developed as a standard in the early nineties by the TeX community.
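Putting the advice in this thread together, a minimal sketch that mixes an OpenType main font (loaded through fontspec) with the Metafont-based calligra loaded the traditional way, compiled with XeLaTeX; Latin Modern Roman is just an example of an installed OpenType font:
\documentclass{article}
\usepackage{fontspec}
\setmainfont{Latin Modern Roman} % any installed OpenType font
\usepackage{calligra}            % traditional Type1/Metafont LaTeX font
\begin{document}
Regular text, and {\calligra some calligraphic text}.
\end{document}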
# LLA to Flat Earth
Estimate flat Earth position from geodetic latitude, longitude, and altitude
## Library
Utilities/Axes Transformations
## Description
The LLA to Flat Earth block converts a geodetic latitude ($\bar{\mu}$), longitude ($\bar{\iota}$), and altitude ($h$) into a 3-by-1 vector of flat Earth position ($\bar{p}$). Latitude and longitude can take any value; however, latitude values of +90 and -90 may return unexpected values because of the singularity at the poles. The flat Earth coordinate system assumes the z-axis is positive downward.
The estimation begins by finding the small changes in latitude and longitude, i.e., the output latitude and longitude minus the initial latitude and longitude:
$$d\mu = \mu - \mu_0, \qquad d\iota = \iota - \iota_0$$
To convert geodetic latitude and longitude to North and East coordinates, the estimation uses the radius of curvature in the prime vertical ($R_N$) and the radius of curvature in the meridian ($R_M$), defined by the following relationships:
$$R_N = \frac{R}{\sqrt{1-(2f-f^2)\sin^2\mu_0}}, \qquad R_M = R_N\,\frac{1-(2f-f^2)}{1-(2f-f^2)\sin^2\mu_0}$$
where $R$ is the equatorial radius of the planet and $f$ is the flattening of the planet. Small changes in the North ($dN$) and East ($dE$) positions are approximated from the small changes in latitude and longitude by
$$dN = \frac{d\mu}{\operatorname{atan}\left(\frac{1}{R_M}\right)}, \qquad dE = \frac{d\iota}{\operatorname{atan}\left(\frac{1}{R_N\cos\mu_0}\right)}$$
Converting the North and East coordinates to the flat Earth x and y coordinates, the transformation has the form
$$\begin{bmatrix} p_x \\ p_y \end{bmatrix} = \begin{bmatrix} \cos\psi & \sin\psi \\ -\sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} N \\ E \end{bmatrix}$$
where $\psi$ is the angle in degrees clockwise between the x-axis and north. The flat Earth z-axis value is the negative altitude minus the reference height ($h_{ref}$):
$$p_z = -h - h_{ref}$$
## Dialog Box
Units
Specifies the parameter and output units:
  Units          Position   Equatorial radius   Altitude
  `Metric (MKS)` Meters     Meters              Meters
  `English`      Feet       Feet                Feet
This option is available only when Planet model is set to `Earth (WGS84)`.
Planet model
Specifies the planet model to use: `Custom` or `Earth (WGS84)`.
Flattening
Specifies the flattening of the planet. This option is available only with Planet model `Custom`.
Equatorial radius
Specifies the radius of the planet at its equator. The units of the equatorial radius parameter should be the same as the units for flat Earth position. This option is available only with Planet model `Custom`.
Initial geodetic latitude and longitude
Specifies the reference location, in degrees of latitude and longitude, for the origin of the estimation and the origin of the flat Earth coordinate system.
Direction of flat Earth x-axis
Specifies the angle for converting flat Earth x and y coordinates to North and East coordinates.
## Inputs and Outputs
  Input    Dimension Type   Description
  First    2-by-1 vector    Contains the geodetic latitude and longitude, in degrees.
  Second   Scalar           Contains the altitude above the input reference altitude, in the same units as flat Earth position.
  Third    Scalar           Contains the reference height from the surface of the Earth to the flat Earth frame, in the same units as flat Earth position. The reference height is estimated with regard to the Earth frame.
  Output   Dimension Type   Description
  First    3-by-1 vector    Contains the position in the flat Earth frame.
## Assumptions and Limitations
This estimation method assumes the flight path and bank angle are zero. It also assumes the flat Earth z-axis is normal to the Earth at the initial geodetic latitude and longitude only. The method has higher accuracy over small distances from the initial geodetic latitude and longitude, and nearer to the equator. The longitude has higher accuracy with smaller variations in latitude. Additionally, longitude is singular at the poles.
## References
Etkin, B. Dynamics of Atmospheric Flight. New York: John Wiley & Sons, 1972.
Stevens, B. L., and F. L. Lewis. Aircraft Control and Simulation, 2nd ed. New York: John Wiley & Sons, 2003.
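The estimation above is easy to check numerically. A short Python sketch that transcribes the equations directly (an illustration, not the block's implementation; the WGS84 constants shown are the usual default values):
```python
import math

def lla_to_flat_earth(lat, lon, h, lat0, lon0, href=0.0, psi_deg=0.0,
                      R=6378137.0, f=1.0 / 298.257223563):
    """Flat-Earth [px, py, pz] from geodetic lat/lon (degrees) and altitude."""
    dmu = math.radians(lat - lat0)            # small change in latitude
    diota = math.radians(lon - lon0)          # small change in longitude
    mu0 = math.radians(lat0)
    e2 = 2.0 * f - f * f
    RN = R / math.sqrt(1.0 - e2 * math.sin(mu0) ** 2)       # prime vertical
    RM = RN * (1.0 - e2) / (1.0 - e2 * math.sin(mu0) ** 2)  # meridian
    dN = dmu / math.atan2(1.0, RM)            # atan(1/RM)
    dE = diota / math.atan2(1.0, RN * math.cos(mu0))
    psi = math.radians(psi_deg)               # x-axis direction from north
    px = math.cos(psi) * dN + math.sin(psi) * dE
    py = -math.sin(psi) * dN + math.cos(psi) * dE
    pz = -h - href                            # z positive downward
    return px, py, pz

print(lla_to_flat_earth(46.01, 7.42, 1200.0, 46.0, 7.4))
```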
# A Heterogeneous model parameters
Many model parameters can be made dependent on the reference coordinates. This is useful, for instance, for defining inhomogeneous materials. There are two mechanisms for creating heterogeneous parameters: a mathematical expression, or a map. The type of the parameter is set via the type attribute, which can take on three values:
  type    description
  math    Define a math parameter via a mathematical expression
  map     Define a mapped parameter
  const   Define a constant parameter (default)
The math and map parameter types are described in more detail in the sections below. The type attribute can be omitted. In that case, the parameter is assumed to be a const parameter, unless the tag's value is not a number, in which case it is assumed to be a math parameter.
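As a sketch, a math parameter might look like this in an input file. The material section and the parameter names (E, v) are hypothetical placeholders, and the exact expression syntax is defined in Section A.1 (Math parameters):
```xml
<!-- hypothetical material section: modulus varying with the reference X coordinate -->
<material id="1" name="gradient" type="neo-Hookean">
  <E type="math">1.0 + 0.5*X</E>
  <v type="const">0.3</v>
</material>
```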
# Could the Monty-Hall Problem be applied to multiple choice tests?
Given a multiple choice test where each question contains 4 possible answers, what would happen if before beginning the test (before reading the questions), someone were to make a random selection for each question? At this point it seems logical that for a given question the student has a 1/4 chance of their choice being correct and a 3/4 chance of one of the other choices being correct. Let's say that they now begin to read the questions, and in some cases they can deduce that one of the provided answers, which was not the one that they picked, is not correct (let's assume that there is no error in this deduction).
In the scenario with the Monty Hall problem, the probabilities did not change once the door was opened; they just shifted. By applying the same logic, the originally selected answer has a 1/4 chance of being correct and the other three have a 3/4 chance of being correct, except that since one was deduced to be incorrect, the two remaining options share that 3/4 chance, and so switching answers would increase the odds of being correct to $\frac{1}{2} \cdot \frac{3}{4}$. Is this an accurate assumption or are there pitfalls in doing this?
If this is the case, then what happens if another deduction is made such that their original answer was determined to be incorrect? It seems that there would be no change in the odds, but that seems unlikely.
No, you can't apply the Monty Hall problem to a multiple choice test. The difference is that in the Monty Hall problem there is a person who knows where the winning door is, and who always opens a door which you didn't select and which contains a goat, after you make your first choice. In the situation you describe, you assume you can rule out one incorrect answer, which allows you to "open a door with a goat". However, it is possible that this is exactly the answer which you blindly selected first, a scenario which isn't possible in the Monty Hall problem, because there the other person chooses the door he opens depending on your choice, which is not the case in the multiple choice test.
# Mathematical explanation
Let's assume we have a multiple choice question with four alternatives. You choose an answer at random, without reading the question. After that you read the question with the answers. We assume that you can exclude one possible answer with certainty, but you have no knowledge concerning the other three answers; they are all equally likely to be correct. There are now two possibilities:
Alternative 1: You initially chose the answer that surely isn't correct (chance of this happening is $\frac14$)
You obviously want to switch. The chance of getting the right answer is $\frac13$, assuming that each remaining answer is just as likely to be right.
Alternative 2: You initially chose another answer than the one that surely isn't correct (chance of this happening is $\frac34$)
Now we can analyze this like the Monty Hall problem.
Possibility 1: you initially chose the correct answer (chance is $\frac13$). You switch answers and are now incorrect.
Possibility 2: you initially chose a wrong answer (chance is $\frac23$). You switch answers and are now correct with chance $\frac12$.
So your chance of being correct for alternative 2 is: $\frac13 \cdot 0 + \frac23 \cdot \frac12 = \frac13$.
Conclusion
Your chances of getting the right answer are always $\frac13$: the same as if you had immediately read the question, eliminated the answer you know is incorrect, and chosen one of the remaining answers at random.
Instinct tells me no - switching shouldn't make a difference, but instinct is what gets most people the incorrect answer for Monty Hall. However, I'll try to explain it: In the Monty hall problem, the host chooses which doors to show you (or to show you which answers are incorrect). In your exam, the person who can "show you" incorrect answers is you. You do not, unlike the host, have knowledge of what the incorrect/ correct answers are which I think is the fundamental difference between the situations. Therefore, finding out whether an answer is incorrect is random. You have an equal chance of finding out that the answer you selected was incorrect as finding out one of the other answers was incorrect. In Monty Hall, the host will never tell you that the door you selected was a goat. • This is a good counter-argument; could you provide more proof to support it? – Klik Mar 5 '16 at 22:10 • Are you looking for a mathematical one? – Shuri2060 Mar 5 '16 at 22:17 • Yes, something more definitive that can remove all doubt. – Klik Mar 5 '16 at 22:22 • I'm afraid I can't think of anything for now. But perhaps a better answer will come along which can. Part of the problem for me is understanding the exact conditions of the situation you provide - is the student only able to find out if his non-selected answers are incorrect (and never his selected one)? Can he only find a maximum of one definitely incorrect for every question? Because unless if you put something like those conditions, this situation is slightly different from Monty Hall, but if you do put conditions like those, then the situation is non-realistic (compared to an actual exam). – Shuri2060 Mar 5 '16 at 22:26 • The situation should model an actual exam. It should be as I've described it above where any answer could be determined as being incorrect by deduction (with 100% accuracy) and where an answer was originally randomly selected. – Klik Mar 6 '16 at 0:29
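The accepted analysis is easy to verify by simulation. A minimal Python sketch, assuming the ruled-out answer is uniform among the three wrong ones and the student always switches:
```python
import random

def trial():
    correct = random.randrange(4)
    pick = random.randrange(4)                    # blind initial choice
    wrong = [a for a in range(4) if a != correct]
    ruled_out = random.choice(wrong)              # the answer we deduce is wrong
    # switch to a uniformly random answer that is neither our pick
    # nor the ruled-out one (3 candidates if they coincide, else 2)
    candidates = [a for a in range(4) if a not in (pick, ruled_out)]
    return random.choice(candidates) == correct

n = 10**6
print(sum(trial() for _ in range(n)) / n)         # ~0.333, matching the 1/3 result
```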
Diff from to doc/beamerug-overlays.tex The |\pause| command also updates the counter |beamerpauses|. You can change this counter yourself using the normal \LaTeX\ commands |\setcounter| or |\addtocounter|. -Any occurence of a |+|-sign may be followed by an \emph{offset} in round brackets. This offset will be added to the value of |beamerpauses|. Thus, if |beamerpauses| is 2, then |<+(1)->| expands to |<3->| and |<+(-1)-+>| expands to |<1-2>|. +Any occurence of a |+|-sign may be followed by an +\emph{offset} in round brackets. This offset will +be added to the value of |beamerpauses|. Thus, if +|beamerpauses| is 2, then |<+(1)->| expands to +|<3->| and |<+(-1)-+>| expands to |<1-2>|. For example +\begin{verbatim} +\begin{frame} +\frametitle{Method 1} +\begin{itemize} +\item<2-> Apple +\item<3-> Peach +\item<4-> Plum +\item<5-> Orange +\end{itemize} +\end{verbatim} +and +\begin{verbatim} +\begin{itemize}[<+(1)->] +\item Apple +\item Peach +\item Plum +\item Orange +\end{itemize} +\end{verbatim} +are equivalent. There is another special sign you can use in an overlay specification that behaves similarly to the |+|-sign: a dot. When you write |<.->|, a similar thing as in |<+->| happens \emph{except} that the counter |beamerpauses| is \emph{not} incremented and \emph{except} that you get the value of |beamerpauses| decreased by one. Thus a dot, possibly followed by an offset, just expands to the current value of the counter |beamerpauses| minus one, possibly offset. This dot notation can be useful in case like the following: \begin{verbatim}
Reports tagged with Forrelation:
TR19-179 | 7th December 2019
Avishay Tal
#### Towards Optimal Separations between Quantum and Randomized Query Complexities
Revisions: 1
The query model offers a concrete setting where quantum algorithms are provably superior to randomized algorithms. Beautiful results by Bernstein-Vazirani, Simon, Aaronson, and others presented partial Boolean functions that can be computed by quantum algorithms making much fewer queries compared to their randomized analogs. To date, separations of $O(1)$ vs. ...
TR20-101 | 7th July 2020
Uma Girish, Ran Raz, Wei Zhan
#### Lower Bounds for XOR of Forrelations
The Forrelation problem, first introduced by Aaronson [AA10] and Aaronson and Ambainis [AA15], is a well studied computational problem in the context of separating quantum and classical computational models. Variants of this problem were used to give tight separations between quantum and classical query complexity [AA15]; the first separation between ...
TR20-127 | 21st August 2020
Nikhil Bansal, Makrand Sinha
#### $k$-Forrelation Optimally Separates Quantum and Classical Query Complexity
Revisions: 2
Aaronson and Ambainis (SICOMP '18) showed that any partial function on $N$ bits that can be computed with an advantage $\delta$ over a random guess by making $q$ quantum queries, can also be computed classically with an advantage $\delta/2$ by a randomized decision tree making ${O}_q(N^{1-\frac{1}{2q}}\delta^{-2})$ queries. Moreover, they conjectured ...
TR20-128 | 3rd September 2020
Alexander A. Sherstov, Andrey Storozhenko, Pei Wu
#### An Optimal Separation of Randomized and Quantum Query Complexity
Revisions: 1
We prove that for every decision tree, the absolute values of the Fourier coefficients of given order $\ell\geq1$ sum to at most $c^{\ell}\sqrt{{d\choose\ell}(1+\log n)^{\ell-1}},$ where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant. This bound is essentially tight and settles a ...
TR21-164 | 19th November 2021
Scott Aaronson, DeVon Ingram, William Kretschmer
#### The Acrobatics of BQP
We show that, in the black-box setting, the behavior of quantum polynomial-time (${BQP}$) can be remarkably decoupled from that of classical complexity classes like ${NP}$. Specifically: -There exists an oracle relative to which ${NP}^{{BQP}}\not \subset {BQP}^{{PH}}$, resolving a 2005 problem of Fortnow. Interpreted another way, we show that ${AC^0}$ circuits ...
# Question 3c3a8
Aug 1, 2017
$566$ $\text{kJ}$
#### Explanation:
The formula for kinetic energy is $E_{\text{K}} = \frac{1}{2} m v^2$, where $m$ is the mass of an object and $v$ is its velocity.
Let's substitute the given values into the formula:
$$\Rightarrow E_{\text{K}} = \frac{1}{2} \cdot 1612\ \text{kg} \cdot (95.4\ \text{km/h})^2$$
First, let's express $95.4$ $\text{km/h}$ in terms of $\text{m s}^{-1}$:
$$\Rightarrow E_{\text{K}} = 806\ \text{kg} \cdot \left(95.4 \cdot \frac{1000}{3600}\ \text{m s}^{-1}\right)^2 = 806\ \text{kg} \cdot (26.5\ \text{m s}^{-1})^2$$
$$\Rightarrow E_{\text{K}} = 806\ \text{kg} \cdot 702.25\ \text{m}^2\ \text{s}^{-2} = 566{,}013.5\ \text{kg}\ \text{m}^2\ \text{s}^{-2}$$
Then, since $1\ \text{J} = 1\ \text{kg}\ \text{m}^2\ \text{s}^{-2}$:
$$\Rightarrow E_{\text{K}} = 566{,}013.5\ \text{J}$$
Now, we must express the kinetic energy in kilojoules, so let's divide the value by $10^3$:
$$\therefore E_{\text{K}} = \frac{566{,}013.5}{10^3}\ \text{kJ} \approx 566\ \text{kJ}$$
Therefore, the kinetic energy of this vehicle is around $566$ $\text{kJ}$.
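A quick numerical check of the arithmetic in Python:
```python
m = 1612.0                # mass in kg
v = 95.4 * 1000 / 3600    # 95.4 km/h converted to m/s (= 26.5)
E_K = 0.5 * m * v**2      # kinetic energy in joules
print(E_K, E_K / 1e3)     # 566013.5 J, ~566 kJ
```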
# Problem with Simplify, Sqrt, and Set: What's going on? I am having some trouble getting Mathematica to simplify an expression of the form Sqrt[x]*Sqrt[1/x], where x>0. The problem is that x is assigned to some complicated form by the time Mathematica encounters it, and it fails to recognize that it will simplify. While debugging this problem, I wrote the following code that fails to simplify only when x gets sufficiently complicated. The cases involving x0,x1,x2,x3 will all simplify, but the x4 case will not. What's going on here? x0 = a; x1 = a + b; x2 = a + b*c; x3 = a + b*c*d; x4 = a + b*c*d*e; Simplify[1 == Sqrt[a] Sqrt[1/a], x0 > 0] Simplify[1 == Sqrt[a + b] Sqrt[1/(a + b)], x1 > 0] Simplify[1 == Sqrt[a + b*c] Sqrt[1/(a + b*c)], x2 > 0] Simplify[1 == Sqrt[a + b*c*d] Sqrt[1/(a + b*c*d)], x3 > 0] Simplify[1 == Sqrt[a + b*c*d*e] Sqrt[1/(a + b*c*d*e)], x4 > 0] • Welcome to Mathematica.SE! I suggest that: 1) You take the introductory Tour now! 2) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign! 3) As you receive help, try to give it too, by answering questions in your area of expertise. Feb 27 '15 at 23:01 • This looks very strange. fyi, I found a problem tracing this when there are 4 symbols only. Feb 28 '15 at 0:17 Probably some internal weirdness with the ComplexityFunction, but: Simplify[Sqrt[1/(a + b c d e )] Sqrt[a + b c d e ]==1] // PowerExpand Simplify[1 == Sqrt[a + b*c*d*e] Sqrt[1/(a + b*c*d*e)], x4 > 0] // PowerExpand (* True True *) • Yes, but PowerExpand yields True even without Simplify. Feb 28 '15 at 0:58 • @bbgodfrey: Of course it does. What would you expect it to do with the equality? – ciao Feb 28 '15 at 0:59 • No offense meant. I merely was observing that the use of PowerExpand does not explain why Simplify is not producing the expected result in this case. Feb 28 '15 at 1:04 • @bbgodfrey:Oh, none taken! Hypothesis non fingo on why MMA is behaving that way, other than theory in my answer. Just offering a work-around... – ciao Feb 28 '15 at 1:07 The cases involving x0, x1, x2, x3 will all simplify, but the x4 case will not. What's going on here? Simplify[1 == Sqrt[a + b*c*d*e] Sqrt[1/(a + b*c*d*e)], x4 > 0] Sqrt[1/(a + b c d e)] Sqrt[a + b c d e] == 1 With x4 added, you have one too many variables for the assumptions to work. Maximum number of variables in non-linear expressions for the assumptions to be processed in Simplify and FullSimplify is 4 (the value of "AssumptionsMaxNonlinearVariables" sub-option of the system option "SimplificationOptions") "SimplificationOptions" /. SystemOptions["SimplificationOptions"] "AssumptionsMaxNonlinearVariables" -> 4, "AssumptionsMaxVariables" -> 21, "AutosimplifyTrigs" -> True, "AutosimplifyTwoArgumentLog" -> True, "FiniteSumMaxTerms" -> 30, "FunctionExpandMaxSteps" -> 15, "ListableFirst" -> True, "RestartELProver" -> False, "SimplifyMaxExponents" -> 100, "SimplifyToPiecewise" -> True} You can reset this sub-option value to a large enough number SetSystemOptions["SimplificationOptions" -> {"AssumptionsMaxNonlinearVariables" -> 10}]; Simplify[1 == Sqrt[a + b*c*d*e] Sqrt[1/(a + b*c*d*e)], x4 > 0] True Related Q/As:
# Merging bed records based on name I generated a file starting with the following bed lines: $head -6 /tmp/bed_with_gene_ids.bed I 3746 3909 "WBGene00023193" . - I 3746 3909 "WBGene00023193" . - I 4118 4220 "WBGene00022277" . - I 4118 4358 "WBGene00022277" . - I 4118 10230 "WBGene00022277" . - I 4220 4223 "WBGene00022277" . - I would like to merge them based on the name field (the 4-th column), taking the min for the start and the max for the end. Other fields are expected to be the same for all records having the same name. Expected result: I 3746 3909 "WBGene00023193" . - I 4118 10230 "WBGene00022277" . - I found a potential solution based on bedtools groupby here: https://www.biostars.org/p/145751/#145775 Sample data: cat genes.bed chr14 49894259 49895806 ENSMUST00000053290 0.000000 ... chr14 49894873 49894876 ENSMUST00000053290 0.000000 ... chr14 49894876 49895800 ENSMUST00000053291 0.000000 ... chr14 49895797 49895800 ENSMUST00000053291 0.000000 ... chr14 49901908 49901941 ENSMUST00000053291 0.000000 ... Example output: sort -k4,4 genes.bed \ | groupBy -g 1,4 -c 4,2,3 -o count,min,max \ | awk -v OFS='\t' '{print$1, $4,$5, $2,$3}' chr14 49894259 49895806 ENSMUST00000053290 2 chr14 49894876 49901941 ENSMUST00000053291 3 However: 1. I don't understand the groupBy behaviour (Why -g 1,4 and not just -g 4?, Why -c 4,2,3 in this order and then rearrange things using awk?) 2. This code doesn't work for me. Here is what happens when I try the solution given above: $head -3 /tmp/bed_with_gene_ids.bed | bedtools groupby -g 1,4 -c 4,2,3 -o count,min,max | awk -v OFS='\t' '{print$1, $4,$5, $2,$3}' 3 3746 4220 Here are attempt based on what I thought could work according to the documentation: $head -6 /tmp/bed_with_gene_ids.bed | bedtools groupby -g 4 -c 1,2,3,4,5,6 -o first,min,max,distinct,first,first I 3746 10230 "WBGene00022277","WBGene00023193" . -$ head -6 /tmp/bed_with_gene_ids.bed | bedtools groupby -g 4 -c 1,2,3,4,5,6 -o first,min,max,last,first,first I 3746 10230 "WBGene00022277" . - $head -6 /tmp/bed_with_gene_ids.bed | bedtools groupby -g 4 -c 1,2,3,5,6 -o first,min,max,first,first I 3746 10230 . - I don't get why when I group based on the 4-th column, for which I have two distinct values, I cannot obtain two lines in the resulting output. I understand based on the comments on the documentation page that the documentation is not up-to-date. In particular, there is a -full option that is needed if one wants all fields to be outputted. Re-reading the solution mentioned above, I think I now understand the reason for the multiple columns for the -g option and for the awk rearrangement. Hence the following attempt. $ head -6 /tmp/bed_with_gene_ids.bed | bedtools groupby -g 1,4,5,6 -c 2,3 -o min,max -full I 3746 3909 "WBGene00023193" . - 3746 10230 But this still doesn't give me two lines. Are there other tools that could do what I want efficiently? ### Edit: Solution According to this answer, the problem with bedtools is that there is a bug in the latest release (2.26.0 as of august 2017). In order to have a functional bedtools groupby, one needs to get the development version from github. With the github version of bedtools, I can now get the expected result as follows: $head -6 /tmp/bed_with_gene_ids.bed | bedtools groupby -g 1,4,5,6 -c 2,3 -o min,max | awk -v OFS="\t" '{print$1,$5,$6,$2,$3,$4}' I 3746 3909 "WBGene00023193" . - I 4118 10230 "WBGene00022277" . - I include fields 1, 5 and 6 in -g (besides field 4) in order to have them printed out. 
In my bed file, they should be the same for a given value of field 4. The awk part is needed because one has apparently not total control on the output order: the -g fields come before the -c fields. • What do you want to do with the score and strand fields if they're different between lines, or does that never happen? – Devon Ryan Aug 10 '17 at 12:55 • Actually, I don't care about the score field, and I would ideally set it to "." if it is not already the case. I cannot guarantee that the strand field will always be the same, but since these bed lines come from transcript annotations whose gene_id I've put in the name field, I suppose it will generally be true that for a same name, the strand will be the same. I should check this, though. – bli Aug 10 '17 at 15:06 ## 5 Answers Although you don't mention it, I'm guessing you're using bedtools v2.26.0. Version 2.26.0 of groupBy has a bug in it, which you've encountered (it was fixed shortly after release, so you'll either have to use a version before the bug was introduced, or compile the current source code yourself from https://github.com/arq5x/bedtools2) v2.26.0: local10:~/Documents/tmp$ cat asdf.bed I 3746 3909 WBGene00023193 . - I 3746 3909 WBGene00023193 . - I 4118 4220 WBGene00022277 . - I 4118 4358 WBGene00022277 . - I 4118 10230 WBGene00022277 . - I 4220 4223 WBGene00022277 . - local10:~/Documents/tmp$groupBy -i asdf.bed -g 4 -c 2,3 -o min,max 3746 10230 v2.26.0-125-g52db654 (I.E. compiling the source code from github): local10:~/Documents/tmp$ bedtools2/bin/groupBy -i asdf.bed -g 4 -c 2,3 -o min,max WBGene00023193 3746 3909 WBGene00022277 4118 10230 1) You might notice that my output above gives the grouped columns first; you'll have to reorder the output via awk in order to get it back in order. As for why they chose to group on both columns 1 and 4: if you have the same name on multiple chromosomes, you may want to treat them as separate features. 2) Version differences, as stated in the first part of my answer. To actually merge the file: Make sure to run this with a version other than v2.26.0 (As Devon Ryan writes in the comments, you may want to add column 6 to -g to make it strand-specific): ./bedtools2/bin/groupBy -i asdf.bed -g 1,4 -c 2,3,5,6 -o min,max,first,first \ | awk -v OFS='\t' '{print $1,$3, $4,$2, $5,$6}' I 3746 3909 WBGene00023193 . - I 4118 10230 WBGene00022277 . - • If you include 6 in -g 1,4 then you benefit by not merging genes on different strands. UCSC sometimes has those and they really aren't the same gene and shouldn't be merged together. You don't need 1 in -c, or 6 if you add it to -g. – Devon Ryan Aug 10 '17 at 18:30 You could do this with the CGAT toolkit: cgat bed2bed --method=merge --merge-by-name -I bed_with_gene_ids.bed Installing such a massive package might be overkill for this task though. • It happens that cgat is already installed on my computer (though I forgot for what purpose). I tried the command you suggest, and I end up with a duplicate of I 3746 3909 "WBGene00023193" . - . Granted, there were duplicate lines in the original bed. But is this behaviour expected? – bli Aug 10 '17 at 15:32 • Also, If I run this on the whole file and not just the first 6 lines, after a while, the program fails on TypeError: '<' not supported between instances of 'Bed' and 'Bed'. I'm upgrading cgat to see if the error persists. 
– bli Aug 10 '17 at 15:45 • I reported the issues here: github.com/CGATOxford/cgat/issues/347 – bli Aug 10 '17 at 16:27 • Your first problem is not as far as I am aware the intended behaviour. And I'm pretty sure the second is not intended. I suggest you file a bug report. – Ian Sudbery Aug 10 '17 at 16:27 • We crossed posts! – Ian Sudbery Aug 10 '17 at 16:30 You can do this easily with Hail. Hail primarily uses BED files to annotate genetic datasets (see the last annotate_variants_table example), but you can manipulate BED files using Hail's general facilities for manipulating delimited text files. For example: $cat genes.bed I 3746 3909 "WBGene00023193" . - I 3746 3909 "WBGene00023193" . - I 4118 4220 "WBGene00022277" . - I 4118 4358 "WBGene00022277" . - I 4118 10230 "WBGene00022277" . - I 4220 4223 "WBGene00022277" . - The Hail script (python code): from hail import * hc = HailContext() (hc .import_table('genes.bed', impute=True, no_header=True) .aggregate_by_key('f0 = f0, f3 = f3', 'f1 = f1.min(), f2 = f2.max(), f4 = ".", f5 = "-"') .select(['f0', 'f1', 'f2', 'f3', 'f4', 'f5']) .export('genes_merged.bed', header=False)) The result: $ cat genes_merged.bed I 3746 3909 WBGene00023193 . - I 4118 10230 WBGene00022277 . - I aggregate over chrom and name so this solution won't merge entries on different chromosomes. The select is necessary to reorder the fields because aggregate_by_key places the keys being aggregated over first. Disclosure: I work on Hail. $cut -f4-6 in.bed | sed 's/\t/_/g' | sort | uniq | awk -F'_' '{ system("grep "$1" in.bed | bedops --merge - "); print $0; }' | paste -d "\t" - - | sed 's/_/\t/g' | sort-bed - > answer.bed Given your sample input: $ more in.bed I 3746 3909 "WBGene00023193" . - I 3746 3909 "WBGene00023193" . - I 4118 4220 "WBGene00022277" . - I 4118 4358 "WBGene00022277" . - I 4118 10230 "WBGene00022277" . - I 4220 4223 "WBGene00022277" . - The answer.bed file: $more answer.bed I 3746 3909 "WBGene00023193" . - I 4118 10230 "WBGene00022277" . - Sorting with sort-bed is useful at the end, so that you can pipe it or work with it with other BEDOPS tools, or other tools that now accept sorted BED input. Streaming is a pretty efficient way to do things, generally. How this works Here's the pipeline again: $ cut -f4-6 in.bed | sed 's/\t/_/g' | sort | uniq | awk -F'_' '{ system("grep "$1" in.bed | bedops --merge - "); print$0; }' | paste -d "\t" - - | sed 's/_/\t/g' | sort-bed - > answer.bed We start by cutting columns 4 through 6 (id, score and strand), replacing tabs with underscores, sorting and removing duplicates: cut -f4-6 in.bed | sed 's/\t/_/g' | sort | uniq What we get out of this is a sorted list of "needles" — one for each ID-score-strand combination: an ID-needle — that we can use to grep or filter the original BED file. This list is piped to awk which, for each ID-needle, runs grep against the original BED file and pipes the subset to bedops --merge -, which merges overlapping intervals. Note that merging only works for overlapping intervals. Merging is not necessarily the same as returning a min-max pair, and this pipeline will break if there are intervals that do not overlap. But you could modify the awk statement to process the input intervals and return the minimum and maximum interval coordinates, if that is really what you want, by tracking the min and max values over all intervals that come into awk, and printing a final interval with an END block. The system command prints the merged interval on one line. 
The following print $0 statement prints the needle on the next line: awk -F'_' '{ system("grep "$1" in.bed | bedops --merge - "); print $0; }' We take each pair of alternating lines and re-linearize them with paste. This result now contains four columns: the three columns of each merged interval, and the ID-needle. We then use sed to replace underscores with tabs, so that we turn the ID-needle back into three, tab-separated ID-score-strand columns: paste -d "\t" - - | sed 's/_/\t/g' The output is now a six-column BED file, but it is ordered by the sort order we applied on ID-needles further up the pipeline, which we don't want. What we really want is BED that is sorted per BEDOPS sort-bed, so that we can do more set operations and get a correct result. So we pipe this to sort-bed - to write a sorted file to answer.bed: sort-bed - > answer.bed • Thanks for the answer, it works, and I think I understood how. Maybe some explanations about the different steps could be useful. – bli Aug 11 '17 at 10:58 If you're 100% sure that everything but the start and end positions will be the same for all lines sharing a name, you could just do it yourself. For example, in Perl: $ perl -lane '$start{$F[3]}||=$F[1]; if($F[1] < $start{$F[3]} ){ $start{$F[3]} = $F[1] } if($F[2] > $end{$F[3]} ){ $end{$F[3]} = $F[2] }$chr{$F[3]} =$F[0]; $rest{$F[3]} = join "\t", @F[4,$#F]; END{ foreach$n (keys %chr){ print "$chr{$n}\t$start{$n}\t$end{$n}\t$n\t$rest{\$n}" } }' file.bed I 3746 3909 "WBGene00023193" . - I 4118 10230 "WBGene00022277" . - • I was hoping that an efficient tool already existed and would avoid me re-inventing the wheel in a slow scripting language. – bli Aug 10 '17 at 15:03 • @bli absolutely, that makes much more sense. I just figured this is simple enough, so I may as well give a scripting solution. But yes, this will be slow and is also very naive so it will break if your files are slightly different. – terdon Aug 10 '17 at 15:12
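If you'd rather sidestep bedtools versions altogether, the min/max-by-name merge is also short in pandas. A sketch, assuming (as in the question) that chrom and strand agree within each name group and that the score can be reset to ".":
```python
import pandas as pd

cols = ["chrom", "start", "end", "name", "score", "strand"]
bed = pd.read_csv("in.bed", sep="\t", header=None, names=cols)

# min start and max end per (chrom, name, strand) group
merged = (bed.groupby(["chrom", "name", "strand"], sort=False)
             .agg(start=("start", "min"), end=("end", "max"))
             .reset_index())
merged["score"] = "."

merged[cols].sort_values(["chrom", "start"]).to_csv(
    "answer.bed", sep="\t", header=False, index=False)
```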
### Problem 10-69 (CCA2, Chapter 10, Lesson 10.1.4)
10-69. Use reference angles, the symmetry of a circle, and the knowledge that $\cos\left(\frac{\pi}{3}\right) = \frac{1}{2}$ to write three other true statements using cosine and angles that are multiples of $\frac{\pi}{3}$.
Draw a unit circle diagram for the given angle. Reflect the triangle in each of the $4$ quadrants. Quadrant $2$ is shown for you.
$$\cos\left( \frac{2\pi}{3} \right) = -\frac{1}{2}$$
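For reference, reflecting the same triangle into the remaining quadrants gives the other statements:
$$\cos\left(\frac{2\pi}{3}\right) = -\frac{1}{2}, \qquad \cos\left(\frac{4\pi}{3}\right) = -\frac{1}{2}, \qquad \cos\left(\frac{5\pi}{3}\right) = \frac{1}{2}.$$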
# Looking for cube recommendations #### Elysium82 ##### Member Hey guys, I have bought a normal Cube for 3 euros and then a GAN 356M one. I am pretty happy with the GAN cube and I guess GAN 12 would be the one that could bring much difference in quality, but are there any other brands that one would recommend that offer cubes at the level of GAN 356M or above (near Gan 12)? I am looking for a new cube that I could add to the collection. One that could offer something new (even if it is just a little bit of a difference in quality). Thank you. #### Osric ##### Member I’m a fan of the bluetooth cubes. They’re great for figuring out what part of your solve to work on. #### Elysium82 ##### Member I’m a fan of the bluetooth cubes. They’re great for figuring out what part of your solve to work on. Bluetooth? That sounds way too high-tech. Which brand does them? How do they work? An app monitors how you solve the cube? #### Osric ##### Member Bluetooth? That sounds way too high-tech. Which brand does them? How do they work? An app monitors how you solve the cube? My tl;dr would be: Monster GO AI if you're on a budget, or GAN i3 or Moyu AI if not; the Monster GO AI is made by GAN and uses their app and the Moyu AI app is terrible but the cube is good. Then use cubeast for solving / solve analysis. Some day my OSS project will be ready for use but don't hold your breath. Osric #### OldSwiss ##### Member Yes. Like Osric said, bluetooth cubes are very cool to keep track of records, analyze weaknesses... They show you time, number of moves algs you used... for every step. The GAN Cubes have also nice algorithm trainiers where you can follow the pictures until the alg is in your muscle memory. But they are not tournament legal. If you plan do go at some competitions you should get a different one. (in addition ) The best bluetooth cube i have tested is the GAN i3, but the GAN iCarry is also quite good and much cheaper. For the normal cube there are tons of other threads in this subforum. I think you should read these Maybe this video can show you the advantages: #### Elysium82 ##### Member My tl;dr would be: Monster GO AI if you're on a budget, or GAN i3 or Moyu AI if not; the Monster GO AI is made by GAN and uses their app and the Moyu AI app is terrible but the cube is good. Then use cubeast for solving / solve analysis. Some day my OSS project will be ready for use but don't hold your breath. Osric Dont call me Shirley. See...how would YouTube tell me in an instant that the MoYu app is rubbish. They review things and just want to sell things. Thank you bro. #### Elysium82 ##### Member Yes. Like Osric said, bluetooth cubes are very cool to keep track of records, analyze weaknesses... They show you time, number of moves algs you used... for every step. The GAN Cubes have also nice algorithm trainiers where you can follow the pictures until the alg is in your muscle memory. But they are not tournament legal. If you plan do go at some competitions you should get a different one. (in addition ) The best bluetooth cube i have tested is the GAN i3, but the GAN iCarry is also quite good and much cheaper. For the normal cube there are tons of other threads in this subforum. I think you should read these Maybe this video can show you the advantages: Thank you. The Gan 356 i carry looks pretty darn good and that is definitely a budget cube. #### LBr ##### Member I don’t have a smart cube, but they have their advantages. 
One thing to bear in mind when choosing is that the MGAi and the I Carry don’t have motion sensors so they don’t track rotations. That is part of the reason why they are cheaper, besides the addition of the disposable batteries. So if rotating during solves is a problem you might want an upgrade on those cubes #### Foreright ##### Member I’d go for the Gan i3 or Moyu ai - the others are all rubbish. The iCarry is terrible - magnets way too strong and it catches like a mother#%#. I replaced it with the Moyu ai and haven’t looked back - just my 2p * just to say : I like my cubes with very light magnets and fast turning - the iCarry simply cannot be set up like that whereas the Moyu can. In fact my main is the Wrm 2021 maglev and the Moyu ai feels just like it. I expect the Gan i3 is as good but I don’t have one. The iCarry was a huge disappointment for me as the replaceable battery is a great idea, it just sucks as a speedcube - honestly my v1 little magic feels nicer. #### Osric ##### Member Maybe it's because I'm nowhere near as fast as Foreright but I don't think the iCarry is so horrible. It does lack the gyro but for the price it is a pretty good cube; I feel the same way about the Monster Go AI which has a slightly cheaper feel. It is true that the Moyu is a really nice cube and the i3 is also very good. But they are more than double the price and you can still get a lot of the value of a bluetooth cube out of the cheaper options. #### CatoWeeksbooth ##### Member On the subject of bluetooth cubes, there is also the Gan 12 UI, which is already out in China and will supposedly soon be available globally. I haven't tried it, but it looks like a major upgrade from the Gan i3. #### Dutch Speed ##### Member I am in love with my GAN13 now . Expensive but worth every dollar #### Kaiju_cube ##### Member DaYan Tengyun V2 very underrated No idea why everyone love Gan, MoYu, and Qiyi but hardly anyone mentions DaYan. DaYan make fantastic cubes. #### Isaiah Scott ##### Member I would highly recommend the Moyu WeiLong WRM 2019. I get that this cube is kinda old, but it has fantastic performance in my opinion. This doesn’t have to be your choice but this is a great puzzle. LBr
# Is $O(T+\log T)= O(T\log T)$? Is $$O(T+\log T)= O(T\log T)$$? I think this is true but I do not know how to show it mathematically? Please show it using the definition. Also, if it is true, is the following true? $$O((T+\log T)^{1/n})= O((T\log T)^{1/n})$$ • What makes you think it may be true? – Sandro Lovnički Dec 30 '18 at 19:09 • "Please show it using the definition." What definition are you talking about? I know at least three common ones for $O(\cdot)$. – dkaeae Dec 30 '18 at 19:16 Let $$b$$ be the base of the logarithm. If $$T > \max(b^2, 2)$$, then $$\log T =\log_b T> 2$$. So $$T\log T - (T+\log T) = (T-1)(\log T -1) -1 > 1\times 1 -1 =0$$ i.e., $$T+\log T< T\log T$$. So, any function that grows asymptotically slower than $$T+\log T$$ modulo a constant factor also grows asymptotically slower than $$T\log T$$ modulo the same constant factor. According to the definition of multiple usages of big O-notation, $$O(T+\log T)= O(T\log T)$$ Yes, it is true that $$O((T+\log T)^{1/n})= O((T\log T)^{1/n})$$, where we consider $$n$$ as a constant. • The last $1$ in the r.h.s of the first equality should be $-1$. It works for $t \geq 4$ when $\log$ is natural logarithm and $t \geq 15$ when the base is $10$. – Saeed Dec 30 '18 at 19:51 • @Saeed Note the abuse of notation here. While "O(n + log n) = O(n log n)" is wrong, mathematically speaking, "=" is often supposed to be read as "$\subseteq$" in this context. – Raphael Dec 30 '18 at 21:39 • @ Raphael: Good point. thank you. I didn't know that. Shall I use, say $O(n) \subseteq O(n+1)$ instead of $O(n) =O(n+1)$? – Saeed Dec 30 '18 at 22:26
# If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is
Math Expert
Joined: 02 Sep 2009
Posts: 52392
If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is  [#permalink] 15 Feb 2017, 02:27
Difficulty: 25% (medium) Question Stats: 79% (00:52) correct 21% (00:51) wrong based on 50 sessions
If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is the value of y?
A. −12
B. −3
C. 3
D. 6
E. 12
_________________
Manager
Joined: 13 Apr 2010
Posts: 89
Re: If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is  [#permalink] 15 Feb 2017, 02:54
Bunuel wrote: If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is the value of y? A. −12 B. −3 C. 3 D. 6 E. 12
Substitute x = 9 in the quadratic equation.
81 - 9y + 27 = 0
y = 108/9
y = 12
Senior Manager
Joined: 19 Apr 2016
Posts: 274
Location: India
GMAT 1: 570 Q48 V22
GMAT 2: 640 Q49 V28
GPA: 3.5
WE: Web Development (Computer Software)
Re: If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is  [#permalink] 15 Feb 2017, 03:04
Bunuel wrote: If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is the value of y? A. −12 B. −3 C. 3 D. 6 E. 12
x^2 − yx + 27 = 0
If x = 9
81-9y+27=0
9y=108
y=12
Hence Option E is correct
Hit Kudos if you liked it.
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4353
Location: India
GPA: 3.5
Re: If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is  [#permalink] 15 Feb 2017, 07:23
Bunuel wrote: If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is the value of y? A. −12 B. −3 C. 3 D. 6 E. 12
Plug in the value of x = 9 and check -
$$9^2 − 9y + 27 = 0$$
Or, $$9y = 108$$
So, y = 12
Hence, correct answer must be (E) $$y = 12$$
_________________
Thanks and Regards
Abhishek....
VP
Joined: 07 Dec 2014
Posts: 1153
If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is  [#permalink] 15 Feb 2017, 15:10
Bunuel wrote: If x = 9 is one solution to the quadratic x^2 − yx + 27 = 0, what is the value of y? A. −12 B. −3 C. 3 D. 6 E. 12
we know 27 is a multiple of 9
27/9=3
we know signs of both numbers are negative
(x-9)(x-3)=x^2-12x+27=0
y=12
# Correlation between prices or returns? If you are interested in determining whether there is a correlation between the Federal Reserve Balance Sheet and PPI, would you calculate the correlation between values (prices) or period-to-period change (returns)? I've massaged both data sets to be of equal length and same date range and have labeled them WWW (WRESCRT) and PPP (PPIACO). Passing them into R we get the following: > cor(WWW, PPP) [1] 0.7879144 Then applying the Delt() function: > PPP.d <- Delt(PPP) Then applying the na.locf() function: PPP.D <- na.locf(PPP.d, na.rm=TRUE) Then passing it through cor() again: > cor(WWW.D, PPP.D) [1] -0.406858 So, bottom line is that it matters. NOTE: To view how I created the data view http://snipt.org/wmkpo. Warning: it needs refactoring but good news is that it's only 27 iines. - If the period-to-period change shows a correlation, you've shown a correlation between the first derivative (with respect to time) of the two quantities. That's a little different than showing correlation between the values themselves (although I'm guessing there's some relation between the two). –  barrycarter Feb 13 '11 at 20:55 WRT your bottom line: don't forget that you now have quantitative easing, which you should consider as a structural break in the model. Using only data up to 2007 I get a correlation of only -0.1524359 –  Owe Jessen Feb 14 '11 at 14:50 The blog post: http://www.portfolioprobe.com/2011/01/12/the-number-1-novice-quant-mistake/ shows (and tries to explain) why you generally want to use returns and not prices. - This blog post explains it well, and extra bonus points for using R. Thanks. –  Milktrader Feb 14 '11 at 19:39 Short answer, you want to use the correlation of returns, since you're typically interested in the returns on your portfolio, rather than the absolute levels. Also, correlations on price series have very strange properties. If you think about a time series of prices, you could write it out as [P0,P1,P2,...,PN], or [P0,P0+R1,P0+R1+R2,...,P0+R1+...+RN], where Ri = Pi-P(i-1). Written this way you can see that the first return R1, contributes to every entry in the series, whereas the last only contributes to one. This gives the early values in the correlation of prices more weight than they should have. See the answers in this thread for some more details. - This answer makes intuitive sense based on the premise that R1 has an inordinate influence. –  Milktrader Feb 14 '11 at 1:07 I would add that one may want to look at either correlation of returns or cointegration of prices rather than correlation on prices –  RockScience Feb 15 '11 at 11:10 It depends, usually you would want to measure correlation between variables that are both stationary, else you would always be able to measure a correlation in the case of variables developing with a trend, even if they are unrelated. In this case I would guess that you should use first differences. -
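The danger of correlating levels is easy to see on synthetic data: two independent random walks will often show a large correlation in levels even though their increments are uncorrelated. A quick base-R illustration (synthetic series, not the poster's data):
```r
set.seed(1)
p1 <- cumsum(rnorm(1000))   # independent random walks ("prices")
p2 <- cumsum(rnorm(1000))
cor(p1, p2)                 # frequently far from zero: spurious
cor(diff(p1), diff(p2))     # near zero: the "returns" really are independent
```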
# Phase Angle for Simple Harmonic Motion
Phase Angle / Phase Question - Simple Harmonic Motion
## Homework Statement
If I were given a function of displacement for simple harmonic motion in the form of
x = A cos(ωt + φ),
would the phase angle φ always be the same? Say I derived the equations for velocity and acceleration as well; the phase angle would not change, is that correct?
Also, I am confused about what is the phase and what is the phase angle. My textbook lists the "phase angle" as φ, and some other sources list the "phase" as (ωt + φ). What's the difference between these?
## Homework Equations
x = A cos(ωt + φ)
v = −ωA sin(ωt + φ)
a = −ω²A cos(ωt + φ)
## The Attempt at a Solution
My belief is that it wouldn't change, but I want to be 100% sure.
Hi, sorry to "bump" a thread, but it's been a while now and I still want some clarity on this. Can anyone offer help?
As far as your first question is concerned, the phase constant does not change when you take the derivative of x, as long as it is independent of time.
ehild
Homework Helper
It is better to call φ "the phase constant". It is not really an angle, as no angle is involved in SHM. φ does not change when you calculate the time derivatives of the displacement.
The phase is ωt + φ. It depends on time. When x = A cos(ωt + φ) has its maximum value, the phase is 2kπ (k an integer), and x = 0 if ωt + φ = (k + 1/2)π.
ehild
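Differentiating term by term makes ehild's point explicit; the whole phase ωt + φ is carried through unchanged:
$$x = A\cos(\omega t + \phi), \qquad v = \dot{x} = -A\omega\sin(\omega t + \phi), \qquad a = \dot{v} = -A\omega^{2}\cos(\omega t + \phi),$$
with the same phase constant $\phi$ appearing in all three expressions.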
Installs a list of new models to the model library. Models may be either single files or wildcard specifications. Duplicates will be ignored.
Arguments
  Number   Type           Compulsory   Default   Description
  1        string array   Yes          -         Model path names
Argument 1
String array containing library specifications to be added. A library specification can either be a single file or a wildcard definition, e.g. path\*.lb
Returns
Return type: real. The number of models actually installed. This may be less than the number supplied if any are already installed.
# MCNP error: "bad trouble in subroutine main of mcnp"

1. May 19, 2012

### NuclearEng12

I am a beginner user of MCNP and am still learning how to use it. When I run my program, I get a message that says "bad trouble in subroutine main of mcnp". What exactly does that mean, and how can I correct it? Thanks.

2. May 21, 2012

### QuantumPion

Try searching for error messages in the output file. If you have no output file at all, verify that you have installed the software correctly. The package should have included some test cases which you can run to verify that MCNP is correctly installed.

3. May 21, 2012

### NuclearEng12

The following are the error messages I am getting:

```
warning.  print table 128 requires 1088303 dec. words of storage.
bad trouble in subroutine newcel of mcrun
source particle no. 672
starting random number = 111194823047613
zero lattice element hit.
1problem summary
run terminated because of bad trouble.
```

4. May 21, 2012

### QuantumPion

It sounds like you have defined a lattice but not all of the elements are filled, or you have filled an array element with an invalid universe. Check your fill and lattice cards and make sure they are set up correctly.

5. May 22, 2012

### NuclearEng12

I have reviewed my input file and cannot identify where I might have gone wrong.

```
c Cell Cards
1 1 -10.41 -1 -9 u=1 imp:n=1             $ Fuel Pellet
2 2 -0.130 (1 -2 -9):(9 -2) u=1 imp:n=1  $ Helium Gap
3 3 -6.52 2 -3 u=1 imp:n=1               $ ZIRLO Cladding
4 4 -0.727 3 u=1 imp:n=1                 $ Water / Coolant
5 4 -0.727 -11 u=2 imp:n=1               $ Outside Fuel Element
6 0 -4 5 -7 6 LAT=1 u=3 imp:n=1 Fill=-9:9 -9:9 0:0
     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 $ Physical Boundary
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 1
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 2
     2 1 1 1 1 1 2 1 1 2 1 1 2 1 1 1 1 1 2 $ Row 3
     2 1 1 1 2 1 1 1 1 1 1 1 1 1 2 1 1 1 2 $ Row 4
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 5
     2 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2 $ Row 6
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 7
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 8
     2 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2 $ Row 9
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 10
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 11
     2 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2 $ Row 12
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 13
     2 1 1 1 2 1 1 1 1 1 1 1 1 1 2 1 1 1 2 $ Row 14
     2 1 1 1 1 1 2 1 1 2 1 1 2 1 1 1 1 1 2 $ Row 15
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 16
     2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 $ Row 17
     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 $ Physical Boundary
7 0 15 14 -12 -13 8 -10 Fill=3 u=4 imp:n=1 $ Lattice Physical Boundary
8 4 -0.727 -16 u=5 imp:n=1               $ Outside Assembly
9 0 14 -12 -13 15 LAT=1 u=6 imp:n=1 Fill=-8:8 -8:8 0:0
     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 $ Physical Boundary
     2 2 2 2 2 2 2 3 3 3 2 2 2 2 2 2 2 $ Row 1
     2 2 2 2 2 3 3 3 3 3 3 3 2 2 2 2 2 $ Row 2
     2 2 2 2 3 3 3 3 3 3 3 3 3 2 2 2 2 $ Row 3
     2 2 2 3 3 3 3 3 3 3 3 3 3 3 2 2 2 $ Row 4
     2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 $ Row 5
     2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 $ Row 6
     2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 $ Row 7
     2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 $ Row 8
     2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 $ Row 9
     2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 $ Row 10
     2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 $ Row 11
     2 2 2 3 3 3 3 3 3 3 3 3 3 3 2 2 2 $ Row 12
     2 2 2 2 3 3 3 3 3 3 3 3 3 2 2 2 2 $ Row 13
     2 2 2 2 2 3 3 3 3 3 3 3 2 2 2 2 2 $ Row 14
     2 2 2 2 2 2 2 3 3 3 2 2 2 2 2 2 2 $ Row 15
     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 $ Physical Boundary
10 0 -17 8 -10 Fill=6 imp:n=1            $ Lattice Physical Boundary
11 0 17:-8:10 imp:n=0                    $ Outside Core

c Surface Cards
1 cz .410
2 cz .418
3 cz .475
4 px .630
5 px -.630
6 py -.630
7 py .630
8 pz 0
9 pz 366
10 pz 400
11 cz 100
12 px 10.71
13 py 10.71
14 px -10.71
15 py -10.71
16 cz 500
17 cz 199.4

c Data Cards
m1 92235.70c -0.003338 92238.70c -0.063422 8016.70c -0.008980 $ 5% Enriched UO2 Fuel
     92235.71c -0.040734 92238.71c -0.773946 8016.71c -0.109580
m2 2004.70c -0.07574 $ Helium
     2004.71c -0.92426
m3 40090.70c 0.038189 40091.70c 0.008328 40092.70c 0.012730 $ ZIRLO Cladding
     40094.70c 0.012900 40096.70c 0.002078 50112.70c 0.000007
     50114.70c 0.000005 50115.70c 0.000003 50116.70c 0.000110
     50117.70c 0.000058 50118.70c 0.000183 50119.70c 0.000065
     50120.70c 0.000247 50122.70c 0.000035 50124.70c 0.000044
     41093.70c 0.000757
     40090.71c 0.466021 40091.71c 0.101628 40092.71c 0.155340
     40094.71c 0.157424 40096.71c 0.025362 50112.71c 0.000090
     50114.71c 0.000061 50115.71c 0.000031 50116.71c 0.001344
     50117.71c 0.000710 50118.71c 0.002239 50119.71c 0.000794
     50120.71c 0.003011 50122.71c 0.000428 50124.71c 0.000535
     41093.71c 0.009243
m4 1001.70c 0.050487 1002.70c 0.000006 8016.70c 0.025247 $ H2O
     1001.71c 0.616102 1002.71c 0.000071 8016.71c 0.308087
kcode 1000 1.0 20 120
ksrc 1.26 0 183  44.1 -86.94 183  86.94 -22.68 183  22.68 -44.1 183
     108.36 -86.94 183  129.78 1.26 183  44.1 44.1 183  86.94 65.52 183
     65.52 108.36 183  22.68 65.52 183  -86.94 86.94 183  -22.68 22.68 183
     -65.52 65.52 183  -22.68 151.2 183  -1.26 0 183  -1.26 -86.94 183
     -65.52 -44.1 183  -44.1 -22.68 183  -151.2 -22.68 183  -86.94 -44.1 183
PRINT 128
```

6. May 23, 2012

### marvin_NIFB

How do I get my hands on the MCNP code?

7. May 23, 2012

### NuclearEng12

You have to request it from Oak Ridge, I think.

8. May 23, 2012

### QuantumPion

It looks to me like you are not defining your lattices correctly, or at least the surfaces the cells are assigned to. It looks like you are trying to use universe 2 for both a reflector region and a guide tube region, although their geometries do not match. Maybe you meant to use universe 5 for this purpose, since it is not used anywhere.

9. May 23, 2012

### QuantumPion

I don't have an example of a whole-core case; I think that would run very, very slowly on my computer. :(

10. May 23, 2012

### NuclearEng12

Thanks for the pointers. They will really help me.

11. May 23, 2012

### Astronuc (Staff Emeritus)

So one is trying to model a 17x17 lattice with a 366 cm active fuel zone, and there are 157 assemblies in the core. Universe 2 in the lattice map is water. The guide tube and the water between assemblies would have different geometries and sizes. The densities in the cell cards seem correct, and the radial and axial dimensions of the fuel rods and the cell pitch of ~1.26 cm seem correct. Above the active fuel zone would be the plenum (filled with He and a stainless steel spring occupying ~0.09 to 0.15 of the void volume, with a height of about 20 cm). There are upper and lower end plugs (Zr-4 or ZIRLO) and stainless steel nozzles below and above the core. It looks like you might have some gad in there. BTW, 5% is a bit high for a fresh core without extensive holddown.

12. May 23, 2012

### Astronuc (Staff Emeritus)

Access is restricted and usually limited to government facilities, e.g., DOE labs, universities and corporations, which are then subject to export controls.

13. May 25, 2012

### NuclearEng12

I have corrected the universes in the 2nd lattice by replacing universe 2 with universe 5; however, it is still saying "zero lattice element hit". I looked at the code closely and still have no idea what the problem is.
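Since "zero lattice element hit" generally points at a FILL array that does not match its declared index range (or at fill entries referencing undefined universes), it can help to check the bookkeeping outside MCNP. Below is a small, hypothetical Python helper (not part of MCNP; the function and the stand-in universe pattern are my own) that only verifies that the number of FILL entries equals the size implied by a `Fill=i1:i2 j1:j2 k1:k2` declaration:

```python
# Hypothetical helper (not part of MCNP) to sanity-check a lattice FILL array:
# the number of universe entries must equal the size of the declared range.

def check_fill(ranges, entries):
    """ranges: [(i1, i2), (j1, j2), (k1, k2)], e.g. Fill=-9:9 -9:9 0:0."""
    expected = 1
    for lo, hi in ranges:
        expected *= hi - lo + 1
    print(f"expected {expected} entries, got {len(entries)}")
    return expected == len(entries)

# The 19 x 19 x 1 lattice of cell 6 above (stand-in pattern; the real map
# mixes universes 1 and 2 inside the boundary rows):
rows = [[2] * 19] + [[1] * 19 for _ in range(17)] + [[2] * 19]
flat = [u for row in rows for u in row]
assert check_fill([(-9, 9), (-9, 9), (0, 0)], flat)   # 361 entries
```

A count mismatch is exactly the kind of bookkeeping slip that makes MCNP track a particle into a lattice element that was never filled.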
# Covariance matrix in linear regression

I have read about linear regression and interpreting OLS results, i.e. coefficients, t-values and p-values, but I have been unable to find any material on the covariance matrix in linear regression. While reading about the assumptions of linear regression I came across the term heteroscedasticity and was researching its consequences, and in one of the answers on Stack Exchange about the consequences of heteroscedasticity I came across the term covariance matrix. What is the covariance matrix, and how should one interpret it?

One of the OLS assumptions is the zero conditional mean assumption, which states that $$E[u|X]=0,$$ so that the errors average out to zero. Two further assumptions are homoskedasticity (constant error variance) and no (auto)correlation between the errors; together they can be written as $$E[u u'|X]=\sigma^2 I.$$ So the covariance matrix (sometimes also called the variance-covariance matrix) of the errors is: $$\sigma^2 I = \sigma^2 \left[ \begin{array}{cccc} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{array}\right] = \Omega .$$ The important points are that all off-diagonal elements are zero, which means "no correlation between the residuals", and that the diagonal is constant, so every error has the same variance $$\sigma^2$$.

In case $$\Omega \neq \sigma^2 I$$ because the diagonal is not constant, you face heteroscedasticity and you would need to "model" $$\Omega$$, e.g. by feasible generalized least squares (FGLS). Say you have a linear model $$y=X\beta+u$$ with conditional variance $$E(u^2|Z)=\exp(Z\gamma)$$. Here $$\exp(Z\gamma)$$ is an example of a skedastic function, which describes how the conditional variance depends on the data. In order to obtain consistent estimates of $$\gamma$$, one needs consistent estimates of $$u$$ in the first place; these can be obtained by running OLS to get $$\hat{\beta}$$ and the residuals $$\hat{u}$$. Based on these, one runs the auxiliary linear regression $$\log \hat{u}^2 = Z \gamma + v$$ to get an estimate $$\hat{\gamma}$$, and then computes the estimated conditional standard deviations $$\hat{\omega} = (\exp(Z\hat{\gamma}))^{1/2}$$. Finally, FGLS estimates of $$\beta$$ are obtained by OLS on the rescaled data $$y_i/\hat{\omega}_i$$ and $$x_i/\hat{\omega}_i$$, which is called feasible weighted least squares. See Davidson/MacKinnon, "Econometric Theory and Methods", Ch. 7.4.
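To make the FGLS recipe above concrete, here is a minimal numpy sketch with simulated data (my own illustration; the variable names and the data-generating process are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(1.0, 3.0, n)
X = np.column_stack([np.ones(n), z])              # regressors (with intercept)
Z = X                                             # regressors of the skedastic function
sigma = np.exp(0.5 * (Z @ np.array([0.2, 0.8])))  # true conditional std. dev.
y = X @ np.array([1.0, 2.0]) + sigma * rng.standard_normal(n)

def ols(A, b):
    return np.linalg.lstsq(A, b, rcond=None)[0]

beta_ols = ols(X, y)                              # step 1: plain OLS
u_hat = y - X @ beta_ols                          # residuals
gamma_hat = ols(Z, np.log(u_hat**2))              # step 2: regress log u^2 on Z
omega_hat = np.exp(Z @ gamma_hat) ** 0.5          # step 3: estimated std. devs.
beta_fgls = ols(X / omega_hat[:, None],           # step 4: OLS on rescaled data
                y / omega_hat)
print("OLS: ", beta_ols)
print("FGLS:", beta_fgls)
```

Both estimators are consistent here; the FGLS one is more efficient because it downweights the noisy observations.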
## anonymous one year ago

Which 2x2 matrix $P_1$ projects the vector $(x, y)$ onto the x axis to produce $(x, 0)$? Which matrix $P_2$ projects onto the y axis to produce $(0, y)$?

1. beginnersmind Write $P_1$ with unknown entries and apply it to $(x, y)$: $$\left[\begin{matrix}a & b \\ c & d\end{matrix}\right]\left[\begin{matrix}x \\ y\end{matrix}\right] = \left[\begin{matrix}x \\ 0\end{matrix}\right]$$ which gives $a = 1$, $b = c = d = 0$. You can find $P_2$ the same way, noting that $P_2(x,y) = (0,y)$.

2. beginnersmind $$P_2 = \left[\begin{matrix}0 & 0 \\ 0 & 1\end{matrix}\right]$$

3. anonymous Thank you. Your answer is very straightforward. I had earlier found the answer with time-consuming and painful reasoning, thinking about the equation of the line that gives the x axis as the answer, instead of a single point. From there, I figured it looked like the 2 by 2 identity matrix times the vector $(x, y)$ "gone wrong": instead of giving back both coordinates, it would give just $x$. After playing a little, I ended up with this: $$\left[\begin{matrix}1 & 0 \\ 0 & 0\end{matrix}\right]\left[\begin{matrix}x \\ y\end{matrix}\right]$$ Too much pain for a very simple question. The way you solved it is much better.

4. beginnersmind BTW, there's a useful fact about multiplying a matrix with a column vector: the result is a linear combination of the columns of the matrix, with the coordinates of the vector as coefficients. Written out: $$\left[\begin{matrix}a & b \\ c & d\end{matrix}\right]\left[\begin{matrix}x \\ y\end{matrix}\right] = x \left[\begin{matrix}a \\ c\end{matrix}\right] + y \left[\begin{matrix}b \\ d\end{matrix}\right]$$ So the first coordinate of the vector multiplies the first column, the second coordinate the second column, and so on. (This works in 2 dimensions, in 3, or in any other number.) A consequence of this is that there's a very easy way to calculate the product of a standard unit vector with a matrix: $A(1,0)$ is just the first column of the matrix, and $A(0,1)$ is the second column. You can use this idea backward as well: if you are given the images of the standard unit vectors under an unknown matrix and are told to find the entries of the matrix, you are in luck. The image of $(1,0)$ is the first column of the matrix and the image of $(0,1)$ is the second column. Or in 3 dimensions, the image of $(1,0,0)$ is the first column, etc.
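A quick numerical check of both projection matrices, and of the column fact from the last reply (a small numpy sketch of my own):

```python
import numpy as np

P1 = np.array([[1, 0],
               [0, 0]])        # projects (x, y) onto the x axis
P2 = np.array([[0, 0],
               [0, 1]])        # projects (x, y) onto the y axis

v = np.array([3, 7])
print(P1 @ v)                  # [3 0]
print(P2 @ v)                  # [0 7]

# The image of a standard unit vector picks out a column of the matrix:
print(P1 @ np.array([1, 0]))   # first column of P1 -> [1 0]
print(P1 @ np.array([0, 1]))   # second column of P1 -> [0 0]
```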
Does $30$ divide $n^5-n$ for all positive integers $n$?

Warm-up questions:

1. Factorise $n^2-2n-3$.
2. Is $n^2+n$ always even when $n$ is an integer?
3. Does $6$ divide $100002$? Try to reason about the divisors of both numbers.

Hints: How do you split $30$ into a product of prime factors? Can you factorise $n^5 - n$? You should obtain $n(n-1)(n+1)(n^2+1)$. What can you say about the product of $3$ consecutive numbers in terms of divisibility by $2$ and $3$? If you assume that $5$ divides $n^5-n$, how can you prove that $5$ divides $(n+1)^5-(n+1)$?

Solution: First notice that $30=2\cdot3\cdot5$. Since $2$, $3$ and $5$ are pairwise coprime, it suffices to prove that each of them divides $n^5-n$; their product then divides it too. We now factorise $n^5-n$: \begin{aligned} n^5-n &= n(n^4-1) \\ &= n(n^2-1)(n^2+1)\\ &= n(n-1)(n+1)(n^2+1) \end{aligned} The product $(n-1)n(n+1)$ consists of $3$ consecutive integers, hence both $2$ and $3$ necessarily divide it.

We must now prove divisibility by $5$. There are several approaches, but here we use induction. The base case for $n=0$ or $n=1$ is easy to verify, since $n^5-n=0$ there. For the inductive step, expand $(n+1)^5$ with the binomial theorem (the constant terms $+1$ and $-1$ cancel): \begin{aligned} (n+1)^5 - (n+1) &= n^5 + 5n^4 + 10n^3 + 10n^2 + 5n - n \\ &= (n^5-n) + 5(n^4+2n^3+2n^2+n) \end{aligned} The first part is divisible by $5$ by the induction hypothesis, while the rest is visibly a multiple of $5$. (Alternatively, Fermat's little theorem gives $n^5 \equiv n \pmod 5$ directly.)
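Not a substitute for the proof, but a quick brute-force sanity check in Python (my own snippet):

```python
# Empirically verify 30 | n^5 - n for the first ten thousand positive integers.
assert all((n**5 - n) % 30 == 0 for n in range(1, 10_001))
print("30 divides n^5 - n for all n in 1..10000")
```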
## Study23 one year ago

Okay, I'm having trouble with series. Here's the problem (I have to determine whether the series is convergent or divergent, and if convergent, find the sum it converges to): the summation from $n=1$ to infinity of $\frac{n}{n+5}$.

1. Study23 I'm thinking partial fractions or factoring n out, but I'm not sure...
2. macknojia Can you type this out? I can't read it. Sorry :(
3. Study23 ?
4. macknojia nvm, I got it. You can view this as a geometric series: $n/(n-(-5))$
5. macknojia So your first term is n and your ratio is -5. Now, using the formula, you can find the sum it converges to.
6. Study23 So you factor the n out?
7. Study23 ???
8. macknojia Not really. Are you familiar with the formula to find the sum of a convergent series?
9. Study23 I meant to find what a and r are before I find the sum.
10. Study23 Because r (-5) is not between -1 and 1, does that mean this is divergent?
11. tkhunny What is the limit of the TERMS as n increases without bound? If it's not zero, it doesn't converge.
12. Study23 But r has to satisfy -1 < r < 1, right?
13. Study23 r is -5, so...
14. tkhunny Yes, that is good for 'r', but you don't even need that. Just look at the limit of the terms. Test #1, in ALL cases.
15. tkhunny BTW, r is NOT 5. It's 1.
16. Study23 ? And wouldn't the limit as n approaches infinity be 0, because the denominator is larger?
17. tkhunny No. The limit of the terms is 1. As n increases, the 5 becomes insignificant.
18. Study23 Okay. And how do I find r to be one, then?
19. macknojia The leading coefficients of the numerator and the denominator are both 1; therefore, the limit is one.
20. macknojia Try graphing it.
21. tkhunny First, I don't actually care: since the limit of the terms is 1, the series cannot converge, so there is no reason to do the ratio test. Second, just do it anyway: $$\dfrac{\dfrac{n+1}{n+6}}{\dfrac{n}{n+5}} = \dfrac{n^{2}+6n+5}{n^{2}+6n}.$$ The limit of this, as n increases without bound, is 1. As n increases, the 5 becomes insignificant, and the ratio is clearly 1.
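The term test that tkhunny applies can be confirmed symbolically; a tiny sympy sketch (my own):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
term = n / (n + 5)
print(sp.limit(term, n, sp.oo))   # 1, not 0, so the series diverges by the term test
```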
# Compression algorithms specific to complex signals

I am looking for (lossy or lossless) compression algorithms dedicated to complex signals. The latter could be composite data (like the left and right channels of stereo audio), a Fourier transform or an intermediate step of a complex-valued processing chain, a Hilbert pair of generic signals, or complex measurements like some NMR (nuclear magnetic resonance) or mass spectrometry data.

A little background: I have been working on real signal and image coding, and mildly on video and geological mesh compression. I have practiced adaptive lossless and lossy compression (RLS, LPC, ADPCM), Huffman and arithmetic coding, transforms (DCT, wavelets), etc.

Using standard compression tools separately on each of the real and the imaginary parts is not the main aspect of the question. I am interested in compression-related ideas specific to complex data, for instance:

• sampling of the complex plane,
• joint quantization of modulus and phase,
• optimal quantization of 2D complex values: Lloyd-Max is "optimal" for 1D data, and I remember that 2D optimal quantization is generally more complicated. Are there 2D binnings dedicated to complex data, or for instance to Gaussian integers?
• entropy coding methods arranged along complex "features" (angle, magnitude),
• entropy calculation for complex data,
• use of specific statistical tools (e.g. from Statistical Signal Processing of Complex-Valued Data: The Theory of Improper and Noncircular Signals),
• integer transformations for Gaussian integers.

I would be interested in more authoritative references. A compression file format dedicated to complex data would be a plus.

• I'm not sure I understand the point of this question. If you just join two arbitrarily related real signals into a complex signal, what kind of advantage do you expect from a complex encoding? I would expect some kind of complex structure to be present for a joint encoding to be sensible: complex analyticity, complex boundedness, complex bandlimitedness, etc. Do you have a more specific use case maybe, and an idea of what kind of answer you expect? – Jazzmaniac Nov 15 '16 at 8:56
• @Jazzmaniac Excellent point. I became aware of the lack of context of my question reading the first answers. I have added some more background and details. Additional suggestions welcome. – Laurent Duval Nov 15 '16 at 9:44

Personally, I'd assume that coders that work well on real-valued data would work well on the separated imaginary and real parts too, and after those, another round of compression can exploit the fact that I and Q will very often have very similar derivatives. Thus, my first action would be to try a DPCM coder on the separated real and imaginary parts. If that doesn't fit your needs, then do DPCM on the interleaved results and Huffman-code the residual.

Alternatively, go first from $\Re\{z\}+j\,\Im\{z\}$ to $|z|\,e^{j\angle z}$, and compress $|z|$ with a simple DPCM and $\angle z$ with an elegantly wrapping linear predictor (reminds me a bit of the title of the '97 paper you cite).

In fact, I'm trying to remember all the details, but we had an interesting discussion on the topic on the GNU Radio mailing list not so long ago (SDR users sometimes run into problems like having a hard time finding storage that can write 100 MS/s of int16 complexes…). I remember looking at FLAC for the purpose, and then not writing any code. I wonder why that is.
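As an aside to the DPCM suggestion above: a toy numpy experiment (my own sketch, with a made-up narrowband signal and a zeroth-order entropy estimate rather than a real coder) shows why differencing the separated I and Q parts helps:

```python
import numpy as np

def entropy_bits(x):
    """Empirical zeroth-order entropy (bits/sample) of an integer sequence."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
t = np.arange(200_000)
carrier = 2000 * np.exp(2j * np.pi * 0.01 * t)        # narrowband complex tone
noise = 20 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))
z = carrier + noise
i = np.round(z.real).astype(np.int32)                 # separated I channel
q = np.round(z.imag).astype(np.int32)                 # separated Q channel

for name, x in (("I", i), ("Q", q)):
    d = np.diff(x)                                    # first-order DPCM residual
    print(f"{name}: raw {entropy_bits(x):.2f} -> DPCM {entropy_bits(d):.2f} bits/sample")
```

For an oversampled signal like this one, the first-difference residual occupies a much narrower value range than the raw samples, so its entropy (and hence the achievable Huffman rate) is noticeably lower.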
Anyway, I'd really try to convince flac to compress your samples as audio (maybe simply as stereo) and see how well linear prediction captures your signal's entropy :)

EDIT: I remember why I didn't look into FLAC: zstd came up and distracted me. So: I went ahead and generated a sum of sinusoids plus noise, and saved everything as interleaved signed complex int16. Then, I did a quick

```
flac --force-raw-format --channels 2 --bps 16 --sample-rate $((50*1000)) --endian little --sign signed -8 -o samples.flac samples.complex_sc16
```

(faking a 50 kHz sampling rate, but as far as I can tell the FLAC coder doesn't care; the sampling-rate info is just used to tell potential audio players what sampling rate to use). The result was a 31% compression of the 191 MB of original data. Not that bad!

Zstd, on noise-ridden data, of course isn't that effective: a mere 5% compression was achieved (which is really meagre, considering that of the 16 bits per I and Q, most of the time maybe 10 are used, and very few samples even came close to the maximum integer values, indicating that there should be significant runs of 0s and 1s in the bitstream). Anyway, as soon as I switched off the noise, the 191 MB were immediately reduced to 12 MB: the strength of a dictionary-based compressor on periodic data, especially since zstd uses a "modern PC-sized" dictionary (and not the "oh, 16 kB is soooo much" of gzip). Flac didn't do much better than on the noise-ridden signal.

• Your answer is the best in hints so far. – Laurent Duval Nov 24 '19 at 12:01

Complex signals are a special case of multidimensional signals (where the dimension is two). A lossy approach tackling compression of multidimensional signals is vector quantization. A very good resource is the book "Vector Quantization and Signal Compression", co-authored by Robert M. Gray. Vector quantization is a classic lossy source coding technique that quantizes $M$-dimensional source vectors onto a finite set of $N$ code vectors, which is referred to as the codebook. A scalar quantizer is the special case with $M = 1$.

A complex signal in general can be seen as two correlated sources. Of course, if you don't want to assume any correlation between the real and imaginary parts, then the answer is trivial, since the two sources can be compressed independently and separately (using the standard compression techniques for real signals). If you are interested in information-theoretic aspects of lossy/lossless compression of correlated sources, Wyner-Ziv and Slepian-Wolf coding are the keywords; these schemes are referred to as distributed source coding schemes. Since a complex signal can be modeled by two correlated sources, they can be considered here as well.

Finally, for analytic signals, since the imaginary/real part can be perfectly reconstructed from the other one using the Hilbert transform, the compression problem boils down to the compression of a single real signal, for which there are plenty of standard schemes.

In response to the updated question: the Lloyd algorithm generalized to vector quantization is referred to as the LBG algorithm (after the initials of the authors of the following paper):

Y. Linde, A. Buzo and R. Gray, "An Algorithm for Vector Quantizer Design," IEEE Transactions on Communications, vol. 28, no. 1, pp. 84-95, Jan. 1980.

It was extended again in 1989 for entropy-coded vector quantization:

P. A. Chou, T. Lookabaugh and R. M. Gray, "Entropy-constrained vector quantization," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 1, pp. 31-42, January 1989.
I can recommend visiting www.data-compression.com/vq for a decent explanation of the scheme. It works in a simple way:

• Divide the vector space.
• Update the partitions.
• Repeat the procedure on the new partitions until the quantization error is acceptable.

At the end, all data points that belong to a partition are quantized to the coordinates of that partition's center. The centers are indexed, so that only the index needs to be stored or sent to represent a quantized value. (The same website has an animated illustration of the procedure for a 2D Gaussian source.)

To losslessly compress audio, the simplest way is to use LPC (the LPC coefficients would be stored and updated for each frame of audio) to predict each sample and encode the difference between the LPC prediction and the actual sample. That difference should be reasonably white (otherwise the LPC could use knowledge of that color and make a better prediction). So the LPC delta output is pretty white, but still mostly Gaussian, and something like Huffman coding can then be applied to the delta values: because their p.d.f. is not uniform, the more likely values are encoded with fewer bits than the less likely values.

Now, complex data, like an analytic signal, can be represented as

\begin{align} a_x(t) &= x(t) + j\hat{x}(t) \\ &= r_x(t)\, e^{j \theta_x (t)} \end{align}

where $\hat{x}(t)$ is the Hilbert transform of $x(t)$, and $\omega_x(t) \triangleq \frac{d \theta_x(t)}{dt}$. If the bandwidth of the analytic signal is low, then $r_x(t)$ and $\omega_x(t)$ will have much less bandwidth than the real signal $x(t)$; in that case, many fewer bits (and many fewer samples) can be afforded to $r_x(t)$ and $\omega_x(t)$.

Are you looking for something already available, or do you want your own, specifically customized code? You can try mine; it is called irolz, Google it and you will find it. There is an explanation, but you have to be an expert in data compression to customize the code. You can also try another of my codes, ezcodesample.com/abs/abs_article.html; it works well for small alphabets. For large alphabets it is not effective.

• I am interested in both options. – Laurent Duval Nov 27 '19 at 20:36
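To make the envelope/instantaneous-frequency representation from the analytic-signal answer concrete, here is a minimal scipy sketch (my own; the test signal and all parameter choices are made up):

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000                                   # sample rate (Hz)
t = np.arange(fs) / fs                        # one second of signal
# Slow 2 Hz amplitude modulation on a 440 Hz tone: a narrowband real signal.
x = (1 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 440 * t)

a = hilbert(x)                                # analytic signal x + j*Hilbert{x}
r = np.abs(a)                                 # envelope r_x(t)
f_inst = np.diff(np.unwrap(np.angle(a))) * fs / (2 * np.pi)  # inst. frequency (Hz)

# The envelope and the instantaneous frequency vary far more slowly than x
# itself, so they could be decimated and quantized with far fewer bits/samples.
print("sample-to-sample std (x, r, f_inst):",
      np.std(np.diff(x)), np.std(np.diff(r)), np.std(np.diff(f_inst)))
```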