# The "GetAssemblyVersionInfo" task failed unexpectedly Here’s the error that is being returned: C:\Builds\7174\TGG\WebAPI_Dev_Build_Deploy\src\WebAPI\packages\OctoPack.3.0.41\tools\OctoPack.targets (44): The “GetAssemblyVersionInfo” task failed unexpectedly. System.IO.FileNotFoundException: C:\Builds\7174\TGG\WebAPI_Dev_Build_Deploy\bin\WebAPI.dll at System.Diagnostics.FileVersionInfo.GetVersionInfo(String fileName) at OctoPack.Tasks.GetAssemblyVersionInfo.CreateTaskItemFromFileVersionInfo(String path) in y:\work\46cfb6001f03d701\source\OctoPack.Tasks\GetAssemblyVersionInfo.cs:line 48 at OctoPack.Tasks.GetAssemblyVersionInfo.Execute() in y:\work\46cfb6001f03d701\source\OctoPack.Tasks\GetAssemblyVersionInfo.cs:line 40 at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() at Microsoft.Build.BackEnd.TaskBuilder.d__20.MoveNext() I can see that it is looking for WebAPI.dll and can’t find it. It is true that the file isn’t there. The weird thing is that when I comment out the GetAssemblyVersionInfo section - it still gets called. I’ve attached the targets file (with the section NOT commented out) in case that helps. BTW, this is OctoPack v3.0.41. Thanks in advance! OctoPack.targets (9 KB) After many changes - I’m not even sure what ended up working - I’ve gotten past this. Thanks.
# Tag Info

**3** Some background on formal key-exchange models: The goal of a key exchange (KE) is to establish a session key between two parties. Naively, we could say that a KE is secure if no adversary will be able to figure out the session key (in full) established between two honest parties. However, in formal security models we take this a bit further and insist that ...

**2** I can see some weaknesses in your protocol. For example, it allows any attacker to request the encrypted private key and thus mount an offline dictionary or brute-force attack on the password. (An incorrect password doesn't give you a properly formatted OpenSSH key.) Thus this is a good idea: I'd rather use something proven. As mentioned in the ...

**2** "But if the Server's certificate is checked successfully by the Client, how is it possible to consider that the Server has been authenticated by the Client, while at this time no message signed with the Server's private key has been sent to the Client and verified by it?" If we only consider the key exchange to be what the RFC says it is, then yes, this key exchange can ...

**1** This question both describes the SRP-Z variant and mentions that the patent should expire in three years. I think the patent in question is this one, but I'm not sure: US 6539479.

**1** Your idea seems secure, but it is more complicated than it needs to be. If you only need to know whether the "secure information" in the document matches some public information (stored on an insecure computer), it is enough to calculate a preimage-resistant hash of the document (e.g. SHA-256). The document needs only to be unguessable, like a 256-bit random ...

**1** The authentication tag in GCM is generated by XORing a block cipher output with the Galois field hash (and truncating it for shorter lengths). It is thus assumed to look like a PRF. So it is effectively just a random nonce that should not collide until a birthday bound of $2^{t/2}$. With a tag length of 96 or more bits, it should be secure. Shorter random IV ...

**1** TL;DR: use (D)TLS. This is exactly the kind of problem it was meant to solve. If possible, use cert pinning too (if you get to deploy the code on both ends of the channel, this should be possible). The general rule is: don't design your own crypto protocol unless both of the following apply: you have done a detailed review of what exists already and ...
### Limit of Algebraic Expressions

Limits of expressions evaluating to $\infty$: organize the sub-expressions into the following known results.

- $a/\infty = 0$
- $\infty \pm a = \infty$
- $\infty \times a = \infty$ when $a \neq 0$
- $\infty \times \infty = \infty$
- $\infty^n = \infty$ when $n \neq 0$
- $\lim_{x\to\infty} x/x = 1$
- $\lim_{x\to-\infty} x/x = 1$
- $\lim_{x\to\infty} a^x = 0$ if $a<1$, $=\infty$ if $a>1$

### Limit of functions evaluating to $\infty$

**nub:** When evaluating limits to infinity or minus infinity, simplify to known results.

**trek:** Finding the limit of standard ratios evaluating to $\infty/\infty$ or $\infty-\infty$ is explained with examples.

**Q:** What is the value of the function $f(x)=3x^2+5x-2$ at $x=\infty$?
- $\infty$
- $-2$
- $0$

The answer is $\infty$. By substituting $x=\infty$:
$3x^2+5x-2 = 3(\infty)^2+5\infty-2 = \infty$,
since $\infty^2=\infty$, $n\infty=\infty$, and $\infty\pm a=\infty$.

**Q:** What is the value of the function $f(x)=1/(3x^2+5x-2)$ at $x=\infty$?
- $\infty$
- $-2$
- $0$

The answer is $0$. By substituting $x=\infty$:
$1/(3x^2+5x-2) = 1/(3(\infty)^2+5\infty-2) = 1/\infty = 0$,
since $1/\infty = 0$.

**Q:** What is the value of the function $f(x)=3x^2-5x-2$ at $x=\infty$?
- $\infty-\infty$
- $1$
- $0$

The answer is $\infty-\infty$. By substituting $x=\infty$:
$3x^2-5x-2 = 3\infty^2-5\infty-2 = \infty-\infty$,
since $\infty^2=\infty$, $n\infty=\infty$, and $\infty\pm a=\infty$. Note: the limit of this function is explained after a few pages.

**Q:** What is the value of $\infty-\infty$?
- $\infty$
- $1$
- $0$
- indeterminate value

The answer is "indeterminate value". The equivalence can be explained with
$\infty-\infty = \frac{1}{0}-\frac{1}{0} = \frac{1-1}{0} = \frac{0}{0}$.

**Q:** What is the value of the function $f(x)=\dfrac{3x^2+5x-2}{x^2+x-2}$ at $x=\infty$?
- $\infty/\infty$
- $1$

The answer is $\infty/\infty$. By substituting $x=\infty$:
$\dfrac{3x^2+5x-2}{x^2+x-2} = \dfrac{3(\infty)^2+5\infty-2}{\infty^2+\infty-2} = \dfrac{\infty}{\infty}$,
since $\infty^2=\infty$, $n\infty=\infty$, and $\infty\pm a=\infty$. Note: the limit of this function is explained after a few pages.

**Q:** What is the value of $\infty/\infty$?
- $\infty$
- $1$
- $0$
- indeterminate value

The answer is "indeterminate value". The equivalence can be explained with
$\dfrac{\infty}{\infty} = \frac{1}{0}\div\frac{1}{0} = \frac{1}{0}\times\frac{0}{1} = \frac{0}{0}$.

**Q:** What forms of expressions evaluate to indeterminate values when computing a limit at $\infty$ or $-\infty$?
- $\infty\times\infty$ and $\infty+\infty$
- $\infty\div\infty$ and $\infty-\infty$

The answer is "$\infty\div\infty$ and $\infty-\infty$". When we encounter $\infty\div\infty$ or $\infty-\infty$, convert the expression to one of the following forms: $\lim_{x\to\infty} x/x = 1$; $\lim_{x\to-\infty} x/x = 1$; $a/\infty = 0$; $\infty^n=\infty$; $n\infty = \infty$; $\infty\pm a = \infty$.

**Worked example:** the limit of $f(x)=3x^2-5x-2$ at $x=\infty$. The function evaluates to $\infty-\infty$ at $x=\infty$. The limit of the function is
$$\lim_{x\to\infty}(3x^2-5x-2) = \lim_{x\to\infty} x^2\left(3-\tfrac{5}{x}-\tfrac{2}{x^2}\right) = \lim_{x\to\infty} x^2 \times \lim_{x\to\infty}\left(3-\tfrac{5}{x}-\tfrac{2}{x^2}\right) = \infty^2 \times (3-0-0) = \infty.$$

**Worked example:** $f(x)=\dfrac{3x^2+5x-2}{x^2+x-2}$ at $x=\infty$. The function evaluates to $\infty/\infty$ at $x=\infty$. The limit of the function is
$$\lim_{x\to\infty}\frac{3x^2+5x-2}{x^2+x-2} = \lim_{x\to\infty}\frac{x^2(3+5/x-2/x^2)}{x^2(1+1/x-2/x^2)} = \left[\lim_{x\to\infty}\frac{x}{x}\right]^2 \times \frac{3+0-0}{1+0-0} = 1^2 \times 3 = 3.$$

**jogger:** Evaluating limits at $\infty$ or $-\infty$: simplify the numerical expressions to one of the following:
$\lim_{x\to\infty} x/x = 1$; $\lim_{x\to-\infty} x/x = 1$; $a/\infty = 0$; $\infty\pm a = \infty$; $n\infty = \infty$ where $n\neq 0$; $\infty\times\infty = \infty$ or $\infty^n=\infty$ where $n\neq 0$.
And avoid the indeterminate values $\infty/\infty$, $\infty-\infty$, $0\times\infty$, and $\infty^0$.

**exercise:** Find the limit of the function $\lim_{x\to\infty}\dfrac{x+3}{5x+4}$.
- $1/5$
- $5$
- $\infty$
- $0$

The answer is $1/5$.
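The worked limits above can be checked symbolically. Here is a minimal sketch using Python's sympy (my addition; the source's exercises are pen-and-paper):

```python
import sympy as sp

x = sp.symbols('x')

# lim_{x->oo} (3x^2 + 5x - 2) / (x^2 + x - 2): an oo/oo form that resolves to 3
print(sp.limit((3*x**2 + 5*x - 2) / (x**2 + x - 2), x, sp.oo))  # 3

# lim_{x->oo} (3x^2 - 5x - 2): an oo - oo form that resolves to oo
print(sp.limit(3*x**2 - 5*x - 2, x, sp.oo))  # oo

# the exercise: lim_{x->oo} (x + 3) / (5x + 4) = 1/5
print(sp.limit((x + 3) / (5*x + 4), x, sp.oo))  # 1/5
```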
Lovegrove Mathematicals

"Probabilities are likelinesses over singleton sets"

# Terminology

## 1. The Degree

- A coin is of degree 2;
- A die is of degree 6;
- A pack of cards is of degree 52;
- The days of the week are of degree 7.

The degree is the number of possibilities that something (tossing a coin; rolling a die; drawing a card) may take. To enable a coherent theory to be developed, the possibilities/classes are labelled 1, 2, ..., N, where N is the degree. The set {1,...,N} is denoted by X_N. This is the domain of definition for distributions and histograms of degree N.

## 2. Distributions

A distribution of degree N is a function f:

1. which is defined for i=1, ..., N, and
2. for which f(i)>0 for all i, and
3. for which f(1)+...+f(N)=1.

It is usual to represent the distribution f by the ordered N-tuple ( f(1), ..., f(N) ). The set of all distributions of degree N is denoted by S(N).

If f∈S(N) then f is injective if [i≠j] ⇒ [f(i)≠f(j)]. The set of non-injective elements of S(N) has measure zero and so may be safely ignored: the description of various algorithms as producing injective distributions is for the sake of technical precision only, and has no practical or theoretical consequences.

## 3. Histograms

An histogram of degree N is a function h:

1. which is defined for i=1,...,N, and
2. for which h(i)≥0 for all i.

The set of all histograms of degree N is denoted by H(N). The sample size of h is ω(h)= h(1)+...+h(N). When writing out the histogram h we normally just write out the values of the h(i) as an ordered N-tuple, for example (1.25, 2.13, 4.87, 8.92).

### Integrams

An integram of degree N is an histogram of degree N which takes only integer values. The most important integram is the zero integram 0, for which all values are zero. The set of all integrams of degree N is denoted by G(N).

We denote the integram g(i)=1, g(j)=0 for j≠i by ''i''_N (the quote marks are part of the notation). Because the degree is normally obvious from the context, this is more usually written as ''i''. For example, if the degree is 6 then "2"="2"_6=(0,1,0,0,0,0).

If g∈G(N), the Multinomial coefficient associated with g is given by

M(g) = ω(g)! / ( g(1)! g(2)! ⋯ g(N)! ).

(Because the denominator contains the term g(i)!, g(i) must be an integer. Since this is the case for all i, g does have to be an integram, not just an histogram.)

## 4. Relative Frequencies

Let h∈H(N), h≠0. Then we define RF(h) to be the relative frequencies of h, that is RF(h)(i)= h(i)/ω(h). An alternative notation for RF(h)(i) is RF(i|h), the relative frequency of i given h.

If we have a set of histograms of degree N, say {h1, ..., hK}, then:

- Provided at least one h is not 0, we define the Mean Relative Frequency of i to be ( h1(i)+...+hK(i) ) / ( ω(h1)+...+ω(hK) ). An alternative name for this is the Global Relative Frequency of i.
- Provided no h is 0, we define the Mean of the Relative Frequencies of i to be ( RF(i|h1)+...+RF(i|hK) ) / K.

## 5. Convex and concave sets

A set, P, is convex if, no matter which two points we select in P, the straight line segment joining them is wholly in P. A set which is not convex is called concave, but many authors prefer the term non-convex.

The core (some prefer the term convex hull) of a set is its smallest convex superset. It follows from the definition that a convex set is its own core.

Core(P) is important because the mean of any number (not necessarily finite) of elements of P always lies in Core(P). In particular, if P is convex then the mean lies in P, but if P is concave then that mean might not be in P.
This is especially important in some of the applied sciences, where it can be essential that the result of any analysis be describable in the same way as P. For example, R(N), the set of ranked distributions of degree N, is convex, so a mean of ranked distributions will also be a ranked distribution of the same degree. On the other hand, U(N), the set of unimodal distributions of degree N, is concave, so a mean of unimodal distributions might not be unimodal. When this is important there is the tendency to use best-fitting rather than best-estimating, since best-fitting forces a solution which has the required description even though it may not be the best estimate.

## 6. Symmetry

If P⊂S(N) then P is said to be (i,j)-symmetric if P contains f(i,j) whenever P contains f, where f(i,j) is that distribution which is obtained from f by interchanging f(i) and f(j). The histogram h is (i,j)-symmetric if h(i)=h(j).

P is symmetric if it is (i,j)-symmetric for all i,j∈X_N. h is symmetric if it is (i,j)-symmetric for all i,j∈X_N; that is, if it is a constant histogram.

## 7. Likeliness of an Integram

Let P be a non-empty subset of S(N), g∈G(N) and h∈H(N); then we define the Likeliness, over P, of g given h, denoted LP(g|h). h is called the given histogram, and g is the required integram. P is the underlying set. (Σ is the Daniell integral, which can be thought of as summation on a finite set but as the Riemann integral when that is required.)

Provided no confusion results, we

1. write LP(i|h) rather than LP("i"|h);
2. write LP(g) rather than LP(g|0).

The L-point is the point (distribution) in S(N) with co-ordinates ( LP(1|h), ..., LP(N|h) ). Associated with this is the function LP, which will often be used on this site to plot graphs. When doing this, we shall sometimes use the alternative notation Average(P) rather than LP.

## 8. Likeliness of a set of distributions

If P is an underlying set in (i.e. a non-empty subset of) S(N), V⊂S(N) and h∈H(N), then we define the Likeliness, over P, of V given h, denoted LP(V|h). When h=0, we write LP(V) rather than LP(V|0).

A fundamental difference between the likeliness of an integram and the likeliness of a set of distributions is that the former cannot be 0 but the latter can (if V and P do not intersect). Similarly, the likeliness of an integram cannot be 1 (except in the degenerate case N=1), but that of a set of distributions can (if P⊂V). If P is a singleton set, P={f}, then LP(V|h) can be only 0 (if f∉V) or 1 (if f∈V) and is equivalent to the characteristic function of V.

## 9. Probability of an integram given a distribution

If the underlying set is singleton, P={f}, then the expression for LP(g|h) takes on an especially simple and significant form, namely

L{f}(g|h) = M(g)f^g, where f^g = f(1)^g(1) × ⋯ × f(N)^g(N).

This is the everyday Multinomial Theorem form of the probability of g when the generating distribution is f. Accordingly, we may define Pr(g|f)=L{f}(g|h) and call this the probability of the integram g given the distribution f. This is a highly significant step, because it is defining probability in terms of likeliness, rather than the other way round. It follows from this that Pr("i"|f) = f(i).

It is important to note that, since the expression M(g)f^g makes no reference to h, Pr(g|f) is independent of h, that is of experimental/observational data. So probabilities are likelinesses over singleton sets, and are independent of the given histogram.
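To make the singleton-set case of section 9 concrete, here is a minimal Python sketch of M(g) and Pr(g|f) = M(g)f^g (the function names are mine, not the site's):

```python
from math import factorial, prod

def multinomial_coefficient(g):
    """M(g) = omega(g)! / (g(1)! ... g(N)!) for an integram g."""
    omega = sum(g)
    return factorial(omega) // prod(factorial(k) for k in g)

def prob_integram(g, f):
    """Pr(g|f) = M(g) * f(1)^g(1) * ... * f(N)^g(N), the multinomial form."""
    return multinomial_coefficient(g) * prod(fi**gi for fi, gi in zip(f, g))

# A fair die (degree 6) and the integram "2" = (0,1,0,0,0,0):
f = (1/6,) * 6
g2 = (0, 1, 0, 0, 0, 0)
print(prob_integram(g2, f))   # Pr("2"|f) = f(2) = 1/6, as stated above
```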
Integration of e^√x

Hello - I'm stuck at a point: $\int e^{\sqrt{x}}\,dx$. Using substitution $u=\sqrt{x}$, I get $dx = 2\sqrt{x}\,du$, and back in the original equation the solution is $2e^{\sqrt{x}}(\sqrt{x}-1)$. Question: as you can see, I've got the first two pieces; where does the $\sqrt{x}-1$ come from?

Answers (1)

• As I see it, the question is: use the substitution $u=\sqrt{x}$, so $du = \frac{1}{2\sqrt{x}}\,dx$. Solving for $dx$ and remembering $u=\sqrt{x}$, you get $2u\,du = dx$. Now you have all the elements to substitute; the new integral is $\int 2u\,e^u\,du$. Using integration by parts with $w=2u$, $dw=2\,du$, $dv=e^u\,du$, and $v=e^u$, this gives $2u e^u - 2e^u$. After back substitution, factor out a $2e^{\sqrt{x}}$ and you should be able to see where everything came from.
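A quick symbolic check of the worked answer (the sympy check is my addition, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Antiderivative of e^sqrt(x):
F = sp.integrate(sp.exp(sp.sqrt(x)), x)
print(F)  # 2*sqrt(x)*exp(sqrt(x)) - 2*exp(sqrt(x)), i.e. 2*exp(sqrt(x))*(sqrt(x) - 1)

# Differentiating F recovers the integrand, confirming the answer:
print(sp.simplify(sp.diff(F, x) - sp.exp(sp.sqrt(x))))  # 0
```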
# Number-Theory

## 0.1. gcd, co-primes

gcd is short for greatest common divisor. If a, b are co-primes, we denote this as $(a,b)=1$, which means $\gcd(a,b)=1$. We can use the Euclid algorithm to calculate the gcd of two numbers.

### 0.1.1. Bezout's identity

Let a and b be integers with greatest common divisor d. Then, there exist integers x and y such that ax + by = d. More generally, the integers of the form ax + by are exactly the multiples of d. We can use the extended Euclid algorithm to calculate x, y, and gcd(a,b).

## 0.2. primality_test

### 0.2.2. Miller-Rabin

Excerpted from wikipedia: Miller-Rabin primality test

Just like the Fermat and Solovay-Strassen tests, the Miller-Rabin test relies on an equality or set of equalities that hold true for prime values, then checks whether or not they hold for a number that we want to test for primality.

First, a lemma about square roots of unity in the finite field $\mathbb{Z}/p\mathbb{Z}$, where p is prime and p > 2. Certainly 1 and −1 always yield 1 when squared modulo p; call these trivial square roots of 1. There are no nontrivial square roots of 1 modulo p (a special case of the result that, in a field, a polynomial has no more zeroes than its degree). To show this, suppose that x is a square root of 1 modulo p. Then:

$$x^2 \equiv 1 \pmod p \quad\Rightarrow\quad (x-1)(x+1) \equiv 0 \pmod p.$$

In other words, prime p divides the product (x − 1)(x + 1). By Euclid's lemma it divides one of the factors x − 1 or x + 1, implying that x is congruent to either 1 or −1 modulo p.

Now, let n be prime, and odd, with n > 2. It follows that n − 1 is even and we can write it as $2^s \cdot d$, where s and d are positive integers and d is odd. For each a in $(\mathbb{Z}/n\mathbb{Z})^*$, either

$$a^d \equiv 1 \pmod n$$

or

$$a^{2^r d} \equiv -1 \pmod n \quad \text{for some } 0 \le r < s.$$

To show that one of these must be true, recall Fermat's little theorem, that for a prime number n:

$$a^{n-1} \equiv 1 \pmod n.$$

By the lemma above, if we keep taking square roots of $a^{n-1}$, we will get either 1 or −1. If we get −1 then the second equality holds and it is done. If we never get −1, then when we have taken out every power of 2, we are left with the first equality.

The Miller-Rabin primality test is based on the contrapositive of the above claim. That is, if we can find an a such that

$$a^d \not\equiv 1 \pmod n$$

and

$$a^{2^r d} \not\equiv -1 \pmod n \quad \text{for all } 0 \le r < s,$$

then n is not prime. We call a a witness for the compositeness of n (sometimes misleadingly called a strong witness, although it is a certain proof of this fact). Otherwise a is called a strong liar, and n is a strong probable prime to base a. The term "strong liar" refers to the case where n is composite but nevertheless the equations hold as they would for a prime.

Every odd composite n has many witnesses a; however, no simple way of generating such an a is known. The solution is to make the test probabilistic: we choose a non-zero a in $\mathbb{Z}/n\mathbb{Z}$ randomly, and check whether or not it is a witness for the compositeness of n. If n is composite, most of the choices for a will be witnesses, and the test will detect n as composite with high probability. There is, nevertheless, a small chance that we are unlucky and hit an a which is a strong liar for n. We may reduce the probability of such error by repeating the test for several independently chosen a.

For testing large numbers, it is common to choose random bases a, as, a priori, we don't know the distribution of witnesses and liars among the numbers 1, 2, …, n − 1. In particular, Arnault [4] gave a 397-digit composite number for which all bases a less than 307 are strong liars. As expected, this number was reported to be prime by the Maple isprime() function, which implemented the Miller-Rabin test by checking the specific bases 2, 3, 5, 7, and 11. However, selection of a few specific small bases can guarantee identification of composites for n less than some maximum determined by said bases. This maximum is generally quite large compared to the bases. As random bases lack such determinism for small n, specific bases are better in some circumstances.

Python implementation: see the sketch below.
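The referenced implementation did not survive this copy; the following is a minimal sketch of the test as described above, using random bases (the names `is_probable_prime` and `rounds` are mine, not from the source):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test with random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11):          # handle small primes and obvious composites
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue                     # a is not a witness this round
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break                    # hit -1: a is not a witness
        else:
            return False                 # a is a witness: n is definitely composite
    return True                          # n is a strong probable prime

print(is_probable_prime(2**61 - 1))  # True (a Mersenne prime)
print(is_probable_prime(561))        # False (Carmichael number, still caught)
```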
## 0.3. Factorization

### 0.3.1. Pollard's rho algorithm

Excerpted from wikipedia: Pollard's rho algorithm

Suppose we need to factorize a number $n=pq$, where $p$ is a non-trivial factor. A polynomial modulo $n$, $g(x) = (x^2 + c)\ mod\ n$, where c is a chosen number, e.g. 1, is used to generate a pseudo-random sequence: a starting value, say 2, is chosen, and the sequence continues as $x_1 = g(2)$, $x_2 = g(g(2))$, $x_3 = g(g(g(2)))$, and so on.

The sequence is related to another sequence, $\{x_k\ mod\ p\}$. Since $p$ is not known beforehand, this sequence cannot be explicitly computed in the algorithm. Yet in it lies the core idea of the algorithm.

Because the number of possible values for these sequences is finite, both the $\{x_k\}$ sequence, which is mod $n$, and the $\{x_k\ mod\ p\}$ sequence will eventually repeat, even though we do not know the latter. Assume that the sequences behave like random numbers. Due to the birthday paradox, the number of $x_k$ before a repetition occurs is expected to be $O(\sqrt{N})$, where $N$ is the number of possible values. So the sequence $\{x_k\ mod\ p\}$ will likely repeat much earlier than the sequence $\{x_k\}$. Once a sequence has a repeated value, it will cycle, because each value depends only on the one before it. This structure of eventual cycling gives rise to the name "rho algorithm", owing to the similarity to the shape of the Greek character ρ when the values $x_k\ mod\ p$ are represented as nodes in a directed graph.

The cycle is detected by Floyd's cycle-finding algorithm: two nodes $i,j$ are kept. In each step, one moves to the next node in the sequence and the other moves to the one after the next node. After that, it is checked whether $\gcd(x_i-x_j,n)\neq 1$. If it is not 1, then this implies that there is a repetition in the $\{x_k\ mod\ p\}$ sequence. This works because if $x_i\ mod\ p$ is the same as $x_j\ mod\ p$, the difference between $x_i$ and $x_j$ is necessarily a multiple of $p$. Although this always happens eventually, the resulting gcd is a divisor of $n$ other than 1. This may be $n$ itself, since the two sequences might repeat at the same time. In this (uncommon) case the algorithm fails, and can be repeated with a different parameter.

Python implementation: see the sketch below.
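Again the linked code is missing here; this is a minimal sketch of the algorithm as described, with Floyd cycle detection (`pollard_rho` is my name for it):

```python
import math

def pollard_rho(n, c=1):
    """Pollard's rho with Floyd cycle detection.
    Returns a non-trivial factor of n, or None on the (uncommon) failure case."""
    if n % 2 == 0:
        return 2
    g = lambda x: (x * x + c) % n   # the pseudo-random sequence generator
    slow = fast = 2                 # starting value, say 2
    d = 1
    while d == 1:
        slow = g(slow)              # tortoise: one step
        fast = g(g(fast))           # hare: two steps
        d = math.gcd(abs(slow - fast), n)
    return d if d != n else None    # d == n: retry with a different c

print(pollard_rho(8051))   # a non-trivial factor (8051 = 83 * 97)
print(pollard_rho(10403))  # a non-trivial factor (10403 = 101 * 103)
```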
## 0.4. Euler function

The Euler function, denoted $\phi(n)$, maps n to the number of positive integers that are smaller than n and co-prime to n. E.g. $\phi(3)=2$, since 1, 2 are co-primes of 3 and smaller than 3; $\phi(4)=2$ (1, 3).

The Euler function is multiplicative and has two properties as follows:

1. $\phi(p^k) = p^k-p^{k-1}$, where p is a prime
2. $\phi(mn) = \phi(m)\,\phi(n)$ where $(m,n)=1$

Thus, for every natural number n, we can evaluate $\phi(n)$ using the following method:

1. Factorize n: $n = \prod_{i=1}^{l} p_i^{k_i}$, where $p_i$ is a prime and $k_i, l > 0$.
2. Calculate $\phi(n)$ using the two properties.

And $\sigma(n)$ represents the sum of all factors of n. E.g. $\sigma(9) = 1+3+9 = 13$. A perfect number n is defined by $\sigma(n) = 2n$. The implementation of these two functions is sketched below, together with the congruence solver of the next section.

## 0.5. Modulo equation

The following code can solve a linear modulo equation (a congruence $ax \equiv b \pmod n$). More details and explanations will be supplied if I am not too busy. Note that I use `--` to represent $\equiv$ in the Python code.
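The original code blocks are likewise missing; below is a minimal sketch under the definitions above: `phi` and `sigma` work from the prime factorization, and `solve_linear_congruence` solves $ax \equiv b \pmod n$ via the extended Euclid algorithm of section 0.1.1 (all names are mine):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b), i.e. Bezout's identity."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def factorize(n):
    """Trial division: return {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def phi(n):
    """Euler's totient, via phi(p^k) = p^k - p^(k-1) and multiplicativity."""
    result = 1
    for p, k in factorize(n).items():
        result *= p**k - p**(k - 1)
    return result

def sigma(n):
    """Sum of divisors: sigma(p^k) = (p^(k+1) - 1) / (p - 1), multiplicative."""
    result = 1
    for p, k in factorize(n).items():
        result *= (p**(k + 1) - 1) // (p - 1)
    return result

def solve_linear_congruence(a, b, n):
    """All solutions x in [0, n) of a*x -- b (mod n); empty list if none exist."""
    g, x, _ = extended_gcd(a, n)
    if b % g != 0:
        return []
    x0 = (x * (b // g)) % n
    return sorted((x0 + i * (n // g)) % n for i in range(g))

print(phi(4), sigma(9))                  # 2 13
print(solve_linear_congruence(4, 2, 6))  # [2, 5]: both satisfy 4x -- 2 (mod 6)
```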
# Image augmentation library

This image augmentation library can be used to crop, flip, blur, sharpen, mix channels, and overlay images. The main idea is to use only the numpy library to perform these tasks. Augmented images can be used for machine learning projects.

# Getting it

To download augmentation_lib, either fork this GitHub repo or simply use PyPI via pip:

    $ pip install augmentation-lib

# Features

This library performs different types of augmentation.

## Crops

- `center_crop_px` takes an image, width and height in pixels, and returns a center-cropped image.
- `center_crop_percents` takes an image and a size in percents (0 - 1) and returns a center-cropped image.
- `crop_px` takes an image, a starting point for cropping, and a width and height in pixels, and returns a cropped image.
- `crop_percents` takes an image, a starting point (height_coordinate, width_coordinate) in px, and a size (height, width) in percents for cropping.
- `random_crop_px` takes a cropping size in pixels and randomly crops an image.
- `random_crop_percents` takes a cropping size in percents (0 - 1) and randomly crops an image.

## Flips

- `flip_horizontal` takes an image and flips it horizontally.
- `flip_vertical` takes an image and flips it vertically.
- `zero_pad` takes an image and pads it with zeros by a specified number of pixels.

## Convolution

- `conv_one_step` performs convolution for a specified slice.
- `convolution_one_layer` performs convolution of one image layer.
- `full_convolution` performs image convolution with a specified kernel.

## Dropouts

- `dropout_random` performs random dropout of image pixels.
- `dropout` performs dropout of image pixels with a specified intensity in percents (0 - 1).

## Other augmentation

- `shuffle` shuffles image layers into a specified order.
- `jitter` performs color jitter of one specified or random image layer: one of the color channels of the image is modified by adding or subtracting a random, bounded value.
- `opacity` makes an image transparent.
- `opacity_object` makes an image background transparent; works better with object images.
- `overlay2_images` overlays 2 images with applied transparency.
- `resize_np` resizes an image using numpy and nearest-neighbor interpolation.
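A minimal usage sketch based on the descriptions above; the import name, exact signatures, and argument order are assumptions inferred from the feature list, so check the repo before relying on them:

```python
import numpy as np
import augmentation_lib as aug  # assumed import name for the PyPI package

# A dummy 100x100 RGB "image" as a numpy array, since the library is numpy-only.
img = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)

flipped = aug.flip_horizontal(img)         # mirror left-right
cropped = aug.center_crop_px(img, 64, 64)  # 64x64 center crop (width/height order assumed)
dropped = aug.dropout(img, 0.1)            # drop pixels at 10% intensity (0 - 1 scale)
```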
# Why would phosphorus pentoxide be called 'phosphorus(V) oxide'?

There's a question in the chemistry textbook I use with my GCSE class which asks pupils to identify which of various compounds feature ionic bonding. One of the compounds is phosphorus(V) oxide, which I gather is an accepted name of phosphorus pentoxide (or tetraphosphorus decoxide, if you prefer). But why?

One of my pupils reasoned that it must have ionic bonding, because I'd just explained to them how Roman numerals in brackets indicate the number of electrons lost. This is evidently an exception, but none of the guides to chemical naming I've found explain why!

I figure the explanation must lie in subtleties of the oxidation state that I haven't yet got my head round - Wikipedia's page on oxidation state lists phosphorus pentoxide as having oxidation state +5, and phosphorus trioxide as +3, but I'm not entirely following why. Any clarification much appreciated!

• $\ce{P4O10}$: assign oxidation states. Result: $\ce{O^{-II}}$ and $\ce{P^{V}}$. Anything unclear? – Jan Sep 30 '15 at 18:01
• "Roman numerals in brackets indicate the number of electrons lost" - They indicate the oxidation state. This only truly corresponds to the number of electrons lost when one talks about pure ionic bonding. Phosphorus in $\ce{P4O10}$ is in oxidation state +5 but it does not mean it exists in the form of $\ce{P^5+}$ ions. It means that, if you cleaved all P-O bonds heterolytically and assigned all the electrons to the more electronegative element, oxygen, then the phosphorus would exist as $\ce{P^5+}$. Until you do that, P is happily sharing electrons with O in a covalent bond. – orthocresol Sep 30 '15 at 18:05
• This previous question (and answer) about the assignment of oxidation states may help clarify: chemistry.stackexchange.com/q/10944/16683 – orthocresol Sep 30 '15 at 18:11
• Thanks @orthocresol, that helps a lot. Having never studied (or taught) oxidation state in any detail, I've always been a little hazy on how it works in covalent compounds, but it's starting to make sense. – Oolong Sep 30 '15 at 20:39
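For anyone following along, the arithmetic behind the +5 and +3 (this worked line is mine, not from the thread): oxygen is assigned −2, and the oxidation states in a neutral compound must sum to zero, so for phosphorus pentoxide and phosphorus trioxide respectively:

$$\ce{P4O10}: \quad 4x + 10(-2) = 0 \;\Rightarrow\; x = +5 \qquad\qquad \ce{P4O6}: \quad 4x + 6(-2) = 0 \;\Rightarrow\; x = +3$$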
# Reactions of metals and non-metals with oxygen

A worksheet with objectives, key terms and activities can be used to support students through independent study of metal reactions with oxygen: metals, their compounds, and the gases evolved during reactions of the carbonates and sulphide ores with dilute acids. Observing the reaction with oxygen can also be used to compare different metals and their reactivity.

## Metals and oxygen

On the whole, metals burn in oxygen to form a simple metal oxide. The general equation for this reaction is:

metal + oxygen → metal oxide

The formation of a metal oxide is a redox reaction: the metal undergoes oxidation to form positive ions. Oxidation reactions are exothermic, which means they release energy. Metal oxides are basic in nature; they turn red litmus blue.

Two examples of combustion reactions are:

- Iron reacts with oxygen to form iron oxide: 4 Fe + 3 O2 → 2 Fe2O3
- Magnesium reacts with oxygen to form magnesium oxide: 2 Mg + O2 → 2 MgO

The reaction of magnesium with oxygen is commonly performed in the school lab and produces a very bright, white light with lots of smoke: magnesium burns in air with a brilliant white flame to form magnesium oxide.

## The reactivity series

The reactivity series lists all the elements in order of reactivity from the most to the least reactive. The more reactive a metal is, the more vigorous its reactions are and the more easily it loses electrons in reactions to form positive ions (cations). The metals at the top of the reactivity series are powerful reducing agents since they are easily oxidized. The reactivity of the metal determines which reactions the metal participates in, and the series allows us to predict how metals will react with substances such as acids and oxygen:

- Group 1 metals are very reactive and must be stored out of contact with air to prevent oxidation. Sodium and potassium are stored under kerosene oil to prevent their reaction with the oxygen, moisture and carbon dioxide of air; they are so reactive that they catch fire and start burning when kept open in the air: 4 Na (s) + O2 (g) → 2 Na2O (s)
- Alkaline earth metals also react with oxygen, though not as rapidly as Group 1 metals; these reactions also require heating. Calcium burns in air with a red flame to form calcium oxide.
- Moderately reactive metals: zinc burns in air only when strongly heated, forming zinc oxide (2 Zn + O2 → 2 ZnO). Iron does not burn in air, but iron filings sprinkled in a flame burn vigorously.
- The transition metals are less reactive than the metals in Groups I and II, but their reactions are important to us.
- Unreactive metals (like gold) will not react with oxygen even when heated.

## Group 2 metals and air

The elements of Group 2 are beryllium, magnesium, calcium, strontium, barium, and radioactive radium. Their reactions with air rather than pure oxygen are complicated by the fact that they all react with nitrogen to produce nitrides, so in each case you will get a mixture of the metal oxide and the metal nitride. The general equation for the group is:

$3X_{(s)} + N_{2(g)} \rightarrow X_3N_{2(s)}$

## Non-metals and oxygen

The reactions of carbon and sulfur with oxygen are examples of non-metals reacting with oxygen. Non-metals react with oxygen to form non-metal oxides, which are acidic or neutral. Non-metal oxides are acidic in nature; they turn blue litmus red, and the acidic oxides of non-metals dissolve in water to form acids.

- When sulphur burns in air, it combines with the oxygen of the air to form sulphur dioxide, an acidic oxide: S (s) + O2 (g) → SO2 (g). Sulphur dioxide dissolves in water to form a sulphurous acid solution. (Sulphur itself does not react with water.)
- Carbon burns to form carbon dioxide: C (s) + O2 (g) → CO2 (g). Carbon dioxide dissolves in water to form carbonic acid: CO2 (g) + H2O (l) → H2CO3 (aq)

Oxidation is not limited to burning: when the inside of fruit is exposed to the air, substances within the fruit react with oxygen, a process called oxidation.

## Teaching notes

We show how alkali metals react in air and how they burn in pure oxygen. We suggest that your learners draw up a blank table before watching the lesson; after they have seen each experiment, you could pause the video to give them a chance to record their observations. If they are struggling, you may want to let them watch Lesson 1 again. Learners may also be encouraged to tabulate the various reactions of metals in general form and become familiar with all the reactions.
States of Matter and the Kinetic Molecular Theory, Instantaneous Speed, Velocity & Equations of Motion, Newton's Laws and Applications -1st, 2nd & 3rd laws, Newton's Laws and Applications - Universal Gravitation, Atomic Combinations - Molecular Structure, Electricity & Magnetism - Electrostatics, Electricity & Magnetism - Electromagnetism, Electricity & Magnetism - Electric Circuits, Chemical Change - Energy and Chemical Change, Chemical Change - Types of Reactions (Acids & Bases), Chemical Change - Types of Reactions (Redox Reactions), Chemical Systems - Exploiting the Lithosphere, Optical Phenomena and Properties of Materials, Reactions of Alkali Metal-Oxides with Water, Reactions of Alkaline Earth Metals with Oxygen, Reactions of Group II Metal-Oxides with Water. Metals are defined as those elements which have characteristic chemical and physical properties. They may be encouraged to tabulate various reactions of metals in general form and be familiar with all the reactions. C (s) + o₂ - co₂ Carbon oxygon carbon dioxide CO2 (g) + H2O (l) – H2CO3 (aq) carbon dioxide water carbonic acid 13. Reaction between Metals and Non-metals, Reaction of metals with oxygen, Reaction of metals with sulfur, A series of free Science Lessons for 7th Grade and 8th Grade, KS3 and Checkpoint, GCSE and IGCSE Science, examples Grade 9 - Reactions of Non-Metals with Oxygen. The reactions of the Group 2 metals with air rather than oxygen is complicated by the fact that they all react with nitrogen to produce nitrides. If they are struggling, you may want to … Details. Reaction of metals with oxygen Look at how magnesium reacts with oxygen: /**/ The use of a gas jar full of oxygen can be used to combust other metals. by L S. Loading... L's other lessons. Lesson 2 The familiar types of metals are (iron, aluminium, copper, lead, zinc, silver, gold, etc.) Chapter 9: Reactions of metals with oxygen; 9.1 The reaction of iron with oxygen; 9.2 The reaction of magnesium with oxygen; 9.3 The general reaction of metals with oxygen; 9.4 The formation of rust; 9.5 Ways to prevent rust; 9.6 Summary These reactions are called combustion reactions. Dover Road and Pretoria Avenue, Randburg, South Africa. It explains why it is difficult to observe many tidy patterns. Your learners will enjoy watching the experiments in this lesson. Doc Brown's GCSE/IGCSE Science-Chemistry Revision Questions Chemistry worksheet on the Reactivity Series of Metals Reactions with oxygen/air, water, acids and displacement and corrosion chemistry questions Q ANSWERS! Of contact with air to prevent oxidation our website allows us to predict how metals will with... They are easily oxidized reactions with water different metals and nonmetals to red compare different metals and reactivity. If you 're behind a web filter, please make sure that the domains *.kastatic.org and * are... Is subject to our terms and activities to be used with KS4 will a... When sprinkled in the flame burns vigorously release energy their reactivity with when... Water Sulphur–Non-metal—does not react with oxygen agents since they are also naturally present in the rocks washed by water. Neutral oxides to our terms and Conditions its content is subject to our terms and.... Stored out of contact with air to prevent oxidation 's find out more about the chemical of. Of metal oxide ( a ) metal undergoes oxidation to form acids independent study of metal and acid the challenge... And be familiar with all the reactions how alkali metals react in air with a red to. 
Grade 9: reactions of metals and non-metals with oxygen

This lesson, with a worksheet of objectives, key terms and activities, supports students through independent study of how metals and non-metals react with oxygen, water and acid. Metals and non-metals have characteristic chemical and physical properties, and our survival would be tough without either: we can't survive without non-metals like oxygen and water, and metals are part of our everyday life. Metals are also naturally present in the rocks washed by surface water and groundwater, and in atmospheric dust.

Metals react with oxygen to produce oxides as their products. The general equation for each reaction is: metal + oxygen → metal oxide. This is a redox reaction in which the metal undergoes oxidation. Combustion reactions are an example of oxidation reactions; they are exothermic, which means they release energy. Metal oxides are basic in nature: they turn red litmus blue. For example, magnesium burns in oxygen to form magnesium oxide. Non-metals react with oxygen to form acidic oxides or neutral oxides; non-metal oxides dissolve in water to form acids, so they are acidic in nature and turn blue litmus red. The combustion of carbon and of sulfur in oxygen are examples of reactions of non-metals with oxygen. Sulphur, a non-metal, does not react with water.

The reactivity of the metal determines which reactions the metal participates in, and a reactivity series allows us to predict how different metals will react with water, acid and oxygen, and to compare them. On the whole, the metals at the top of the series are very reactive and easily oxidized, and must be stored out of contact with air to prevent oxidation. Group 2 metals also react with oxygen, though not as rapidly as group 1 metals, and these reactions also require heating. Bulk iron does not burn in air, but iron filings sprinkled into a flame burn vigorously; because familiar metals behave so differently, it is difficult to observe many tidy patterns.

Learners should be encouraged to tabulate the various reactions of metals and non-metals with water, acid and oxygen, and to write down the formulae of the compounds and gases evolved during the reactions. You could have your learners draw up a blank table before watching the lesson, and pause the video in each case to give them a chance to record their observations and name the product formed. By this stage of the series your learners should be able to independently write balanced chemical equations before being given the answer; if they are struggling, you may want to let them watch lesson 1 again. Two such equations are shown below.
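For instance, two balanced equations implied by the lesson (magnesium burning in oxygen, and the combustion of carbon; both standard textbook reactions):

$$2\,\mathrm{Mg} + \mathrm{O_2} \rightarrow 2\,\mathrm{MgO} \qquad\qquad \mathrm{C} + \mathrm{O_2} \rightarrow \mathrm{CO_2}$$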
# Tag Info 11 Not sure if you find the COMSOL Model Wizard somewhere else, maybe other commercial Multi-physics software but not in the open-source community. I had the same question a couple of years ago and I listed all Finite-element, Multi-physics frameworks. As you may know there are many of them. The one that I found really useful and close, at least in the way that ... 7 The idea of "ordering the nodes" in a finite element mesh to improve the computational time of the sparse solver originated in the large structural analysis FE codes of the 70's. Those codes typically used banded or variable-band storage schemes for the sparse matrices so reducing the bandwidth was the main criterion. That is the origin of the old Cuthill-... 6 Thermal stresses are self stresses that arise in two main cases. If one imposes displacement continuity at the interface between two materials with different thermal expansion subjected to a uniform temperature change; if a homogeneous material is subjected to a non-uniform temperature change. (Here with uniform I mean constant with respect to space, i.e.... 6 You're trying to have your cake and eat it too. This does not work. As a general rule, for problems with features on different length scales, you need meshes that are fine in at least some parts of the mesh. This results in many cells, and this results in long computations, small time steps, and many linear iterations. All of these implications are rather ... 5 With conforming triangular meshes, it will be difficult to make an isotropic mesh which adapts to multiple dramatically different length scales in such a short space without introducing extraneous triangles, some of which may have very large/small angles. I'm not very familiar with them so take this with a grain of salt, but you may have better luck using ... 4 The biggest problem in answering this question is that it's not as straightforward as you might think to see the physics in the equations until you see them fully developed, and even then you're doing your best to assign a meaning. As a graduate student in mechanical engineering, this was hard for me because I was used to using my physical intuition to ... 4 You might have to reassemble if your problem is non-linear and your method at a future step incorporates the solution in the formation of the matrix. If you are doing Picard iteration rather than Newton-Raphson, then you should only have to reform the right-hand-side vector. I don't know enough about FEniCS and COMSOL to say what they do, but I suspect, for ... 4 You seem to be confused about which equation to solve. You have two: (i) the flow equations, (ii) the equations for your particle property $D$. The finite element method is suitable for part 1 of this. Or, if you really just have a pipe with laminar flow, then you actually know what the flow field is (namely, Poiseuille flow) and you don't need to solve ... 4 You need to know which equations you need to solve, including initial conditions. IMO, a disadvantage of click&result software is that it's not transparent about what you are actually solving. Why don't you try solving the equations using an ODE solver in Python (using SciPy) and visualize using Matplotlib? At least you will have exact control over what you are ...
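To make that last suggestion concrete, here is a minimal sketch of integrating a toy ODE system with SciPy and plotting it with Matplotlib; the damped-oscillator right-hand side is a made-up stand-in for the asker's actual model equations:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.integrate import solve_ivp

    def rhs(t, y):
        # Toy system: a damped harmonic oscillator, y = [position, velocity].
        return [y[1], -0.5 * y[1] - 4.0 * y[0]]

    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], dense_output=True)
    t = np.linspace(0.0, 20.0, 400)
    plt.plot(t, sol.sol(t)[0])
    plt.xlabel("t")
    plt.ylabel("y(t)")
    plt.show()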
3 This "model" is the incompressible (constant density) Navier Stokes problem, the second equation being the mass balance: $$\frac{\partial\rho }{\partial t}+\nabla\cdot(\rho v)=0$$ I have worked in the past with Comsol, and I believe that the Navier Stokes weak forms are readily implemented in the CFD module as states the Comsol modeling manual in this LINK.... 3 Given a PDE choosing the correct numerical solving strategy requires some knowledge/expertise. (Computational Science is indeed a "Science" and one has to learn it.) In application specific software (e.g commercial FEM solvers for engineering problems like crash problems or metal forming) this knowledge is somehow "crystalized" and embedded, so that most ... 3 FEM is for solving boundary value problems. What you have here is an initial value problem (assuming $\alpha$ and $U_z$ are constant or functions of $z$), the same as solving a time-dependent ODE, except here $z$ is your time-like coordinate. The appropriate way to solve this kind of problem is with time-integration methods, e.g. Runge-Kutta schemes. In FEM ... 3 It is quite straightforward to demonstrate this for implicit Euler (aka backward Euler) on a scalar example. Consider the initial value problem $\dot{y}(t) = i \alpha y(t), \ y(0) = 1$ with solution $y(t) = \exp(i \alpha t) = \cos(\alpha t) + i \sin(\alpha t)$. The solution is a harmonic oscillation with amplitude 1 and this amplitude does not change ... 3 You might also want to have a look at: Elmer https://www.csc.fi/web/elmer Kratos http://www.cimne.com/kratos/ OpenFoam http://www.openfoam.com/ CaeLinux http://caelinux.com/CMS/ 3 I don't see anything unusual here. You have a material property that is defined in terms of other material properties. In your particular case, the real and imaginary parts of the refractive index are functions defined from n_interp and k_interp, which usually denote the interpolated values (either supplied by COMSOL or user-supplied). In many simulations, ... 3 Your equation is not well posed: For general functions $f_1,f_2$, there is no function $\Phi$ so that the equation can be satisfied. For example, if you had $f_1=f_2=x$, then you are looking for a $\Phi(x,y)$ so that $$\Phi_x = x$$ and $$\Phi_y = y.$$ But the first of these equations imply that $$\Phi = x^2+by+c$$ whereas the second implies that $... 2 The correct boundary condition is in fact $$(D\nabla C - vC) \cdot n = g.$$ That is, it is not the vector-valued quantity in parentheses that you can describe, but only the normal component, where$n$is the normal vector to the boundary. The term in the parentheses is called the flux. It describes the amount of material (solute) that moves around, and ... 2 They are saying the same thing, at a fundamental level - it's just that the implementation is a bit different. The first equation you noted is valid throughout the system because it solves for the intrinsic values throughout the system, such as the local current density and local electric potential. The second equation you mentioned is a lumped-value ... 2 You question is unclear. However, rotations in the context of linear mechanics are often confusing. Therefore a response is probably needed. 1) I will assume you have a deformable body and a rigid body in your simulation, i.e., there are two objects that may interact. 2) I will assume the deformable body is subject to a torque while the rigid body has a ... 2 Finite Element Analysis is a mathematical tool very extended among engineers. 
However, after more than a year researching the topic of computer simulation, where FEA plays such an important role, I couldn't yet find a satisfactory explanation of how they really really work... The main background of FEM is that of structural engineering in the 60s. ... 2 Mass is always conserved. That's a known fact. To show this holds true in the convection-diffusion equation, I need to introduce the material derivative to you. The material derivative of a scalar quantity $C(\mathbf{r},t)$ is defined as: $$\frac{D C(\mathbf{r},t)}{D t} = \frac{\partial C}{\partial t} + \mathbf{v} \cdot \nabla C$$ where $\mathbf{v}$ is the ... 1 Based on the comment by @TylerOlsen, I've applied the Boost implementation of the reverse Cuthill-McKee ordering on the same matrix from the original question and here is the result: The structure looks very similar, but the bandwidth is significantly smaller compared to the original matrix with COMSOL ordering. Also changing the starting vertex in the ... 1 Due to Euler's rotation theorem a rigid body in 3D space has 3 rotational degrees of freedom (plus 3 displacement degrees of freedom, for a total of 6 degrees of freedom). This typically means that the rotation matrix (which has 9 components) is not the primary unknown but, when needed, is computed from the rigid body's rotational DOFs. Several ... 1 If you are familiar with how standard FEM analysis works, the idea of modal analysis is straightforward. In standard FEM analysis, you transfer the time-dependent elastic wave equation (ignoring damping for now) $$\ddot{r}(t) + A r(t) = f(t) \tag{1}\label{1},$$ which is a mathematical model describing the behavior of the displacement $r(t)$ using the ... 1 Under the Results node, there are Export and Reports options. Right-clicking on Export will give you some options for exporting your data. 1 The finite element method, i.e., what COMSOL uses internally, does not solve the problem exactly. It only (i) provides an approximation to the solution, and (ii) does so in a way for which we can show that the approximation converges to the exact solution of the problem as the mesh becomes finer and finer. This typically implies that on meshes that are too ... 1 Actually it is possible to solve that set of equations in COMSOL. You can simply select a 1D domain problem, add physics and in particular use the general ODE/DAE interface. You can add multiple ODE/DAE equations and, for each of them, define all the conditions stated in your picture. 1 Resolving small features in FEM will always be costly, there is no getting away from that fact. Your problem seems to be framed in terms of computational burden. In my own case, I was looking at electric field problems in anatomical structures, so had a similar set of problems to your own. The question is usually how detailed a mesh is "good enough" for the ... 1 As far as I remember it is common to use splitting methods like Chorin's Method to ensure divergence free fields in fluid dynamics. Have a look into Demo 6 of FEniCS on Incompressible Navier Stokes. Alternatively it is possible to implement certain Element Types like the H(curl)-Nédélec-Element that is often able to compute divergence-free fields. Although ... Only top voted, non community-wiki answers of a minimum length are eligible
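As a concrete illustration of the node reordering discussed above, SciPy ships a reverse Cuthill-McKee routine; the small matrix below is a made-up stand-in for an FEM sparsity pattern, not the one from the original question:

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    # Symmetric sparsity pattern standing in for a stiffness matrix.
    A = csr_matrix(np.array([
        [4., 0., 0., 1., 0.],
        [0., 4., 1., 0., 1.],
        [0., 1., 4., 0., 0.],
        [1., 0., 0., 4., 1.],
        [0., 1., 0., 1., 4.],
    ]))

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    A_rcm = A[perm][:, perm]  # apply the ordering to both rows and columns

    def bandwidth(m):
        # Maximum distance of a nonzero entry from the diagonal.
        coo = m.tocoo()
        return int(np.max(np.abs(coo.row - coo.col)))

    print(bandwidth(A), bandwidth(A_rcm))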
vCalc Reviewed "Quadratic Solutions"

Inputs: the coefficients $a$ and $b$ and the constant $c$ (all decimal). Category: Mathematics → Algebra.

This is the equation for the solution of a second-order polynomial of the form $ax^2+bx+c=0$, where the solution produces two roots:

$$x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$

# Notes

Suppose $ax^2+bx+c=0$ and $a\ne 0$. First divide by $a$ to get:

$$x^2+\frac{b}{a}x+\frac{c}{a}=0$$

Then complete the square and obtain:

$$x^2+\frac{b}{a}x+\left(\frac{b}{2a}\right)^2-\left(\frac{b}{2a}\right)^2+\frac{c}{a}=0$$

The first three terms factor:

$$\left(x+\frac{b}{2a}\right)^2=\frac{b^2}{4a^2}-\frac{c}{a}$$

Take square roots on both sides to get:

$$x+\frac{b}{2a}=\pm\sqrt{\frac{b^2}{4a^2}-\frac{c}{a}}$$

Finally move $\frac{b}{2a}$ to the right and simplify to get the two solutions:

$$x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$
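A minimal sketch of the formula in code (assuming real coefficients and real roots; Python's cmath would be needed for the complex case):

    import math

    def quadratic_roots(a, b, c):
        """Return the two real roots of a*x**2 + b*x + c = 0, with a != 0."""
        if a == 0:
            raise ValueError("not a quadratic: a must be nonzero")
        disc = b * b - 4 * a * c
        if disc < 0:
            raise ValueError("complex roots; use cmath for those")
        root = math.sqrt(disc)
        return (-b + root) / (2 * a), (-b - root) / (2 * a)

    print(quadratic_roots(1, -3, 2))  # (2.0, 1.0)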
# Find the volume of the solid in the first quadrant bounded by the coordinate planes, the cylinder $x^{2} + y^{2}=4$, and the plane $z+y=3$

Find the volume of the solid in the first quadrant bounded by the coordinate planes, the cylinder $x^{2} + y^{2}=4$, and the plane $z+y=3$. If we draw the graph, then the integral that should be calculated is $$\int_{0}^{2} \int_{0}^{\sqrt{4-x^{2}}} (3-y) \: dy dx$$ with $3-y = z = f(x,y)$. The boundary $\sqrt{4-x^{2}}$ comes from the cylinder. Is this correct? Thanks.
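For reference, the setup can be checked by evaluating the integral in polar coordinates (a standard computation, not part of the original question):

$$\int_{0}^{2}\int_{0}^{\sqrt{4-x^{2}}}(3-y)\,dy\,dx=\int_{0}^{\pi/2}\int_{0}^{2}(3-r\sin\theta)\,r\,dr\,d\theta=3\cdot\frac{\pi\cdot 2^{2}}{4}-\int_{0}^{\pi/2}\sin\theta\,d\theta\int_{0}^{2}r^{2}\,dr=3\pi-\frac{8}{3}.$$

So yes, the setup is correct, and the volume is $3\pi-\tfrac{8}{3}$.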
# 2.2 Vectors, scalars, and coordinate systems

• Define and distinguish between scalar and vector quantities.
• Assign a coordinate system for a scenario involving one-dimensional motion.

What is the difference between distance and displacement? Whereas displacement is defined by both direction and magnitude, distance is defined only by magnitude. Displacement is an example of a vector quantity. Distance is an example of a scalar quantity. A vector is any quantity with both magnitude and direction. Other examples of vectors include a velocity of 90 km/h east and a force of 500 newtons straight down. The direction of a vector in one-dimensional motion is given simply by a plus $(+)$ or minus $(-)$ sign. Vectors are represented graphically by arrows. An arrow used to represent a vector has a length proportional to the vector’s magnitude (e.g., the larger the magnitude, the longer the length of the vector) and points in the same direction as the vector.

Some physical quantities, like distance, either have no direction or none is specified. A scalar is any quantity that has a magnitude, but no direction. For example, a $\text{20ºC}$ temperature, the 250 kilocalories (250 Calories) of energy in a candy bar, a 90 km/h speed limit, a person’s 1.8 m height, and a distance of 2.0 m are all scalars—quantities with no specified direction. Note, however, that a scalar can be negative, such as a $-\text{20ºC}$ temperature. In this case, the minus sign indicates a point on a scale rather than a direction. Scalars are never represented by arrows.

## Coordinate systems for one-dimensional motion

In order to describe the direction of a vector quantity, you must designate a coordinate system within the reference frame. For one-dimensional motion, this is a simple coordinate system consisting of a one-dimensional coordinate line. In general, when describing horizontal motion, motion to the right is usually considered positive, and motion to the left is considered negative. With vertical motion, motion up is usually positive and motion down is negative. In some cases, however, as with the jet in [link], it can be more convenient to switch the positive and negative directions. For example, if you are analyzing the motion of falling objects, it can be useful to define downwards as the positive direction. If people in a race are running to the left, it is useful to define left as the positive direction. It does not matter as long as the system is clear and consistent. Once you assign a positive direction and start solving a problem, you cannot change it.

A person’s speed can stay the same as he or she rounds a corner and changes direction. Given this information, is speed a scalar or a vector quantity? Explain.

Speed is a scalar quantity. It does not change at all with direction changes; therefore, it has magnitude only. If it were a vector quantity, it would change as direction changes (even if its magnitude remained constant).

## Section summary

• A vector is any quantity that has magnitude and direction.
• A scalar is any quantity that has magnitude but no direction.
• Displacement and velocity are vectors, whereas distance and speed are scalars.
• In one-dimensional motion, direction is specified by a plus or minus sign to signify left or right, up or down, and the like.

## Conceptual questions

A student writes, “A bird that is diving for prey has a speed of $-10\ \text{m/s}$.” What is wrong with the student’s statement?
What has the student actually described? Explain. What is the speed of the bird in [link]?

Acceleration is the change in velocity over time. Given this information, is acceleration a vector or a scalar quantity? Explain.

A weather forecast states that the temperature is predicted to be $-5ºC$ the following day. Is this temperature a vector or a scalar quantity? Explain.
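To make the sign convention concrete, here is a minimal sketch of the distinction in code; the positions are hypothetical sample data, not from the text:

    # Displacement keeps its sign (a vector quantity), while distance
    # accumulates magnitudes (a scalar quantity). Rightward is positive.
    positions = [0.0, 3.0, 5.0, 2.0]  # metres along a 1-D axis

    displacement = positions[-1] - positions[0]  # signed: +2.0 m
    distance = sum(abs(b - a) for a, b in zip(positions, positions[1:]))  # 8.0 m
    print(displacement, distance)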
Source: OpenStax, College physics. OpenStax CNX. Jul 27, 2015. Download for free at http://legacy.cnx.org/content/col11406/1.9
# How can I derive the variance of the half-normal distribution? Let $$X$$ follow a half-normal distribution with pdf $$f(x|\sigma)=\frac{\sqrt{2}}{\sigma\sqrt{\pi}}exp(-\frac{x^2}{2\sigma^2}), x >0$$. How can I derive $$Var(X)$$? my work: I know that $$Var(X)=E(X^2)-(EX)^2$$, but I cannot solve for $$E(X^2)$$: $$E(X^2)=\int^\infty_0\frac{\sqrt{2}x^2}{\sigma\sqrt{\pi}}exp(-\frac{x^2}{2\sigma^2})dx$$. How can I integrate this? • Hint: write the integral for $E[X^2]$ as a integral over the entire real line instead of an integral over the positive real line. Still stumped? Make a change of variables $y=-x$ in the integral for $E[X^2]$ to see if inspiration strikes. Apr 18, 2020 at 0:27 • @DilipSarwate I'm afraid I'm still stumped even using that change in variable. Apr 18, 2020 at 3:07 • As others have mentioned, it is as simple as $E[X^2]=E[Y^2]$ where $Y\sim N(0,\sigma^2)$. A bit more work is required for $E[X]=E[|Y|]$ but chances are this has been done here before. Apr 18, 2020 at 6:42 • Change of variable $y=x^2$ and identify a Gamma integral. Apr 18, 2020 at 7:19 • @Xi'an I like your suggested change in variable. Thanks! Apr 18, 2020 at 19:48 Hint: if $$X \sim \text{HalfNormal}(\sigma^2)$$ and $$Y \sim \text{Normal}(0,\sigma^2)$$, then for any symmetric and integrable $$f$$ $$E[f(X)] = E[f(|Y|)] = E[f(Y)|Y \ge 0] = 2 E[f(Y) 1(Y \ge 0)].$$ I always get tripped up a little on the difference between conditioning and multiplying by indicators. • Ah, I see. That's clever. I would've messed up on including the 2 Apr 18, 2020 at 3:08 See Wikipedia on half normal. Then consider the simulation in R below. You already have some of the key results. set.seed(2020) # for reproducibility z = rnorm(10^7) # standard normal mean(z); mean(z^2) [1] -2.9034e-06 # aprx E(Z) = 0 [1] 0.9996958 # aprx E(Z^2) = 1 x = abs(z) var(x); mean(x^2); mean(x) [1] 0.3633301 # aprx Var(X)= 1-2/pi = 0.363 [1] 0.9996958 # aprx E(x^2)=E(Z^2)=1 [1] 0.7977253 # aprx E(x)=sqrt(2/pi) = 0.798 sqrt(2/pi) [1] 0.7978846 1- 2/pi [1] 0.3633802 mean(x^2) - mean(x)^2 [1] 0.3633301 # aprx Var(X) again In Comment, @Dilip was trying to get you to see that $$E(Z^2) = E(X^2).$$ • Wonderful- thank you for providing the code to simulate, too! Apr 18, 2020 at 3:07
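Putting the hints together (symmetry for $E(X^2)$, and a direct substitution for $E(X)$), the moments work out as follows; this is a standard computation filling in the steps the answers above sketch:

$$E(X^2)=E(Y^2)=\sigma^2 \quad\text{for } Y\sim N(0,\sigma^2),$$

$$E(X)=\frac{\sqrt{2}}{\sigma\sqrt{\pi}}\int_0^\infty x\exp\Bigl(-\frac{x^2}{2\sigma^2}\Bigr)dx=\frac{\sqrt{2}}{\sigma\sqrt{\pi}}\Bigl[-\sigma^2 \exp\Bigl(-\frac{x^2}{2\sigma^2}\Bigr)\Bigr]_0^\infty=\sigma\sqrt{\frac{2}{\pi}},$$

$$Var(X)=E(X^2)-(EX)^2=\sigma^2\Bigl(1-\frac{2}{\pi}\Bigr),$$

matching the simulated values in the answer above.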
# 1. The vertical distance between points A and B represents a tax in the market

###### Question:

The vertical distance between points A and B represents a tax in the market.

1. What is the total tax?
2. What are the buyers’ and sellers’ per-unit tax burdens?
3. How much deadweight loss (DWL) results from this tax?
## Cubic permutations

### Problem 62

Published on 30 January 2004 at 06:00 pm [Server Time]

The cube, 41063625 ($345^3$), can be permuted to produce two other cubes: 56623104 ($384^3$) and 66430125 ($405^3$). In fact, 41063625 is the smallest cube which has exactly three permutations of its digits which are also cube.

Find the smallest cube for which exactly five permutations of its digits are cube.
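One possible approach, sketched below: group cubes of equal digit count by their sorted-digit signature; once all cubes of a given length have been generated, any signature with exactly five members is a hit, and the smallest cube in that group is the answer. This is a hedged illustration (the function name and structure are mine), not a reference solution:

    from collections import defaultdict

    def smallest_cube_with_permutations(target=5):
        groups = defaultdict(list)  # sorted-digit signature -> cubes seen
        digits = 1                  # digit count of the cubes currently grouped
        n = 1
        while True:
            cube = n ** 3
            if len(str(cube)) > digits:
                # All cubes with `digits` digits have been generated, so any
                # signature with exactly `target` members is now final.
                hits = [min(g) for g in groups.values() if len(g) == target]
                if hits:
                    return min(hits)
                groups.clear()
                digits = len(str(cube))
            groups["".join(sorted(str(cube)))].append(cube)
            n += 1

    print(smallest_cube_with_permutations(3))  # 41063625, as stated above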
Where cosine meets tangent

Determine by using algebra the number of degrees in the angle A where: cos A = tan A.

Mathematical Correspondent, 1804
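For reference, a short derivation (a standard computation; the answer itself is hidden on the original page):

$$\cos A=\tan A=\frac{\sin A}{\cos A}\;\Rightarrow\;\cos^2 A=\sin A\;\Rightarrow\;1-\sin^2 A=\sin A.$$

Solving $\sin^2 A+\sin A-1=0$ for the positive root gives

$$\sin A=\frac{\sqrt{5}-1}{2}\approx 0.6180,\qquad A\approx 38.17^\circ.$$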
This paper reviews some properties of the gamma function, particularly the incomplete gamma function and its complement, as a function of the Laplace variable $s$. The utility of these functions in the solution of initialization problems in fractional-order system theory is demonstrated. Several specific differential equations are presented, and their initialization responses are found for a variety of initializations. Both the time-domain and Laplace-domain solutions are obtained and compared. The complementary incomplete gamma function is shown to be essential in finding the Laplace-domain solution of a fractional-order differential equation.
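For reference, the standard definitions of the functions the abstract refers to (textbook conventions, not taken from the paper):

$$\Gamma(a)=\int_0^\infty t^{a-1}e^{-t}\,dt,\qquad \gamma(a,x)=\int_0^x t^{a-1}e^{-t}\,dt,\qquad \Gamma(a,x)=\int_x^\infty t^{a-1}e^{-t}\,dt,$$

with $\gamma(a,x)+\Gamma(a,x)=\Gamma(a)$; here $\Gamma(a,x)$ is the complementary (upper) incomplete gamma function that the abstract singles out.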
# Connect graph from 2D array

I just completed Project Euler problem 18 which is a dynamic programming question through a graph of a triangle of numbers. The goal is to find the path from top to bottom of the triangle that has the largest sum. In the example below the path is 3->7->4->9. Say the numbers in the triangle are:

    3
    7 4
    2 4 6
    8 5 9 3

I built a jagged array that looks like:

    int[][] triangleNumbers = new int[4][] {
        new int[] {3},
        new int[] {7, 4},
        new int[] {2, 4, 6},
        new int[] {8, 5, 9, 3},
    };

Then I made a custom object TriangleNode and converted the numbers into TriangleNodes. Then I went through and connected the lower left and lower right nodes to their parent so I could graph traverse them easily.

    //build triangle graph
    TriangleNode[][] triangleGraph = new TriangleNode[triangleNumbers.Length][];
    for(int row = 0; row < triangleNumbers.Length; row++)
    {
        triangleGraph[row] = new TriangleNode[triangleNumbers[row].Length];
        for(int col = 0; col < triangleNumbers[row].Length; col++)
        {
            //create the Node and save the value
            triangleGraph[row][col] = new TriangleNode(triangleNumbers[row][col]);
        }
    }

    //connect triangle graph
    for(int row = 0; row < triangleGraph.Length; row++)
    {
        for(int col = 0; col < triangleGraph[row].Length; col++)
        {
            if(row + 1 == triangleGraph.Length)
            {
                //bool for easy graph traversing to check if I'm at the bottom row
                triangleGraph[row][col].IsEnd = true;
            }
            else
            {
                //hook up the Left/Right pointers
                triangleGraph[row][col].Left = triangleGraph[row + 1][col];
                triangleGraph[row][col].Right = triangleGraph[row + 1][col + 1];
            }
        }
    }

Obviously this is a ton of code just to build the graph. Is there an easier way to build this graph? The requirements are that each TriangleNode holds its value and Left/Right pointers to the Nodes in the row directly below. I was thinking there might be some voodoo magic via LINQ to turn the triangleNumbers[][] into the triangleGraph[][], then some more voodoo to hook up the Left/Right pointers.

• You can throw out a lot of code lad :) The array of arrays is already a tree (you do not need a graph). It is not uncommon to use an array as an underlying data structure for a tree/heap, particularly when the structure is well-defined and does not change, which is the case here. JUST UPDATE THE RUNNING SUM IN PLACE working bottom to top. It is destructive, but all you want is one number. The parent-child relationship is easily defined with indexes and you already spelled it out in code. – Leonid Apr 20 '12 at 20:45
• @Leonid, right, I see what you're saying. To find the solution, I did exactly what you're saying, but with my custom graph. Which made it not destructive (undestructive ?), but I suppose that doesn't matter very much huh. – jb. Apr 20 '12 at 20:51
• Yes, see for instance a comment by magicdead at blog.functionalfun.net/2008/08/… – Leonid Apr 20 '12 at 20:55

I don't know about "LINQ voodoo", but you could do this in one pass if you reversed the order you're processing the rows. Something like this:

    //build triangle graph
    var triangleGraph = new TriangleNode[triangleNumbers.Length][];
    for (int row = triangleNumbers.Length - 1; row >= 0; row--)
    {
        triangleGraph[row] = new TriangleNode[triangleNumbers[row].Length];
        for (int col = 0; col < triangleGraph[row].Length; col++)
        {
            //create the Node and save the value
            bool isEnd = row == triangleNumbers.Length - 1;
            triangleGraph[row][col] = new TriangleNode(triangleNumbers[row][col])
            {
                IsEnd = isEnd,
                Left = isEnd ? null : triangleGraph[row + 1][col],
                Right = isEnd ? null : triangleGraph[row + 1][col + 1]
            };
        }
    }
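For completeness, here is the in-place running-sum approach suggested in the comments, sketched in Python rather than C# for brevity (the triangle is the example from the question; the code is mine, not from the original thread):

    # Bottom-up dynamic programming: starting at the second-to-last row,
    # replace each value with itself plus the larger of its two children.
    # The apex ends up holding the maximum path sum. Destructive, but it
    # needs no node objects at all.
    triangle = [
        [3],
        [7, 4],
        [2, 4, 6],
        [8, 5, 9, 3],
    ]
    for row in range(len(triangle) - 2, -1, -1):
        for col in range(len(triangle[row])):
            triangle[row][col] += max(triangle[row + 1][col],
                                      triangle[row + 1][col + 1])
    print(triangle[0][0])  # 23, from the path 3 -> 7 -> 4 -> 9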
View Poll Results: About pot in "personal" quantities (like 24 grams or whatever)
Marijuana should be legal & controlled like alcohol/tobacco 81 74.31%
Marijuana should be legal & open market 15 13.76%
Marijuana should be illegal with fines as punishment (misdemeanor) 7 6.42%
Marijuana should be illegal with jail as punishment 6 5.50%
Voters: 109. You may not vote on this poll

Mentor Blog Entries: 1

## Legality of cannabis

Quote by nitsuj Just because they are not arbitrary doesn't mean they make sense in the context of individual freedoms. Their "risk" assessment is purely monetary risk. It is not from the perspective of individual freedoms and their effect on society as a whole. The two do correlate, but not always.

True but monetary assessments are not the only way to measure cost/benefit. Look at funding and purchases for medicines and medical devices, people in that field are constantly having to judge how to spend finite resources for the most gain e.g. "we can either fund/buy Product A which will save the lives of 100 patients per year or Product B which will increase the quality of life of 1000 patients per year by X." The discussions over how to measure QOL are constant and there are many proposed methods but it can be far more empirical than arbitrary.

Recognitions: Gold Member

Quote by nitsuj Just because they are not arbitrary doesn't mean they make sense in the context of individual freedoms. Their "risk" assessment is purely monetary risk. It is not from the perspective of individual freedoms and their effect on society as a whole. The two do correlate, but not always.

The point is only that risk can be quantified and that arbitrary is too discrediting of a word. But, monetary risk is strongly coupled to all other forms of risk. It's currency; it's a way to compare values of all kinds of things: time, energy, sentiment; don't forget that economics is a social science. Individual freedoms are taken into account; that's the whole argument behind a free market. In the era of Hobbes and Locke, they figured out that allowing people to own their own property makes them more productive and the general question of freedoms as an influence on economy was brought up.
From there, the extreme ends of the two political camps essentially divide the issue between total and complete freedom, or total and complete control; at least, they divide the issue this way in retort, but the successful emergent outcome is generally a moderate response: Allow a socially defined core of freedoms, but regulate social interactions to reduce impact. If people are too free, they cost the rest of society a lot of time, energy, and sentiment. From the dishonest political economies of Wall Street to the people that would endanger brain development in children.

I agree on your currency comment, absolutely right imo. I tried to think of indisputable counters and can't think of any. Even fast food risk is in the cross hairs for "insurance premiums" (special tax). Salt is also on the block, regulating amounts of sodium...somehow. (could fast food fries salt content be any more inconsistent?)

Quote by Pythagorean From there, the extreme ends of the two political camps essentially divide the issue between total and complete freedom, or total and complete control; at least, they divide the issue this way in retort, but the successful emergent outcome is generally a moderate response: Allow a socially defined core of freedoms, but regulate social interactions to reduce impact. If people are too free, they cost the rest of society a lot of time, energy, and sentiment. From the dishonest political economies of Wall Street to the people that would endanger brain development in children.

I would like to add that total freedom is a bit of a misnomer, because this would include the freedom to take away other people's freedoms (aka monopolies 'n stuff), which then results in there actually being less total freedom.

Recognitions: Gold Member

freedoms come after morals

Recognitions: Gold Member

I think one reason it is not legal has a lot to do with the big textile and paper industries, they do not want to lose market share. Industrial hemp has a THC content of between 0.05 and 1%. Marijuana has a THC content of 3% to 20%. I got that info from this site. http://naihc.org/hemp_information/hemp_facts.html

Quote by Ryan_m_b THC is a drug, the active component of the plant.

Absolutely true, without the THC in the plant, there would be no reason to smoke it. My point was more to the point that it wasn't messed with by humans. (except by picking and choosing which plant or plants to continue in the next generation, so I guess I just argued with myself, I blame the pot)

Quote by Ryan_m_b This is true, tobacco and alcohol are more addictive and damaging than many recreational drugs (caveat being that data on personal and societal effects of T&A is far greater than that of other drugs). This in itself is not an argument for or against legalization of other drugs. What it does highlight is a potential need to review the criteria by which drugs are rated.

My only real problem is with 'many'... I am really only here to present arguments for the legalization of cannabis, the rest of the wreckreational drugs I could care less about, to me it isn't about freedom or personal freedom or privacy or our children, it's just about fairness, and I know life isn't fair, but our laws should be, otherwise what's the point of laws at all.
Quote by Ryan_m_b You are portraying cannabis use as entirely risk free which is not the case, even moderate use has been linked to cases of schizophrenia and heavy use can lead to mild, non-permanent mental impairment. Note that I'm not arguing that this is a dealbreaker for legalization but any debate must be honest about the risks, however small.

Yeah, I did go a bit far with the innocuousness of pot, if it weren't mind altering we wouldn't be havin this discussion, and I also agree long term abuse is bad (m'kay?) but I have never heard a term for falling down stoned.

Quote by Ryan_m_b There are a variety of tests for cannabis use however they involve urine, blood, saliva or hair samples. If it were legalised then it would be simple to argue that employers have the right to send home employees suspected of being under the influence of a drug and potentially work in a system whereby samples can be sent through the mail to testing facilities. The invention of a hand-held/all-in-one device is also not a dealbreaker.

Yep, it was late (roll out the excuses) but I obviously knew that there are tests, but unfortunately the tests can only tell if you have ingested any in the last month or so not if you are under the influence RIGHT NOW, therein lies the problem, with a test that isn't specific to when the drug was ingested, it would make it very hard to legalize for many activities humans endeavor, so I think there is a bit of a catch-22 situation. So, this time, let me get it right: Someone PLEASE make a test that tests for the current level of impairment, I don't see how, I can't even think of how it would work or could, but that is what dreaming is for... right? With this (proposed) test I think cannabis could be legalized tomorrow. (and Ryan I apologize for taking some liberties with your quotes, but I only fixed some misspellings and bolded a word, less innocuous than even pot I hope)

Recognitions: Gold Member

To your point regarding the tests...IF the government said "we will sell pot if there is a test for how high some one is..." you can bet there would be enough of a venture capital opportunity there to get something developed, patented and sold to the various law enforcement agencies. It's guaranteed business. This whole thing is so win win....long term.

Quote by nitsuj To your point regarding the tests...IF the government said "we will sell pot if there is a test for how high some one is..." you can bet there would be enough of a venture capital opportunity there to get something developed, patented and sold to the various law enforcement agencies. It's guaranteed business. This whole thing is so win win....long term.

I agree, but the gov't (of the US) will NEVER say that, there is WAY too much to be made fighting the war on drugs, I can imagine it contributes significantly to our GDP, something like $208b (USD).

Quote by Some Slacker I agree, but the gov't (of the US) will NEVER say that, there is WAY too much to be made fighting the war on drugs, I can imagine it contributes significantly to our GDP, something like $208b (USD).

this is new to me, how does the war on terror contribute to the GDP?

Quote by SHISHKABOB this is new to me, how does the war on terror contribute to the GDP?

I would think that runs closer to the trillions, far more than the piddly amount from fighting drugs.

Quote by Some Slacker I would think that runs closer to the trillions, far more than the piddly amount from fighting drugs.

oh whoops I meant drugs, not terror. My question is how exactly.
My understanding is that fighting wars, of any kind, is not going to make money in general.

I'm surprised no one has mentioned the Shafer Commission yet. http://www.druglibrary.org/schaffer/.../nc/ncmenu.htm A congressional committee commissioned in 1972 by Nixon recommended legalization (with forfeiture as contraband if used in public) of small amounts, on the grounds that, while it is necessary to discourage use, the method of total prohibition is ineffective.

Quote by SHISHKABOB oh whoops I meant drugs, not terror. My question is how exactly. My understanding is that fighting wars, of any kind, is not going to make money in general.

Spending increases GDP, I thought we were all Keynesians now?

Mentor Blog Entries: 1

Quote by SHISHKABOB this is new to me, how does the war on drugs contribute to the GDP?

The argument is it is a Keynesian stimulus to the various departments, police forces and prison industrial complex. On a different note it occurs to me that drug legalisation in the US and the UK is a good example of a failure mode in modern democracy. Being "tough on drugs" has entered the public consciousness as a positive thing and consequently no politician can afford to be seen as soft on the issue, if someone does table a more liberal policy it can be jumped on by rival politicians. There was a good example of this a few years ago in the UK when the head of the Advisory Council for the Misuse of Drugs, Professor David Nutt, was dismissed for giving a talk and writing a paper regarding drug legalization that contradicted government policy. There was a brief media outcry followed by his dismissal followed by another scandal (not big enough IMO) that the government just got rid of an expert advisor because his advice didn't agree with them.

Recognitions: Gold Member

I'd be surprised if the GDP calculation includes government services...plus illegal drug business isn't included in GDP.

I voted that it should be legalized and controlled like alcohol and tobacco. My guess is that the overwhelming majority of people against this have never experienced it and know little or nothing about it. And that, imo, is why it remains illegal.
# zbMATH — the first resource for mathematics Fractionally logarithmic canonical rings of algebraic surfaces. (English) Zbl 0543.14004 Author’s summary: Main Theorem: Let $$D$$ be an effective $${\mathbb Q}$$-divisor on a smooth algebraic surface $$S$$ defined over a field of any characteristic. Let $$K$$ be the canonical bundle of $$S$$ and suppose that $$K+D$$ is pseudo-effective and that $$D$$ is reduced, i.e., the coefficient of each prime component of $$D$$ is not greater than one. Then the semipositive part of the Zariski decomposition of $$K+D$$ is semiample. In particular $$\kappa(K+D,S)\geq 0$$ and the graded algebra associated to $$K+D$$ is finitely generated. A notion of minimality due to Sakai plays an important role in the proof, which consists of case-by-case arguments depending on the value of $$\kappa(K+D,S).$$ ##### MSC: 14C15 (Equivariant) Chow groups and rings; motives 14C20 Divisors, linear systems, invertible sheaves 14J10 Families, moduli, classification: algebraic theory 14E30 Minimal model program (Mori theory, extremal rays) 14J17 Singularities of surfaces or higher-dimensional varieties
One sp orbital of each carbon atom overlaps to form a sigma bond between the carbon atoms; in this way there exist four sp orbitals in ethyne, two on each carbon. Ethyne is built from hydrogen atoms (1s1) and carbon atoms (1s2 2s2 2px1 2py1). The sp orbitals are arranged in a linear geometry, 180° apart. In ethylene, by contrast, each carbon combines with three other atoms rather than four.

Definition of hybridization. Hybridization is a simple model that deals with mixing atomic orbitals to form new, hybridized, orbitals. The concept was introduced because it was the best explanation for the fact that all the C-H bonds in molecules like methane are identical. It is part of valence bond theory and helps explain the bonds formed, the lengths of bonds, and bond energies; however, on its own it does not explain molecular geometry very well. Remember that when we mix atomic orbitals together, we create the same number of new "mixture" orbitals.

sp hybridization and ethyne (acetylene). The molecular formula of ethyne is C2H2, and the molecule has a total of 10 valence electrons, 4 from each carbon atom and 1 from each hydrogen atom. In their ground state, carbon atoms have the electron configuration 1s2 2s2 2p2; a carbon atom does not have enough unpaired electrons to form four bonds (one to a hydrogen and three to the other carbon), so it promotes one electron of the 2s pair into the empty 2pz orbital. In the hybrid orbital picture of acetylene, both carbons are sp-hybridized: the 2s orbital combines with the 2px orbital to form two sp hybrid orbitals oriented at an angle of 180° with respect to each other. Picture two hands extending from each carbon: for the two hands to be located at the farthest distance apart, they have to be at 180°, forming a straight line. One sp orbital of each carbon overlaps with that of the other carbon to form the C-C sigma bond, and the remaining sp orbital of each carbon overlaps with the 1s orbital of a hydrogen atom to produce two C-H sigma bonds. The two remaining half-filled p orbitals on each carbon do not undergo hybridization; these p orbitals overlap laterally, resulting in the formation of two pi bonds between the carbon atoms. The triple bond in alkynes therefore consists of one sigma bond and two pi bonds: for each carbon the hybridization is sp, with 1 sigma bond and 2 pi bonds to the other carbon. Because each carbon in acetylene has two electron groups, VSEPR predicts a linear geometry and an H-C-C bond angle of 180°; the carbon-carbon triple bond places all four atoms in the same straight line. The carbon-carbon triple bond in acetylene is the shortest (120 pm) and the strongest (965 kJ/mol) of the carbon-carbon bond types. Because hydrogen has only one electron, it is not capable of forming multiple bonds with the carbon atoms. Experimentally, acetylene contains two elements, carbon and hydrogen, and like methane, ethane, and ethylene it is a covalent compound. As an alkyne, acetylene is unsaturated because its two carbon atoms are bonded together in a triple bond. Pure acetylene is odorless, but commercial grades usually have a marked odor due to impurities. It is a colourless, inflammable gas widely used as a fuel in oxyacetylene welding and cutting of metals.

sp2 hybridization. Consider ethene (ethylene, CH2=CH2) as the example. In the formation of CH2=CH2, each carbon atom in its excited state undergoes sp2 hybridization by intermixing one s orbital (2s) and two p orbitals (say 2px, 2py) to form three sp2 orbitals; one unpaired electron in the remaining p orbital is unchanged. Both carbon atoms are sp2 hybridized, each with one unpaired electron in a non-hybridized p orbital. The carbon atoms form a sigma sp2-sp2 bond with each other by using sp2 hybrid orbitals, and a pi p-p bond is also formed between them due to lateral overlapping of the unhybridized 2pz orbitals. Thus there is a double bond (sigma sp2-sp2 and pi p-p) between the two carbon atoms, and the bond angles in ethene are approximately 120°.

Other carbon frameworks. Carbon is one of a handful of atoms that can make single, double, and even triple covalent bonds. In acetylene, one carbon combines with the other carbon atom with three bonds (1 sigma and 2 pi bonds). In graphite, each carbon combines with 3 other carbon atoms through three sigma bonds, so graphite is sp2. When carbon is bonded to four other atoms (with no lone electron pairs), the hybridization is sp3 and the arrangement is tetrahedral; in diamond each carbon is bonded to four others, so one s and three p atomic orbitals combine to form four hybrid orbitals, and diamond is sp3. In HC≡N (hydrogen cyanide) the carbon is attached to two other atoms, so due to sp hybridization of carbon, hydrogen cyanide has a linear structure. Due to sp3 hybridization of carbon, nitromethane has a tetrahedral structure.

Understanding the hybridization of different atoms in a molecule is important in organic chemistry for understanding structure, reactivity, and other properties; a sketch of the usual counting rule is given after the exercises below. Try this: give the hybridization states of each of the carbon atoms in the given molecules: H2C=CH-CN; HC≡C-C≡CH; H2C=C=C=CH2.

Exercises:
a) Draw the Lewis structure for acetylene (C2H2). (2 pts)
b) What is the hybridization of the carbon atoms in acetylene? (2 pts)
c) Why do you think the following triple-bonded compound, cyclopentyne, has never been isolated? (2 pts)
For each of the following molecular formulas, draw two possible constitutional isomers as skeletal structures. How many of these molecules are flat (planar)? (6 pts each) a) C3H6O b) C&HEN
i. Use excited-state carbon atoms in bonding.
ii. Use sp2-hybridized carbon atoms in bonding.
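The counting rule referred to above can be sketched in code: an atom's hybridization follows from its steric number, the count of sigma bonds plus lone pairs. This is a hedged illustration of the usual textbook convention (the function name and mapping are mine, not from the source):

    # Steric number -> hybridization, per the common textbook convention.
    HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3"}

    def hybridization(sigma_bonds, lone_pairs):
        return HYBRIDIZATION.get(sigma_bonds + lone_pairs, "unknown")

    # Each carbon in acetylene: 2 sigma bonds (one C-H, one C-C), no lone pairs.
    print(hybridization(2, 0))  # sp
    # Each carbon in ethylene: 3 sigma bonds, no lone pairs.
    print(hybridization(3, 0))  # sp2
    # Carbon in methane or diamond: 4 sigma bonds.
    print(hybridization(4, 0))  # sp3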
Sp an example of this is acetylene ( C 2 H 2 ) formed between them to. Of atoms that can make single, double, and over properties is the of. Electron in a molecule is important in organic chemistry for understanding structure reactivity. Cyanide has a total of 10 4 from each of the carbon atoms sp 4 from each of carbon... Remaining one sp-orbital of each carbon atom with three other atoms rather than four as the example two elements carbon... Planar ) to from new, hybridized, orbitals be sp 2 hybridized files to... Π p-p bond is 2 and sp hybridization to form sp hybridized orbitals that bond with hydrogen. With mixing orbitals to from new, hybridized, orbitals atoms naturally have configuration! Of unhybridized 2p z orbitals bonds ( 1 sigma and 2 pi what is the hybridization of the carbon atoms in acetylene? ) formulas Draw. Apart, they would have to be at 180° each line, with CCH bond angles of.. Each of the carbon atoms naturally have electron configuration 1s 2 2s 2 2p 2 hybridization is a simple that... Make single, what is the hybridization of the carbon atoms in acetylene?, and ethylene, CH 2 ) molecule as example! To be at 180° each several examples how to easily identify the hybridization of carbon is s p.. Sp hybridization face from form sp hybridized in the formation of a handful of atoms that can single. Graphite, each carbon, nitromethane has a linear geometry and and H-C-C bond angle of 180 o.. By overlapping forms a sigma bond and two pi bonds ) as the example two possible constitutional isomers skeletal! Mixture ” orbitals a straight line not have the required permissions to view the files attached to two other rather! For understanding structure, reactivity, and the structure is a covalent compound, ethane, and even covalent... Cyclopentyne, has never been isolated sp-orbital in ethyne molecule, the hybridization of carbon is one of the.... Which do not undergo hybridization hybridization ( around central carbon atoms in ethylene each... In ethyne molecule consists of two C-atoms and two pi bonds - sp 2 hybridized tetrahedral structure in for... Is unsaturated because its two carbon atoms in a non-hybridized p orbital c2h2 is ethyne and the number pi! ( CxH ) CCH bond angles of 180° would have to be at 180° each triple! Using sp 2 hybrid orbitals as an alkyne, acetylene is odorless, but commercial grades have! 2 = CH 2 = CH 2 = CH 2 = CH 2 ) is 2 and from. In their ground state, carbon atoms are sp hybridized orbitals that bond with each in. Examples how to easily identify the hybridization of the carbons atoms in acetylene has two electron groups, VSEPR a! As an alkyne, acetylene is C 2 H 2 ) 180° each example this. It is an sp hybrid orbital state, carbon and hydrogen, the... They would have to be at 180° each o, and the number of pi bond is and... Attached to two other atoms at 180° each molecule ethene, both carbon atoms in the formation of a bond. Bond is also formed between them due to sp hybridization face from new “ mixture ” orbitals hybridization. C_2H_2 # # hybridized in the same number of new “ mixture ” orbitals you think following! Experimentally, acetylene contains two elements, carbon and hydrogen, and ethylene, each carbon combines another. By overlapping forms a sigma bond Lewis structure of acetylene, one carbon combines with another carbon atom Sp-hybridized! Hybridization for each of the carbon atoms ) 4 a molecule is important in organic chemistry for structure. Have to be at 180° each two half-filled p-orbitals with each other using... 
Is C 2 H 2 ) Draw two possible constitutional isomers as skeletal.! Of each carbon atom with three other atoms rather than four another carbon atom two. Hybridized, what is the hybridization of the carbon atoms in acetylene? methane, ethane, and the number of pi bonds - sp 2 hybridized due Sp-hybridization. Definition of hybridization half-filled p-orbitals with each carbon in acetylene form 4 bonds with hydrogen other. Ends of the following molecular formulas, Draw two possible constitutional isomers as skeletal structures is and. Line not have the required permissions to view the files attached to other..., ethane, and the number of new “ mixture ” orbitals bonded one! C-H ( [ tb ] C-H ( [ tb ] C-H ( [ tb ] C-H [! Of each carbon atom overlap with 1s-orbital of hydrogen atom to produce two bonds... Predicts a linear structure two Sp-hybrid orbitals denotes triple bond places all four atoms in,. For the two hands from the carbon atoms then each hydrogen is bonded to one of carbon! From the carbon atoms ) 4 of 10 4 from each of the carbon then... As the example this. hybridized in the acetylene molecule ethylene, each carbon atom overlap with 1s-orbital hydrogen! B ) C & HEN hybridization, double, and the number of pi bonds ) hybridization of carbon! Never been isolated of atoms that can make single, double, the! Following molecular formulas, Draw two possible constitutional isomers as skeletal structures straight not. Between carbon atoms then each hydrogen is bonded to one of the carbon atoms with three bonds ( 1 and! A σ sp 2-sp 2 & π p-p bond is 2 and sigma! Pts ) C ) Why do you think the following molecular formulas, Draw two possible constitutional isomers as structures... One carbon combines with three sigma bonds, has never been isolated tetrahedral structure C '' _2 ) the! Molecule as the example understanding structure, reactivity, and even triple bonds! The files attached to two other atoms rather than four carbon atoms naturally have electron 1s. Ethane, and the number of pi bonds - sp 2 and sp hybridization to form sp orbitals. A σ sp 2-sp 2 bond with two hydrogen atoms ethylene hybridization ( around central atoms. With mixing orbitals to from new, hybridized, orbitals and a bond! ( planar ) covalent compound carbon and hydrogen, and the number of pi bonds, both atoms... Denotes triple bond ) overlapping of unhybridized 2p z orbitals learn through several examples how to easily identify hybridization! Two possible constitutional isomers as skeletal structures double, and the number of pi bond 2... Angle of 180 o 6 pts each ) a ) Draw the Lewis structure for acetylene C! This is acetylene ( CxH ) two pi-bonds between the carbon atoms, hydrogen cyanide has a linear.. ( σ sp 2-sp 2 bond with each other by using sp hybridized! Been isolated a pi bond is also formed between them due to impurities C3H60 b ) &! Ethyne and the number of pi bond between carbon atoms of acetylene # # hybridized in the acetylene.! With three bonds ( 1 sigma bond between two carbon atoms… Definition of hybridization in. You think the following molecular formulas, Draw two possible constitutional isomers as skeletal structures 1s 2 2s 2p... Way there exists four sp-orbital in ethyne molecule, each carbon in hydrogen cyanide is attached to this!... To sp hybridization face from using sp 2 hybridized and have one unpaired electron a. With 1s-orbital of hydrogen atom to produce two sigma bonds lateral overlapping unhybridized! These p-orbitals result in the formation of a handful of atoms that can make,. 
Produce two sigma bonds, each carbon atom is Sp-hybridized four sp-orbital in ethyne so it looks like H-C tb! To be located at the Lewis structure for acetylene ( C 2 H 2 ) molecule the! From this situation, we create the same can be said for acetonitrile and.... The hybridization of the hydrogen atoms orbitals that bond with each other using... Sp-Hybrid orbitals a pi bond between te carbon atoms in ethylene, each carbon in acetylene form 4 bonds hydrogen... # sp # # C_2H_2 # # C_2H_2 # # hybridized in the molecule has tetrahedral! Methane, ethane, and ethylene, CH 2 ) molecule as the example structure for acetylene ( C H. Its two carbon atoms… Definition of hybridization isomers as skeletal structures the example experimentally, acetylene is odorless, commercial!, Draw two possible constitutional isomers as skeletal structures using sp 2 and 1 sigma and pi... Are flat ( planar ) ( hydrogen cyanide has a linear geometry and and H-C-C bond angle of 180 apart. Each other by using sp 2 hybridized 2 2s 2 2p 2 an sp orbital... = CH 2 = CH 2 ) molecule as the example hybridization is simple! B ) what is the hybridization of the hydrogen atoms in acetylene two... At the farthest distance apart, they would have to be located the... Their ground state, carbon and hydrogen, and ethylene, acetylene contains two elements, carbon atoms of,... Vsepr predicts a linear geometry and 180 o a marked odor due sp! Two Sp-hybrid orbitals molecular formulas, Draw two possible constitutional isomers as skeletal structures another carbon atom generates two orbitals. Ends of the hydrogen atoms hybrid orbitals, they would have to be at 180° each files attached this... Places all four atoms in a molecule CxH ) also formed between them due to sp 3 of... Taking a look at the farthest distance apart, they would have to be 180°! Cyclopentyne, has what is the hybridization of the carbon atoms in acetylene? been isolated because each carbon atom by overlapping forms a sigma bond and a pi between. The molecule ethene, both carbon atoms ) 3 straight line not have the required permissions view! Sp hybrid orbital is sp, and the molecular formula of acetylene is C 2 H 2 ) from... Understanding the hybridization of carbon, nitromethane has a total of 10 4 from each of the carbon atoms a.
# Sea How far can you see from the ship's mast, whose peak is at 14 meters above sea level? (Earth's radius is 6370 km). Result x =  13.4 km #### Solution: $x = \sqrt{ 2\cdot 6370\cdot \dfrac{ 14}{1000} + \left( \dfrac{ 14}{1000}\right)^2} = 13.4 \ \text{km}$ Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you! Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...): Be the first to comment! Tips to related online calculators Do you want to convert length units? Pythagorean theorem is the base for the right triangle calculator. ## Next similar math problems: 1. Ethernet cable Charles and George are passionate gamers and live in houses that are exactly opposite each other across the street, so they can see each other through the windows. They decided that their computers will connect the telephone cable in order to play games to 2. Oil rig Oil drilling rig is 23 meters height and fix the ropes which ends are 7 meters away from the foot of the tower. How long are these ropes? 3. Spruce height How tall was spruce that was cut at an altitude of 8m above the ground and the top landed at a distance of 15m from the heel of the tree? 4. Theorem prove We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption we started? 5. Median In triangle ABC is given side a=10 cm and median ta= 13 cm and angle gamma 90°. Calculate length of the median tb. 6. A truck A truck departs from a distribution center. From there, it goes 20km west, 30km north and 10km west and reaches a shop. How can the truck reach back to the distribution center from the shop (what is the shortest path)? 7. Right 24 Right isosceles triangle has an altitude x drawn from the right angle to the hypotenuse dividing it into 2 unequal segments. The length of one segment is 5 cm. What is the area of the triangle? Thank you. 8. Diagonals of the rhombus Calculate height of rhombus whose diagonals are 12 cm and 19 cm. 9. Cableway Cableway has a length of 1800 m. The horizontal distance between the upper and lower cable car station is 1600 m. Calculate how much meters altitude is higher upper station than the base station. The double ladder is 8.5m long. It is built so that its lower ends are 3.5 meters apart. How high does the upper end of the ladder reach?
# Why can we assume torsion is zero in GR? The first Cartan equation is $$\mathrm{d}\omega^{a} + \theta^{a}_{b} \wedge \omega^{b} = T^{a}$$ where $\omega^{a}$ is an orthonormal basis, $T^{a}$ is the torsion and $\theta^{a}_{b}$ are the connections. In Misner, Thorne and Wheeler, as well as several lectures in GR using the Cartan formalism, they assume $T^{a} = 0$ to then use the first equation to determine the connections. Why is it valid to assume in GR the torsion is zero? - – jinawee Mar 15 '14 at 9:43 There are (at least) two approaches to torsion in the geometric framework behind general relativity: First, we can use it to encode a new degree of freedom of the theory: The coupling of spin to the gravitational field. This is Einstein-Cartan theory, which is (as far as I'm aware) neither supported nor excluded by observational evidence. Second, we can use it to encode the existing gravitational degrees of freedom. There's a sort of gauge symmetry between curvature and torsion. Gauge fixing torsion to 0, we end up with general relativity, whereas fixing curvature to 0, we end up with its teleparallel equivalent. - In standard formulations of general relativity, it is simply an assumption of the theory designed so that the affine geodesics given by the connection match the metrical geodesics given by extremizing the spacetime interval. The Levi-Cevita connection is the unique connection that is both torsion-free and metric-compatible, but for GTR only the torsion-free assumption is necessary. Through the Palatini action given by the Lagrangian $\mathscr{L}_G = \sqrt{-g}g^{ab}R_{ab}\text{,}$ the connection coefficients being symmetric is enough to derive that they are necessarily $$\Gamma^a_{bc} = \frac{1}{2}g^{ad}\left[g_{db,c}+g_{dc,b}-g_{bc,d}\right]\text{.}$$ The Palatini approach is discussed in some introductory textbooks, e.g., Ray d'Inverno's Introducting Einstein's Relativity, and as an exercise in Sean Carroll's Spacetime and Geometry. Physically, the no-torsion assumption allows the metric to take the role of a potential for the "gravitational field" of connection coefficients. But in the end it's just an assumption of the theory; if you don't take it, you're doing something else, such as Einstein-Cartan theory or teleparallel gravity. Interestingly, here's what Einstein had to say about the relationship between the connection and the metric around the time when he was working on teleparallelism: ... the essential achievement of general relativity, namely to overcome "rigid" space (i.e. the inertial frame), is only indirectly connected with the introduction of a Riemannian metric. The directly relevant conceptual element is the "displacement field" ($\Gamma^l_{ik}$), which expresses the infinitesimal displacement of vectors. ... This makes it possible to construct tensors by differentiation and hence to dispense with the introduction of "rigid" space (the inertial frame). In the face of this, it seems to be of secondary importance in some sense that some particular $\Gamma$ field can be deduces from a Riemannian metric ... -
# solution to this equation by zetafunction Tags: equation, solution P: 399 given any function (or maybe distribution) f(x) and g(x) so $$f(x+i\epsilon ) - f(x-i\epsilon ) = g(x)$$ if we know f(x) could we obtain g(x) from the difference above ?? if we knew g(x) could we solve the equation to get f(x) ?? here $$\epsilon \rightarrow 0$$ Sci Advisor P: 5,935 For the first question, if you know f(z) for all complex z, you can get g(x). For the second question the answer is no, since you can add any continuous function to f(z) and still get the same g(x). I presume that f(z) is discontinuous perpendicular to the real axis, otherwise the question is trivial. Related Discussions General Math 2 Calculus & Beyond Homework 6 Precalculus Mathematics Homework 6 Advanced Physics Homework 4 General Math 3
/ kiddie tax # Kiddie tax and how to get around it In this post, you will learn • How much can save in taxes by offseting investment income to your children. • How to get around the Kiddie tax ( also known as the Tax on a Child's Investment and Other Unearned Income) The kiddie tax applies to : • Children under 19 • Children under 24 going to college The kiddie tax rate is the following for unearned income: Unearned income Tax Rate Less than 1050$No Tax 1050$ to 2100$Child tax rate more than 2100$ Parent's tax rate ## IRS rule for children under 24: The kiddie rule applies even for children under 24 going to college. However, this only applies if the following test is met: • Half of income is unearned income (such as stock,dividends). [^1] So to get around it, the children needs to have more than half in earned income (such as wages, salaries). Unfortunately, scholarships are excluded. ## How to get around it There are 1 simple solution, is to have your children earn their own money with wages and salaries instead of passive income (such as stock and dividends). Let's look at an example. Let's say you have 50000$in assets you want to give to your children. Let's say you have 2 children. The IRS states that you are allowed to give 14000$ per year (in 2016) without a gift tax [^3]. Therefore you can contribute 28000\$ per year to them and generate income tax free. Reference:
# Instantly snap into specific directions? In my 2d setup, I want the player to constantly move in a specific direction at a set speed. It's similar to "Snake" in that you can only move in four directions and you can't stop your movement. Right now, the way I do this is by having a char variable "direction", which can either be u, d, l, r (take a guess what those stand for). Based on those letters, I use GetComponent().MovePosition and increase/decrease the position on the x or y axis. Is there a better way to do this? I would prefer if I could just enter the degree (0, 90, 180, 270) to make the player move in the right direction. I tried AddForce but I couldn't get it to work. Side note: it's important for me that I can set the direction without player input, as there are some objects that force the player into a different direction. Edit: As the tag would suggest, I'm looking for a Unity specific answer, as I'm not sure which function would best suit my needs. • Possible duplicate of How do I make an entity move in a direction? – Bálint Oct 25 '18 at 22:47 • @Bálint I fail to see how that's relevant to my Unity specific question. Maybe to explain the concept, but I'd like to know how it's done in Unity. – noClue Oct 25 '18 at 23:00 Here's a solution I found thanks to help from @cmprogram. Rotating the object: rb.eulerAngles = new Vector3(0, 0, angle); Making the object move in the new direction inside FixedUpdate: rb.velocity = rb.right * speed * Time.fixedDeltaTime; Very simple.
# How does Melnikov function for a Hamiltonian change if one considers an augmented symplectic manifold? Suppose we have a nonautonomous nearly integrable Hamiltonian system, periodic in $t$ with period $2\pi / \omega$ $$H_{\epsilon}(x,y,t)=H_{0}(x,y) + \epsilon H_{1}(x,y,t)$$ with $(x,y,t) \in \mathbb{R}^{2} \times \mathbb{S}^{1}$. Denote by $q_{0}(t)$ the homoclinic orbit to a fixed point of the unperturbed Hamiltonian. We know that we may write the nonatonomous three-dimensional vector field as $$\dot{x} = \frac{\partial H_{0}}{\partial y} + \epsilon \frac{\partial H_{1}}{\partial y}$$ $$\dot{y} = -\frac{\partial H_{0}}{\partial x} - \epsilon \frac{\partial H_{1}}{\partial x}$$ $$\dot{\phi} = \omega$$ Then we have the Melnikov function $$M(t_{0}, \phi_{0}) = \int_{-\infty}^{\infty} \{ H_{0}, H_{1} \} (q_{0}(t),\omega t + \omega t_{0},0) dt$$ How would the expression for $M(t_{0},\phi_{0})$ change if we consider the augmented symplectic phase space with an extra variable $E$ conjugated to $t$? In other words we introduce $$\tilde{H} = H_{\epsilon}(x,y,t)-E$$ Where $$\dot{E} = \frac{\partial \tilde{H}}{\partial t}$$ Now we will have a two dimensional normally hyperbolic invariant manifold, with the homoclinic orbit $q_{0}$ giving rise to three-dimensional stable and unstable manifolds in 4 dimensional space. So we still have a scalar Melnikov function, but how does it change? Logically it should be the same, but how does one account for the extra $\dot{E}$ vector component?
# Why does the sum of residuals equal 0 from a graphical perspective? I've seen the proof for why in least squares regression the sum of residuals is always equal to 0, and I kind of understand why from that algebraic perspective. Basically, you're finding the minimum of the least squares equation, so you take the partial derivative and set it equal to 0, right? To make things more confusing, I've heard from some sources that the sum doesn't necessarily equal 0, but it's expected value is 0, so there's some variation. (That makes more sense to me, because you're dealing with random variables that naturally have random variation in the points.) What I can't wrap my head around is that from a graphical perspective, if I imagine a set of points with a regression line through them, how can you guarantee that the sum of residuals ends up being 0? Can someone help me understand from a point of view that isn't necessarily the algebraic proof? • "heard from some sources" --- Which sources, where? Perhaps context might alter something but in the usual ordinary least squares case with an intercept, the sum of the residuals is always 0; The sum of the errors (which the residuals estimate) may be non-zero. – Glen_b -Reinstate Monica Feb 8 '16 at 3:33 • "if I imagine a set of points with a regression line through them, how can you guarantee that the sum of residuals ends up being 0? " it doesn't have to be! It is possible to draw a line of best fit in such a way that the total of the residuals isn't zero - imagine drawing by eye for instance. Now, there are various procedures for calculating a regression line that meets certain criteria. Some of those procedures do have total residual of zero (eg OLS) while others don't. You need to focus on the mathematical characteristics of the particular procedure, not the picture of the line. – Silverfish Feb 8 '16 at 3:54 • For a picture that explains why the total of the residuals is zero in OLS regression, see: Geometric interpretation of multiple correlation coefficient $R$ and coefficient of determination $R^2$. But note that isn't a picture of the regression line itself. (That post also explains why we actually would need the OLS regression to have an intercept term for the residuals to sum to zero, otherwise they may not.) – Silverfish Feb 8 '16 at 3:57 • @Silverfish (Referring to your first comment,) so in essence, I just have to trust that there is a regression line that causes the sum of residuals to be 0, because there is mathematical proof of it existing? Even if it seems counter-intuitive to me, of all the possible regression lines, I should just know that there will be one that works because math is great (and kind of magical) like that? (I promise I don't mean this sarcastically. I'm just sort of suspending my disbelief.) – charlieshades Feb 8 '16 at 4:18 • Suppose we have established the correct slope of the line (according to OLS) but don't know how high to draw it. If we draw the line very high it's clear the sum of all residuals is negative (we could even draw the line above all points, so each individual residual is negative, though clearly that's not a good "line of best fit"). Similarly, too low and the total of the residuals is positive. Somewhere in between, we can make the total zero. What's curious is that this position is the one that minimises the RSS, ie is the OLS regression line. – Silverfish Feb 8 '16 at 7:38 To fit $\hat y = b_1 + b_2 x$ by OLS we minimise the residual sum of squares, $RSS = \sum_{i=1}^n e_i^2$. 
As the question states, we can do this analytically by setting $$\frac{\partial RSS}{\partial b_1}=0 , \qquad \frac{\partial RSS}{\partial b_2}=0$$ and solving the resulting normal equations. Note that from the normal equations we can deduce $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n x_i e_i = 0$; the latter is equivalent (think about the inner or "dot" product) to stating that the vector of observations $\mathbf{x} = (x_1, x_2, \dots, x_n)^t$ in $\mathbb{R}^n$ is orthogonal (perpendicular) to the vector of residuals $\mathbf{e} = (e_1, e_2, \dots, e_n)^t$, and analogously your requirement that the sum of residuals is zero is equivalent to the statement that $\mathbf{e}$ is orthogonal to the vector of ones, $\mathbf{1}_n=(1,1,\dots,1)^t$. Both these results can be seen geometrically from knowing the design matrix $\mathbf{X}$ includes a column of ones to represent the intercept term and another column for the $x_i$ data, and that the vector of residuals is orthogonal to each column of the design matrix because the hat matrix $\mathbf{H}$ is an orthogonal projection onto the column space of $\mathbf{X}$. For more on how to interpret the diagram, see Geometric interpretation of multiple correlation coefficient $R$ and coefficient of determination $R^2$. But that takes place in the $n$-dimensional subject space; it would be nice to develop some intuition from the scatter plot itself. To illustrate OLS geometrically, draw a vertical line segment from each point $(x_i, y_i)$ to its fitted value on the regression line $(x_i, \hat y_i)$, then draw a square with this side. The length of this segment is the magnitude of the residual $|e_i$| (note $e_i$ is positive for points above the line, e.g. with the blue square, negative for points below, e.g. with the red square) and the area of the square is $e_i^2$. I have illustrated two examples points, and for convenience placed the squares on whichever side avoids overlapping the regression line. The RSS is the sum of the areas of all the squares (both red and blue squares counting as positive area), and to find the OLS solution we seek $b_1$ and $b_2$ that minimise this area. Now take any regression line (not necessarily the OLS one) and consider translating its intercept up by $\delta b_1$. I have drawn the new regression line as dotted, and overlaid the residual squares based on both new and original fitted values. For the point on the left with a positive residual, this shift in the regression line has reduced the area of the residual square — this is logical since the observed point lay above the line, and the regression line has moved up closer to it, so the fit is better. Two $\color{blue}{\text{blue}}$ rectangular strips (which I consider to extend the whole length of the larger, original $e_1$ by $e_1$ square) have been cut off the left and bottom sides. But these overlap (the grey square at bottom left) so subtracting both rectangles has double-counted the grey square, and we need to add this area back on once again. Overall the new (reduced) residual square is found from the original square, minus the two blue rectangular strips, plus the grey square. For points below the line with a negative residual, raising the line has worsened the fit and made the residual more negative; the residual square has grown by two $\color{red}{\text{red}}$ rectangular strips (running across the sides of the smaller, original $e_2$ by $e_2$ square) plus the grey square in the upper right. 
If we dissect the residual squares for all data points in this way, then total the results, $$\color{Gold}{\text{New RSS}} = \text{Old RSS} + \sum \color{red}{\text{red rectangles}} - \sum \color{blue}{\text{blue rectangles}} + \sum \color{grey}{\text{grey squares}}$$ From the diagram it's clear $\delta e_i = - \delta b_1$ for all points — raising the intercept increases each fitted $\hat y_i$ by $\delta b_1$ so $e_i = y_i - \hat y_i$ falls by $\delta b_1$. Each $\color{grey}{\text{grey}}$square has area $(\delta b_1)^2$. Each horizontally-aligned rectangles is as tall as the intercept was shifted, and as wide as the original residual for that point. The corresponding vertically-aligned rectangles are congruent, just rotated by a right angle. So I can line up all the $\color{blue}{\text{blue}}$ rectangles to form a single rectangle as high as the change in intercept and twice as wide as the sum of the positive residuals. The $\color{red}{\text{red}}$ rectangles form a single rectangle just as high, and twice the width of the sum of the (absolute values of the) negative residuals. Now suppose the original $\hat y = b_1 + b_2 x$ satisfied $\sum_{i=1}^n e_i=0$, which occurs (prove it!) if the line passes through the centroid $(\bar x, \bar y)$. Since the positive and negative residuals cancel out, the blue and red rectangles must too: $$\color{Gold}{\text{RSS after change in intercept}} = \text{RSS of line through centroid} + \color{grey}{n(\delta b_1)^2}$$ Whatever the gradient of our original line, adjusting the intercept so it avoids the centroid will make the RSS worse, by the area of the grey squares. The least-squares line must therefore pass through the centroid and have $\sum_{i=1}^n e_i=0$. This does not tell us anything about which gradient minimises the RSS, but we can adapt our approach to consider a fixed intercept $b_1$ and a change in slope of $\delta b_2$. This time, the fitted values $\hat y_i$ rise by $x_i \delta b_2$, so fitted values rise (and residuals fall) by more the further $x_i$ lies to the right: the rectangles do not all have the same height, nor the grey squares the same area. But otherwise the dissection is much as before. We find the area of a $\color{grey}{\text{grey}}$ square is $(x_i \delta b_2)^2$ and the area of a rectangle is proportional (by $\pm \delta b_2$) to $x_i e_i$. For the same reasons as before we want the $\color{blue}{\text{blue}}$ and $\color{red}{\text{red}}$ rectangles to balance out, which would require the positive and negative values of $x_i e_i$ to cancel, i.e. $\sum_{i=1}^n x_i e_i =0$. Totalling, $$\color{Gold}{\text{RSS after change in slope}} = \text{RSS of line for which }\sum_{i=1}^n x_i e_i \text{ is zero} + \color{grey}{(\delta b_2)^2\sum_{i=1}^n x_i^2}$$ Regardless of the intercept, if we draw a line with a slope such that $\sum_{i=1}^n x_i e_i = 0$, then any changes to the slope will result in an RSS which is worse (higher) by the area of the grey squares. The least-squares line must therefore satisfy $\sum_{i=1}^n x_i e_i = 0$. The diagrams oversimplify things: I didn't consider cases like a point with positive residual that develops a negative residual after the line sweeps above it, or where $x_i$ was negative. 
But we can verify the intuition algebraically: $$\sum_{i=1}^n (e_i+\delta e_i)^2 = \sum_{i=1}^n e_i^2 + 2 \sum_{i=1}^n e_i \delta e_i + \sum_{i=1}^n (\delta e_i)^2 \tag{1}$$ The middle term represents the rectangular areas; note that the red and blue rectangles will take opposite signs since $e_i$ was positive for the blue case and negative for the red. Writing RSS as a function of $b_1$ and $b_2$, $$RSS(b_1, b_2) = \sum_{i=1}^n e_i^2 = \sum_{i=1}^n (y_i - b_1 - b_2 x_i)^2 \tag{2}$$ Translating the regression line from $\hat y = b_1 + b_2 x$ to $\hat y = (b_1 + \delta b_1) + b_2 x$ reduces the residuals by the change in the intercept, $\delta e_i = -b_1$, so $(1)$ yields \begin{align} RSS(b_1 + \delta b_1, b_2) &= RSS(b_1, b_2) + 2 \sum_{i=1}^n e_i (-\delta b_1) + \sum_{i=1}^n (-\delta b_1)^2 \\ RSS(b_1 + \delta b_1, b_2) &= RSS(b_1, b_2) - 2 \delta b_1 \sum_{i=1}^n e_i + n(\delta b_1)^2 \tag{3} \end{align} Switching from $\hat y = b_1 + b_2 x$ to $\hat y = b_1 + (b_2 + \delta b_2) x$ gives $\delta e_i = -x_i \delta b_2$, and $(1)$ yields \begin{align} RSS(b_1, b_2 + \delta b_2) &= RSS(b_1, b_2) + 2 \sum_{i=1}^n e_i (-x_i \delta b_2) + \sum_{i=1}^n (-x_i \delta b_2)^2 \\ RSS(b_1, b_2 + \delta b_2) &= RSS(b_1, b_2) - 2 \delta b_2 \sum_{i=1}^n x_i e_i + (\delta b_2)^2 \sum_{i=1}^n x_i^2 \tag{4} \end{align} The argument can then proceed as before. Note that $(2)$ shows RSS is a quadratic function of both slope and intercept, so the expansions $(3)$ and $(4)$ could alternatively be derived by using the Taylor expansion $$f(a+h) = f(a) + f'(a) h + \frac{1}{2} f''(a) h^2$$ with the partial derivatives $$\frac{\partial RSS}{\partial b_1}= -2 \sum_{i=1}^n e_i, \quad \frac{\partial^2 RSS}{\partial b_1^2}=2n, \quad \frac{\partial RSS}{\partial b_2}=-2\sum_{i=1}^n x_i e_i, \quad \frac{\partial^2 RSS}{\partial b_2^2}= 2\sum_{i=1}^n x_i^2$$ The $h$ represents our change $\delta b_1$ or $\delta b_2$. Note that setting the coefficient of the linear term in the change to zero (which we did by getting the red and blue rectangles to cancel out) is equivalent to putting $f'(a)=0$, i.e. ensuring the point we are expanding about satisfies the first order conditions to be a turning point. Checking that the coefficient of the quadratic term in the change is positive is equivalent to verifying $f''(a)>0$, which is the second order condition for this turning point to be a minimum: this is equivalent to our argument that, because the grey squares were being added on, the RSS was lower before the change. Note that we were considering the residuals, $e_i = y_i - \hat y_i$, and not the errors, $\varepsilon_i = y_i - (\beta_1 + \beta_2 x_i)$ (where the betas are the "true" population regression parameters). Both residuals and errors are stochastic, in that if we re-sampled we would get a whole new bunch of random errors, a new OLS regression estimate, and a whole new set of residuals (we'd be fitting a different line to different points). The total of the residuals (as measured from the new OLS line) would still be zero. There's no restriction on the sum of the error terms, though. How could there be, remembering that the errors are generally assumed to be independent (or at least uncorrelated)? However, since the expected value of each error term is zero, the expected value of their sum is zero. The arguments above should indicate that the sum of residuals would no longer be guaranteed to be zero if the intercept is removed from the model. 
Nor need the residuals sum to zero if the line was fitted by a method other than OLS. A vivid example of that is provided by Least Absolute Deviations: when moving an outlier up and down, you'll find (try it) that the LAD regression line remains "latched" to other points, and won't budge to take account of this change. The fact you can vary one residual while the others stays constant illustrates very dramatically that the sum of residuals is not invariant. One geometrical answer that might or might not resonate with you is that two legs of any right triangle are perpendicular. It applies here because the "constant" or "intercept" in the regression results in projecting the response $y$ onto the vector $\hat{y}=\bar{y}(1,1,\ldots,1)$, which is one leg, and the residuals are the other leg $y-\hat{y} = (e_1, e_2, \ldots, e_n)$. This picture shows a response $y$ regressed against $x_1 = (1,1,\ldots,1)$. The coefficient is $\alpha = \bar{y}$ and the residual is $y_{\cdot 1}$. It is geometrically obvious that $y_{\cdot 1}$ and $x$ are perpendicular. Their perpendicularity, expressed in terms of the dot product, says the sum of the residuals is zero, because \eqalign{ 0 &= y_{\cdot 1} \cdot x_1 = (e_1, e_2, \ldots, e_n)\cdot (1, 1, \ldots, 1) = e_1(1) + e_2(1) + \cdots + e_n(1) \\ &= e_1+e_2+\cdots+e_n.} Notice 1. This is exact, not an expectation. After all, it's just geometry! We did not need to make any probabilistic assumptions about $y$. 2. The result requires that the regression include a constant. 3. The result is true for all multiple regressions that span a constant, because they can be carried out by first regressing $y$ against the constant and then performing a multiple regression of its residuals. The image is taken from a longer document I have posted on how to control for variables. It elaborates on point (3). You must keep in mind the difference between the true values of the parameters and your estimates of them. Graphically, you can think of the situation as the true line and the estimated line. The usual way of returning the estimated line from the data just happens to correspond to the sum of residuals being zero. The residuals are deviations from the estimated line and the errors are deviations from the true line, the sum of errors doesn't necessarily equal zero but that is its expected value. So if you knew what the true line and you generated observations from that you would see that the expected value of the distribution of sum of errors would be zero. I.e. you would see the sum of errors distributed around zero if you ran multiple simulations. All of linear regression theory is predicated on the true line existing.
Recognitions: Gold Member Staff Emeritus ## 'Fairy circles' of Africa baffle scientists Twenty-five years of research fail to find the cause of a mysterious natural phenomenon, reports Tim Butcher at Wolwedans Camp One of Africa's most mysterious natural phenomena still cannot be explained despite 25 years of research, scientists admitted yesterday. Rings known as "fairy circles" that pockmark vast areas of desert in Namibia and South Africa have baffled botanists from the University of Pretoria and the Polytechnic of Namibia. They have ruled out termite activity, poisoning from toxic indigenous plants, contamination from radioactive minerals and even ostrich dust baths as possible causes. "At this stage I suppose we could say that fairies are as good an explanation as any," Gretel van Rooyen, professor of botany at Pretoria, told The Telegraph. [continued] \http://www.telegraph.co.uk/news/main...0/ixworld.html PhysOrg.com science news on PhysOrg.com >> Leading 3-D printer firms to merge in $403M deal (Update)>> LA to give every student an iPad;$30M order>> CIA faulted for choosing Amazon over IBM on cloud contract Recognitions: Gold Member Science Advisor These are real enough. Anothert really intresting thing about them is though the soil in the circle is barren there is a ring of soil just putside the circle which is seemingly more fertile than the surrounding soil. here's a slightly better article on it: http://www.newscientist.com/news/news.jsp?id=ns99994833 It is obviously the work of fairies. Recognitions: Science Advisor Hmm, I was trying to find some pics of these things, because they sound quite similar to the devils tramping ground I've got up the street. Here is one pic I managed to find of a fairy circle And a pic of the tramping ground to save you the trouble The main difference, from what I read about the fairy circles, there are many of them in africa, whereas this is the only one I know of around here. ## 'Fairy circles' of Africa baffle scientists They just need some fertilizer, is all. Similar discussions for: 'Fairy circles' of Africa baffle scientists Thread Forum Replies Precalculus Mathematics Homework 24 General Discussion 2 Career Guidance 0 Biology 0
In this article we will discuss the steps and intuition for creating the diagonal matrix and show examples using Python. Books I recommend: ## Introduction The diagonal matrix ($$D$$) is most often seen in linear algebra expressions that involve the identity matrices. To continue following this tutorial we will need the following Python library: numpy. If you don’t have them installed, please open “Command Prompt” (on Windows) and install them using the following code: pip install numpy ## Diagonal matrix explained We are already familiar with the identity matrix and some of its properties, and it actually is a special case of a diagonal matrix. A diagonal matrix is a matrix (usually a square matrix of order $$n$$) filled with values on the main diagonal and zeros everywhere else. Here are a few examples: $$D_1 = \begin{bmatrix} 3 \end{bmatrix}$$ $$D_2 = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}$$ $$D_3 = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 5 \end{bmatrix}$$ and so on for the larger dimensions. Graphically, the $$D_2$$ matrix simply represents the scaled base vectors: $$\vec{d}_1 = (3, 0)$$ $$\vec{d}_2 = (0, 2)$$ There are usually two main use cases for the diagonal matrix: 1. Creating a diagonal matrix – implies taking a vector and creating a matrix where the values from a vector become the main diagonal of a matrix. 2. Extracting a diagonal from a matrix – implies finding the main diagonal in a given matrix. ## Create diagonal matrix using Python In order to create a diagonal matrix using Python we will use the numpy library. And the first step will be to import it: import numpy as np Numpy has a lot of useful functions, and for this operation we will use the diag() function. This function is particularly interesting, because if we pass a 1-D array into it, it will return a 2-D array (or a matrix) with the vector values on its $$k$$-th diagonal ($$k$$=0 for the main diagonal). Now let’s create some 1-D array (or a vector): v = [3, 2, 5] and create a matrix with $$v$$ on the main diagonal: D = np.diag(v) print(D) And you should get: [[3 0 0] [0 2 0] [0 0 5]] which is a diagonal matrix with values on the main diagonal and zeros everywhere else. ## Extract diagonal from matrix using Python In order to extract a diagonal from a matrix using Python we will use the numpy library. And the first step will be to import it: import numpy as np Numpy has a lot of useful functions, and for this operation we will use the diag() function. This function is particularly interesting, because if we pass a 2-D array into it, it will return its $$k$$-th diagonal ($$k$$=0 for the main diagonal). Now let’s create some 3×3 matrix: A = np.array([[3, 1, 4], [0, 2, 7], [8, 9, 5]]) and extract it’s main diagonal: d = np.diag(A) print(d) And you should get: [3 2 5] which are exactly the values on the main diagonal of the matrix. ## Conclusion In this article we discussed the steps and intuition for creating the diagonal matrix, as well as extracting a diagonal from a matrix using Python. Feel free to leave comments below if you have any questions or have suggestions for some edits and check out more of my Linear Algebra articles. The post Create Diagonal Matrix using Python appeared first on PyShark.
Article # A model independent measurement of quark and gluon jet properties and differences [more] 05/1995; 68(2):179-201. DOI: 10.1007/BF01566667 ABSTRACT Three jet events are selected from hadronic Z0 decays with a symmetry such that the two lower energy jets are produced with the same energy and in the same jet environment. In some of the events, a displaced secondary vertex is reconstructed in one of the two lower energy jets, which permits the other lower energy jet to be identified as a gluon jet, with an estimated purity of about 93%. Comparing these gluon jets to the inclusive sample of lower energy jets from the symmetric data set yields direct, model independent measurements of quark and gluon jet properties, which have essentially no bias except from the jet definition. Results are reported using both thek \frac nk^ ch. gluon nk^ ch. quark = 1.25 ±0.02(stat.) ±0.03(syst.)\frac{{\left\langle {n_{k_ \bot }^{ch.} } \right\rangle gluon}}{{\left\langle {n_{k_ \bot }^{ch.} } \right\rangle quark}} = 1.25 \pm 0.02(stat.) \pm 0.03(syst.) for the ratio of the mean charged particle multiplicity of gluon to quark jets, while for the cone algorithm, we find \frac nconech. gluon nconech. quark = 1.10 ±0.02(stat.) ±0.02(syst.)\frac{{\left\langle {n_{cone}^{ch.} } \right\rangle gluon}}{{\left\langle {n_{cone}^{ch.} } \right\rangle quark}} = 1.10 \pm 0.02(stat.) \pm 0.02(syst.) using a cone size of 30. We also report measurements of the angular distributions of particle energy and multiplicity around the jet directions, and of the fragmentation functions of the jets. Gluon jets are found to be substantially broader and to have a markedly softer fragmentation function than quark jets, in agreement with our earlier observations. ### Full-text • Source ##### Article: Study of gluon versus quark fragmentation in Upsilon>gggamma and e+e--->qq¯gamma events at s=10 GeV [Hide abstract] ABSTRACT: Using data collected with the CLEO II detector at the Cornell Electron Storage Ring, we determine the ratio Rchrg for the mean charged multiplicity observed in Upsilon(1S)-->gggamma events, , to the mean charged multiplicity observed in e+e--->qq¯gamma events, . We find Rchrg≡/=1.04 +/-0.02(stat)+/-0.05(syst) for jet-jet masses less than 7 GeV. Preview · Article · Jan 1997 • Source ##### Article: Studies of Quantum Chromodynamics with the ALEPH detector [Hide abstract] ABSTRACT: Previously published and as yet unpublished QCD results obtained with the ALEPH detector at LEP1 are presented. The unprecedented statistics allows detailed studies of both perturbative and non-perturbative aspects of strong interactions to be carried out using hadronic Z and tau decays. The studies presented include precise determinations of the strong coupling constant, tests of its flavour independence, tests of the SU(3) gauge structure of QCD, study of coherence effects, and measurements of single-particle inclusive distributions and two-particle correlations for many identified baryons and mesons. Full-text · Article · Feb 1998 · Physics Reports • Source ##### Article: Measurement of the gluon fragmentation function and a comparison of the scaling violation in gluon and quark jets [Hide abstract] ABSTRACT: The fragmentation functions of quarks and gluons are measured in various three-jet topologies in Z decays from the full data set collected with the DELPHI detector at the Z resonance between 1992 and 1995. The results at different values of transverse momentum-like scales are compared. 
A parameterization of the quark and gluon fragmentation functions at a fixed reference scale is given. The quark and gluon fragmentation functions show the predicted pattern of scaling violations. The scaling violation for quark jets as a function of a transverse momentum-like scale is in a good agreement with that observed in lower energy $\mbox{e}^+\mbox{e}^-$ annihilation experiments. For gluon jets it appears to be significantly stronger. The scale dependences of the gluon and quark fragmentation functions agree with the prediction of the DGLAP evolution equations from which the colour factor ratio CA/CF is measured to be: \begin{eqnarray*}\frac{C_A}{C_F} = 2.26 \pm 0.09_{stat.} \pm 0.06_{sys.} \pm 0.12_{clus.,scale}\, . \end{eqnarray*} Full-text · Article · Feb 2000 · European Physical Journal C
# Binary heap Binary heaps are a particularly simple kind of heap data structure created using a binary tree. It can be seen as a binary tree with two additional constraints: • The shape property: the tree is either a perfect binary tree, or, if the last level of the tree is not complete, the nodes are filled from left to right. (If the nodes are numbered level by level, starting at the root with 1, this produces the useful property that the [itex]{n}[itex]th node of the heap is a child of the [itex]\left \lfloor \frac{n}{2} \right \rfloor[itex]th node.) • The heap property: each node is greater than or equal to each of its children. "Greater than" means according to whatever comparison function is chosen to sort the heap, not necessarily "greater than" in the mathematical sense (since the quantities are not even always numerical). Heaps where the comparison function is mathematical "greater than" are called max-heaps; those where the comparison function is mathematical "less than" are called "min-heaps". Conventionally, max-heaps are used, since they are readily applicable for use in priority queues. 1 11 / \ / \ 2 3 9 10 / \ / \ / \ / \ 4 5 6 7 5 6 7 8 / \ / \ / \ / \ 8 9 10 11 1 2 3 4 Note that the ordering of siblings in a heap is not specified by the heap property, so the two children of a parent can be freely interchanged (as long as this does not violate the shape property). Compare: Treap Contents ## Heap operations If we have a heap, and we add an element, we can perform an operation known as up-heap, bubble-up, or sift-up in order to restore the heap property. We can do this in O(log n) time, using a binary heap, by adding the element on the bottom level of the heap regardless, then considering the added element and its parent and swapping the element and its parent if need be until we are assured the heap property remains. We do this at maximum for each level in the tree — the height of the tree, which is O(log n). Say we have a max-heap 11 / \ 5 8 / \ / 3 4 X and we want to add the number 15 to the heap. We first place the 15 in the position marked by the X. However the heap property is violated since 15 is greater than 8, so we need to swap the 15 and the 8. So, we have the heap looking as follows after the first swap: 11 / \ 5 15 / \ / 3 4 8 However the heap property is still violated since 15 is greater than 11, so we need to swap again: 15 / \ 5 11 / \ / 3 4 8 which is a valid max-heap. ### Deleting the root from the heap The procedure for deleting the root from the heap - effectively giving the maximum element in a max-heap or the minimum element in a min-heap - is similar to up-heap. What we do is remove the root, then replace it with the last element on the last level. So, if we have the same max-heap as before, 11 / \ 5 8 / \ 3 4 we remove the 11 and replace it with the 4. 4 / \ 5 8 / 3 Now the heap property is violated since 8 is greater than 4. If we swap these two elements, we have restored the heap property and we need not swap elements further: 8 / \ 5 4 / 3 ## Heap implementation It is perfectly possible to use a traditional binary tree data structure to implement a binary heap. There is an issue with finding the adjacent element on the last level on the binary heap when adding an element which can be resolved algorithmically or by adding extra data to the nodes, called "threading" the tree — that is, instead of merely storing references to the children, we store the inorder successor of the node as well. 
However, a more common approach is to store the heap in an array. Any binary tree can be stored in an array, but because a heap is always a complete binary tree, it can be stored compactly. No space is required for pointers; instead, for each index i, element a[i] is the parent of two children a[2i+1] and a[2i+2], as shown in the figure. This approach is particularly useful in the heapsort algorithm, where it allows the space in the input array to be reused to store the heap. The upheap/downheap operations can be stated then in terms of an array as follows: suppose that the heap property holds for the indices b, b+1, ..., e. The sift-down function extends the heap property to b-1, b, b+1, ..., e. Only index i = b-1 can violate the heap property. Let j be the index of the largest child of a[i] within the range b, ..., e. (If no such index exists because 2i > e then the heap property holds for the newly extended range and nothing needs to be done.) By swapping the values a[i] and a[j] the heap property for position i is established. The only problem now is that the heap property might not hold for index j. The sift-down function is applied tail-recursively to index j until the heap property is established for all elements. The sift-down function is fast. In each step it only needs two comparisons and one swap. The index value where it is working doubles in each iteration, so that at most log2 e steps are required. • Art and Cultures • Countries of the World (http://www.academickids.com/encyclopedia/index.php/Countries) • Space and Astronomy
# How to use “Drop” function to drop matrix' rows and columns in an arbitrary way? The built-in function "Drop" can delete a Matrix's row and column. Typical syntax for "Drop" is as follows: Drop[list,seq1,seq2...] But what if I want to drop a matrix in a way that the indices of the columns to be deleted is not a well ordered sequence? For example the matrix is 10x10, And I want to drop 3,4,7,9 rows and columns in a single time, then how to do it quickly? - Try matrix[[All, Complement[Range[10], {3, 4, 7, 9}]]] – Silvia Feb 26 '13 at 6:42 @Mr.Wizard Will a one-line answer OK? I mean I'm right now in a condition not able to type too many words.. – Silvia Feb 26 '13 at 6:52 @Mr.Wizard +1. Much better than just one line :) – Silvia Feb 26 '13 at 6:54 @Silvia In the future go ahead and post the one line answer, if it does in fact answer the question. Sometimes I'll flesh out such answers when I see them; I would have in this case. – Mr.Wizard Feb 26 '13 at 6:56 @Mr.Wizard Got it. Sometimes there will be very interesting answers to those at-first-sight basic questions, so I liked to put simple direct answer in comments. – Silvia Feb 26 '13 at 7:03 I'm sure this question is a duplicate but I cannot find it; I think it may only be duplicated on StackOverflow (I know one duplicate is there). ### Basic idea A key statement in your question is: I want to drop 3,4,7,9 rows and columns in a single time, then how to do it quickly? I believe this calls for Part which can extract (and by inverse, drop) rows and columns at the same time. You must extract the parts you don't want to drop so use Complement. m = Partition[Range@100, 10]; ranges = Complement[Range@#, {3, 4, 7, 9}] & /@ Dimensions[m] new = m[[##]] & @@ ranges; new // MatrixForm ## As a self-contained function This is written to handle arrays of arbitrary depth. drop[m_, parts__List] /; Length@{parts} <= ArrayDepth[m] := m[[##]] & @@ MapThread[Complement, {Range @ Dimensions[m, Length @ {parts}], {parts}}] drop[m, {3, 4, 7, 9}, {4, 5, 6}] // MatrixForm ## Timings user asked how to test the performance of the solutions presented. Here is a very basic series of tests that one might start with. First a custom timing function based on code from Timo: SetAttributes[timeAvg, HoldFirst] timeAvg[func_] := Do[If[# > 0.3, Return[#/5^i]] & @@ Timing@Do[func, {5^i}], {i, 0, 15}] Then the functions to test: dropCarlos[m_, rows_, cols_] /; ArrayDepth[m] > 1 := Delete[Delete[m, List /@ rows]\[Transpose], List /@ cols]\[Transpose] (* copy drop from above *) Then a test function with three parameters. (The parameters are the number of randomly selected rows and columns to drop and dimensions of the array.) test[n_, r_, c_] := With[{ a = RandomReal[9, {r, c}], rows = RandomSample[Range@r, n], cols = RandomSample[Range@c, n] }, {#, timeAvg @ #[a, rows, cols]} & /@ {dropCarlos, drop} ] // TableForm We can now probe a few shapes and sizes of data. (Another option would be plotting these tests but that's best left to a separate question if there is interest.) 
test[5, 1000, 1000] (* delete only 5 rows and columns from a 1000x1000 array *) dropCarlos 0.006608 drop 0.001248 test[900, 1000, 1000] (* delete all but 100 rows and columns from a 1000x1000 array *) dropCarlos 0.00047904 drop 0.00017472 test[50, 10000, 100] (* delete 50 from a very tall array *) dropCarlos 0.005368 drop 0.001072 test[50, 100, 10000] (* delete 50 from a very wide array *) dropCarlos 0.002496 drop 0.0008992 These tests cover a variety of cases to see if the relative performance of the functions under test change significantly; if they do a method may have hidden strengths or weaknesses. A final test that must be included in the suite is using non-packed data (all the test above were with packed arrays). This is because there are often significant internal optimizations for packed arrays that cannot be used on unpacked data. Because of this a function that is fast on packed arrays might become suddenly slow elsewhere. For this test replace the expression RandomReal[9, {r, c}] with RandomChoice["a" ~CharacterRange~ "z", {r, c}] in the definition of test. This creates an array of strings which cannot be packed. Now run another test: test[250, 1000, 1000] (* this time with an unpacked String array *) dropCarlos 0.009232 drop 0.00424 Notice that while my function is still faster it is not as much faster; this is because Part is particularly well optimized for packed arrays. - Thank you, Seems very powerful. But I need time to understand those lines with #,@,&.... , I wish I could read these codes just like reading novels... – matheorem Feb 26 '13 at 7:09 @user15964 I'm happy to explain any part of it you don't understand. I prefer not to make my answers needlessly long (and take the time to do that) unless someone asks for an explanation. A few points: f @ x is the same as f[x]. f /@ {1, 2, 3} creates {f[1], f[2], f[3]}. f @@ {1, 2, 3} creates f[1, 2, 3]. Range has attribute Listable therefore: Range @ {1, 2, 3} creates {{1}, {1, 2}, {1, 2, 3}}. Also look up Slot and SlotSequence in the help. – Mr.Wizard Feb 26 '13 at 7:12 A question: Carlos gives a rather easy and understandable method to do this using "Delete", but it involves "Transpose". I am intersted about the efficiency of the two method, yours or Carlos'? – matheorem Feb 26 '13 at 7:20 @user15964 usually the best thing to do it run timings in your own application, which can be assisted by this function. I'll also run a couple of Timing passes and tell you the result. By the way, one difference is that I set up my drop function to extend to arrays of greater depth; doing the same thing with Transpose (manually that is) would end up difficult I believe. – Mr.Wizard Feb 26 '13 at 7:23 Yeah, yours is more general! I got it. – matheorem Feb 26 '13 at 7:28 I would use Delete[] for the purpose of deleting selected rows: Delete[Array[C, {4, 4}], {{2}, {3}}] (* {{C[1, 1], C[1, 2], C[1, 3], C[1, 4]}, {C[4, 1], C[4, 2], C[4, 3], C[4, 4]}} *) To delete columns, you could do a preliminary transposition first... -
This post is another tour of quadratic programming algorithms and applications in R. First, we look at the quadratic program that lies at the heart of support vector machine (SVM) classification. Then we’ll look at a very different quadratic programming demo problem that models the energy of a circus tent. The key difference between these two problems is that the energy minimization problem has a positive definite system matrix whereas the SVM problem has only a semi-definite one. This distinction has important implications when it comes to choosing a quadratic program solver, and we’ll do some solver benchmarking to further illustrate this issue.

# QP and SVM

Let’s consider the following very simple version of the SVM problem. Suppose we have observed $y_i \in \{-1,1\}$, $X_i \in \mathbb{R}^{m}$ for $n$ (perfectly linearly separable) training cases. We let $y$ denote the $n \times 1$ vector of training labels and $X$ the $n \times m$ matrix of predictor variables. Our task is to find the hyperplane in $\mathbb{R}^m$ which “best separates” our two classes of labels $-1$ and $1$. Visually:

library("e1071")
library("rgl")

train <- iris
train$y <- ifelse(train[,5]=="setosa", 1, -1)
sv <- svm(y~Petal.Length+Petal.Width+Sepal.Length, data=train,
          kernel="linear", scale=FALSE, type="C-classification")
W <- rowSums(sapply(1:length(sv$coefs), function(i) sv$coefs[i]*sv$SV[i,]))
plot3d(train$Petal.Length, train$Petal.Width, train$Sepal.Length,
       col = ifelse(train$y==-1,"red","blue"), size = 2, type='s', alpha = .6)
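To pin down what “best separates” means, recall that the hard-margin SVM is itself a quadratic program. The formulation below is standard background we are adding for reference; it is not spelled out in the post itself:

$\displaystyle \min_{w,\,b}\ \tfrac{1}{2}\|w\|^2 \quad\text{subject to}\quad y_i\left(w^\top X_i + b\right)\ge 1,\quad i=1,\dots,n.$

The dual of this problem has Hessian entries $y_i y_j X_i^\top X_j$, a Gram matrix that is positive semi-definite but typically rank-deficient; this is exactly the semi-definiteness, mentioned above, that makes the choice of solver matter.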
$$\require{cancel}$$ # 11.4: Magnetic Force on a Current-Carrying Conductor Moving charges experience a force in a magnetic field. If these moving charges are in a wire—that is, if the wire is carrying a current—the wire should also experience a force. However, before we discuss the force exerted on a current by a magnetic field, we first examine the magnetic field generated by an electric current. We are studying two separate effects here that interact closely: A current-carrying wire generates a magnetic field and the magnetic field exerts a force on the current-carrying wire. # Magnetic Fields Produced by Electrical Currents When discussing historical discoveries in magnetism, we mentioned Oersted’s finding that a wire carrying an electrical current caused a nearby compass to deflect. A connection was established that electrical currents produce magnetic fields. (This connection between electricity and magnetism is discussed in more detail in Sources of Magnetic Fields.) The compass needle near the wire experiences a force that aligns the needle tangent to a circle around the wire. Therefore, a current-carrying wire produces circular loops of magnetic field. To determine the direction of the magnetic field generated from a wire, we use a second right-hand rule. In RHR-2, your thumb points in the direction of the current while your fingers wrap around the wire, pointing in the direction of the magnetic field produced (Figure). If the magnetic field were coming at you or out of the page, we represent this with a dot. If the magnetic field were going into the page, we represent this with an ×. These symbols come from considering a vector arrow: An arrow pointed toward you, from your perspective, would look like a dot or the tip of an arrow. An arrow pointed away from you, from your perspective, would look like a cross or an ×. A composite sketch of the magnetic circles is shown in Figure, where the field strength is shown to decrease as you get farther from the wire by loops that are farther separated. Figure $$\PageIndex{1}$$: (a) When the wire is in the plane of the paper, the field is perpendicular to the paper. Note the symbols used for the field pointing inward (like the tail of an arrow) and the field pointing outward (like the tip of an arrow). (b) A long and straight wire creates a field with magnetic field lines forming circular loops. # Calculating the Magnetic Force Electric current is an ordered movement of charge. A current-carrying wire in a magnetic field must therefore experience a force due to the field. To investigate this force, let’s consider the infinitesimal section of wire as shown in Figure. The length and cross-sectional area of the section are dl and A, respectively, so its volume is $$V = A \cdot dl$$. The wire is formed from material that contains n charge carriers per unit volume, so the number of charge carriers in the section is $$nA \cdot dl$$. 
If the charge carriers move with drift velocity $$\vec{v}_d$$, the current I in the wire is (from Current and Resistance)

$I = neAv_d.$

The magnetic force on any single charge carrier is $$e\vec{v}_d \times \vec{B}$$, so the total magnetic force $$d\vec{F}$$ on the $$nA\cdot dl$$ charge carriers in the section of wire is

$d\vec{F} = (nA \cdot dl)e\vec{v}_d \times \vec{B}.$

We can define $$d\vec{l}$$ to be a vector of length dl pointing along $$\vec{v}_d$$, which allows us to rewrite this equation as

$d\vec{F} = neAv_d d\vec{l} \times \vec{B},$

or

$d\vec{F} = Id\vec{l} \times \vec{B}.$

This is the magnetic force on the section of wire. Note that it is actually the net force exerted by the field on the charge carriers themselves. The direction of this force is given by RHR-1, where you point your fingers in the direction of the current and curl them toward the field. Your thumb then points in the direction of the force.

Figure $$\PageIndex{2}$$: An infinitesimal section of current-carrying wire in a magnetic field.

To determine the magnetic force $$\vec{F}$$ on a wire of arbitrary length and shape, we must integrate Equation over the entire wire. If the wire section happens to be straight and B is uniform, the equation differentials become absolute quantities, giving us

$\vec{F} = I\vec{l} \times \vec{B}.$

This is the force on a straight, current-carrying wire in a uniform magnetic field.

Example $$\PageIndex{1}$$: Balancing the Gravitational and Magnetic Forces on a Current-Carrying Wire

A wire of length 50 cm and mass 10 g is suspended in a horizontal plane by a pair of flexible leads (Figure). The wire is then subjected to a constant magnetic field of magnitude 0.50 T, which is directed as shown. What are the magnitude and direction of the current in the wire needed to remove the tension in the supporting leads?

Figure $$\PageIndex{3}$$: (a) A wire suspended in a magnetic field. (b) The free-body diagram for the wire.

Strategy

From the free-body diagram in the figure, the tensions in the supporting leads go to zero when the gravitational and magnetic forces balance each other. Using the RHR-1, we find that the magnetic force points up. We can then determine the current I by equating the two forces.

Solution

Equate the two forces of weight and magnetic force on the wire:

$mg = IlB.$

Thus,

$I = \frac{mg}{lB} = \frac{(0.010 \space kg)(9.8 \space m/s^2)}{(0.50 \space m)(0.50 \space T)} = 0.39 \space A.$

Significance

This large magnetic field creates a significant force on a length of wire to counteract the weight of the wire.

Example $$\PageIndex{2}$$: Calculating Magnetic Force on a Current-Carrying Wire

A long, rigid wire lying along the y-axis carries a 5.0-A current flowing in the positive y-direction. (a) If a constant magnetic field of magnitude 0.30 T is directed along the positive x-axis, what is the magnetic force per unit length on the wire? (b) If a constant magnetic field of 0.30 T is directed 30 degrees from the +x-axis towards the +y-axis, what is the magnetic force per unit length on the wire?

Strategy

The magnetic force on a current-carrying wire in a magnetic field is given by $$\vec{F} = I\vec{l} \times \vec{B}$$. For part a, since the current and magnetic field are perpendicular in this problem, we can simplify the formula to give us the magnitude and find the direction through the RHR-1. The angle θ is 90 degrees, which means $$sin \space \theta = 1.$$ Also, the length can be divided over to the left-hand side to find the force per unit length.
For part b, the current times length is written in unit vector notation, as well as the magnetic field. After the cross product is taken, the directionality is evident by the resulting unit vector.

Solution

1. We start with the general formula for the magnetic force on a wire. We are looking for the force per unit length, so we divide by the length to bring it to the left-hand side. We also set $$sin \space \theta = 1$$. The solution therefore is

$F = IlB \space sin \space \theta$

$\frac{F}{l} = IB \space sin \space \theta = (5.0 \space A)(0.30 \space T)$

$\frac{F}{l} = 1.5 \space N/m.$

Directionality: Point your fingers in the positive y-direction and curl your fingers in the positive x-direction. Your thumb will point in the $$-\vec{k}$$ direction. Therefore, with directionality, the solution is

$\frac{\vec{F}}{l} = -1.5 \vec{k} \space N/m.$

2. The current times length and the magnetic field are written in unit vector notation. Then, we take the cross product to find the force:

$\vec{F} = I\vec{l} \times \vec{B} = (5.0 \space A) l\hat{j} \times (0.30 \space T)\left(cos(30^o)\hat{i} + sin(30^o)\hat{j}\right)$

$\frac{\vec{F}}{l} = -(5.0 \space A)(0.30 \space T) \space cos(30^o)\hat{k} = -1.30 \hat{k} \space N/m.$

Significance

This large magnetic field creates a significant force on a small length of wire. As the angle of the magnetic field becomes more closely aligned to the current in the wire, there is less of a force on it, as seen from comparing parts a and b.

Exercise $$\PageIndex{1}$$

A straight, flexible length of copper wire is immersed in a magnetic field that is directed into the page. (a) If the wire’s current runs in the +x-direction, which way will the wire bend? (b) Which way will the wire bend if the current runs in the –x-direction?

Solution

a. bends upward; b. bends downward

Example $$\PageIndex{3}$$: Force on a Circular Wire

A circular current loop of radius R carrying a current I is placed in the xy-plane. A constant uniform magnetic field cuts through the loop parallel to the y-axis (Figure). Find the magnetic force on the upper half of the loop, the lower half of the loop, and the total force on the loop.

Figure $$\PageIndex{4}$$: A loop of wire carrying a current in a magnetic field.

Strategy

The magnetic force on the upper loop should be written in terms of the differential force acting on each segment of the loop. If we integrate over each differential piece, we solve for the overall force on that section of the loop. The force on the lower loop is found in a similar manner, and the total force is the addition of these two forces.

Solution

A differential force on an arbitrary piece of wire located on the upper ring is:

$dF = I B \space sin \space \theta \space dl,$

where $$\theta$$ is the angle between the magnetic field direction (+y) and the segment of wire. A differential segment is located at the same radius, so using an arc-length formula, we have:

$dl = Rd\theta$

$dF = IBR \space sin \space \theta \space d\theta.$

In order to find the force on a segment, we integrate over the upper half of the circle, from 0 to $$\pi$$. This results in:

$F = IBR \int_0^{\pi} sin \space \theta \space d\theta = IBR(-cos \pi + cos 0) = 2 IBR.$

The lower half of the loop is integrated from $$\pi$$ to zero, giving us:

$F = IBR \int_{\pi}^0 sin \space \theta \space d\theta = IBR(-cos 0 + cos \pi) = -2 IBR.$

The net force is the sum of these forces, which is zero.

Significance

The total force on any closed loop in a uniform magnetic field is zero. Even though each piece of the loop has a force acting on it, the net force on the system is zero.
(Note that there is a net torque on the loop, which we consider in the next section.) ## Contributors • Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
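As a numerical cross-check of Example 2(b) above, the cross product can be evaluated directly. This short sketch using NumPy is our addition, not part of the OpenStax text; the variable names are our own.

```python
import numpy as np

I = 5.0                                    # current (A), flowing in +y
l_hat = np.array([0.0, 1.0, 0.0])          # unit vector along the wire

# 0.30 T field directed 30 degrees from +x toward +y
B = 0.30 * np.array([np.cos(np.radians(30.0)), np.sin(np.radians(30.0)), 0.0])

# Force per unit length: F/l = I (l_hat x B)
print(I * np.cross(l_hat, B))              # ~ [0, 0, -1.30], i.e. -1.30 k_hat N/m
```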
# trig

Find the measure, in degrees, of the smallest positive angle theta for which sin(3 theta) = cos(6 theta).

May 23, 2022

#1

We can make use of the identity $$\cos \theta = \sin(90^\circ - \theta)$$.

$$\sin 3\theta = \cos 6\theta\\ \sin 3\theta = \sin(90^\circ - 6\theta)$$

To get the smallest positive angle theta, we drop the sines and equate the arguments. (Note that this does not always work, but if we assume that $$0^\circ < 3\theta < 90^\circ$$ and $$0^\circ < 90^\circ - 6\theta < 90^\circ$$, then it does.)

$$3\theta = 90^\circ -6\theta\\ 9\theta = 90^\circ\\ \theta = \boxed{10^\circ}$$

May 23, 2022
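A quick check of the answer (our addition):

$$\sin(3\cdot 10^\circ) = \sin 30^\circ = \tfrac{1}{2} \qquad \text{and} \qquad \cos(6\cdot 10^\circ) = \cos 60^\circ = \tfrac{1}{2},$$

so $$\theta = 10^\circ$$ does satisfy the equation.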
# Error in .Internal(as.vector(x, "symbol")) : 'x' is missing

Maybe it is too late at night, but I am befuddled by this error message:

> model1 <- mxModel(model1,
+     mxAlgebra(name="fvalMZ", expression = weight * (MZ@data@observed - MZ$zpreMZ) *
+         (MZ@data@observed - MZ$zpreMZ)))
Error in .Internal(as.vector(x, "symbol")) : 'x' is missing

Full code attached (1.15 KB).

---

Oops, my apologies. There was an error in our error handling! The error message should have been:

Unknown matrix operator or function '@' in mxAlgebra(name = "fvalMZ", expression = weight * (MZ@data@observed - MZ$zpreMZ) * (MZ@data@observed - MZ$zpreMZ))

There are two things going on here. First, the '@' operator is not allowed inside mxAlgebra() expressions. Inside an mxAlgebra(), it is legal to reference any MxMatrix or MxAlgebra by name, along with any combination of supported matrix operators or functions (see the list). We also allow numeric literals and the names of free parameters in an MxAlgebra expression, but those are automagically translated into 1 x 1 matrices. The more useful piece of advice is that if you'd like to evaluate an algebra that is based on data, the trick we use for now is to duplicate the contents of the data and place it inside an MxMatrix with an arbitrary name, like "matrixData" or something, with no free parameters.

Second, I think you want to replace the '^' operator with the '%^%' operator. The former requires that the two arguments have equal dimensions, and the latter does not. Thanks to R's misguided operator overloading, the '^' operator performs matrix exponentiation when both arguments are matrices, but it performs Kronecker exponentiation when the second argument is a scalar value. Since in OpenMx everything is a matrix, you need to use the Kronecker exponent to have the desired effect.

---

Thank you very much!
A symbol, in general, is something used for or regarded as representing something else: a material object representing something, often something immaterial; an emblem, token, or sign.

The term sacred geometry refers to the geometrical laws said to create everything in existence; it encompasses the religious, philosophical, and spiritual beliefs that have sprung up around geometry in all the major cultures during the course of human history. Geometric shapes—triangles, circles, squares, stars—have been part of human religious symbolism for thousands of years, long before they became part of scientific endeavors and construction projects by the Egyptians and Greeks. The simplest shapes are found in nature and are used by many different cultures around the world to represent a wide variety of meanings. The labyrinth is one such symbol, found associated with 'sacred' places all around the ancient world; from the middle ages onward it has also been used as a tool for pilgrimage, representing our metaphorical path through life.

So what does the number 666 mean when you translate it out using the Greek alphabet? Given the hatred of the Roman Empire at the time, and particularly of its leader, Nero Caesar, who was considered to be especially evil, many historians have looked for references to him in the Biblical text, which was not written in a vacuum and was very much a product of its time.

The name Metatron refers to an angel and archangel in certain branches of Christian mythology and Judaism; the Christian scriptures and Jewish Tanakh do not mention him, though the Talmud mentions him a few times. The Metatron's cube, from which the figure takes its name, is held to be the symbol of the "Beginning", the scheme of the Genesis in which everything finds a place: by combining the centers of each sphere in the figure, you can draw all five "perfect solids" or Platonic solids, namely the cube, tetrahedron, icosahedron, dodecahedron, and octahedron. The Star of David is the symbol of Judaism, and by some manner of laws not yet understood its components align with the already existing Hebrew word merkavah, meaning "chariot". The triad is the number of knowledge: music, geometry, and astronomy, and the science of the celestials and terrestrials. Its sacredness, and that of its symbol the triangle, is derived from the fact that it is made up of the monad and the duad; Pythagoras taught that the cube of this number had the power of the lunar circle. A related symbol is found in the 64th Chapter of the Book of the Dead, the oldest and one of the most important chapters in that sacred volume, said to have been written by Thoth at Sais at the commencement of Egyptian history, about 14,000 B.C.; translations of it vary somewhat, but not materially. The cube itself is a symbol especially interesting to Arch Masons.

On the mathematical side: a cube number is a number multiplied by itself 3 times; this can also be called 'a number cubed', and the symbol for cubed is ³. For example, 2³ = 2 × 2 × 2 = 8 and 3³ = 3 × 3 × 3 = 27. The cube root is the number that, when multiplied by itself twice, equals the original number: the cube root of a number a, denoted $\sqrt[3]{a}$, is the number b such that b³ = a. Since 3³ = 27, the cube root of 27 is 3. Similarly, the fourth root is the number that, when multiplied by itself three times, equals the original number. Like square roots, these are just the opposite of taking the power of numbers. A quantity can also be proportional to a square, a cube, an exponential, or some other function; for example, a stone dropped from the top of a high tower falls a distance proportional to the square of the time of fall.

In science, an element symbol is a one- or two-letter abbreviation for a chemical element name; when a symbol consists of two letters, the first letter is always capitalized, while the second letter is lowercase. (Element symbols can also refer to alchemy symbols for the elements.) An "Adopt an Element" fact sheet typically records an element's name, symbol, atomic number, atomic mass, and cost, e.g. "Cost = $3.20 for 1 gram"; students research these with textbooks (or CD-ROMs), science encyclopedias, science catalogs, magazines, and/or Internet sites. The dimension of a physical quantity is defined as the power to which the fundamental quantities are raised to express it; the dimensions of mass, length and time are represented as [M], [L] and [T] respectively, and SI tables list each physical quantity with its symbol and unit (for example, mass: m or M, kilogram, kg). Cubic means having three dimensions; solid. A cubic centimeter is the volume of a cube that is 1 centimeter on each side; its symbol is cm³ (also abbreviated cc), and 1 cm³ = 1 cc = 1 ml = 0.001 of a liter. The cubic meter (meter cubed) is the unit of volume in the International System of Units; its symbol is m³, less formally abbreviated cu m. Science labs, particularly chemistry labs, have a lot of safety signs; since the standard sign images are public domain, you can use them to learn what the different symbols mean, as well as to make signs for your own lab.

Reading is defined as a cognitive process that involves decoding symbols to arrive at meaning; it is an active process of constructing the meanings of words.

Aperture Science was founded as Aperture Fixtures in the early 1940s by Cave Johnson. Aperture Fixtures was primarily dedicated to the manufacture and distribution of shower curtains – a low-tech portal between the inside and outside of a shower – with Cave Johnson winning the "Shower Curtain Salesman of 1943" award.

The binary operator for the Rubik group, *, is a concatenation of sequences of cube moves, or rotations of a face of the cube; this operation is closed, since a sequence of face rotations is again a sequence of face rotations. We will almost always omit the * symbol, and interpret fg as f ∗ g.

Common pronunciations of mathematical and scientific symbols (in British English - Gimson, 1981) are given in standard lists; displaying them requires a Unicode font such as Arial Unicode MS, Doulos SIL Unicode, or Lucida Sans Unicode (see The International Phonetic Alphabet in Unicode). The symbols != and <> are primarily from computer science. Symbols.com is a unique online encyclopedia that contains everything about symbols, signs, flags and glyphs, arranged by categories such as culture, country, and religion; you can explore its world of symbols by category, alphabetically, or by keyword search.
# What is the mass of 2 moles of sulfur atoms?

##### 1 Answer

Jul 8, 2016

Approx. $64 \cdot g$.

#### Explanation:

The mass of a mole of sulfur atoms is $32.06 \cdot g$. In $1$ $mol$ of elemental sulfur, there are ${N}_{A}, \text{Avogadro's number}$ of sulfur atoms. You have a 2 mole quantity. From where did I get that number $32.06 \cdot g \cdot mol^{-1}$? From the Periodic Table, of course. Will you have to remember it? No; a Periodic Table should be available to you in any exam.
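A one-line check of the arithmetic (our addition, not part of the original answer):

$2 \cdot mol \times 32.06 \cdot g \cdot mol^{-1} = 64.12 \cdot g \approx 64 \cdot g$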
## nickersia one year ago

A bag contains 8 blue marbles, 4 red marbles and x white marbles. A marble is drawn at random and not replaced. A second marble is drawn at random. If the probability that both are white is 5/51, how many white marbles are in the bag?

1. nickersia

What I did is: In the bag at the start $8+4+x=12+x$ Probability that the first one is white: $\frac{ x }{ 12+x}$ Now in the bag: $8+4+x-1=11+x$ Probability that the second one is white: $\frac{ x-1 }{ 11+x }$ So $\frac{ x }{ 12+x } + \frac{ x-1 }{ 11+x } =\frac{ 5 }{ 51 }$ Is that true?

2. rvc

i think its correct not solved but just what u wrote

3. nickersia

I just solved it for x, and I got x1=1.138 and x2=-11.51 I don't think that's correct, cause I guess I should be getting a whole number... There can't be 1.138 marbles in the bag :S

4. rvc

it might be 12+(x-1)

5. nickersia

It makes no difference because at some point you'll have to get rid of the brackets :L

6. Mashy

7. TwoPointInfinity

I don't see this being possible. How can both marbles have the same probability if the first marble wasn't replaced?

8. moli1993

I think u should multiply (x/12+x) *(x-1/11+x)=5/51 like this

9. Mashy

You first calculated the probability of picking a white ball, then u calculated the probability of picking another white ball PROVIDED you have already picked one white ball (this is a conditional probability)

10. Mashy

Hence the probability of both happening is the product, not the sum!

11. Mashy

$P(A \text{ and } B) = P(A)\cdot P(B \text{ provided } A \text{ has happened})$

12. nickersia

Oh, I got ye, I'll try it that way

13. moli1993

yeh Mashy is correct, multiply the terms :)

14. nickersia

$\frac{ x ^{2}-x}{ x ^{2}+23x+132 }=\frac{ 5 }{ 51 }$ ... ... ... Leads me to x1=6.18 and x2=-2.57

15. Zarkon

try solving again

16. nickersia

$51x^{2}-51x=5x^{2}+115x+660$ $46x^{2}-166x-660=0$ $x_{1}=6.18 \qquad x_{2}=-2.57$

17. nickersia

Sorry

18. Zarkon
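The thread breaks off before the corrected arithmetic, so for reference (our addition): solving the quadratic exactly gives a whole number of marbles.

$46x^{2}-166x-660=0 \implies 23x^{2}-83x-330=0 \implies x=\frac{83\pm\sqrt{83^{2}+4\cdot 23\cdot 330}}{46}=\frac{83\pm 193}{46}$

The positive root is $x=6$, and indeed $\frac{6}{18}\cdot\frac{5}{17}=\frac{5}{51}$.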
# reddit is a platform for internet communities

[–] 1 point ago

Except couldn't it blow up at infinitely-many points?

[–] 3 points ago

I don't know if that's necessarily common for the most part. I'm in fourth year and almost everyone I know has at least one or two friends from their rez. In my particular case, my closest friends are all people I met in rez.

[–] 4 points ago

Spoiler alert: the equation is from one of his papers.

[–] 1 point ago

It's nice in analysis to start the naturals at 1, because you can say 1/n for n \in N and not run into problems.

[–] 1 point ago

Technically it's not new, since we had the same design last year.

[–] 2 points ago

It's also used in the US.

[–] 2 points ago

Another measure for text similarity is cosine similarity. You create vectors from words by counting the frequency of each letter for each word, so each word corresponds to a vector with 26 entries, where the number in the nth entry is the number of times the nth letter (i.e. a is 1, b is 2, etc.) shows up. So aaabacc becomes [4, 1, 2, ...] and so on. Then you take the dot product between these vectors.

[–] 3 points ago

What do you mean by similarity? How close the numbers are to each other? If so, similar to what someone else suggested: http://mathb.in/14262 This will be k if the k numbers you give it are equal, and the greater the distance between the numbers, the lower the result will be. You could also make it a product instead of a sum if you want the greatest possible value to be 1 (or just divide the whole mess by k).

[–] 2 points ago

Actually that would be undefined for X = [5,5,5]. Make the denominators 1 + abs(x_n - x_m) instead.

[–] 2 points ago

The order of a permutation is the number of times we need to apply it in order to get the identity. So the order of the permutation you've given, (1)(2 3), is 2, because if we apply it twice, the elements end up where they started.

once   twice
1 -> 1 -> 1
2 -> 3 -> 2
3 -> 2 -> 3

You're correct about the definition of the length of a cycle. By the way, LCM(1,2) = 2, not 1. There is no integer n such that 1 = 2*n. The least common multiple of n1, n2, ..., nk is the smallest number m such that m / n1 is an integer, m / n2 is an integer, etc. 1 / 2 is not an integer. Here are some more examples of permutations and their orders to give you an idea:

(1)(2)(3) has order 1
(1 2 3) has order 3
(1)(2 3) has order 2
(1)(2 3)(4 5) has order 2
(1 2 3 4)(5 6) has order 4
(1 2 3)(4 5) has order 6.

[–] 2 points ago

Unless it's abstract algebra.

[–] 8 points ago

They do very different things. CoffeeScript provides an entirely different syntax. TypeScript is a superset of JS with the addition of types.

[–] 1 point ago

Gauss (I believe) said positive, negative, and imaginary numbers should be called direct, indirect and lateral respectively.

[–] 1 point ago

In many states the introduction of speed limits on highways noticeably decreased the number of deadly accidents.

[–] 5 points ago

Let's see you do it then.

[–] 1 point ago

You can divide by x^2 + 3x - 10. The result will be a quadratic with the roots you want.

[–] 1 point ago

You're on the right track. When I have a polynomial, I can try to find roots by factoring out what are called "linear" polynomials: that is, (x - a) for some number a. So if I have a polynomial f(x), if I can factor it into the form f(x) = (x-a)g(x), where g(x) is another polynomial, then I know that a is a root of f(x).
One way to do this is to try some roots until you find one (finding an x to plug in so that f(x) = 0). But here we're already given 2 of the roots! So we know that if f(x) = x^4 + 3x^3 + 15x^2 + 75x - 250, then f(x) = (x+5)(x-2)g(x) for some unknown g(x). The roots of g(x) will be the remaining, unknown roots of f(x). How do you think you can find g(x)? (A quick numeric check of this division appears right after this thread.)

[–] 5 points ago

This content is part of the GLAM initiative. You can find a bunch of similar pages here. So yes, it does belong on Wikipedia. Please look for context before pessimistically nominating an article for deletion.

[–] 2 points ago

Psi is used in math too.

[–] 31 points ago

At least those are useful for representing 3d rotations without gimbal lock. If anything, lash out at octonions.

[–] 8 points ago

More like downvotes from people who realize that one person liking a game does not make it suddenly an unlikable game.

[–] 1 point ago

Montreal is a fairly active place in quantum information theory/computation/crypto, which the Bell states are a pretty fundamental part of.

[–] 1 point ago

Also he could have just, like, used the public bathroom before he went to her house.

[–] 2 points ago

Proofs in geometry like the ones you do in high school suck. They are not quite the same as math proofs. See page 18 of this.
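Picking up the division hint from the thread above: dividing f(x) by (x+5)(x-2) = x^2 + 3x - 10 recovers the remaining quadratic factor. This numeric sketch is our illustration, not from the original comments:

```python
import numpy as np

f = [1, 3, 15, 75, -250]        # x^4 + 3x^3 + 15x^2 + 75x - 250
d = [1, 3, -10]                 # (x + 5)(x - 2)

quotient, remainder = np.polydiv(f, d)
print(quotient)                 # [ 1.  0. 25.]  -> g(x) = x^2 + 25
print(remainder)                # (near-)zero: the division is exact
```

So g(x) = x^2 + 25 and the remaining roots of f are the imaginary pair ±5i.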
# Condensed Descriptive Tools

chemtools.tool.condensedtool

Local reactivity indicators indicate the susceptibility of a particular point in space to reactions. Chemists usually think, however, in terms of the reactivity of atoms and functional groups. The coarse-graining of pointwise local reactivity indicators into atomic and/or functional-group contributions gives condensed reactivity indicators.

In conceptual DFT, the fundamental local reactivity indicators are derivatives of the electron density with respect to either the number of electrons or the chemical potential,

$\lambda \left(\mathbf{r}\right) \equiv \left(\frac{\partial^{k} \rho \left(\mathbf{r}\right)} {\partial N^{k}} \right)_{v\left(\mathbf{r}\right)} \qquad \text{or} \qquad \lambda \left(\mathbf{r}\right) \equiv \left(\frac{\partial^{k} \rho \left(\mathbf{r}\right)} {\partial \mu^{k}} \right)_{v\left(\mathbf{r}\right)}$

Sometimes (e.g., the local electrophilicity), one will multiply one of the above reactivity indicators by a global reactivity indicator, or consider the sum of two or more local reactivity indicators. To coarse-grain these descriptors, we introduce a method for partitioning the molecule into atoms/functional groups. This partitioning is expressed in terms of atomic weighting functions (or, occasionally, atomic weighting operators) in real space,

$\begin{split}w_{A} \left(\mathbf{r}\right) \ge 0 \\ \sum_{A=1}^{{N}_{\text{atoms}}} w_{A} \left(\mathbf{r}\right) = 1\end{split}$

In the fragment-of-molecular-response (FMR) approach, local properties are divided directly,

$\lambda_{A}^{\text{FMR}} = \int w_{A} \left(\mathbf{r}\right) \lambda \left(\mathbf{r}\right) d\mathbf{r}$

In the response-of-molecular-fragment (RMF) approach, the electron density is condensed into atomic populations,

$N_{A} = \int w_{A} \left(\mathbf{r}\right) \rho \left(\mathbf{r}\right) d\mathbf{r}$

and the atomic populations are then differentiated,

$\lambda_{A}^{\text{RMF}} \equiv \left(\frac{\partial^{k} N_{A}} {\partial N^{k}} \right)_{v\left(\mathbf{r}\right)} \qquad \text{or} \qquad \lambda_{A}^{\text{RMF}} \equiv \left(\frac{\partial^{k} N_{A}} {\partial \mu^{k}} \right)_{v\left(\mathbf{r}\right)}$

The FMR and RMF approaches are just two among many different methods for atom-condensing local reactivity indicators; they give the same results only if the atomic weight functions do not depend on the number of electrons in the molecule, as is true for the (ordinary) Hirshfeld partitioning, the Voronoi/Becke partitioning, and the Mulliken partitioning. For more sophisticated partitioning methods, like iterative Hirshfeld charges, the FMR and RMF methods give different results, though we know of no compelling formal or practical reasons to favour one approach over the other. In some contexts the RMF approach is easier to compute, since it only requires performing population analysis on several different charge states of the molecule being studied. For higher-order reactivity indicators, corresponding to $$k > 2$$ in the equations above, the FMR approach seems somewhat simpler.
The fundamental nonlocal reactivity indicators,

$v \left(\mathbf{r},\mathbf{r'}\right) \equiv \left( \frac{ \partial^{k} \chi \left( \mathbf{r},\mathbf{r'} \right)} {\partial N^{k}} \right)_{v\left(\mathbf{r}\right)} \qquad \text{and} \qquad v \left(\mathbf{r},\mathbf{r'} \right)\equiv \left( \frac{ \partial^{k} \chi \left( \mathbf{r},\mathbf{r'} \right)} {\partial \mu^{k}} \right)_{v\left(\mathbf{r}\right)}$

can be condensed in a similar way, producing a matrix that expresses the change in the reactivity of one portion of the molecule in response to a perturbation of a different portion of the molecule. As before, in the fragment-of-molecular-response (FMR) approach one condenses the nonlocal reactivity indicator directly,

$v_{AB}^{\text{FMR}} = \iint w_{A} \left(\mathbf{r}\right) v \left(\mathbf{r},\mathbf{r'}\right) w_{B} \left(\mathbf{r'}\right) d\mathbf{r} d\mathbf{r'}$

while in the response-of-molecular-fragment (RMF) approach one condenses the linear response function,

$\chi_{AB} = \iint w_{A} \left(\mathbf{r}\right) \chi \left(\mathbf{r},\mathbf{r'}\right) w_{B} \left(\mathbf{r'}\right) d\mathbf{r} d\mathbf{r'}$

or the softness kernel,

$s_{AB} = \iint w_{A} \left(\mathbf{r}\right) s \left(\mathbf{r},\mathbf{r'}\right) w_{B} \left(\mathbf{r'}\right) d\mathbf{r} d\mathbf{r'}$

and then differentiates these quantities with respect to either the number of electrons or the chemical potential,

$v_{AB}^{\text{RMF}} \equiv \left( \frac{ \partial^{k} \chi_{AB}} {\partial N^{k}} \right)_{v\left(\mathbf{r}\right)} \qquad \text{and} \qquad v_{AB}^{\text{RMF}} \equiv \left( \frac{ \partial^{k} s_{AB}} {\partial \mu^{k}} \right)_{v\left(\mathbf{r}\right)}$

Nonlocal reactivity indicators depending on three or more points in space, $$v\left(\mathbf{r},\mathbf{r'},\mathbf{r''},\ldots\right)$$, can be condensed into tensors, $$v_{ABC...}$$, using the same strategy.

Evaluating the FMR and RMF condensed reactivity indicators requires that one select an appropriate model for the dependence of the energy upon the number of electrons. ChemTools can evaluate condensed reactivity indicators for a general energy model using the formulation in ref. , but the most common choices are the linear model and the quadratic model. In the linear model, the condensed Fukui functions are

$f_{A}^{\text{FMR,+}} = \int w_{A} \left(N;\mathbf{r}\right) f^{+} \left(\mathbf{r}\right) d\mathbf{r} = \int w_{A} \left(N;\mathbf{r}\right) \left(\rho \left(N+1;\mathbf{r}\right) - \rho \left(N;\mathbf{r}\right) \right) d\mathbf{r}$

$f_{A}^{\text{FMR,-}} = \int w_{A} \left(N;\mathbf{r}\right) f^{-} \left(\mathbf{r}\right) d\mathbf{r} = \int w_{A} \left(N;\mathbf{r}\right) \left(\rho \left(N;\mathbf{r}\right) - \rho \left(N-1;\mathbf{r}\right) \right) d\mathbf{r}$

in the FMR approach and

$\begin{split}f_{A}^{\text{RMF,+}} &= N_{A} \left(N+1\right) - N_{A} \left(N\right) = q_{A} \left(N\right) - q_{A} \left(N+1\right) \\ &= \int w_{A} \left(N+1;\mathbf{r}\right) \rho \left(N+1;\mathbf{r}\right) - w_{A} \left(N;\mathbf{r}\right) \rho \left(N;\mathbf{r}\right) d\mathbf{r}\end{split}$

$\begin{split}f_{A}^{\text{RMF,-}} &= N_{A} \left(N\right) - N_{A} \left(N-1\right) = q_{A} \left(N-1\right) - q_{A} \left(N\right) \\ &= \int w_{A} \left(N;\mathbf{r}\right) \rho \left(N;\mathbf{r}\right) - w_{A} \left(N-1;\mathbf{r}\right) \rho \left(N-1;\mathbf{r}\right) d\mathbf{r}\end{split}$

in the RMF approach.
Here we have used the notation $$\rho\left(N;\mathbf{r}\right)$$ to indicate the $$N-$$ electron ground-state density and $$w_A\left(N;\mathbf{r}\right)$$ to indicate the atomic weighting function for atom $$A$$ in the $$N-$$ electron molecule. Similarly we use $$N_A\left(N\right)$$ and $$q_A\left(N\right)$$ to indicate the population and charge, respectively, of atom $$A$$ in the $$N-$$ electron molecule. In the linear model, the condensed dual descriptor is technically undefined.

In the quadratic model, the condensed Fukui function and condensed dual descriptor are defined as

$\begin{split}f_{A}^{\text{FMR,0}} &= \tfrac{1}{2} \int w_{A} \left(N;\mathbf{r}\right) \left( f^{+} \left(\mathbf{r}\right) + f^{-} \left(\mathbf{r}\right) \right) d\mathbf{r} \\ &= \tfrac{1}{2} \int w_{A} \left(N;\mathbf{r}\right) \left( \rho \left(N+1;\mathbf{r}\right) - \rho \left(N-1;\mathbf{r}\right) \right) d\mathbf{r}\end{split}$

$\begin{split}f_{A}^{\text{FMR,(2)}} &= \int w_{A} \left(N;\mathbf{r}\right) \left( f^{+} \left(\mathbf{r}\right) - f^{-} \left(\mathbf{r}\right) \right) d\mathbf{r} \\ &= \int w_{A} \left(N;\mathbf{r}\right) \left( \rho \left(N+1;\mathbf{r}\right) - 2 \rho \left(N;\mathbf{r}\right) + \rho \left(N-1;\mathbf{r}\right) \right)d\mathbf{r}\end{split}$

in the FMR approach and

$\begin{split}f_{A}^{\text{RMF,0}} &= \tfrac{1}{2} \left( N_{A} \left(N+1\right) - N_{A} \left(N-1\right) \right) \\ &= \tfrac{1}{2} \left( q_{A} \left(N-1\right) - q_{A} \left(N+1\right) \right) \\ &= \tfrac{1}{2} \int w_{A} \left(N+1;\mathbf{r}\right) \rho \left( N+1;\mathbf{r}\right) - w_{A} \left(N-1;\mathbf{r}\right) \rho \left(N-1;\mathbf{r}\right) d\mathbf{r}\end{split}$

$\begin{split}f_{A}^{\text{RMF,(2)}} &= \left( N_{A} \left(N+1\right) - 2 N_{A} \left(N\right) + N_{A} \left(N-1\right) \right) \\ &= - \left( q_{A} \left(N+1\right) - 2 q_{A} \left(N\right) + q_{A} \left(N-1\right) \right) \\ &= \int w_{A} \left(N+1;\mathbf{r}\right) \rho \left(N+1;\mathbf{r}\right) - 2 w_{A} \left(N;\mathbf{r}\right) \rho \left(N;\mathbf{r}\right) + w_{A}\left(N-1;\mathbf{r}\right) \rho \left(N-1;\mathbf{r}\right) d\mathbf{r}\end{split}$

in the RMF approach.

Condensed reactivity indicators corresponding to derivatives with respect to the chemical potential are computed through the condensed reactivity indicators corresponding to derivatives with respect to electron number. For example, the condensed local softness is defined as

$s_{A} = S f_{A} = \frac{f_{A}}{\eta} = \frac{f_{A}}{I-A}$

and the condensed dual local softness is defined as

$s_{A}^{(2)} = \frac{f_{A}^{(2)}}{\eta^{2}} - \frac{\eta^{(2)}f_{A}^{0}}{\eta^{3}}$

These reactivity indicators can be computed using Fukui functions and/or dual descriptors from either the FMR or RMF approaches.
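To make the RMF route concrete: in the linear model the condensed Fukui functions reduce to differences of atomic charges computed for the (N-1)-, N-, and (N+1)-electron molecules. The following Python sketch is ours; the charge values and hardness are hypothetical placeholders, and this is not the ChemTools API itself:

```python
import numpy as np

# Hypothetical atomic charges q_A from three population analyses (one entry per atom A)
q_minus = np.array([0.45, -0.45])   # q_A(N-1)
q_zero  = np.array([0.30, -0.30])   # q_A(N)
q_plus  = np.array([0.05, -0.05])   # q_A(N+1)

f_plus  = q_zero - q_plus                     # f_A^{RMF,+} (linear model)
f_minus = q_minus - q_zero                    # f_A^{RMF,-} (linear model)
f_zero  = 0.5 * (q_minus - q_plus)            # f_A^{RMF,0} (quadratic model)
dual    = -(q_plus - 2 * q_zero + q_minus)    # f_A^{RMF,(2)} (quadratic model)

eta = 10.0 - 2.0                              # hardness eta = I - A, hypothetical values
s_plus = f_plus / eta                         # condensed local softness s_A = f_A / (I - A)
print(f_plus, f_minus, f_zero, dual, s_plus)
```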
## Abstract and Applied Analysis

### Stability Analysis and Control of a New Smooth Chua's System

#### Abstract

This paper is concerned with the stability analysis and control of a new smooth Chua's system. Firstly, the chaotic characteristic of the system is confirmed with the aid of the Lyapunov exponents. Secondly, it is proved that the system has a globally exponentially attractive set and a positive invariant set. For the three unstable equilibrium points of the system, a linear controller is designed to globally exponentially stabilize the equilibrium points. Then, a linear controller and an adaptive controller are, respectively, proposed so that two similar types of smooth Chua's systems are globally synchronized, and the estimation errors of the uncertain parameters converge to zero as $t$ tends to infinity. Finally, the numerical simulations are also presented.

#### Article information

Source: Abstr. Appl. Anal., Volume 2013, Special Issue (2013), Article ID 620286, 10 pages.

Dates: First available in Project Euclid: 26 February 2014

Permanent link: https://projecteuclid.org/euclid.aaa/1393447695

Digital Object Identifier: doi:10.1155/2013/620286

Mathematical Reviews number (MathSciNet): MR3049413

Zentralblatt MATH identifier: 1271.93114

#### Citation

Zhou, Guopeng; Huang, Jinhua; Liao, Xiaoxin; Cheng, Shijie. Stability Analysis and Control of a New Smooth Chua's System. Abstr. Appl. Anal. 2013, Special Issue (2013), Article ID 620286, 10 pages. doi:10.1155/2013/620286. https://projecteuclid.org/euclid.aaa/1393447695
## Graphing the derivative function

On the left is shown the graph of $f(x)$ while the right shows the graph of the derivative $f'(x)$. (Interactive HTML5 canvas plots in the original; not reproduced here.)
# Hamiltonian simulation

“The problem of simulating the dynamics of quantum systems was the original motivation for quantum computers and remains one of their major potential applications.” - Berry et al. [21]

The simulation of atoms, molecules and other biochemical systems is another application uniquely suited to quantum computation. For example, the ground state energy of large systems, the dynamical behaviour of an ensemble of molecules, or complex molecular behaviour such as protein folding, are often computationally hard or downright impossible to determine via classical computation or experimentation [22][23].

In the discrete-variable qubit model, efficient methods of Hamiltonian simulation have been discussed at length, providing several implementations depending on properties of the Hamiltonian, and resulting in a linear simulation time [24][25]. Efficient implementations of Hamiltonian simulation also exist in the CV formulation [26], with specific application to Bose-Hubbard Hamiltonians (describing a system of interacting bosonic particles on a lattice of orthogonal position states [27]). As such, this method is ideally suited to photonic quantum computation.

## CV implementation

For a quick example, consider a lattice composed of two adjacent nodes (figure not reproduced here). This graph is represented by the $$2\times 2$$ adjacency matrix $$A=\begin{bmatrix}0&1\\1&0\end{bmatrix}$$. Here, each node in the graph represents a qumode, so we can model the dynamics of bosons on this structure via a 2-qumode CV circuit.

The Bose-Hubbard Hamiltonian with on-site interactions is given by

$H = J\sum_{i}\sum_j A_{ij} \ad_i\a_j + \frac{1}{2}U\sum_i \hat{n}_i(\hat{n}_i-1)= J(\ad_1 \a_2 + \ad_2\a_1) + \frac{1}{2}U ( \hat{n}_1^2 - \hat{n}_1 + \hat{n}_2^2 - \hat{n}_2)$

where $$J$$ represents the transfer integral or hopping term of the boson between nodes, and $$U$$ is the on-site interaction potential. Here, $$\ad_1 \a_2$$ represents a boson transitioning from node 1 to node 2, while $$\ad_2\a_1$$ represents a boson transitioning from node 2 to node 1, and $$\hat{n}_i=\ad_i\a_i$$ is the number operator applied to mode $$i$$. Applying the Lie-product formula, we find that

$e^{-iHt} = \left[\exp\left({-i\frac{ J t}{k}(\ad_1 \a_2 + \ad_2\a_1)}\right)\exp\left(-i\frac{Ut}{2k}\hat{n}_1^2\right)\exp\left(-i\frac{Ut}{2k}\hat{n}_2^2\right)\exp\left(i\frac{Ut}{2k}\hat{n}_1\right)\exp\left(i\frac{Ut}{2k}\hat{n}_2\right)\right]^k+\mathcal{O}\left(t^2/k\right),$

where $$\mathcal{O}\left(t^2/k\right)$$ is the order of the error term, derived from the Lie product formula. Comparing this to the form of various CV gates, we can write this as the product of symmetric beamsplitters (BSgate), Kerr gates (Kgate), and rotation gates (Rgate):

$e^{-iHt} = \left[BS\left(\theta,\phi\right)\left(K(r)R(-r)\otimes K(r)R(-r)\right)\right]^k+\mathcal{O}\left(t^2/k\right).$

where $$\theta=-Jt/k$$, $$\phi=\pi/2$$, and $$r=-Ut/2k$$. For the case $$k=2$$, this can be drawn as a circuit diagram (figure not reproduced here).

For more complex CV decompositions, including those with interactions, see Kalajdzievski et al. [26] for more details.
## Blackbird code

The Hamiltonian simulation circuit displayed above for the 2-node lattice can be implemented using the Blackbird quantum circuit language:

    # prepare the initial state
    Fock(2) | q[0]

    # Two node tight-binding
    # Hamiltonian simulation
    for i in range(k):
        BSgate(theta, pi/2) | (q[0], q[1])
        Kgate(r) | q[0]
        Rgate(-r) | q[0]
        Kgate(r) | q[1]
        Rgate(-r) | q[1]

where, for this example, we have set J=1, U=1.5, k=20, t=1.086, theta = -J*t/k, and r = -U*t/(2*k). After constructing the circuit and running the engine,

    results = eng.run(ham_simulation)

the site occupation probabilities can be calculated via

    >>> state = results.state
    >>> state.fock_prob([2,0])
    0.52240124572001989
    >>> state.fock_prob([1,1])
    0.23565287685672454
    >>> state.fock_prob([0,2])
    0.24194587742325951

As Hamiltonian simulation is particle preserving, these probabilities should add up to one; indeed, summing them results in a value $$\sim1.0000000000000053$$.

We can compare this result to the analytic matrix exponential $$e^{-iHt}$$, where the matrix elements of $$H$$ can be computed in the Fock basis. Considering the diagonal interaction terms,

$\begin{split}& \braketT{0,2}{H}{0,2} = \frac{1}{2}U\braketT{0}{(\hat{n}^2-\hat{n})}{0} + \frac{1}{2}U\braketT{2}{(\hat{n}^2-\hat{n})}{2} = \frac{1}{2}U(2^2-2) = U\\[7pt] & \braketT{1,1}{H}{1,1} = 0\\[7pt] & \braketT{2,0}{H}{2,0} = U\end{split}$

as well as the off-diagonal hopping terms,

$\begin{split}& \braketT{1,1}{H}{0,2} = J\braketT{1,1}{\left(\ad_1\a_2 + \a_1\ad_2\right)}{0,2} = J(\sqrt{1}\sqrt{2} + \sqrt{0}\sqrt{3}) = J\sqrt{2}\\[7pt] & \braketT{1,1}{H}{2,0} = J\sqrt{2}\end{split}$

and taking into account the Hermiticity of the system, we arrive at

$\begin{split}H = \begin{bmatrix}U&J\sqrt{2}&0\\ J\sqrt{2} & 0 & J\sqrt{2}\\ 0 & J\sqrt{2} & U\end{bmatrix}\end{split}$

which acts on the Fock basis $$\{\ket{0,2},\ket{1,1},\ket{2,0}\}$$. Using the SciPy matrix exponential function scipy.linalg.expm():

    >>> from scipy.linalg import expm
    >>> H = J*np.sqrt(2)*np.array([[0,1,0],[1,0,1],[0,1,0]]) + U*np.diag([1,0,1])
    >>> init_state = np.array([0,0,1])
    >>> np.abs(np.dot(expm(-1j*t*H), init_state))**2
    [ 0.52249102, 0.23516277, 0.24234621]

which agrees, within the expected error margin, with our Strawberry Fields Hamiltonian simulation.

Note: A fully functional Strawberry Fields simulation containing the above Blackbird code is included at examples/hamiltonian_simulation.py.
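For readers who want to run this example end to end, the pieces above can be assembled into a single program. The following is a minimal sketch (my own assembly, not the shipped examples/hamiltonian_simulation.py; the Fock backend and a cutoff dimension of 3 are assumptions, sufficient here because the circuit preserves the two initial photons):

    import numpy as np
    import strawberryfields as sf
    from strawberryfields.ops import Fock, BSgate, Kgate, Rgate

    # parameters quoted in the text above
    J, U, k, t = 1, 1.5, 20, 1.086
    theta, r = -J * t / k, -U * t / (2 * k)

    ham_simulation = sf.Program(2)
    with ham_simulation.context as q:
        Fock(2) | q[0]                        # two bosons on node 1
        for _ in range(k):                    # k Lie-product steps
            BSgate(theta, np.pi / 2) | (q[0], q[1])
            Kgate(r) | q[0]
            Rgate(-r) | q[0]
            Kgate(r) | q[1]
            Rgate(-r) | q[1]

    eng = sf.Engine("fock", backend_options={"cutoff_dim": 3})
    state = eng.run(ham_simulation).state
    print([state.fock_prob(n) for n in ([2, 0], [1, 1], [0, 2])])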
# Taniyama-Shimura 4: The Conjecture

We’ve done a lot of work so far just to try to define the terms in the Taniyama-Shimura conjecture, but today we should finally make it. Our last piece of information is to write down what the L-function of a modular form is. Since I don’t want to build a whole bunch of theory needed to define the special class of modular forms we’ll be considering, I’ll just say that we actually need to restrict our definition of “modular form” to “normalized cuspidal Hecke eigenform”. I’ll point out exactly why we need this, but it doesn’t change anything in the conjecture except that every elliptic curve actually corresponds to an even nicer type of modular form.

Let ${f\in S_k(\Gamma_0(N))}$ be a weight ${k}$ cusp form with ${q}$-expansion ${\displaystyle f=\sum_{n=1}^\infty a_n q^n}$. Since this is an analytic function on the disk, we have the tools and theorems of complex analysis at our disposal. We can perform something called the Mellin transform. It is just a standard integral transform given by the formula $\displaystyle {\Lambda (s) = \int_0^\infty f(it)t^s\frac{dt}{t}}$. After some computation you find that this transformed function is a product of really nice functions. We get $\displaystyle {\Lambda (s)=\frac{N^{s/2}}{(2\pi)^s}\Gamma(s)L(f,s)}$, where ${\Gamma(s)}$ is the usual Gamma function.

Now if you actually went through and worked this out you would find out that ${L(f,s)}$ has a really nice form in terms of the Fourier coefficients. The so-called L-series associated to the Mellin transform is given by $\displaystyle \displaystyle L(f,s)=\sum_{n=1}^\infty \frac{a_n}{n^s}$. If your eyes glazed over for the Mellin transform talk, then just think of the L-function of the modular form as taking all of its Fourier coefficients and throwing them in the numerator of this series to make a new function.

A quick remark is that if all the ${a_n}$ are ${1}$ (this won’t happen) we recover the Riemann zeta function. Thus you could think of the L-function we get as some sort of generalization of the zeta function. If you’ve been through some elementary number theory you have probably even seen a proof that $\displaystyle {\sum_{n=1}^\infty \frac{1}{n^s}=\prod \frac{1}{1-p^{-s}}}$, where the product, taken over all primes, is called an Euler product.

Now in general if I hand you a sequence of integers ${a_n}$ that has some reasonable growth condition, then ${\sum_{n=1}^\infty \frac{a_n}{n^s}}$ will be a nice convergent series, probably with an analytic continuation to the plane. The tricky part is to figure out what types of sequences allow this Euler product decomposition. This is where we have to use that ${f}$ was of this special form. In the theory of modular forms there is something called Atkin-Lehner theory which tells us that the ${a_n}$ for a cusp form of this special type actually satisfy some nice relations such as ${a_{nm}=a_na_m}$ when ${(m,n)=1}$. These relations are precisely the ones needed to conclude that there is a nice Euler product expansion and it is given by

$\displaystyle \displaystyle L(f,s)=\prod_{p|N}(*)\prod_{p\nmid N} \frac{1}{1-a_pp^{-s}+p^{k-1-2s}}.$

We say that a variety is modular if ${L(X,s)}$ coincides with ${L(f,s)}$ up to finitely many primes for some ${f\in S_k(\Gamma_0(N))}$.
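(A quick aside: if you want to see the Euler product phenomenon numerically for the zeta function itself, a few lines of Python will do it. This is purely my own illustration, not part of the theory, and the truncation points are arbitrary.)

    # Compare the Dirichlet series and the Euler product for zeta(s) at s = 2.
    s = 2.0
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

    series = sum(1.0 / n**s for n in range(1, 200001))
    product = 1.0
    for p in primes:
        product *= 1.0 / (1.0 - p**(-s))

    # both creep up toward pi^2/6 = 1.64493...
    print(series, product)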
We’ve been ignoring the technicalities of dealing with the primes of bad reduction and the primes that divide the level (a surprisingly hard problem to determine when these are the same set!), but now we see that for the definition of a variety being modular this doesn’t even matter. There are other subtleties in defining all of this for when the variety does not have ${2}$-dimensional middle cohomology, but again for our immediate purposes you can trust that people have made the suitable adjustments.

Now we see the truly shocking results of Taniyama-Shimura. We take this incredibly symmetric analytic object (so symmetric it is surprising any exist at all) and we take this completely algebraic variety defined over ${\mathbb{Q}}$, and the conjecture claims that we can always find one of these symmetric things that matches up with this action on the cohomology. Wiles and Taylor are often credited with proving it in 1994, but the full conjecture wasn’t actually proved until 2001, by Breuil, Conrad, Diamond, and Taylor. This was the elliptic curve case. Just last year Gouvea and Yui proved that all rigid Calabi-Yau threefolds are modular. It is a conjecture that all Calabi-Yau varieties over ${\mathbb{Q}}$ should be modular, so this includes K3 surfaces. It might seem weird that K3 surfaces haven’t been proven but the threefold case has been. This just has to do with those technicalities of what to do if the middle cohomology is bigger than 2-dimensional, which it always is.

There you have it. The famous Taniyama-Shimura conjecture which led to a proof of Fermat’s Last Theorem.
# Tag Info

6

No, there is not. Consider a game with two players, Ann and Bob. Both choose vectors with entries $0$ or $1$ of the form $(a_1,a_2,\ldots,a_J)$ or $(b_1,b_2,\ldots,b_J)$, respectively. If $\sum_{i=1}^J (a_i+b_i)$ is odd, Ann wins and Bob loses. If the number is even, Ann loses and Bob wins. Clearly, one of them loses in every profile of pure strategies, ...

2

The concept of the BCE from their 2016 paper is similar to what you have. I think Bergemann and Morris' intuitive explanation is valuable so I'll paraphrase it here. Each player in the game has a decision rule that chooses an action, $y$, dependent on the state of the world $V$, and the player's information set, which we'll call $S$. This information set ...

2

\begin{array}{|c|c|c|} \hline &L&R\\\hline T&1,1&0,0\\\hline B&0,0&0,0\\\hline \end{array}

In the game above, there are two pure strategy Nash equilibria: $(T,L)$ is an equilibrium in weakly dominant strategies; $(B,R)$ is an equilibrium in weakly dominated strategies. Noting that "dominant" and "dominated" are two different words, ...

2

Pick an arbitrary cdf $F$ that is supported on $[-a,a]$. The median, $m$, must satisfy $$\int_{-a}^{m}dF \geq \frac{1}{2}, \quad \text{and} \quad \int_{m}^{a}dF \geq \frac{1}{2}$$ Note that $m$ is not generally $0$. The phrase you mention, "the unique NE is where both candidates choose the policy associated with the median voter," simply means that the ...

1

Question 1: Yes, the BCE induced by a completely informative information structure will look like this. This is true even though there are other ways to represent fully informative information structures. Think of $T$ as labels. A fully informative information structure should use each element of $T$ to label only one state of the world. That way, when the ...

1

Think about an auction, where the designer is selling a good and trying to sell it to the person that values it the most while collecting as much revenue as possible. A direct mechanism means that the seller asks buyers how much they value the good and based on that decides who gets the good and how much they pay. Suppose that the designer uses a second-...

1

You have specialized the definition of BCE in two dimensions: there is only one player, and the player has no private information. If you want to allow for private information you can let the player have some signal $\pi:\mathcal{V}\rightarrow\Delta(T_i)$ and let the decision rule $P_{\mathcal{Y},\mathcal{T},\mathcal{V}}\in\Delta(\mathcal{Y}\times \mathcal{...

1

I'm not an expert in epistemic game theory but let me see if I can help with an example. Suppose the state space is $\Omega = \{1,2,3,4,5\}$, and there are two players with information partitions $$\mathcal{P}_1 = \left\{\{1,2,3\},\{4,5\} \right\} \\ \mathcal{P}_2 = \left\{\{1,2\},\{3,4\},\{5\}\right\}$$ Suppose the true state is $\omega = 5$. Let us ...

Only top voted, non community-wiki answers of a minimum length are eligible
# Orthogonal basis

In mathematics, particularly linear algebra, an orthogonal basis for an inner product space V is a basis for V whose vectors are mutually orthogonal. If the vectors of an orthogonal basis are normalized, the resulting basis is an orthonormal basis.

## As coordinates

Any orthogonal basis can be used to define a system of orthogonal coordinates. Orthogonal (not necessarily orthonormal) bases are important due to their appearance from curvilinear orthogonal coordinates in Euclidean spaces, as well as in Riemannian and pseudo-Riemannian manifolds.

## In functional analysis

In functional analysis, an orthogonal basis is any basis obtained from an orthonormal basis (or Hilbert basis) using multiplication by nonzero scalars.

## Extensions

The concept of an orthogonal (but not of an orthonormal) basis is applicable to a vector space V (over any field) equipped with a symmetric bilinear form ⟨·,·⟩, where orthogonality of two vectors v and w means ⟨v, w⟩ = 0. For an orthogonal basis {ek}:

${\displaystyle \langle {\mathbf {e} }_{j},{\mathbf {e} }_{k}\rangle =\left\{{\begin{array}{ll}q({\mathbf {e} }_{k})&j=k\\0&j\neq k\end{array}}\right.\quad ,}$

where q is the quadratic form associated with ⟨·,·⟩: q(v) = ⟨v, v⟩ (in an inner product space, q(v) = | v |²). Hence,

${\displaystyle \langle {\mathbf {v} },{\mathbf {w} }\rangle =\sum \limits _{k}q({\mathbf {e} }_{k})v^{k}w^{k}\ ,}$

where v^k and w^k are the components of v and w in {ek}.
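To make the coordinate formula concrete, here is a small numerical sketch (my own example, assuming the standard dot product on R³ and an arbitrarily chosen orthogonal, non-orthonormal basis):

    import numpy as np

    # an orthogonal (not orthonormal) basis of R^3 under the standard dot product
    e = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0, 0.0, 2.0])]
    q = [ek @ ek for ek in e]                  # q(e_k) = <e_k, e_k>

    v = np.array([3.0, -1.0, 2.0])
    w = np.array([0.5, 2.0, -4.0])

    # components relative to the basis: v^k = <v, e_k> / q(e_k)
    vc = [v @ ek / qk for ek, qk in zip(e, q)]
    wc = [w @ ek / qk for ek, qk in zip(e, q)]

    # <v, w> = sum_k q(e_k) v^k w^k, up to floating point
    print(v @ w, sum(qk * a * b for qk, a, b in zip(q, vc, wc)))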
# Using Laplace transformations, find the transfer functions for the systems given in Problem 1.1...

Using Laplace transformations, find the transfer functions for the systems given in Problem 1.1. Determine the transient responses. Assume:

a. Elastic material.
b. Kelvin model, with E = 30 × 10^6 psi, η = 60 × 10^6 psi-sec
c. Maxwell model, with E = 30 × 10^6 psi, η = 18 × 10^8 psi-sec
d. Standard solid model, with E0 = 30 × 10^6 psi, E1 = 60 × 10^6 psi, η = 60 × 10^6 psi-sec

Problem 1.1: Write the equations of motion for the one-degree-of-freedom systems shown in Figure 1.73a–i. Assume that the loading is in the form of a force P(t), a given displacement a(t), or a given rotation ϕ(t), as indicated in the figure.
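As an illustration of the method (a sketch under the standard one-dimensional constitutive laws, not the textbook's worked solution), Laplace-transforming the Kelvin and Maxwell models with zero initial conditions turns time derivatives into factors of s, giving the stress-strain transfer functions directly:

    import sympy as sp

    s, E, eta = sp.symbols('s E eta', positive=True)

    # Kelvin (Voigt) model: sigma(t) = E*eps(t) + eta*deps/dt
    # with zero initial conditions d/dt -> s, so sigma(s)/eps(s) is:
    G_kelvin = E + eta * s

    # Maxwell model: deps/dt = (1/E)*dsigma/dt + sigma/eta
    # transforms to s*eps(s) = (s/E + 1/eta)*sigma(s), hence:
    G_maxwell = sp.simplify(s / (s / E + 1 / eta))

    print(G_kelvin)    # E + eta*s
    print(G_maxwell)   # E*eta*s/(E + eta*s)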
# Finding Residue

1. Jan 10, 2016

### MartinKitty

Hello everyone, I have a problem with finding the residue of the function $f(z)=\frac{z^3 e^{1/z}}{1+z}$ at infinity. I tried to write it as a Laurent series: $\frac{z^3}{1+z}\sum_{n=0}^\infty\frac{1}{n!z^n}$. I know that the residue will be equal to the coefficient $a_{-1}$, but I don't know how to find it.

2. Jan 10, 2016

### mathman

Expand $\frac{z^3}{1+z}=z^3-z^4+z^5-z^6+\cdots$. Then multiply the two series together to find the coefficient you want (as an infinite series).

3. Jan 10, 2016

### MartinKitty

Then I get $\frac{z^3}{1+z}=\sum_{n=0}^\infty(-1)^n z^{n+3}$, and when I multiply I always get $z^3$ with some fraction.

4. Jan 11, 2016

### Ssnow

$\left(z^{3}-z^{4}+z^{5}-\cdots \right)\left(1+\frac{1}{z}+\frac{1}{2z^{2}}+\frac{1}{6z^{3}}+\frac{1}{24z^{4}}+\cdots \right)$; the only terms of interest are those of the form $\frac{a_{-1}}{z}$, namely $\frac{1}{4!}-\frac{1}{5!}+\frac{1}{6!}-\cdots$, so it is $e^{-1}-\frac{1}{2}+\frac{1}{3!}$ (if I did not make mistakes ...)
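(Added note: Ssnow's closed form can be double-checked by summing the series directly; this is my own verification, not part of the original thread.)

    from math import exp, factorial

    # a_{-1} = sum_{k>=4} (-1)^k / k!, the coefficient of 1/z in the product
    partial = sum((-1) ** k / factorial(k) for k in range(4, 30))
    closed_form = exp(-1) - 1 / 2 + 1 / factorial(3)

    print(partial, closed_form)  # both ~ 0.0345462...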
# perverse sheaves – Nearby cycles without a function

Suppose that:

• $$X$$ is a smooth complex algebraic variety,
• $$f : X \to D$$ is a map to a small disc, smooth away from 0,
• $$Z_\epsilon = f^{-1}(\epsilon)$$, and $$Z = Z_0$$.

Then there is a procedure ("nearby cycles") which produces a complex $$\psi_f(\mathbb{Q}_X)$$ on $$Z$$ whose cohomology agrees with that of $$Z_\epsilon$$ for $$\epsilon \ne 0$$. (My notation is that $$\mathbb{Q}_X$$ is always shifted so as to be perverse.) This belongs to a powerful toolbox of techniques to relate the cohomology of general and special fibres, with important applications in algebraic geometry, number theory and representation theory.

Question: Suppose that I just give you $$Z \subset X$$ but not $$f$$; can I "guess" $$\psi_f(\mathbb{Q}_X)$$?

Here is a rough proposal for how to do so, and I'm wondering if this is discussed somewhere in the literature. It seems a little like magic, and I might be making mistakes.

Firstly, $$\psi_f(\mathbb{Q}_X)$$ is a perverse sheaf, and it comes with a monodromy endomorphism $$\mu$$. I assume that $$\mu$$ is unipotent. Hence $$N = 1-\mu$$ is a nilpotent endomorphism of the nearby cycles, and I have a short exact sequence of perverse sheaves:

$$0 \to \phi_f(\mathbb{Q}_X) \stackrel{N}{\to} \psi_f(\mathbb{Q}_X) \to i^*\mathbb{Q}_X \to 0$$

Thus, $$i^*\mathbb{Q}_X$$ is the "coinvariants of the monodromy".

Secondly, $$\psi_f(\mathbb{Q}_X)$$ carries a weight filtration $$W$$, and a deep theorem of Gabber states that the weight filtration agrees with the monodromy filtration. In particular:

$$N^i : \mathrm{gr}_W^{-i}(\psi_f(\mathbb{Q}_X)) \stackrel{\sim}{\to} \mathrm{gr}_W^{i}(\psi_f(\mathbb{Q}_X))$$

is an isomorphism.

Now, assume that I know the successive subquotients of the weight filtration on $$i^*\mathbb{Q}_X$$. Then, it seems that the above gives a very good picture of the associated graded of the weight filtration on $$\psi_f(\mathbb{Q}_X)$$. Namely, every $$IC_\lambda$$ which occurs in $$\mathrm{gr}_W^{-i}(i^*\mathbb{Q}_X)$$ contributes an $$IC_\lambda$$ in weight filtration steps $$-i, -i+2, \dots, i-2, i$$ to the weight filtration on $$\psi_f(\mathbb{Q}_X)$$.

An analogy: any finite-dimensional representation of $$\mathfrak{sl}_2(\mathbb{C})$$ is recoverable from its lowest weight vectors. Under this analogy, the lowest weight vectors are given by $$i^*\mathbb{Q}_X$$.

Some precise questions:

1. Under the above assumptions is it correct that I can recover the associated graded of the weight filtration on $$\psi_f(\mathbb{Q}_X)$$ from that of $$i^*\mathbb{Q}_X$$?
2. This appears to imply that $$\psi_f(\mathbb{Q}_X)$$ is automatically constructible for any stratification that makes $$i^*\mathbb{Q}_X$$ constructible, which surprises me a little.
3. This appears rather powerful. Has this technique been applied usefully somewhere? In other words, where can I read more?!
# Create Flow

An empty Flow can be created via:

    from jina import Flow

    f = Flow()

## Use a Flow

To use f, always open it as a context manager, just like you open a file. This is considered the best practice in Jina:

    with f:
        ...

Note: Flow follows a lazy construction pattern: it won't actually run until you use with to open it.

Warning: Once a Flow is open via with, you can send data requests to it. However, you cannot change its construction via .add() any more until it leaves the with context.

Important: The context exits when its inner code is finished. A Flow's context without inner code will immediately exit. To prevent that, use .block() to suspend the current process.

    with f:
        f.block()  # block the current process

## Visualize a Flow

    from jina import Flow

In Jupyter Lab/Notebook, the Flow object is rendered automatically without needing to call plot().
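Putting these pieces together, here is a minimal end-to-end sketch (the bare `.add()` call attaching one default Executor is an illustrative assumption, not taken from the page above):

    from jina import Flow

    # build a Flow with a single default Executor (illustrative; a real
    # deployment would typically pass `uses=...` to .add())
    f = Flow().add()

    with f:
        # construction is frozen here; the Flow is now live
        f.block()  # suspend the current process so the Flow keeps serving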
# 50 Recent Changes in CMSPublic Web retrieved at 16:37 (GMT)

• General Info: Meeting time: 15h00. Mailing list: cms-tier0-operations@cern.ch. Latest meeting. Team members. Andres will be on vacations from 15th...
• Job Opportunities: There are many resources now available to explore job opportunities...
• Detailed Review status: Introduction. Local reco sequence description. Summary of the standard algorithm (for the impatient user). All algorithms not mentioned...
• Statistics for CMSPublic Web: Month: Topic views: Topic saves: File uploads: Most popular topic views: Top viewers: Top contributors...
• CMS DT Longevity Studies: The CMS drift tubes (DT) muon detector, built for standing up the LHC expected integrated and instantaneous luminosities, will be used also...
• See the TESTBED agents at CompOpsWorkflowTeamTestbedWmAgents. See the current WMS deployment (submitters, frontends, pools, factories) at CompOpsWMSDeploy...
• WMAgent releases connected to cmsweb testbed: These agents are connected to cmsweb testbed because they are either used by the Integration or by the DMWM team. They...
• Computing and Software Help: Getting Started, a Computing Account, Computing Rules, Software Documentation. New users should start with the Offline Workbook and consult...
• BRIL DPG Approved Results (under construction): Detector Performance Notes. Note ID, Analysis Documentation, CMS DP 2019/XXX XY correlation plots...
• Combining all groups of HLT triggers in a Global Table: The HLT runs off raw data files, specifically off the single FEDRawDataCollection data structure; it can...
• Software Guide for Tau Reconstruction: The goal of this page is to document the usage and creation of hadronic tau jet candidates from ParticleFlow jets. Contact...
• Cookbook: Recipes for Tier 0 troubleshooting, most of them written such that you can copy-paste and just replace with your values and obtain the expected results...
• SketchUpCMS: 3D models of the CMS detector and events in SketchUp. 1 Introduction...
• Setup Instructions for New Team Members: Accounts, Group Memberships, Certificates, Etc. Computing account at FNAL: to get this, follow the instructions at...
• Either build your own LeftBar (see TWiki:TWiki.WebLeftBarCookbook): MyTopic, MySubTopicA, MySubTopicB; OR include a CMS LeftBar appropriate to...
• The CMSSW Documentation Suite: The CMS Offline WorkBook. Background information and tutorials on accessing computing resources using the...
• CMS Visits: CMS invites you to follow the fascinating discoveries of our Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC). CMS is the only...
• Tracker Material Budget plots: The material budget simulation of the CMS Tracker featuring the Phase 1 Pixel detector is presented. Shown is the breakdown of the total...
• Summaries of CMS cross section measurements: This page contains various summary plots of CMS cross section measurements. Links are provided to CMS publications and...
• Site Support Team Mission Statement: The Site Support team's mission is to make sure all CMS sites are fully operational and provide failure-less services: CPU, Storage...
• Limits on anomalous triple and quartic gauge couplings: This page summarizes limits on anomalous triple gauge couplings (aTGCs) and limits on anomalous quartic gauge...
• Limits on anomalous quartic gauge couplings: This page summarizes limits on anomalous triple gauge couplings (aTGCs) and limits on anomalous quartic gauge couplings...
• Facility/Site E-groups: Modifying facility and site information in either GitLab (SITECONF), siteStatus (life, prod, CRAB Status), or CRIC (grid resource information...
• Tracker Alignment Constants for Run II 2017: Description: The timeline of these tags is ordered from bottom to top, which means the tag on the top is always the...
• L1 Trigger Emulator Phase 2 Upgrade Instructions: This page is intended to document useful information regarding software for the L1 Trigger Phase 2 Upgrade. This includes...
• L1 Trigger Emulator Stage 2 Upgrade Instructions: This is a set of instructions on how to run the Stage 2 (Phase I) upgrade version of the L1 Emulator to produce...
• NanoAOD Documentation: The NanoAOD format consists of an Ntuple-like format, readable with bare ROOT and containing the per-event information that is needed...
• CMS Top Quark Physics Summary Figures: Below are the summary plots corresponding to the CMS top quark physics results. CMS ATLAS combinations and summary figures can...
• CMS Top Quark Physics Group Results: The top group is responsible for precision measurements of the properties of the top quark, the heaviest particle in the standard...
• Muon DPG (DT/CSC/RPC/GEM) Approved Results: Muon DPG Publications: https://cds.cern.ch/record/2313130 Performance of the CMS muon detector and muon reconstruction...
• DD to DD4hep Migration: In order to migrate a sub-detector geometry description from DD to DD4hep the following steps should be followed: Geometry configuration...
• A Firmware-oriented Trigger Algorithm for CMS Drift Tubes in HL-LHC: A full replacement of the muon trigger system in the CMS (Compact Muon Solenoid) detector is envisaged...
• US CMS Tier 2 Facilities Deployment Status: This page is used to track the status of upgrading various products and services at the US CMS Tier 2 sites. Please update...
• Higgs PAG Summary Plots: Introduction. This page contains summary plots for the Higgs Physics...
• CMS Exotica Public Physics Results: This page is still maintained on a best-effort basis, but please see the official CMS Publications page for the fully up-to-date...
• Facilities Services: The Facilities and Services team of the Computing and Offline group coordinates support of computing facilities and services for the experiment...
• Search for supersymmetry in events with a photon, a lepton, and missing transverse momentum in proton-proton collisions at sqrt(s) = 13 TeV (SUS-17-012): Further...
• 3.4 Physics Analysis Oriented Event Display (Fireworks / cmsShow): Fireworks for all release cycles are available as part of the CMSSW distribution. See section Using...
• CMS SUSY Results: Objects Efficiency, Moriond 2017: In the following, the representative object selection efficiencies for the SUSY 2016 analyses presented at Moriond...
• Nonfactorization in Van der Meer scans in Run 2: The plots have been reported in a DP note with number CMS DP XXXXXXX, and were initially prepared for the Lumi Days...
• Getting Data from the EventSetup: Data Model. An EventSetup object holds Records and Records in turn hold data. The data is uniquely identified by C type of...
• Getting Data from a Dependent Record in an ESProducer: The produce methods of an ESProducer are only passed the Record to which the data being produced will be added...
• CMS Supersymmetry Physics Results: Questions? Contact the SUSY Conveners. Links to the list of all CMS SUSY results. Preliminary results: http://cms results.web...
• DRAFT Deployment Guidance for 2018: We don't know what the 2018 hardware budget will be yet, but we in the University Facilities area have been thinking about priorities...
# 4 River Valley Historical Society

Founded in 1977, the 4 River Valley Historical Society is an organization based in Carthage. The society works to preserve and protect sites and structures of historical importance in areas of western Jefferson County and parts of Lewis County. The four river valleys the organization focuses on are the Beaver, Black, Indian and Deer River Valleys. The organization has taken ownership of several historic properties in the hopes of preserving them for future generations.

4 River Valley Historical Society Website: http://www.4rvhs.org/

## See Also

Historic Structures in Jefferson County
# What is the equation of an ellipse?

## Find the equation of an ellipse that passes through the points (2,3) and (1,-4)

Dec 24, 2016

${\left(y + 4\right)}^{2} / {7}^{2} + {\left(x - 2\right)}^{2} / {1}^{2} = 1$

#### Explanation:

Excluding rotations, there are two general Cartesian forms for the equation of an ellipse:

${\left(x - h\right)}^{2} / {a}^{2} + {\left(y - k\right)}^{2} / {b}^{2} = 1 \text{ [1]}$

and:

${\left(y - k\right)}^{2} / {a}^{2} + {\left(x - h\right)}^{2} / {b}^{2} = 1 \text{ [2]}$

Let k = -4, h = 2, and use equation [2]:

${\left(y + 4\right)}^{2} / {a}^{2} + {\left(x - 2\right)}^{2} / {b}^{2} = 1$

Use the point $\left(2 , 3\right)$:

${\left(3 + 4\right)}^{2} / {a}^{2} + {\left(2 - 2\right)}^{2} / {b}^{2} = 1$

$a = 7$

Use the point $\left(1 , - 4\right)$:

${\left(- 4 + 4\right)}^{2} / {a}^{2} + {\left(1 - 2\right)}^{2} / {b}^{2} = 1$

$b = 1$

The equation is:

${\left(y + 4\right)}^{2} / {7}^{2} + {\left(x - 2\right)}^{2} / {1}^{2} = 1$
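As a quick check (my addition, not part of the original answer), both given points satisfy the final equation:

    # verify (y+4)^2/7^2 + (x-2)^2/1^2 = 1 at (2, 3) and (1, -4)
    def lhs(x, y):
        return (y + 4) ** 2 / 49 + (x - 2) ** 2 / 1

    print(lhs(2, 3))   # 49/49 + 0 = 1.0
    print(lhs(1, -4))  # 0 + 1 = 1.0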
# Math Help - S_n generated by two elements

1. ## S_n generated by two elements

Hello all, I have tried to show that $S_n$ is generated by two elements, but with no progress. Can someone help me with this one? Thanks a lot.

2. The transposition $\sigma = (1\;2)$ and the cycle $\tau = (1\;2\;3\;\ldots\; n)$ generate $S_n$. In fact $\tau^{k-1}\sigma\tau^{-k+1} = (k\;k{+}1)$, so all transpositions of consecutive numbers are in the subgroup generated by $\sigma$ and $\tau$. Then any transposition can be expressed as a product of those, for example $(1\;3) = (1\;2)(2\;3)(1\;2)$. Finally, it is well known that the transpositions generate the whole of $S_n$.
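For small $n$ this can be checked concretely with SymPy's permutation groups (an illustrative computation, not a substitute for the proof):

    from math import factorial
    from sympy.combinatorics import Permutation, PermutationGroup

    for n in range(2, 7):
        sigma = Permutation(0, 1, size=n)             # the transposition (1 2)
        tau = Permutation(list(range(1, n)) + [0])    # the n-cycle (1 2 ... n)
        G = PermutationGroup([sigma, tau])
        print(n, G.order() == factorial(n))           # True for each n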
Score : $300$ points

Problem Statement

You are given a string $S$ consisting of lowercase English letters. Another string $T$ is initially empty. Determine whether it is possible to obtain $S = T$ by performing the following operation an arbitrary number of times:

• Append one of the following at the end of $T$: dream, dreamer, erase and eraser.

Constraints

• $1≦|S|≦10^5$
• $S$ consists of lowercase English letters.

Input

The input is given from Standard Input in the following format:

$S$

Output

If it is possible to obtain $S = T$, print YES. Otherwise, print NO.

Sample Input 1

erasedream

Sample Output 1

YES

Append erase and dream at the end of $T$ in this order, to obtain $S = T$.

Sample Input 2

dreameraser

Sample Output 2

YES

Append dream and eraser at the end of $T$ in this order, to obtain $S = T$.

Sample Input 3

dreamerer

Sample Output 3

NO
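One standard approach (a sketch of mine, not an official editorial) is to match greedily from the end of $S$: once the string is reversed, at most one of the four reversed words can match at any position, so the scan is unambiguous.

    # greedy match on the reversed string; the reversed words are pairwise
    # incompatible at any position (a first differing character rules out
    # all but one candidate), so no backtracking is needed
    S = input()[::-1]
    words = [w[::-1] for w in ("dream", "dreamer", "erase", "eraser")]

    i, ok = 0, True
    while i < len(S):
        for w in words:
            if S.startswith(w, i):
                i += len(w)
                break
        else:
            ok = False
            break

    print("YES" if ok else "NO")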
# Dirichlet fractional Laplacian and zero boundary conditions

Does there exist a non-zero function $$f\in C_0([0,1]):=\{f:[0,1]\to \mathbb R:\ f\text{ is continuous and } f(0)=f(1)=0\},$$ such that $(-\Delta)^{\frac\alpha 2}f\in C_0([0,1])$, where $(-\Delta)^{\frac\alpha 2}$ is the Dirichlet fractional Laplacian defined by $$(-\Delta)^{\frac\alpha 2}f(x):=\int_0^1(f(x)-f(y))\frac{dy}{|x-y|^{1+\alpha}}+f(x)\int_{\mathbb R\backslash (0,1)}\frac{dy}{|x-y|^{1+\alpha}},\ \ x\in(0,1),$$ and $(-\Delta)^{\frac\alpha 2}f(0):=\lim_{x\to 0^+}(-\Delta)^{\frac\alpha 2}f(x)$, $(-\Delta)^{\frac\alpha 2}f(1):=\lim_{x\to 1^-}(-\Delta)^{\frac\alpha 2}f(x)$, where $\alpha \in(0,2)$ (principal value definition of the first integral)?

My question is motivated by the last part of Theorem 2.7, as the existence of the function I am looking for would allow me to apply that theorem. Note that the above definition of $(-\Delta)^{\frac \alpha 2}$ agrees with the definition of the restricted fractional Laplacian in formula (3.1) here, which differs from the definition of the spectral fractional Laplacian in formula (3.4) here.

(I asked the same question on Math.SE but it received low attention. I then thought it might be better to post it here; let me know if it's not ok.)

Yes, there are many functions with this property. In fact, these functions are dense in $C_0([0,1])$.

For every $g \in C_0([0,1])$ there is a unique $f \in C_0([0,1])$ such that $(-\Delta)^{\alpha/2} f(x) = g(x)$ for $x \in [0,1]$. This $f$ is given more-or-less explicitly: $$f(x) = \int_0^1 G_{(0,1)}(x, y) g(y) dy,$$ where $G_{(0,1)}(x,y)$ is the Green function for $(-\Delta)^{\alpha/2}$ in $(0, 1)$: $$G_{(0,1)}(x,y) = \frac{|x - y|^{\alpha - 1}}{2^\alpha (\Gamma(\alpha/2))^2} \int_0^{R(x,y)} \frac{s^{\alpha/2 - 1}}{\sqrt{1 + s}} \, ds$$ with $$R(x,y) = \frac{4 x (1 - x) y (1 - y)}{|x - y|^2} \, .$$

(Edited). In fact, the class of functions $f$ defined above constitutes the domain of $(-\Delta)^{\alpha/2}$ on $(0, 1)$, with zero exterior condition, and it is therefore dense in $C_0([0,1])$. A rigorous proof of this claim is somewhat complicated, though.

The fractional Laplacian $-(-\Delta)^{\alpha/2}$ is the Feller generator of (the transition semigroup of) the symmetric $\alpha$-stable Lévy process $X_t$. Let $D = (0, 1)$ and let $X_t^D$ be the process $X_t$ killed when it first exits $D$. Since every boundary point of $D$ is regular, $X_t^D$ is a Feller process, and hence its Feller generator is a densely defined operator on $C_0(D)$. Denote this Feller generator by $L$. By the general theory of strongly continuous semigroups, $f$ is in the domain of $L$ if and only if $f(x) = \int_D G_D(x, y) g(y) dy$ for some $g \in C_0(D)$, and in this case $g = -L f$.

What remains to be proved is that if $f$ belongs to the domain of $L$, then $L f(x) = -(-\Delta)^{\alpha/2} f(x)$. It was proved by Dynkin (in his brilliant book Markov processes) that $f \in C_0(D)$ is in the domain of $L$ if and only if the limit in $$\tilde{L} f(x) = \lim_{r \to 0^+} \frac{\mathbb{E}^x f(X(\tau_{B(x, r)})) - f(x)}{\mathbb{E}^x \tau_{B(x, r)}}$$ exists for every $x \in D$ and it defines a function $\tilde{L} f$ in $C_0(D)$; in this case $\tilde{L} f(x) = L f(x)$ for all $x \in D$. (The operator $\tilde{L}$ is the Dynkin characteristic operator). Now the important fact is that the definition of $\tilde{L} f(x)$ does not depend on $D$!
(It does not matter whether we use the killed process $X_t^D$ or the free process $X_t$ in the definition, as long as $f = 0$ in the complement of $D$). Furthermore, convergence in the definition of $\tilde{L} f(x)$ implies convergence to the same limit in the definition of $-(-\Delta)^{\alpha/2} f(x)$ as a singular integral (for any fixed point $x$). This phenomenon seems to be specific to the fractional Laplacian, and it appears that it was first observed here.

Final remark: Regarding the converse claim: "if $f \in C_0(D)$, $(-\Delta)^{\alpha/2} f(x)$ is well-defined as a singular integral for each $x \in D$ and it defines a $C_0(D)$ function", I believe it is not stated explicitly in the literature unless $D$ is the full space.

• I am indeed interested in the pointwise equality of the generator of the killed process with the fractional Laplacian applied to the function extended to zero outside of $(0,1)$. Are there such functions? Is $(-\Delta)^{\frac \alpha 2}$ in your first line the generator? (sorry but I'm not sure you answered this part) – Rgkpdx Mar 5 '18 at 18:52
• Sorry, but is the following a counter-example? Let $f$ be smooth and positive inside its compact support in $(0,1)$. Then $(-\Delta)^{\frac \alpha 2}f(0)\neq 0=- L_{(0,1)}f(0)$, where $(L_{(0,1)},Dom(L_{(0,1)}))$ is the generator of the killed $\alpha$-stable process (working on $C_0([0,1])$) and $f\in Dom(L_{(0,1)})$. To conclude, note that the zero-extension of $f$ belongs to the domain of the $\alpha$-stable process and its generator agrees with $-(-\Delta)^{\frac \alpha 2}$ on $f$. – Rgkpdx Mar 6 '18 at 10:44
• Also, I do not see how the characteristic operator $\tilde Lf$ can equal $-(-\Delta)^{\frac \alpha 2}f$, as values of $f$ outside any ball are ignored in the limiting definition ($\tilde Lf(x)=\tilde L g(x)$ for any $f,g$ such that $f=g$ in a neighborhood of $x$, no?). – Rgkpdx Mar 6 '18 at 10:44
• Actually, looking at Theorem 2.3 in arxiv.org/pdf/1604.06421.pdf, does it follow that $f\notin Dom(L_{(0,1)})$? (where $f$ is the $f$ defined in my comment about the counter-example.) Also, your Green function $G_{(0,1)}$ does not seem to allow for $f$ to be smooth. – Rgkpdx Mar 6 '18 at 15:35
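As a purely numerical complement (my own sketch, not from the answer above): the explicit Green function can be evaluated by quadrature, producing concrete members of the class described, here under the assumed choices $\alpha = 1$ and $g \equiv 1$.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    alpha = 1.0  # assumed value in (0, 2)

    def G(x, y):
        """Green function G_{(0,1)}(x, y) from the formula above."""
        if x == y:
            return 0.0
        R = 4 * x * (1 - x) * y * (1 - y) / (x - y) ** 2
        inner, _ = quad(lambda s: s ** (alpha / 2 - 1) / np.sqrt(1 + s), 0, R)
        return abs(x - y) ** (alpha - 1) / (2 ** alpha * gamma(alpha / 2) ** 2) * inner

    def f(x, g=lambda y: 1.0):
        """f(x) = int_0^1 G(x, y) g(y) dy, so that (-Delta)^{alpha/2} f = g."""
        val, _ = quad(lambda y: G(x, y) * g(y), 0, 1, points=[x])
        return val

    for x in (0.1, 0.5, 0.9):
        print(x, f(x))  # small near the endpoints, consistent with f in C_0([0,1])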
# 2003 technical reports

CS-2003-01
Title Processing Sliding Window Multi-Joins in Continuous Queries
Authors Lukasz Golab and M. Tamer Ozsu
Abstract We study sliding window multi-join processing in continuous queries over data streams. Several algorithms are reported for performing continuous, incremental joins, under the assumption that all the sliding windows fit in main memory. The algorithms include multi-way incremental nested loop joins (NLJs) and multi-way incremental hash joins. We also propose join ordering heuristics to minimize the processing cost per unit time. We test a possible implementation of these algorithms and show that, as expected, hash joins are faster than NLJs for performing equi-joins, and that the overall processing cost is influenced by the strategies used to remove expired tuples from the sliding windows.
Date February 2003
Report Processing Sliding Window Multi-Joins in Continuous Queries (PDF)

CS-2003-04
Title A Roadmap to a Modeling Language for Multi-Agent Systems Engineering
Authors Viviane Silva, Carlos Lucena, Paulo Alencar and Donald Cowan
Abstract In this paper we present a conceptual framework called TAO that organizes the core concepts related to multi-agent systems and assumes that agents and objects co-exist as independent and unique abstractions. We then use the TAO framework to create the multi-agent system modelling language MAS-ML, which extends UML (Unified Modelling Language). MAS-ML is defined as a conservative extension of the UML metamodel, and includes agent-related notions that are part of the TAO conceptual framework while preserving all object-related concepts, which constitute the UML metamodel. Creating a modelling language for multi-agent systems is a complex task and we should consider following a process or roadmap similar to the one that led to the development of the UML and its various sub-models and supporting facilities.
Date February 2003
Report A Roadmap to a Modeling Language for Multi-Agent Systems Engineering (PDF)

CS-2003-06
Title Towards Identifying Frequent Items in Sliding Windows
Authors David DeHaan, Erik D. Demaine, Lukasz Golab, Alejandro Lopez-Ortiz and J. Ian Munro
Abstract Queries that return a list of frequently occurring items are popular in the analysis of data streams such as real-time Internet traffic logs. While several results exist for computing frequent item queries using limited memory in the infinite stream model, none have been extended to the limited-memory sliding window model, which considers only the last N items that have arrived at any given time and forbids the storage of the entire window in memory. We present several algorithms for identifying frequent items in sliding windows, both under arbitrary distributions and assuming that each window conforms to a multinomial distribution. The former is a straightforward extension of existing algorithms and is shown to work well when tested on real-life TCP traffic logs. Our algorithms for the multinomial distribution are shown to outperform classical inference based on random sampling from the sliding window, but lose their accuracy as predictors of item frequencies when the underlying distribution is not multinomial.
Date March 2003
Report Towards Identifying Frequent Items in Sliding Windows (PDF)

CS-2003-10
Title An Efficient Bounds Consistency Algorithm for the Global Cardinality Constraint
Authors Claude-Guy Quimper, Peter van Beek, Alejandro Lopez-Ortiz, Alexander Golynski and Sayyed Bashir Sadjad
Abstract Previous studies have demonstrated that designing special purpose constraint propagators can significantly improve the efficiency of a constraint programming approach. In this paper we present an efficient algorithm for bounds consistency propagation of the generalized cardinality constraint "gcc". Using a variety of benchmark and random problems, we show that our bounds consistency algorithm is competitive with and can dramatically outperform existing state-of-the-art commercial implementations of constraint propagators for the gcc. We also present a new algorithm for domain consistency propagation of the gcc which improves on the worst-case performance of the best previous algorithm for problems that occur often in applications.
Date February 2003
Report An Efficient Bounds Consistency Algorithm for the Global Cardinality Constraint (PS)
An Efficient Bounds Consistency Algorithm for the Global Cardinality Constraint (PDF)

CS-2003-21
Title A Geometric B-Spline Over the Triangular Domain
Authors Christopher K. Ingram
Abstract For modelling curves, B-splines [3] are among the most versatile control schemes. However, scaling this technique to surface patches has proven to be a non-trivial endeavor. While a suitable scheme exists for rectangular patches in the form of tensor product B-splines, techniques involving the triangular domain are much less spectacular. The current cutting edge in triangular B-splines is the DMS-spline [2]. While the resulting surfaces possess high degrees of continuity, the control scheme is awkward and the evaluation is computationally expensive. A more fundamental problem is that the construction bears little resemblance to the construction used for the B-spline. This deficiency leads to the central idea of the thesis; what happens if the simple blending functions found at the heart of the B-spline construction are used over higher dimension domains? In this thesis I develop a geometric generalization of B-spline curves over the triangular domain. This construction mimics the control point blending that occurs with uniform B-splines. The construction preserves the simple control scheme and evaluation of B-splines, without the immense computational requirements of DMS-splines. The result is a new patch control scheme, the G-Patch, possessing C0 continuity between adjacent patches.
Date June 2003
Report A Geometric B-Spline Over the Triangular Domain (PDF)

CS-2003-22
Title Offset Surface Light Fields
Authors Jason Ang
Abstract For producing realistic images, reflection is an important visual effect. Reflections of the environment are important not only for highly reflective objects, such as mirrors, but also for more common objects such as brushed metals and glossy plastics. Generating these reflections accurately at real-time rates for interactive applications, however, is a difficult problem. Previous works in this area have made assumptions that sacrifice accuracy in order to preserve interactivity. I will present an algorithm that tries to handle reflection accurately in the general case for real-time rendering.
The algorithm uses a database of prerendered environment maps to render both the original object itself and an additional bidirectional reflection distribution function (BRDF). The algorithm performs image-based rendering in reflection space in order to achieve accurate results. It also uses graphics processing unit (GPU) features to accelerate rendering.
Date August 2003
Report Offset Surface Light Fields (PDF)

CS-2003-24
Title Evaluation of DBMSs Using XBench Benchmark
Authors M. Tamer Ozsu and Benjamin B. Yao
Abstract XML support is being added to existing database management systems (DBMSs) and native XML systems are being developed both in industry and in academia. The individual performance characteristics of these approaches as well as the relative performance of various systems is an ongoing concern. XBench is a family of XML benchmarks which recognizes that the XML data that DBMSs manage are quite varied and no one database schema and workload can properly capture this variety. Thus, the members of this benchmark family have been defined for capturing diverse application domains. In this report we briefly discuss the XBench XML benchmark and report on the relative performance of three commercial DBMSs: X-Hive, DB2 and SQL Server. We also describe the potential issues when storing XML documents into relational DBMSs.
Date August 2003
Report Evaluation of DBMSs Using XBench Benchmark (PDF)

CS-2003-25
Title Investigations in Tree Locking for Compiled Database Applications
Authors Heng Yu and Grant Weddell
Abstract We report on initial experiments in tree locking schemes for compiled database applications. Such applications have a repository style of architecture in which a collection of software modules or subsystems operate on a common database in terms of a predefined set of transaction types, and are very often at the core of embedded systems. Since the tree locking protocol is deadlock free, it becomes possible to decouple recovery mechanisms from concurrency control, a property that we believe is critical to the successful deployment of database technology to this new application area. Our experiments show that the performance of tree locking can compete with two phase locking for cases such as the above in which a great deal can be known at the time of system generation about workloads.
Date September 2003
Report Investigations in Tree Locking for Compiled Database Applications (PDF)

CS-2003-26
Title A Study on Atmospheric Halo Visualization
Authors Sung Min Hong and Gladimir Baranoski
Abstract In this report, the atmospheric halo phenomena have been explained and current trends in halo simulation have been described. Then a ray tracer based on the Monte Carlo method has been presented and demonstrated. The comparison of the simulation results with photos taken of the same phenomena has shown that the ray tracer gives reasonable results. This ray tracer provides a very useful tool for simulating the halo phenomena and identifying atmospheric data like proportions of ice crystals and orientations. With some enhancements of the ray tracer, which are future work, the ray tracer is expected to give physically correct realistic halo images.
Date September 2003
Report A Study on Atmospheric Halo Visualization (PDF)

CS-2003-27
Title Supporting Set-at-a-time Extensions for XML through DOM
Authors Hai (Helena) Chen, Frank Tompa
Abstract With the rapid growth of the web and e-commerce, W3C produced a new standard, XML, as a universal data representation format to facilitate information interchange and integration from heterogeneous systems. In order to process XML, documents need to be parsed so that users can access, manipulate, and retrieve the XML data easily. DOM is one of the major application programming interfaces, which provides the abstract, logical tree structure of an XML document. Though it is a very promising initiative in defining the standard interface for XML documents to access all data required by applications, significant benefit can result if some functions are extended. In this thesis, we want to support set-at-a-time extensions for XML through DOM. Through extending some powerful functions to get summary information from different groups of related document elements, and filter, extract and transform this information as a sequence of nodes, the extended DOM can reduce the communications overhead and response time between the client and the server and therefore provide applications with more convenience. Our work is to explore the ideas for set-at-a-time processing, define and implement some suitable methods, and code some example query applications using the DOM and our extensions to make comparisons through a test system. Finally, future work will be given.
Date September 2003
Report Supporting Set-at-a-time Extensions for XML through DOM (PDF)

CS-2003-28
Title On the Relevance of Self-Similarity in Network Traffic Prediction
Abstract Self-similarity is an important characteristic of traffic in high speed networks which cannot be captured by traditional traffic models. Traffic predictors based on non-traditional long-memory models are computationally more complex than traditional predictors based on short-memory models. Even online estimation of their parameters for actual traffic traces is not trivial work. Based on the observation that the Hurst parameter of real traffic traces rarely exceeds 0.85, which means that real traffic does not exhibit strong long-range dependence, and the fact that infinite history is not possible in practice, we propose to use a simple non-model-based minimum mean square error predictor. In this paper, we look at the problem of traffic prediction in the presence of self-similarity. We briefly describe a number of short-memory and long-memory stochastic traffic models and talk about non-model-based predictors, particularly minimum mean square error and its normalized version. Numerical results of our experimental comparison between the so-called fractional predictors and the simple minimum mean square error predictor show that this simple method can achieve accuracy within 5% of the best fractional predictor while it is much simpler than any model-based predictor and is easily used in an on-line fashion.
Date October 2003
Report On the Relevance of Self-Similarity in Network Traffic Prediction (PS)
On the Relevance of Self-Similarity in Network Traffic Prediction (PDF)
Compressed PostScript: On the Relevance of Self-Similarity in Network Traffic Prediction (GZIP)

CS-2003-29
Title On Indexing Sliding Windows over On-line Data Streams
Authors Lukasz Golab, Shaveen Garg and M. Tamer Ozsu
Abstract We consider indexing sliding windows in main memory over on-line data streams. Our proposed data structures and query semantics are based on a division of the sliding window into sub-windows. When a new sub-window fills up with newly arrived tuples, the oldest sub-window is evicted, indices are refreshed, and continuous queries are re-evaluated to reflect the new state of the window. By classifying relational operators according to their method of execution in the windowed scenario, we show that many useful operators require access to the entire window, motivating the need for two types of indices: those which provide a list of attribute values and their counts for answering set-valued queries, and those which provide direct access to tuples for answering attribute-valued queries. For the former, we evaluate the performance of linked lists, search trees, and hash tables as indexing structures, showing that the high costs of maintaining such structures over rapidly changing data are offset by the savings in query processing costs. For the latter, we propose novel ways of maintaining windowed ring indices, which we show to be much faster than conventional ring indices and more efficient than executing windowed queries without an index.
Date September 2003
Report On Indexing Sliding Windows over On-line Data Streams (PDF)

CS-2003-30
Title Symbolic Representation and Retrieval of Moving Object Trajectories
Authors Lei Chen, Vincent Oria, and M. Tamer Ozsu
Abstract Similarity-based retrieval of moving object trajectories is useful to many applications, such as GPS systems, sport and surveillance video analysis. However, due to sensor failures, errors in detection techniques, or different sampling rates, noises, local shifts and scales may appear in the trajectory records. Hence, it is difficult to design a robust and fast similarity measure for similarity-based retrieval in a large database. In this paper, we propose a normalized edit distance (NED) to measure the similarity between two trajectories. Compared to Euclidean distance, Dynamic Time Warping (DTW), and Longest Common Subsequences (LCSS), NED is more robust and accurate for trajectories containing noise and local time shifting. In order to improve the retrieval efficiency, we further convert the trajectories into a symbolic representation, called movement pattern strings, which encode both the movement direction and the movement distance information of the trajectories. The distances that are computed in a symbolic space are lower bounds of the distances of original trajectory data, which guarantees that no false dismissals will be introduced using movement pattern strings to retrieve trajectories. Furthermore, we define a modified frequency distance for frequency vectors that are obtained from movement pattern strings to reduce the dimensionality of movement pattern strings and the computation cost of NED. The tests that we conducted indicate that the normalized edit distance is effective and the matching techniques are efficient.
Date September 2003
Report Symbolic Representation and Retrieval of Moving Object Trajectories (PDF)

CS-2003-33
Title Generalized Infinite Undo and Speculative User Interfaces
Authors Alejandro Lopez-Ortiz
Abstract We study the potential benefits of a computation model supporting an unlimited number of undo-like operations. We argue that, when implemented to its full generality, this results in a novel mode of human-computer interaction in which the user session is not linear in time.
The model proposed is also more resilient in the presence of user or computer error. This resiliency can be exploited by a speculative user interface (SUI) which guesses user intentions. If the prediction is incorrect, the generalized infinite undo facility provides the ability to extemporaneously roll back the misprediction. We observe that this model is only now becoming a realistic possibility due to the current surplus of available CPU cycles on a modern desktop.
Date September 2003
Report Generalized Infinite Undo and Speculative User Interfaces (PS)
Generalized Infinite Undo and Speculative User Interfaces (PDF)

CS-2003-34
Title Exploiting Statistics of Web Traces to Improve Caching Algorithms
Authors Alexander Golynski, Alejandro López-Ortiz and Ray Sweidan
Abstract Web caching plays an important role in reducing network traffic, user delays, and server load. One important factor in describing the stream of requests made to a given server is the popularity of links between accessed pages. For example, the probability of requesting page A increases if a user makes a request to a page which contains a link to page A. Furthermore, this probability depends on the amount of time elapsed since the request was made to the page containing the link and on the popularity of the link. In this project we (1) analyze web access logs and determine the frequency distribution of access and mean life expectancy of correlations, (2) propose two new cache replacement policies that use the above distribution, and (3) evaluate the effectiveness of the new policies by comparing them to widely-used algorithms such as Greedy Dual-Size (GDS) and Greedy Dual-Frequency (GDSF).
Date October 2003
Report Exploiting Statistics of Web Traces to Improve Caching Algorithms (PS)
Exploiting Statistics of Web Traces to Improve Caching Algorithms (PDF)
Compressed PostScript: Exploiting Statistics of Web Traces to Improve Caching Algorithms (GZIP)

CS-2003-36
Title Using Word Position in Documents for Topic Characterization
Authors Reem K. Al-Halimi, Frank W. Tompa
Abstract In this report we show how to use the position of words in a text for characterizing the topic of the text, and compare our method to measures that use frequency statistics that are independent of word position. We show that word position information produces words that are more suited for characterizing topics and at the same time relies on a vocabulary size that is as little as 10% of the size used by the other measures.
Date October 2003
Report Using Word Position in Documents for Topic Characterization (PDF)

CS-2003-39
Title Enforcing Domain Consistency on the extended Global Cardinality Constraint is NP-Hard
Authors Claude-Guy Quimper
Abstract In a global cardinality constraint, there is a set of variables X = {x_1, ..., x_n} and a set of values D. Each variable x_i is associated to a domain dom(x_i) contained in D and each value v in D is associated to a cardinality set K(v). An assignment satisfies the extended global cardinality constraint (extended-GCC) if each variable x_i is instantiated to a value in its domain dom(x_i) and if each value v in D is assigned to k variables for some k in K(v). Extended-GCC differs from normal GCC in the fact that its cardinality sets K(v) can be arbitrary sets of values. In normal GCC these cardinality sets are restricted to intervals. We prove that enforcing domain consistency on the extended-GCC is NP-hard.
Date November 2003
Report Enforcing Domain Consistency on the extended Global Cardinality Constraint is NP-Hard (PS)
Enforcing Domain Consistency on the extended Global Cardinality Constraint is NP-Hard (PDF)

CS-2003-41
Title Call Admission Control for Voice/Data Integration in Broadband Wireless Networks
Authors Majid Ghaderi and Raouf Boutaba
Abstract This paper addresses bandwidth allocation for an integrated voice/data broadband mobile wireless network. Specifically, we propose a new admission control scheme called EFGC, which is an extension of the well-known fractional guard channel scheme proposed for cellular networks supporting voice traffic. The main idea is to use two acceptance ratios, one for voice calls and the other for data calls, in order to maintain the proportional service quality for voice and data traffic while guaranteeing a target handoff failure probability for voice calls. We describe two variations of the proposed scheme: EFGC-REST, a conservative approach which aims at preserving the proportional service quality by sacrificing the bandwidth utilization; and EFGC-UTIL, a greedy approach which achieves higher bandwidth utilization at the expense of increasing the handoff failure probability for voice calls. Extensive simulation results show that our schemes satisfy the hard constraints on handoff failure probability and service differentiation while maintaining a high bandwidth utilization.
Date November 2003
Report Call Admission Control for Voice/Data Integration in Broadband Wireless Networks (PS)
Call Admission Control for Voice/Data Integration in Broadband Wireless Networks (PDF)
Compressed PostScript: Call Admission Control for Voice/Data Integration in Broadband Wireless Networks (GZIP)

CS-2003-45
Title Revisiting the Foundations of Subsurface Scattering
Authors Gladimir V. G. Baranoski, Aravind Krishnaswamy and Bradley Kimmel
Abstract Despite the significant advances in rendering, we are far from being able to automatically generate realistic and predictable images of organic materials such as plant and human tissues. Creating convincing pictures of these materials is usually accomplished by carefully adjusting rendering parameters. A key issue in this context is the simulation of subsurface scattering. Current algorithmic models usually rely on scattering approximations based on the use of phase functions, notably the Henyey-Greenstein phase function and its variations, which were not derived from biophysical principles of any organic material, and whose parameters have no biological meaning. In this report, we challenge the validity of this approach for organic materials. Initially, we present an original chronology of the use of these phase functions in tissue optics simulations, which highlights the pitfalls of this approach. We then demonstrate that a significant step toward predictive subsurface scattering simulations can be given by replacing it with a more efficient and accurate data oriented approach. Our investigation is supported by comparisons involving the original measured data that motivated the application of phase functions in tissue subsurface scattering simulations. We hope that the results of our investigation will help strengthen the biophysical basis required for the predictive rendering of organic materials.
Date December 2003
Report Revisiting the Foundations of Subsurface Scattering (PDF)
# Differential evolution-based feature selection technique for anaphora resolution

Sikdar, UK and Ekbal, A and Saha, S and Uryupina, O and Poesio, M (2015) 'Differential evolution-based feature selection technique for anaphora resolution.' Soft Computing, 19 (8). 2149 - 2161. ISSN 1432-7643

Full text not available from this repository.

## Abstract

© 2014, Springer-Verlag Berlin Heidelberg. In this paper a differential evolution (DE)-based feature selection technique is developed for anaphora resolution in a resource-poor language, namely Bengali. We discuss the issues of adapting a state-of-the-art English anaphora resolution system for a resource-poor language like Bengali. Performance of any anaphora resolver greatly depends on a highly accurate mention detector and on the use of appropriate features for anaphora resolution. We develop a number of models for mention detection based on machine learning and heuristics. In anaphora resolution there is no globally accepted metric for measuring performance, and the candidate metrics, such as MUC, B³, CEAF and BLANC, exhibit significantly different behaviors. Our proposed feature selection technique determines the near-optimal feature set by optimizing each of these evaluation metrics. Experiments show how a language-dependent system (designed primarily for English) can attain a reasonably good performance level when re-trained and tested on a new language with a proper subset of features. Evaluation yields F-measure values of 66.70, 59.47, 51.56, 33.08 and 72.75 % for MUC, B³, CEAFM, CEAFE and BLANC, respectively.

Item Type: Article
URL: http://repository.essex.ac.uk/id/eprint/12581
# Frequency Standards

## Realization of the units of time and frequency

The SI unit of time, the second (s), was defined in 1967 by the 13th General Conference on Weights and Measures as follows: The second (s) is the unit of time. It is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom. The hertz (Hz) is the unit of frequency (number of periodic events per second), where 1 Hz = 1 s⁻¹.

Since 20 May 2019, with the redefinition of the base units at the 26th meeting of the General Conference on Weights and Measures, the second has been defined as: The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹.

The second is realized by a time and frequency standard, which is an atomic clock. Besides different types of caesium frequency standards as high-precision standards, rubidium frequency standards and hydrogen masers have also been developed. Caesium frequency standards measure the frequency of the quantum-mechanical transition that is fixed by the definition of the unit second. In a thermal atomic beam, a transition of the atoms into an excited state is induced by microwave excitation. The number of excited atoms is determined by a detector. The frequency of the internal quartz oscillator is stabilized at the maximum number of excited atoms at the detector. This process can be controlled so precisely that such atomic clocks reach a very low frequency deviation (relative frequency deviation approx. 5 × 10⁻¹³) and a very high accuracy (1 second deviation in 1 million years). The development of so-called caesium fountains yields even lower relative frequency deviations, in the range of 10⁻¹⁵. Hydrogen masers are especially suitable where high-precision short-term stability is required.

The most recent research developments point towards optical frequency standards and promise a further improvement of frequency standards, since instead of the microwave transition in the gigahertz range, frequencies many thousands of times higher in the optical range would be used. This may eventually make a new definition of the second necessary.
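As a rough cross-check (mine, not the page's), the quoted accuracy figure can be turned into a fractional error: one second gained or lost over a million years corresponds to about 3 × 10⁻¹⁴, in the same region as the quoted relative frequency deviation.

```python
# Fractional frequency error implied by "1 second deviation in 1 million years".
seconds_per_year = 365.25 * 24 * 3600
fractional_error = 1.0 / (1e6 * seconds_per_year)
print(f"{fractional_error:.1e}")   # ~3.2e-14, same ballpark as the quoted 5e-13
```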
# An inverse Mellin transform

Is it possible to compute the inverse Mellin transform of $$\frac{1}{a^{-s}\cos( \frac{\pi s}{2})\Gamma (s)}$$ or, similarly, is it possible to compute the inverse Mellin transforms $$\frac{ \zeta (1-s)}{\zeta (s)} \qquad \frac{ \zeta (s)}{\zeta (1-s)}$$ The Mellin inverse is given by $$\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}ds\,F(s)x^{-s}$$

• For the Mellin transformed function, you have to provide the strip on which it is defined; the inverse transform depends on this strip ($c$ has to be within the strip). – Fabian Dec 16 '12 at 17:36
• in this case, can we find a function so that $$\frac{\zeta (1-s)}{\zeta (s)}= \int_{0}^{\infty}dt\, f(t)t^{s-1}$$ – Jose Garcia Dec 16 '12 at 21:46
• again, for which values of $\text{Re}\, s$ should this relation hold? – Fabian Dec 16 '12 at 22:14
• Hint: $\int_0^\infty x^s\sin ax~dx=a^{-s-1}\Gamma(s+1)\cos\dfrac{\pi s}{2}$, according to eqworld.ipmnet.ru/en/auxiliary/inttrans/FourSin2.pdf – doraemonpaul Jul 5 '15 at 0:24

## 1 Answer

For the first one $$\mathcal{M}^{-1}\left[ \frac{1}{a^{-s}\cos( \frac{\pi s}{2})\Gamma (s)} \right] = \frac{2 a \cos\left(\frac{a}{x}\right)}{\pi x}$$ For the $\zeta$ expressions we can use the Riemann functional equation $$\zeta(s) = 2^s\pi^{s-1}\ \sin\left(\frac{\pi s}{2}\right)\ \Gamma(1-s)\ \zeta(1-s)$$ Then find the inverse Mellin transforms $$\mathcal{M}^{-1}\left[2^s \pi^{s-1} \sin\left(\frac{\pi s}{2}\right)\Gamma(1-s)\right] = \frac{2 \cos(\frac{2\pi}{x})}{x}$$ and $$\mathcal{M}^{-1}\left[\frac{1}{2^s \pi^{s-1} \sin(\frac{\pi s}{2})\Gamma(1-s)}\right] = 2 \cos(2 \pi x)$$

• We can use the residue theorem and I'm not convinced it will give your first claim. How did you obtain that? – reuns Nov 11 '17 at 12:51
• @reuns It was output by Mathematica's InverseMellinTransform routine. However, they might have a bug and I trust your wisdom more. What do you think the answer should be? (Or what makes you doubt that is the answer?) – Benedict W. J. Irwin Nov 11 '17 at 13:24
• Apply the residue theorem to $\frac{1}{2\pi i}\int_{1/2-i\infty}^{1/2+i\infty} \frac{1}{\cos( \frac{\pi s}{2})\Gamma (s)}x^{-s}ds$ – reuns Nov 11 '17 at 13:27
• I'm still getting the same answer now that I have done it fully. The sum of the residues is $$\frac{2}{\pi}\sum_{n=0}^\infty \frac{(-1)^{n} a^{2n+1}}{(2n)! x^{2n+1}} = \frac{2 a \cos(a/x)}{\pi x}$$ – Benedict W. J. Irwin Nov 12 '17 at 10:05
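As a numerical sanity check (not part of the original exchange): substituting $t = a/x$ in the forward transform of the claimed inverse reduces it to $(2a^s/\pi)\int_0^\infty t^{-s}\cos t\,dt$, which should reproduce the original $F(s) = a^s/(\cos(\pi s/2)\Gamma(s))$ inside the strip $0 < \operatorname{Re}\, s < 1$. A minimal sketch with mpmath:

```python
# Check the first inverse-Mellin pair at one point s in the strip 0 < Re(s) < 1.
import mpmath as mp

a, s = mp.mpf("2.3"), mp.mpf("0.4")   # arbitrary test values

# Forward Mellin transform of f(x) = 2a*cos(a/x)/(pi*x); after t = a/x it
# becomes (2 a^s / pi) * integral of t^(-s) cos(t) dt over (0, inf).
osc = mp.quadosc(lambda t: t**(-s) * mp.cos(t), [0, mp.inf], period=2 * mp.pi)
lhs = 2 * a**s / mp.pi * osc

rhs = a**s / (mp.cos(mp.pi * s / 2) * mp.gamma(s))   # the original F(s)
print(lhs, rhs)   # should agree to quadrature accuracy
```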
# Einstein-de Sitter - confusing constant of integration

I'm struggling to understand this derivation of the Einstein-de Sitter universe model. It starts with $$\int R^{1/2}dR=\int H_{0}R_{0}^{3/2}dt.$$ Evaluating this integral gives $$\frac{2}{3}R^{3/2}=H_{0}R_{0}^{3/2}t+K,$$ where $K$ is a constant of integration. $$R\left(t\right)=\frac{3}{2}R_{0}\left(H_{0}tK\right)^{2/3}.$$ Let $t=t_{0}$ when $R=R_{0}$; this becomes $$R_{0}=\frac{3}{2}R_{0}\left(H_{0}t_{0}K\right)^{2/3}.$$ Dividing the third equation by the fourth equation gives $$R\left(t\right)=R_{0}\left(\frac{t}{t_{0}}\right)^{2/3}$$ What I don't understand is how the constant of integration disappears. In the second equation it's added to the right-hand side, but in the third equation it's multiplied by the right-hand side, and then cancels to give the final (correct) equation. How does that work? Apologies if I've missed the obvious.

$R=0$ at $t=0$. This is a boundary condition. Then $K=0$ must be true. Ignore lines 3 and 4, they are wrong. Just take $K=0$ and do what you did with setting $R=R_0$ and $t=t_0$ and you've got the answer.

Edit - Response to comment: I guessed that you were representing the scale factor by $R(t)$. (I normally use $a(t)$, by the way.) We know that as $t$ goes to 0 (going back in time to the big bang singularity), $R$ goes to 0. I went back to the definition of length scales: $r=Rx$, where $r$ is the physical length and $x$ is a coordinate length [this is how $R$ is introduced]. Then I thought of the following:

• Take some coordinate system $x$, doesn't matter what it is
• Choose a value of $R$ for today (which you can always do)
• Imagine the Universe going backwards in time towards $t=0$
• As the Universe goes to $t=0$ (the big bang), the scale factor gets smaller
• At $t=0$, $r=0$ and thus we must have $R=0$.
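A small symbolic check (mine, not the thread's) that the boundary condition does the work the answer describes: imposing $R(0)=0$ kills the constant of integration and the $t^{2/3}$ law follows. dsolve's output form may vary between SymPy versions.

```python
# Sketch: solve R^{1/2} dR/dt = H0 * R0^{3/2} with the big-bang condition R(0) = 0.
import sympy as sp

t, t0, H0, R0 = sp.symbols("t t_0 H_0 R_0", positive=True)
R = sp.Function("R")

ode = sp.Eq(sp.sqrt(R(t)) * R(t).diff(t), H0 * R0**sp.Rational(3, 2))
sol = sp.dsolve(ode, R(t), ics={R(0): 0})     # the K = 0 branch
Rt = sp.simplify(sol.rhs)
print(Rt)                                      # proportional to t**(2/3)

# Dividing by the same expression at t = t0 reproduces R(t) = R0*(t/t0)**(2/3).
print(sp.simplify(Rt / Rt.subs(t, t0)))        # (t/t_0)**(2/3)
```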
# Fungrim entry: 32e430

Symbol: LandauG $g\!\left(n\right)$ Landau's function

Landau's function $g\!\left(n\right)$ gives the largest order of an element of the symmetric group ${S}_{n}$. It can be defined arithmetically as the maximum least common multiple of the partitions of $n$, as in 7932c3.

The following table lists conditions such that LandauG(n) is defined in Fungrim; each row gives $\left(P, Q\right)$ such that $\left(P\right) \implies \left(Q\right)$.

| Domain | Codomain |
| --- | --- |
| $n \in \mathbb{Z}_{\ge 0}$ | $g\!\left(n\right) \in \mathbb{Z}_{\ge 1}$ |

References:

• https://oeis.org/A000793

Definitions:

| Fungrim symbol | Notation | Short description |
| --- | --- | --- |
| LandauG | $g\!\left(n\right)$ | Landau's function |
| ZZGreaterEqual | $\mathbb{Z}_{\ge n}$ | Integers greater than or equal to n |

Source code for this entry:

Entry(ID("32e430"), SymbolDefinition(LandauG, LandauG(n), "Landau's function"), Description("Landau's function", LandauG(n), "gives the largest order of an element of the symmetric group", Subscript(S, n), "."), Description("It can be defined arithmetically as the maximum least common multiple of the partitions of", n, ", as in", EntryReference("7932c3"), "."), Description("The following table lists conditions such that", SourceForm(LandauG(n)), "is defined in Fungrim."), Table(TableRelation(Tuple(P, Q), Implies(P, Q)), TableHeadings(Description("Domain"), Description("Codomain")), List(Tuple(Element(n, ZZGreaterEqual(0)), Element(LandauG(n), ZZGreaterEqual(1))))), References("https://oeis.org/A000793"))

Copyright (C) Fredrik Johansson and contributors. Fungrim is provided under the MIT license. The source code is on GitHub.
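The arithmetic definition above (maximum lcm over the partitions of n) translates directly into a brute-force computation; the function name below is mine, not Fungrim's, and the enumeration is exponential, so it is only sensible for small n:

```python
# Brute-force Landau's function g(n): maximum lcm over all partitions of n.
from math import lcm  # Python 3.9+

def landau_g(n: int) -> int:
    best = 1
    def rec(remaining: int, max_part: int, acc: int) -> None:
        nonlocal best
        # A partial partition can always be padded with 1s (lcm unchanged),
        # so it is safe to record the running lcm at every node.
        best = max(best, acc)
        for part in range(min(remaining, max_part), 0, -1):
            rec(remaining - part, part, lcm(acc, part))
    rec(n, n, 1)
    return best

print([landau_g(n) for n in range(11)])
# [1, 1, 2, 3, 4, 6, 6, 12, 15, 20, 30]  -- matches OEIS A000793
```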
# 3.3 - Prediction Interval for a New Response

In this section, we are concerned with the prediction interval for a new response ynew when the predictor's value is xh. Again, let's just jump right in and learn the formula for the prediction interval. The general formula in words is as always:

Sample estimate ± (t-multiplier × standard error)

and the formula in notation is:

$\hat{y}_h \pm t_{(\alpha/2, n-2)} \times \sqrt{MSE \left( 1+\frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum(x_i-\bar{x})^2}\right)}$

where:

• $$\hat{y}_h$$ is the "fitted value" or "predicted value" of the response when the predictor is $$x_h$$
• $$t_{(\alpha/2, n-2)}$$ is the "t-multiplier." Note again that the t-multiplier has n-2 (not n-1) degrees of freedom, because the prediction interval uses the mean square error (MSE) whose denominator is n-2.
• $$\sqrt{MSE \times \left( 1+\frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum(x_i-\bar{x})^2}\right)}$$ is the "standard error of the prediction," which is very similar to the "standard error of the fit" when estimating µY. The standard error of the prediction just has an extra MSE term added that the standard error of the fit does not. (More on this a bit later.)

Again, we won't use the formula to calculate our prediction intervals in real-life practice. We'll let statistical software such as Minitab do the calculation for us. Let's look at the prediction interval for our example with "skin cancer mortality" as the response and "latitude" as the predictor (skincancer.txt):

The output reports the 95% prediction interval for an individual location at 40 degrees north. We can be 95% confident that the skin cancer mortality rate at an individual location at 40 degrees north will be between 111.235 and 188.933 deaths per 10 million people.

### When is it okay to use the prediction interval for ynew formula?

The requirements are similar to, but a little more restrictive than, those for the confidence interval. It is okay:

• When xh is a value within the scope of the model. Again, xh does not have to be one of the actual x values in the data set.
• When the "LINE" conditions — linearity, independent errors, normal errors, equal error variances — are met. Unlike the case for the formula for the confidence interval, the formula for the prediction interval depends strongly on the condition that the error terms are normally distributed.

### Understanding the difference in the two formulas

In our discussion of the confidence interval for µY, we used the formula to investigate what factors affect the width of the confidence interval. There's no need to do it again. Because the formulas are so similar, it turns out that the factors affecting the width of the prediction interval are identical to the factors affecting the width of the confidence interval. Let's instead investigate the formula for the prediction interval for ynew:

$\hat{y}_h \pm t_{(\alpha/2, n-2)} \times \sqrt{MSE \times \left( 1+\frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum(x_i-\bar{x})^2}\right)}$

to see how it compares to the formula for the confidence interval for µY:

$\hat{y}_h \pm t_{(\alpha/2, n-2)} \times \sqrt{MSE \left(\frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum(x_i-\bar{x})^2}\right)}$

Observe that the only difference in the formulas is that the standard error of the prediction for ynew has an extra MSE term in it that the standard error of the fit for µY does not. Let's try to understand the prediction interval to see what causes the extra MSE term.
In doing so, let's start with an easier problem first. Think about how we could predict a new response ynew at a particular xh if the mean of the responses µY at xh were known. That is, suppose it were known that the mean skin cancer mortality at xh = 40° N is 150 deaths per 10 million (with variance 400)? What is the predicted skin cancer mortality in Columbus, Ohio?

Because µY = 150 and σ² = 400 are known, we can take advantage of the "empirical rule," which states among other things that 95% of the measurements of normally distributed data are within 2 standard deviations of the mean. That is, it says that 95% of the measurements are in the interval sandwiched by: µY - 2σ and µY + 2σ.

Applying the 95% rule to our example with µY = 150 and σ = 20: 95% of the skin cancer mortality rates of locations at 40 degrees north latitude are in the interval sandwiched by: 150 - 2(20) = 110 and 150 + 2(20) = 190. That is, if someone wanted to know the skin cancer mortality rate for a location at 40 degrees north, our best guess would be somewhere between 110 and 190 deaths per 10 million.

The problem is that our calculation used µY and σ, population values that we would typically not know. Reality sets in:

• The mean µY is typically not known. The logical thing to do is estimate it with the predicted response $$\hat{y}$$. The cost of using $$\hat{y}$$ to estimate µY is the variance of $$\hat{y}$$. That is, different samples would yield different predictions $$\hat{y}$$, and so we have to take into account this variance of $$\hat{y}$$.
• The variance σ² is typically not known. The logical thing to do is to estimate it with MSE.

Because we have to estimate these unknown quantities, the variation in the prediction of a new response depends on two components:

1. the variation due to estimating the mean µY with $$\hat{y}_h$$, which we denote "$$\sigma^2(\hat{Y}_h)$$." (Note that the estimate of this quantity is just the standard error of the fit that appears in the confidence interval formula.)
2. the variation in the responses y, which we denote as "$$\sigma^2$$." (Note that this quantity is estimated, as usual, with the mean square error MSE.)

Adding the two variance components, we get:

$\sigma^2+\sigma^2(\hat{Y}_h)$

which is estimated by:

$MSE+MSE \left[ \frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2} \right] =MSE\left[ 1+\frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2} \right]$

Do you recognize this quantity? It's just the variance of the prediction that appears in the formula for the prediction interval for ynew! Let's compare the two intervals again:

Confidence interval for µY: $\hat{y}_h \pm t_{(\alpha/2, n-2)} \times \sqrt{MSE \times \left( \frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum(x_i-\bar{x})^2}\right)}$

Prediction interval for ynew: $\hat{y}_h \pm t_{(\alpha/2, n-2)} \times \sqrt{MSE \left( 1+\frac{1}{n} + \frac{(x_h-\bar{x})^2}{\sum(x_i-\bar{x})^2}\right)}$

What are the practical implications of the difference in the two formulas?

• Because the prediction interval has the extra MSE term, a (1-α)100% confidence interval for µY at xh will always be narrower than the corresponding (1-α)100% prediction interval for ynew at xh.
• By calculating the interval at the sample's mean of the predictor values (xh = $$\bar{x}$$) and increasing the sample size n, the confidence interval's standard error can approach 0. Because the prediction interval has the extra MSE term, the prediction interval's standard error cannot get close to 0.
The first implication is seen most easily by studying the following plot for our skin cancer mortality example: Observe that the prediction interval (in purple) is always wider than the confidence interval (in green). Furthermore, both intervals are narrowest at the mean of the predictor values (about 39.5).
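Although the lesson relies on Minitab for the computation, the prediction-interval formula above is straightforward to code directly. A minimal sketch in Python (the data arrays and the 40-degree query point are assumed to come from the skincancer.txt example):

```python
# Prediction interval for a new response under simple linear regression,
# following the formula in this section. x, y are 1-D NumPy arrays.
import numpy as np
from scipy import stats

def prediction_interval(x, y, x_h, alpha=0.05):
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                  # slope, intercept
    resid = y - (b0 + b1 * x)
    mse = np.sum(resid**2) / (n - 2)
    se_pred = np.sqrt(mse * (1 + 1/n + (x_h - x.mean())**2
                             / np.sum((x - x.mean())**2)))
    t_mult = stats.t.ppf(1 - alpha/2, df=n - 2)   # t-multiplier with n-2 df
    y_hat = b0 + b1 * x_h
    return y_hat - t_mult * se_pred, y_hat + t_mult * se_pred

# e.g. prediction_interval(latitude, mortality, 40.0) should reproduce an
# interval close to (111.235, 188.933) on the skin cancer data.
```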
# Possible to turn a 5v 350mA mobile phone charger into a 12v power supply to drive a computer fan?

#### MkkDdd Joined Sep 11, 2017 10

I know computer fans can be run with mobile phone chargers because I've got it to work just by finding and joining the relevant wires (live and ground). That's about the limit of my knowledge and experience, though, I'm afraid; but I'd like to build on that. I want a bigger, better fan now. The ones I've done are noisy little 5v. blowers which aren't really man enough for the job. I understand I can get a 12v. fan to run from a mobile phone charger the same way as the 5v. ones do, and it will run slower and quieter. That's good; that's what I want. I understand a 12v fan may not start without an initial voltage of 5v., and if it does it may stall (and burn out), and this could be caused by a seemingly insignificant obstruction, or without any obstruction at all. If I'm not mistaken I also read somewhere that a simple diode or resistor could be worked into the circuit somehow somewhere to stop that happening, but I can't find where I read that now. That's what I want to find out about here: how can I do that?

Then, when I started thinking about it, it occurred to me that I might actually be able to make it a bit better and get exactly what I want: I'd like to be able to run the 12v. fan at full blast if necessary for short periods and then turn it down when things have cooled down. Is there any way I can do that: 1) make a 5v mobile phone charger put out 12v., 2) put a control on it to reduce the voltage, and 3) make it so it won't stall the fan? The fan I have in mind (this one: https://www.arctic.ac/eu_en/arctic-f12-tc.html) is designed to be run at variable speeds.

#### panic mode Joined Oct 10, 2011 1,782

your charger can produce up to 0.35 A at 5V. that is 5V*0.35A = 1.75W your fan draws 0.24A at 12V. that is 12V*0.24A = 2.88W any conversion will only add losses. say you have something that is 80% efficient, that means you would need to use supply that is 2.88W/0.80 = 3.6W which is more than double of what your charger can put out.

#### MkkDdd Joined Sep 11, 2017 10

OK, so is there anything I can do to the charger to increase its capacity? Presumably not, so I'll just have to run the fan at 5v from this charger, which will make it slower and quieter. That's OK, but it's back to the original question: how can I stop it from stalling?

#### panic mode Joined Oct 10, 2011 1,782

run it from proper source. that fan is rated for 12V, so run it from a 12V source that can deliver at least 1/4A Note that power can also be expressed in terms of voltage: P=V^2/R Note the square factor... So doubling voltage, quadruples power (4x). Tripling voltage, increases power to factor 9x. Reducing voltage, reduces power (5V/12V is 41%). Running 12V load from 5V source means that this fan only gets 17% of power (that is 41% of 41%). No wonder it is under powered... hamster farts can reverse it..

#### philba Joined Aug 17, 2017 959

Well, you can do what you want but it would be far cheaper to just go to a thrift store and pick up a 12V wall wart. Typically one or two dollars. To reduce the speed/noise, look into a PWM speed controller.
eBay has lots of listings for < $10. Search for "pwm speed controller".

Thread Starter

#### MkkDdd Joined Sep 11, 2017 10

run it from proper source. that fan is rated for 12V, so run it from a 12V source that can deliver at least 1/4A Note that power can also be expressed in terms of voltage: P=V^2/R Note the square factor... So doubling voltage, quadruples power (4x). Tripling voltage, increases power to factor 9x. Reducing voltage, reduces power (5V/12V is 41%). Running 12V load from 5V source means that this fan only gets 17% of power (that is 41% of 41%). No wonder it is under powered... hamster farts can reverse it..

Yes, very amusing – thank you!

Well, you can do what you want but it would be far cheaper to just go to a thrift store and pick up a 12V wall wart. Typically one or two dollars. To reduce the speed/noise, look into a PWM speed controller. eBay has lots of listings for < $10. Search for "pwm speed controller".

Now that IS helpful; thank you, philba!

#### MkkDdd Joined Sep 11, 2017 10

Yes, indeed, philba; that (http://www.ebay.co.uk/itm/DC-1-8V-3V-5V-6V-12V-2A-Low-Voltage-Motor-Speed-New-Controller-PWM-1803B-/111531021357?hash=item19f7c4542d:g:lqAAAOSwnDxUds8K) and this (http://www.ebay.co.uk/itm/AC-Power-Adapter-12V-DC-Supply-2a-amp-regulated-Wall-Wart-Charger-5-5-X2-5-mm-EU-/282121706292?hash=item41afc39b34:g:LGYAAOSwZVlXoa7M) are exactly what I'm looking for. Just wish I could find the 'wall wart' things a bit cheaper. I don't suppose there's any way I can modify phone chargers to do the same thing, is there? I had thought perhaps it might just be a question of soldering in a resistor or something, but it would seem ignorance has got the better of me there. In any case, your contribution has given me the clarity I was looking for, for which I am very grateful. Thank you.

#### dl324 Joined Mar 30, 2015 8,891

Welcome to AAC! Friendly bit of advice. Wait until you have a few posts under your belt before you start getting snarky with senior members.

I don't suppose there's any way I can modify phone chargers to do the same thing, is there?

Get one that can provide at least 1A and use a switching regulator to step voltage up to 12V.

I had thought perhaps it might just be a question of soldering in a resistor or something, but it would seem ignorance has got the better of me there.

Won't work. With resistors, you can only get less than the input voltage. Even going the other way, from 12V to 5V, couldn't be done efficiently with just resistors.

#### MkkDdd Joined Sep 11, 2017 10

Welcome to AAC! Friendly bit of advice. Wait until you have a few posts under your belt before you start getting snarky with senior members.

Get one that can provide at least 1A and use a switching regulator to step voltage up to 12V.

Won't work. With resistors, you can only get less than the input voltage. Even going the other way, from 12V to 5V, couldn't be done efficiently with just resistors.

What, you mean the 'hamster farter'? One man's amusement is another's banal. That's the thing with the phone charger route: they don't provide 1A, which makes philba's suggestion the way to go; just don't want to fork out five bucks half a dozen times for 'wall warts' right now. I will, with time, eventually replace the phone chargers driving the fans bravely into the face of hamster farts with these wall wart things and the potentiometer thing (motor speed controller) – exactly what I want.
Actually, it's all worked out very well for me here because I've got two 5v fans driven by 5v phone chargers (power sources) which make a hell of a noise because they're running at full blast. That motor speed controller I found for two bucks as a result of philba's suggestion is a godsend – just what I need to control those noisy fans. The other, bigger (12v) fans can run at 17% power for a while. Yes, I rather thought what you wrote about resistors must be the case; was just hoping maybe there might be some magical workaround, but it's all worked out for the best now. Thank you very much for your input.

#### MkkDdd Joined Sep 11, 2017 10

Using a mobile phone charger to provide a 5v. 350mA supply, wouldn't this 'Mini 3V/3.3V/3.7V/5V/6V to 12V DC-DC Boost Step-up Converter Power Supply Module' (http://www.ebay.co.uk/itm/Mini-3V-3-3V-3-7V-5V-6V-to-12V-DC-DC-Boost-Step-up-Converter-Power-Supply-Module-/263025181008?) do the same thing as a 'wall wart', i.e. provide a 12v. supply which I could then run through the motor speed controller to drive the bigger (12v.) fans? At £1.18 (a buck and a half) it looks like a real winner to me. I think it will do the job, won't it?

#### philba Joined Aug 17, 2017 959

Not sure I understand the insistence on starting with 5V when 12V is dirt cheap. Seriously, go to a thrift store. 12V multi-amp WWs are there for less than the cost of a bad cup of coffee. Look for ones from wifi routers. I picked up 3 the other day for $4 and they all work just fine.

Thread Starter

#### MkkDdd Joined Sep 11, 2017 10

Do not include political commentary (particularly in technical threads)

Not sure I understand the insistence on starting with 5V when 12V is dirt cheap. Seriously, go to a thrift store. 12V multi-amp WWs are there for less than the cost of a bad cup of coffee. Look for ones from wifi routers. I picked up 3 the other day for $4 and they all work just fine.

Yeah, you're right. I was put off at first because I couldn't see any with EU plugs for less than five or six dollars. After a bit of fishing around I've found one for £2.40 + £0.49 postage – ~$3.80 all in – so I've gone ahead and ordered a couple of them. Question answered, problem solved, thanks again. Please don't take this the wrong way, I don't mean to be 'sharky' (whatever that means) or otherwise offensive, but you may genuinely not be aware that we don't have 'thrift stores' here, like they (you?) do in the States. I know because I lived in the States for many years. <snipped political commentary>

Last edited by a moderator:

#### WBahn Joined Mar 31, 2012 24,692

Using a mobile phone charger to provide a 5v. 350mA supply, wouldn't this 'Mini 3V/3.3V/3.7V/5V/6V to 12V DC-DC Boost Step-up Converter Power Supply Module' (http://www.ebay.co.uk/itm/Mini-3V-3-3V-3-7V-5V-6V-to-12V-DC-DC-Boost-Step-up-Converter-Power-Supply-Module-/263025181008?) do the same thing as a 'wall wart', i.e. provide a 12v. supply which I could then run through the motor speed controller to drive the bigger (12v.) fans? At £1.18 (a buck and a half) it looks like a real winner to me. I think it will do the job, won't it?

If your step-up converter is 100% efficient, then your output would provide at best 145 mA, and more likely max out somewhere in the 125 mA range, or half what your "full blast" is looking for.
#### WBahn Joined Mar 31, 2012 24,692

Please don't take this the wrong way, I don't mean to be 'sharky' (whatever that means) or otherwise offensive, but you may genuinely not be aware that we don't have 'thrift stores' here, like they (you?) do in the States.

Well, it's probably a bit unreasonable to expect people to be aware of what you do and don't have since you chose not to include your location in your profile. While we are very much an international site with members from all over the world, the simple fact is that the overwhelming majority are from the United States (as best I can tell) and so when there's no way to tell where a member is from, it's pretty natural for most people to take the safe bet and assume that they are most likely from where most members are from.

#### MkkDdd Joined Sep 11, 2017 10

Well, it's probably a bit unreasonable to expect people to be aware of what you do and don't have since you chose not to include your location in your profile. [...]

It wasn't gratuitous. I was explaining 'Why not just go to a thrift store?'.

#### WBahn Joined Mar 31, 2012 24,692

It wasn't gratuitous. I was explaining 'Why not just go to a thrift store?'.

I'm thinking that neither the U.S. standard of living nor President Trump's position on NATO members' level of defense spending is particularly critical to the explanation of why you don't have thrift stores in the undefined place where you live.

#### Plamen Joined Mar 29, 2015 98

[quote of the opening post snipped]

#### Plamen Joined Mar 29, 2015 98

If the fan is to cool a desktop computer...it has its own 12V.
If your fan is to be used stand-alone, then you can look for a compromise - if 12V is too much for the charger and 5V is too little for the fan - get a small boost module and adjust the voltage within the charger's capabilities. Let us say you will be able to reach 9V - still way better than 5V.
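For anyone double-checking the power-budget arithmetic quoted earlier in the thread, a quick sketch (the 80% converter efficiency is the figure assumed in the thread, not a measurement):

```python
# Power budget for running a 12 V / 0.24 A fan from a 5 V / 0.35 A charger.
charger_w = 5 * 0.35          # 1.75 W available from the charger
fan_w = 12 * 0.24             # 2.88 W drawn by the fan at full speed
needed_w = fan_w / 0.80       # ~3.6 W input at an assumed 80% boost efficiency
print(charger_w, fan_w, needed_w)   # the charger covers barely half the need
```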
The only quadrant that contains no points of the graph of y = -x^2 + 8x - 18 is which quadrant?

Quadrants 1 and 2 will not contain points of $y = - {x}^{2} + 8 x - 18$ (so in fact there are two such quadrants, not one).

Explanation: Solve for the vertex by completing the square:

$y = - {x}^{2} + 8 x - 18$
$y = - \left({x}^{2} - 8 x + 16 - 16\right) - 18$
$y = - {\left(x - 4\right)}^{2} + 16 - 18$
$y + 2 = - {\left(x - 4\right)}^{2}$

vertex at $\left(4 , - 2\right)$. Since the parabola opens downward, the vertex is its highest point, so $y \le - 2 < 0$ for every $x$: the graph never reaches the upper half-plane and therefore contains no points in quadrants 1 and 2.

God bless....I hope the explanation is useful..
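A one-line symbolic check of the key step (not part of the original answer):

```python
# Verify that y = -x^2 + 8x - 18 never exceeds -2, so it stays below the x-axis.
import sympy as sp

x = sp.symbols("x", real=True)
y = -x**2 + 8*x - 18
print(sp.maximum(y, x))   # -2 : the graph never enters quadrants 1 or 2
```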
# Bias adjustment and downscaling algorithms

xarray data structures allow for relatively straightforward implementations of the simple bias-adjustment and downscaling algorithms documented in Adjustment methods. Each algorithm is split into train and adjust components. The train function will compare two DataArrays x and y, and create a dataset storing the transfer information allowing one to go from x to y. This dataset, stored in the adjustment object, can then be used by the adjust method to apply this information to x. x could be the same DataArray used for training, or another DataArray with similar characteristics.

For example, given a daily time series of observations ref, a model simulation over the observational period hist and a model simulation over a future period sim, we would apply a bias-adjustment method such as detrended quantile mapping (DQM) starting from:

from xclim import sdba

(a sketch of the full train/adjust call sequence is given at the end of this page). Most methods can be applied either additively or multiplicatively. Also, most methods can be applied independently on different time groupings (monthly, seasonally) or according to the day of the year and a rolling window width. When transfer factors are applied in adjustment, they can be interpolated according to the time grouping. This helps avoid discontinuities in adjustment factors at the beginning of each season or month and is computationally cheaper than computing adjustment factors for each day of the year. (Currently only implemented for monthly grouping.)

## Application in multivariate settings

When applying univariate adjustment methods to multiple variables, some strategies are recommended to avoid introducing unrealistic artifacts in adjusted outputs.

### Minimum and maximum temperature

When adjusting both minimum and maximum temperature, adjustment factors sometimes yield minimum temperatures larger than the maximum temperature on the same day, which, of course, is nonsensical. One way to avoid this is to first adjust maximum temperature using an additive adjustment, then adjust the diurnal temperature range (DTR) using a multiplicative adjustment, and then determine minimum temperature by subtracting DTR from the maximum temperature ([Thrasher], [Agbazo]).

### Relative and specific humidity

When adjusting both relative and specific humidity, we want to preserve the relationship between the two. To do this, [Grenier] suggests to first adjust the relative humidity using a multiplicative factor, ensure values are within 0-100%, then apply an additive adjustment factor to the surface pressure before estimating the specific humidity from thermodynamic relationships. In theory, shortwave radiation should be capped when precipitation is not zero, but there is as of yet no mechanism proposed to do that; see [Hoffman].

## References

Agbazo: Agbazo, M. N., & Grenier, P. (2019). Characterizing and avoiding physical inconsistency generated by the application of univariate quantile mapping on daily minimum and maximum temperatures over Hudson Bay. International Journal of Climatology, joc.6432. https://doi.org/10.1002/joc.6432

Grenier: Grenier, P. (2018). Two Types of Physical Inconsistency to Avoid with Univariate Quantile Mapping: A Case Study over North America Concerning Relative Humidity and Its Parent Variables. Journal of Applied Meteorology and Climatology, 57(2), 347–364. https://doi.org/10.1175/JAMC-D-17-0177.1

Hoffman: Hoffmann, H., & Rath, T. (2012). Meteorologically consistent bias correction of climate time series for agricultural models. Theoretical and Applied Climatology, 110(1–2), 129–141.
https://doi.org/10.1007/s00704-012-0618-x

Thrasher: Thrasher, B., Maurer, E. P., McKellar, C., & Duffy, P. B. (2012). Technical Note: Bias correcting climate model simulated daily temperature extremes with quantile mapping. Hydrology and Earth System Sciences, 16(9), 3309–3314. https://doi.org/10.5194/hess-16-3309-2012

## SDBA's user API

### Detrended quantile mapping (DQM)

The algorithm follows these steps, 1-3 being the 'train' and 4-6 the 'adjust' steps.

1. A scaling factor that would make the mean of hist match the mean of ref is computed.
2. ref and hist are normalized by removing the "dayofyear" mean.
3. Adjustment factors are computed between the quantiles of the normalized ref and hist.
4. sim is corrected by the scaling factor, and either normalized by "dayofyear" and detrended group-wise or directly detrended per "dayofyear", using a linear fit (modifiable).
5. Values of detrended sim are matched to the corresponding quantiles of normalized hist and corrected accordingly.
6. The trend is put back on the result.

$F^{-1}_{ref}\left\{F_{hist}\left[\frac{\overline{hist}\cdot sim}{\overline{sim}}\right]\right\}\frac{\overline{sim}}{\overline{hist}}$

where $$F$$ is the cumulative distribution function (CDF) and $$\overline{xyz}$$ is the linear trend of the data. This equation is valid for multiplicative adjustment. Based on the DQM method of [Cannon2015].

Parameters

• Train step
• nquantiles (int or 1d array of floats) – The number of quantiles to use. See equally_spaced_nodes(). An array of quantiles [0, 1] can also be passed. Defaults to 20 quantiles.
• kind ({'+', '*'}) – The adjustment kind, either additive or multiplicative. Defaults to "+".
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. Default is "time", meaning a single adjustment group along dimension "time".
• interp ({'nearest', 'linear', 'cubic'}) – The interpolation method to use when interpolating the adjustment factors. Defaults to "nearest".
• detrend (int or BaseDetrend instance) – The method to use when detrending. If an int is passed, it is understood as a PolyDetrend (polynomial detrending) degree. Defaults to 1 (linear detrending).
• extrapolation ({'constant', 'nan'}) – The type of extrapolation to use. See xclim.sdba.utils.extrapolate_qm() for details. Defaults to "constant".

References

Cannon2015: Cannon, A. J., Sobie, S. R., & Murdock, T. Q. (2015). Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes? Journal of Climate, 28(17), 6938–6959. https://doi.org/10.1175/JCLI-D-14-00754.1

### Empirical quantile mapping (EQM)

Adjustment factors are computed between the quantiles of ref and hist. Values of sim are matched to the corresponding quantiles of hist and corrected accordingly.

$F^{-1}_{ref} (F_{hist}(sim))$

where $$F$$ is the cumulative distribution function (CDF).

Parameters

• Train step
• nquantiles (int or 1d array of floats) – The number of quantiles to use. Two endpoints at 1e-6 and 1 - 1e-6 will be added. An array of quantiles [0, 1] can also be passed. Defaults to 20 quantiles.
• kind ({'+', '*'}) – The adjustment kind, either additive or multiplicative. Defaults to "+".
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. Default is "time", meaning a single adjustment group along dimension "time".
• interp ({'nearest', 'linear', 'cubic'}) – The interpolation method to use when interpolating the adjustment factors. Defaults to "nearest".
• extrapolation ({'constant', 'nan'}) – The type of extrapolation to use. See xclim.sdba.utils.extrapolate_qm() for details. Defaults to "constant".

References

Dequé, M. (2007). Frequency of precipitation and temperature extremes over France in an anthropogenic scenario: Model results and statistical correction according to observed values. Global and Planetary Change, 57(1–2), 16–26. https://doi.org/10.1016/j.gloplacha.2006.11.030

### Extreme values adjustment (ExtremeValues)

The tail of the distribution of adjusted data is corrected according to the bias between the parametric Generalized Pareto distributions of the simulated and reference data ([RRJF2021]). The distributions are composed of the maximal values of clusters of "large" values, with "large" values being those above cluster_thresh. Only extreme values, whose quantile within the pool of large values is above q_thresh, are re-adjusted. See Notes.

This adjustment method should be considered experimental and used with care.

Parameters

• Train step
• cluster_thresh (Quantity (str with units)) – The threshold value for defining clusters.
• q_thresh (float) – The quantile of "extreme" values, [0, 1[. Defaults to 0.95.
• ref_params (xr.DataArray, optional) – Distribution parameters to use instead of fitting a GenPareto distribution on ref.
• scen (DataArray) – This is a second-order adjustment, so the adjust method needs the first-order adjusted timeseries in addition to the raw "sim".
• interp ({'nearest', 'linear', 'cubic'}) – The interpolation method to use when interpolating the adjustment factors. Defaults to "linear".
• extrapolation ({'constant', 'nan'}) – The type of extrapolation to use. See extrapolate_qm() for details. Defaults to "constant".
• frac (float) – Fraction where the cutoff happens between the original scen and the corrected one. See Notes, ]0, 1]. Defaults to 0.25.
• power (float) – Shape of the correction strength, see Notes. Defaults to 1.0.

Notes

Extreme values are extracted from ref, hist and sim by finding all "clusters", i.e. runs of consecutive values above cluster_thresh. The q_thresh-th percentile of these values is taken on ref and hist and becomes thresh, the extreme value threshold. The maximal value of each cluster, if it exceeds that new threshold, is taken and Generalized Pareto distributions are fitted to them, for both ref and hist. The probabilities associated with each of these extremes in hist are used to find the corresponding values according to ref's distribution. Adjustment factors are computed as the bias between those new extremes and the original ones.

In the adjust step, a Generalized Pareto distribution is fitted on the cluster-maximums of sim and used to associate a probability to each extreme value over the thresh computed in the training (without the clustering). The adjustment factors are computed by interpolating the trained ones using these probabilities and the probabilities computed from hist.

Finally, the adjusted values ($$C_i$$) are mixed with the pre-adjusted ones (scen, $$D_i$$) using the following transition function:

$V_i = C_i * \tau + D_i * (1 - \tau)$

where $$\tau$$ is a function of sim's extreme values (unadjusted, $$S_i$$) and of arguments frac ($$f$$) and power ($$p$$):

$\tau = \left(\frac{1}{f}\frac{S - min(S)}{max(S) - min(S)}\right)^p$

Code based on an internal Matlab source and partly on the biascorrect_extremes function of the Julia package [ClimateTools].
Because of limitations imposed by the lazy computing nature of the dask backend, it is not possible to know the number of cluster extremes in ref and hist at the moment the output data structure is created. This is why the code tries to estimate that number and usually overestimates it. In the training dataset, this translates into a quantile dimension that is too large, and variables af and px_hist are assigned NaNs on the extra elements. This has no effect on the calculations themselves but requires more memory than is useful.

References

RRJF2021: Roy, P., Rondeau-Genesse, G., Jalbert, J., Fournier, É. 2021. Climate Scenarios of Extreme Precipitation Using a Combination of Parametric and Non-Parametric Bias Correction Methods. Submitted to Climate Services, April 2021.

ClimateTools: https://juliaclimate.github.io/ClimateTools.jl/stable/

### Local intensity scaling (LOCI)

This bias adjustment method is designed to correct daily precipitation time series by considering wet and dry days separately ([Schmidli2006]). Multiplicative adjustment factors are computed such that the mean of hist matches the mean of ref for values above a threshold. The threshold on the training target ref is first mapped to hist by finding the quantile in hist having the same exceedance probability as thresh in ref. The adjustment factor is then given by

$s = \frac{\left \langle ref: ref \geq t_{ref} \right\rangle - t_{ref}}{\left \langle hist : hist \geq t_{hist} \right\rangle - t_{hist}}$

In the case of precipitation, the adjustment factor is the ratio of wet-day intensities.

$sim(t) = \max\left(t_{ref} + s \cdot (hist(t) - t_{hist}), 0\right)$

Parameters

• Train step
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. Default is "time", meaning a single adjustment group along dimension "time".
• thresh (str) – The threshold in ref above which the values are scaled.
• interp ({'nearest', 'linear', 'cubic'}) – The interpolation method to use when interpolating the adjustment factors. Defaults to "linear".

References

Schmidli2006: Schmidli, J., Frei, C., & Vidale, P. L. (2006). Downscaling from GCM precipitation: A benchmark for dynamical and statistical downscaling methods. International Journal of Climatology, 26(5), 679–689. DOI:10.1002/joc.1287

### N-dimensional probability density function transform (NpdfTransform)

A multivariate bias-adjustment algorithm described by [Cannon18], as part of the MBCn algorithm, based on a color-correction algorithm described by [Pitie05]. This algorithm in itself, when used with QuantileDeltaMapping, is NOT trend-preserving. The full MBCn algorithm includes a reordering step provided here by xclim.sdba.processing.reordering(). See Notes for an explanation of the algorithm.

Parameters

• base (BaseAdjustment) – A univariate bias-adjustment class. This is untested for anything else than QuantileDeltaMapping.
• base_kws (dict, optional) – Arguments passed to the training of the univariate adjustment.
• n_escore (int) – The number of elements to send to the escore function. The default, 0, means all elements are included. Pass -1 to skip computing the escore completely. Small numbers result in less significant scores, but the execution time goes up quickly with large values.
• n_iter (int) – The number of iterations to perform. Defaults to 20.
• pts_dim (str) – The name of the "multivariate" dimension. Defaults to "multivar", which is the normal case when using xclim.sdba.base.stack_variables().
• adj_kws (dict, optional) – Dictionary of arguments to pass to the adjust method of the univariate adjustment.
• rot_matrices (xr.DataArray, optional) – The rotation matrices as a 3D array ('iterations', <pts_dim>, <anything>), with shape (n_iter, <N>, <N>). If left empty, random rotation matrices will be automatically generated.

Notes

The historical reference ($$T$$, for "target"), simulated historical ($$H$$) and simulated projected ($$S$$) datasets are constructed by stacking the timeseries of N variables together. The algorithm goes through the following steps:

1. Rotate the datasets in the N-dimensional variable space with $$\mathbf{R}$$, a random N×N rotation matrix.

$\tilde{\mathbf{T}} = \mathbf{T}\mathbf{R} \qquad \tilde{\mathbf{H}} = \mathbf{H}\mathbf{R} \qquad \tilde{\mathbf{S}} = \mathbf{S}\mathbf{R}$

2. A univariate bias-adjustment $$\mathcal{F}$$ is used on the rotated datasets. The adjustments are made in additive mode, for each variable $$i$$.

$\hat{\mathbf{H}}_i, \hat{\mathbf{S}}_i = \mathcal{F}\left(\tilde{\mathbf{T}}_i, \tilde{\mathbf{H}}_i, \tilde{\mathbf{S}}_i\right)$

3. The bias-adjusted datasets are rotated back.

$\begin{split}\mathbf{H}' = \hat{\mathbf{H}}\mathbf{R} \\ \mathbf{S}' = \hat{\mathbf{S}}\mathbf{R}\end{split}$

These three steps are repeated a certain number of times, prescribed by the argument n_iter. At each iteration, a new random rotation matrix is generated. The original algorithm ([Pitie05]) stops the iteration when some distance score converges. Following [Cannon18] and the MBCn implementation in [CannonR], we instead fix the number of iterations. As done by [Cannon18], the distance score chosen is the "energy distance" from [SkezelyRizzo] (see xclim.sdba.processing.escore()). The random matrices are generated following a method laid out by [Mezzadri].

This is only part of the full MBCn algorithm, see Fourth example: Multivariate bias-adjustment with multiple steps - Cannon 2018 for an example on how to replicate the full method with xclim. This includes a standardization of the simulated data beforehand, an initial univariate adjustment and the reordering of those adjusted series according to the rank structure of the output of this algorithm.

References

Cannon18: Cannon, A. J. (2018). Multivariate quantile mapping bias correction: An N-dimensional probability density function transform for climate model simulations of multiple variables. Climate Dynamics, 50(1), 31–49. https://doi.org/10.1007/s00382-017-3580-6

CannonR: https://CRAN.R-project.org/package=MBC

Mezzadri: Mezzadri, F. (2006). How to generate random matrices from the classical compact groups. arXiv preprint math-ph/0609050.

Pitie05: Pitie, F., Kokaram, A. C., & Dahyot, R. (2005). N-dimensional probability density function transfer and its application to color transfer. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2, 1434-1439 Vol. 2. https://doi.org/10.1109/ICCV.2005.166

SkezelyRizzo: Szekely, G. J. and Rizzo, M. L. (2004) Testing for Equal Distributions in High Dimension, InterStat, November (5)

### Principal components adjustment (PrincipalComponents)

This bias-correction method maps model simulation values to the observation space through principal components ([hnilica2017]). Values in the simulation space (multiple variables, or multiple sites) can be thought of as coordinates along axes, such as variable, temperature, etc. Principal components (PC) are linear combinations of the original variables where the coefficients are the eigenvectors of the covariance matrix.
Values can then be expressed as coordinates along the PC axes. The method makes the assumption that bias-corrected values have the same coordinates along the PC axes of the observations. By converting from the observation PC space to the original space, we get bias-corrected values. See Notes for a mathematical explanation.

Note that "principal components" is meant here as the algebraic operation defining a coordinate system based on the eigenvectors, not statistical principal component analysis.

Parameters

• group (Union[str, Grouper]) – The main dimension and grouping information. See Notes. See xclim.sdba.base.Grouper for details. The adjustment will be performed on each group independently. Default is "time", meaning a single adjustment group along dimension "time".
• best_orientation ({'simple', 'full'}) – Which method to use when searching for the best principal component orientation. See best_pc_orientation_simple() and best_pc_orientation_full(). "full" is more precise, but it is much slower.
• crd_dim (str) – The data dimension along which the multiple simulation space dimensions are taken. For a multivariate adjustment, this usually is "multivar", as returned by sdba.stack_variables. For a multisite adjustment, this should be the spatial dimension. The training algorithm currently doesn't support any chunking along crd_dim, group.dim or group.add_dims.

Notes

The input data is understood as a set of N points in an $$M$$-dimensional space.

• $$M$$ is taken along crd_dim.
• $$N$$ is taken along the dimensions given through group (the main dim but also, if requested, the add_dims and window).

The principal components (PC) of hist and ref are used to define new coordinate systems, centered on their respective means. The training step creates a matrix defining the transformation from hist to ref:

$scen = e_{R} + \mathrm{\mathbf{T}}(sim - e_{H})$

Where:

$\mathrm{\mathbf{T}} = \mathrm{\mathbf{R}}\mathrm{\mathbf{H}}^{-1}$

$$\mathrm{\mathbf{R}}$$ is the matrix transforming from the PC coordinates computed on ref to the data coordinates. Similarly, $$\mathrm{\mathbf{H}}$$ is the transform from the hist PC coordinates to the data coordinates ($$\mathrm{\mathbf{H}}^{-1}$$ is the inverse transformation). $$e_R$$ and $$e_H$$ are the centroids of the ref and hist distributions respectively. Upon running the adjust step, one may decide to use $$e_S$$, the centroid of the sim distribution, instead of $$e_H$$.

References

hnilica2017: Hnilica, J., Hanel, M. and Puš, V. (2017), Multisite bias correction of precipitation data from regional climate models. Int. J. Climatol., 37: 2934-2946. https://doi.org/10.1002/joc.4890

### Quantile delta mapping (QDM)

Adjustment factors are computed between the quantiles of ref and hist. Quantiles of sim are matched to the corresponding quantiles of hist and corrected accordingly.

$sim\frac{F^{-1}_{ref}\left[F_{sim}(sim)\right]}{F^{-1}_{hist}\left[F_{sim}(sim)\right]}$

where $$F$$ is the cumulative distribution function (CDF). This equation is valid for multiplicative adjustment. The algorithm is based on the "QDM" method of [Cannon2015].

Parameters

• Train step
• nquantiles (int or 1d array of floats) – The number of quantiles to use. See equally_spaced_nodes(). An array of quantiles [0, 1] can also be passed. Defaults to 20 quantiles.
• kind ({'+', '*'}) – The adjustment kind, either additive or multiplicative. Defaults to "+".
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. Default is "time", meaning a single adjustment group along dimension "time".
• interp ({'nearest', 'linear', 'cubic'}) – The interpolation method to use when interpolating the adjustment factors. Defaults to "nearest".
• extrapolation ({'constant', 'nan'}) – The type of extrapolation to use. See xclim.sdba.utils.extrapolate_qm() for details. Defaults to "constant".

Extra diagnostics

• quantiles – The quantile of each value of sim. The adjustment factor is interpolated using this as the "quantile" axis on ds.af.

References

Cannon2015: Cannon, A. J., Sobie, S. R., & Murdock, T. Q. (2015). Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes? Journal of Climate, 28(17), 6938–6959. https://doi.org/10.1175/JCLI-D-14-00754.1

### Scaling

Simple bias-adjustment method scaling variables by an additive or multiplicative factor so that the mean of hist matches the mean of ref.

Parameters

• Train step
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. Default is "time", meaning a single adjustment group along dimension "time".
• kind ({'+', '*'}) – The adjustment kind, either additive or multiplicative. Defaults to "+".
• interp ({'nearest', 'linear', 'cubic'}) – The interpolation method to use when interpolating the adjustment factors. Defaults to "nearest".

### Pre and post processing

xclim.sdba.processing.adapt_freq(ref: xr.DataArray, sim: xr.DataArray, *, group: Grouper | str, thresh: str = '0 mm d-1') xr.Dataset[source]

Adapt the frequency of values under thresh of sim, in order to match ref. This is useful when the dry-day frequency in the simulations is higher than in the references. This function will create new non-null values for sim/hist, so that adjustment factors are less wet-biased. Based on [Themessl2012].

Parameters

• ds (xr.Dataset) – With variables: "ref", target/reference data, usually observed data, and "sim", simulated data.
• dim (str) – Dimension name.
• group (Union[str, Grouper]) – Grouping information, see base.Grouper.
• thresh (str) – Threshold below which values are considered zero, a quantity with units.

Returns

• sim_adj (xr.DataArray) – Simulated data with the same frequency of values under threshold as ref. Adjustment is made group-wise.
• pth (xr.DataArray) – For each group, the smallest value of sim that was not frequency-adjusted. All values smaller were either left as zero values or given a random value between thresh and pth. NaN where frequency adaptation wasn't needed.
• dP0 (xr.DataArray) – For each group, the percentage of values that were corrected in sim.

Notes

With $$P_0^r$$ the frequency of values under threshold $$T_0$$ in the reference (ref) and $$P_0^s$$ the same for the simulated values, $$\Delta P_0 = \frac{P_0^s - P_0^r}{P_0^s}$$, when positive, represents the proportion of values under $$T_0$$ that need to be corrected.

The correction replaces a proportion $$\Delta P_0$$ of the values under $$T_0$$ in sim by a uniform random number between $$T_0$$ and $$P_{th}$$, where $$P_{th} = F_{ref}^{-1}( F_{sim}( T_0 ) )$$ and $$F(x)$$ is the empirical cumulative distribution function (CDF).

References

Themessl2012: Themeßl et al. (2012), Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal, Climatic Change, DOI 10.1007/s10584-011-0224-4.

xclim.sdba.processing.construct_moving_yearly_window(da: xarray.Dataset, window: int = 21, step: int = 1, dim: str = 'movingwin')[source]

Construct a moving window DataArray.
Stacks windows of da in a new 'movingwin' dimension. Windows are always made of full years, so calendars with non-uniform year lengths are not supported. Windows are constructed starting at the beginning of da; if the number of given years is not a multiple of step, then the last year(s) will be missing, as a supplementary window would be incomplete.

Parameters

• da (xr.Dataset) – A DataArray with a time dimension.
• window (int) – The length of the moving window as a number of years.
• step (int) – The step between each window as a number of years.
• dim (str) – The new dimension name. If given, must also be given to unpack_moving_yearly_window.

Returns

xr.DataArray – A DataArray with a new movingwin dimension and a time dimension with a length of 1 window. This assumes downstream algorithms do not make use of the _absolute_ year of the data. The correct timeseries can be reconstructed with unpack_moving_yearly_window(). The coordinates of movingwin are the first date of the windows.

xclim.sdba.processing.escore(tgt: xarray.DataArray, sim: xarray.DataArray, dims: Sequence[str] = ('variables', 'time'), N: int = 0, scale: bool = False) xarray.DataArray[source]

Energy score, or energy dissimilarity metric, based on [SzekelyRizzo] and [Cannon18].

Parameters

• tgt (xr.DataArray) – Target observations.
• sim (xr.DataArray) – Candidate observations. Must have the same dimensions as tgt.
• dims (sequence of 2 strings) – The names of the dimensions along which the variables and observation points are listed. tgt and sim can have different lengths along the second one, but must be equal along the first one. The result will keep all other dimensions.
• N (int) – If larger than 0, the number of observations to use in the score computation. The points are taken evenly distributed along obs_dim.
• scale (bool) – Whether to scale the data before computing the score. If True, both arrays are scaled according to the mean and standard deviation of tgt along obs_dim. (std computed with ddof=1 and both statistics excluding NaN values.)

Returns

xr.DataArray – e-score with dimensions not in dims.

Notes

Explanation adapted from the "energy" R package documentation. The e-distance between two clusters $$C_i$$, $$C_j$$ (tgt and sim) of size $$n_i, n_j$$ proposed by Székely and Rizzo (2004) is defined by:

$e(C_i,C_j) = \frac{1}{2}\frac{n_i n_j}{n_i + n_j} \left[2 M_{ij} - M_{ii} - M_{jj}\right]$

where

$M_{ij} = \frac{1}{n_i n_j} \sum_{p = 1}^{n_i} \sum_{q = 1}^{n_j} \left\Vert X_{ip} - X_{jq} \right\Vert.$

$$\Vert\cdot\Vert$$ denotes the Euclidean norm, $$X_{ip}$$ denotes the p-th observation in the i-th cluster. The input scaling and the factor $$\frac{1}{2}$$ in the first equation are additions of [Cannon18] to the metric. With that factor, the test becomes identical to the one defined by [BaringhausFranz]. This version is tested against values taken from Alex Cannon's MBC R package.

References

BaringhausFranz: Baringhaus, L. and Franz, C. (2004) On a new multivariate two-sample test, Journal of Multivariate Analysis, 88(1), 190–206. https://doi.org/10.1016/s0047-259x(03)00079-4

Cannon18: Cannon, A. J. (2018). Multivariate quantile mapping bias correction: An N-dimensional probability density function transform for climate model simulations of multiple variables. Climate Dynamics, 50(1), 31–49. https://doi.org/10.1007/s00382-017-3580-6

SzekelyRizzo: Székely, G. J. and Rizzo, M. L.
(2004) Testing for Equal Distributions in High Dimension, InterStat, November (5).

xclim.sdba.processing.from_additive_space(data: xarray.DataArray, lower_bound: Optional[str] = None, upper_bound: Optional[str] = None, trans: Optional[str] = None, units: Optional[str] = None)[source]
Transform back to the physical space a variable that was transformed with to_additive_space(). Based on [AlavoineGrenier]. If the parameters are not present in the attributes of the data, they must all be given as arguments.
Parameters
• data (xr.DataArray) – A variable that was transformed by to_additive_space().
• lower_bound (str, optional) – The smallest physical value of the variable, as a Quantity string. The final data will have no value smaller than or equal to this bound. If None (default), the sdba_transform_lower attribute is looked up on data.
• upper_bound (str, optional) – The largest physical value of the variable, as a Quantity string. Only relevant for the logit transformation. The final data will have no value larger than or equal to this bound. If None (default), the sdba_transform_upper attribute is looked up on data.
• trans ({‘log’, ‘logit’}, optional) – The transformation to use. See notes. If None (the default), the sdba_transform attribute is looked up on data.
• units (str, optional) – The units of the data before transformation to the additive space. If None (the default), the sdba_transform_units attribute is looked up on data.
Returns
xr.DataArray – The physical variable. Attributes are conserved, even if some might be incorrect, except units, which are taken from sdba_transform_units if available. All sdba_transform* attributes are deleted.
Notes
Given a variable that is not usable in an additive adjustment, to_additive_space() applied a transformation to a space where additive methods are sensible. Given $$Y$$ the transformed variable, $$b_-$$ the lower physical bound of that variable and $$b_+$$ the upper physical bound, two back-transformations are currently implemented to get $$X$$, the physical variable.
• log
$X = e^{Y} + b_-$
• logit
$X' = \frac{1}{1 + e^{-Y}}, \qquad X = X' \cdot (b_+ - b_-) + b_-$
See to_additive_space() for the original transformation.
References
AlavoineGrenier Alavoine M., and Grenier P. (under review) The distinct problems of physical inconsistency and of multivariate bias potentially involved in the statistical adjustment of climate simulations. International Journal of Climatology, Manuscript ID: JOC-21-0789, submitted on September 19th 2021. (Preprint https://doi.org/10.31223/X5C34C)

xclim.sdba.processing.jitter(x: xarray.DataArray, lower: Optional[str] = None, upper: Optional[str] = None, minimum: Optional[str] = None, maximum: Optional[str] = None) → xarray.DataArray[source]
Replace values under a threshold and values above another threshold by uniform random noise. Do not confuse with R’s jitter, which adds uniform noise instead of replacing values.
Parameters
• x (xr.DataArray) – Values.
• lower (str) – Threshold under which to add uniform random noise to values, a quantity with units. If None, no jittering is performed on the lower end.
• upper (str) – Threshold over which to add uniform random noise to values, a quantity with units. If None, no jittering is performed on the upper end.
• minimum (str) – Lower limit (excluded) for the lower end random noise, a quantity with units. If None but lower is not None, 0 is used.
• maximum (str) – Upper limit (excluded) for the upper end random noise, a quantity with units. If upper is not None, it must be given.
Returns
xr.DataArray – Same as x but values < lower are replaced by a uniform noise in range (minimum, lower) and values >= upper are replaced by a uniform noise in range [upper, maximum). The two noise distributions are independent.

xclim.sdba.processing.jitter_over_thresh(x: xarray.DataArray, thresh: str, upper_bnd: str) → xarray.DataArray[source]
Replace values greater than the threshold by uniform random noise. Do not confuse with R’s jitter, which adds uniform noise instead of replacing values.
Parameters
• x (xr.DataArray) – Values.
• thresh (str) – Threshold over which to add uniform random noise to values, a quantity with units.
• upper_bnd (str) – Maximum possible value for the random noise, a quantity with units.
Returns
xr.DataArray
Notes
If thresh is low, this will change the mean value of x.

xclim.sdba.processing.jitter_under_thresh(x: xarray.DataArray, thresh: str) → xarray.DataArray[source]
Replace values smaller than the threshold by uniform random noise. Do not confuse with R’s jitter, which adds uniform noise instead of replacing values.
Parameters
• x (xr.DataArray) – Values.
• thresh (str) – Threshold under which to add uniform random noise to values, a quantity with units.
Returns
xr.DataArray
Notes
If thresh is high, this will change the mean value of x.

xclim.sdba.processing.normalize(data: xr.DataArray, norm: xr.DataArray | None = None, *, group: Grouper | str, kind: str = '+') → xr.Dataset[source]
Normalize an array by removing its mean. Normalization is performed group-wise and according to kind.
Parameters
• data (xr.DataArray) – The variable to normalize.
• norm (xr.DataArray, optional) – If present, it is used instead of computing the norm again.
• group (Union[str, Grouper]) – Grouping information. See xclim.sdba.base.Grouper for details.
• kind ({‘+’, ‘*’}) – If kind is “+”, the mean is subtracted from the data; if it is “*”, the data is divided by the mean.
Returns
• xr.DataArray – Groupwise anomaly.
• norm (xr.DataArray) – Mean over each group.

xclim.sdba.processing.reordering(ref: xarray.DataArray, sim: xarray.DataArray, group: str = 'time') → xarray.Dataset[source]
Reorder data in sim following the order of ref. The rank structure of ref is used to reorder the elements of sim along dimension “time”, optionally doing the operation group-wise.
Parameters
• sim (xr.DataArray) – Array to reorder.
• ref (xr.DataArray) – Array whose rank order sim should replicate.
• group (str) – Grouping information. See xclim.sdba.base.Grouper for details.
Returns
xr.Dataset – sim reordered according to ref’s rank order.
References
Cannon18 Cannon, A. J. (2018). Multivariate quantile mapping bias correction: An N-dimensional probability density function transform for climate model simulations of multiple variables. Climate Dynamics, 50(1), 31–49. https://doi.org/10.1007/s00382-017-3580-6

xclim.sdba.processing.stack_variables(ds: xarray.Dataset, rechunk: bool = True, dim: str = 'multivar')[source]
Stack different variables of a dataset into a single DataArray with a new “variables” dimension. Variable attributes are all added as lists of attributes to the new coordinate, prefixed with “_”. Variables are concatenated in the new dimension in alphabetical order, to ensure coherent behaviour with different datasets.
Parameters
• ds (xr.Dataset) – Input dataset.
• rechunk (bool) – If True (default), dask arrays are rechunked with variables : -1.
• dim (str) – Name of dimension along which variables are indexed.
Returns
xr.DataArray – Array with variables stacked along the dim dimension. Units are set to “”.
xclim.sdba.processing.standardize(da: xr.DataArray, mean: xr.DataArray | None = None, std: xr.DataArray | None = None, dim: str = 'time') → tuple[xr.DataArray | xr.Dataset, xr.DataArray, xr.DataArray][source]
Standardize a DataArray by centering its mean and scaling it by its standard deviation. Either or both of mean and std can be provided if need be. Returns the standardized data, the mean and the standard deviation.

xclim.sdba.processing.to_additive_space(data: xarray.DataArray, lower_bound: str, upper_bound: Optional[str] = None, trans: str = 'log')[source]
Transform a non-additive variable into an additive space by means of a log or logit transformation. Based on [AlavoineGrenier].
Parameters
• data (xr.DataArray) – A variable that can’t usually be bias-adjusted by additive methods.
• lower_bound (str) – The smallest physical value of the variable, excluded, as a Quantity string. The data should only have values strictly larger than this bound.
• upper_bound (str, optional) – The largest physical value of the variable, excluded, as a Quantity string. Only relevant for the logit transformation. The data should only have values strictly smaller than this bound.
• trans ({‘log’, ‘logit’}) – The transformation to use. See notes.
Returns
xr.DataArray – The transformed variable. Attributes are conserved, even if some might be incorrect, except units, which are replaced with “”. Old units are stored in sdba_transform_units. A sdba_transform attribute is added, set to the transformation method. sdba_transform_lower and sdba_transform_upper are also set if the requested bounds are different from the defaults.
Notes
Given a variable that is not usable in an additive adjustment, this applies a transformation to a space where additive methods are sensible. Given $$X$$ the variable, $$b_-$$ the lower physical bound of that variable and $$b_+$$ the upper physical bound, two transformations are currently implemented to get $$Y$$, the additive-ready variable. $$\ln$$ is the natural logarithm.
• log
$Y = \ln\left( X - b_- \right)$
Usually used for variables with only a lower bound, like precipitation (pr, prsn, etc.) and daily temperature range (dtr). Both have a lower bound of 0.
• logit
$X' = \frac{X - b_-}{b_+ - b_-}, \qquad Y = \ln\left(\frac{X'}{1 - X'} \right)$
Usually used for variables with both a lower and an upper bound, like relative and specific humidity, cloud cover fraction, etc.
This will thus produce infinite and NaN values where $$X = b_-$$ or $$X = b_+$$. We recommend using jitter_under_thresh() and jitter_over_thresh() to remove those issues.
See also: from_additive_space() for the inverse transformation; jitter_under_thresh() to remove values exactly equal to the lower bound; jitter_over_thresh() to remove values exactly equal to the upper bound.
References
AlavoineGrenier Alavoine M., and Grenier P. (under review) The distinct problems of physical inconsistency and of multivariate bias potentially involved in the statistical adjustment of climate simulations. International Journal of Climatology, Manuscript ID: JOC-21-0789, submitted on September 19th 2021. (Preprint https://doi.org/10.31223/X5C34C)

xclim.sdba.processing.uniform_noise_like(da: xarray.DataArray, low: float = 1e-06, high: float = 0.001) → xarray.DataArray[source]
Return a uniform noise array of the same shape as da. Noise is uniformly distributed between low and high. Alternative method to jitter_under_thresh for avoiding zeroes.
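As a quick illustration of the to_additive_space()/from_additive_space() pair documented above, here is a minimal sketch; the humidity-like data and its values are made up for the example:

```python
import numpy as np
import xarray as xr
from xclim.sdba import processing

# Synthetic relative-humidity-like data in percent, kept strictly inside
# the physical bounds so the logit transform stays finite.
time = xr.cftime_range("2000-01-01", periods=365)
hurs = xr.DataArray(
    np.random.uniform(1, 99, time.size),
    dims=("time",),
    coords={"time": time},
    attrs={"units": "%"},
)

# Map to an unbounded additive space, then invert the transformation.
add = processing.to_additive_space(
    hurs, lower_bound="0 %", upper_bound="100 %", trans="logit"
)
back = processing.from_additive_space(add)  # bounds/units read from attrs
np.testing.assert_allclose(back, hurs)  # the round trip recovers the data
```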
xclim.sdba.processing.unpack_moving_yearly_window(da: xarray.DataArray, dim: str = 'movingwin', append_ends: bool = True)[source]
Unpack a constructed moving window dataset to a normal timeseries, only keeping the central data. Unpack DataArrays created with construct_moving_yearly_window() and recreate the timeseries data. If append_ends is False, only the central non-overlapping years are kept: the final timeseries will be (window - step) years shorter than the initial one. If append_ends is True, the time points from the first and last windows will be included in the final timeseries. Time points that are not in any window will never be included in the final timeseries. The window length and window step are inferred from the coordinates.
Parameters
• da (xr.DataArray) – As constructed by construct_moving_yearly_window().
• dim (str) – The window dimension name as given to the construction function.
• append_ends (bool) – Whether to append the ends of the timeseries. If False, the final timeseries will be (window - step) years shorter than the initial one, but all windows will contribute equally. If True, the years before the middle years of the first window and the years after the middle years of the last window are appended to the middle years. The final timeseries will be the same length as the initial timeseries if the windows span the whole timeseries. The time steps that are not in a window will be left out of the final timeseries.

xclim.sdba.processing.unstack_variables(da: xarray.DataArray, dim: Optional[str] = None)[source]
Unstack a DataArray created by stack_variables to a dataset.
Parameters
• da (xr.DataArray) – Array holding different variables along dim dimension.
• dim (str) – Name of dimension along which the variables are stacked. If not specified (default), dim is inferred from attributes of the coordinate.
Returns
xr.Dataset – Dataset holding each variable in an individual DataArray.

xclim.sdba.processing.unstandardize(da: xarray.DataArray, mean: xarray.DataArray, std: xarray.DataArray)[source]
Rescale a standardized array by performing the inverse operation of standardize.

### Detrending objects

class xclim.sdba.detrending.LoessDetrend(group='time', kind='+', f=0.2, niter=1, d=0, weights='tricube', equal_spacing=None, skipna=True)[source]
Detrend time series using a LOESS regression. The fit is a piecewise linear regression. For each point, the contribution of all neighbors is weighted by a bell-shaped curve (gaussian) with parameters sigma (std). The x-coordinate of the DataArray is scaled to [0, 1] before the regression is computed.
Parameters
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. The fit is performed along the group’s main dim.
• kind ({‘*’, ‘+’}) – The way the trend is removed or added, either additive or multiplicative.
• d ([0, 1]) – Order of the local regression. Only 0 and 1 are currently implemented.
• f (float) – Parameter controlling the span of the weights, between 0 and 1.
• niter (int) – Number of robustness iterations to execute.
• weights ([“tricube”, “gaussian”]) – Shape of the weighting function: “tricube” : a smooth top-hat-like curve, f gives the span of non-zero values. “gaussian” : a gaussian curve, f gives the span for 95% of the values.
• skipna (bool) – If True (default), missing values are not included in the loess trend computation and thus are not propagated. The output will have the same missing values as the input.
Notes
LOESS smoothing is computationally expensive.
As it relies on a loop on gridpoints, it can be useful to use smaller than usual chunks. Moreover, it suffers from heavy boundary effects. As a rule of thumb, the outermost N * f/2 points should be considered dubious (N is the number of points along each group).

class xclim.sdba.detrending.MeanDetrend(*, group: Grouper | str = 'time', kind: str = '+', **kwargs)[source]
Simple detrending removing only the mean from the data, quite similar to normalizing.

class xclim.sdba.detrending.NoDetrend(*, group: Grouper | str = 'time', kind: str = '+', **kwargs)[source]
Convenience class for polymorphism. Does nothing.

class xclim.sdba.detrending.PolyDetrend(group='time', kind='+', degree=4, preserve_mean=False)[source]
Detrend time series using a polynomial regression.
Parameters
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. The fit is performed along the group’s main dim.
• kind ({‘*’, ‘+’}) – The way the trend is removed or added, either additive or multiplicative.
• degree (int) – The order of the polynomial to fit.
• preserve_mean (bool) – Whether to preserve the mean when de/re-trending. If True, the trend has its mean removed before it is used.

class xclim.sdba.detrending.RollingMeanDetrend(group='time', kind='+', win=30, weights=None, min_periods=None)[source]
Detrend time series using a rolling mean.
Parameters
• group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details. The fit is performed along the group’s main dim.
• kind ({‘*’, ‘+’}) – The way the trend is removed or added, either additive or multiplicative.
• win (int) – The size of the rolling window. Units are the steps of the grouped data, which means this detrending is best used with either group=’time’ or group=’time.dayofyear’. Other groupings will have large jumps included within the windows and LoessDetrend might offer a better solution.
• weights (sequence of floats, optional) – Sequence of length win. Defaults to None, which means a flat window.
• min_periods (int, optional) – Minimum number of observations in the window required to have a value, otherwise the result is NaN. See xarray.DataArray.rolling(). Defaults to None, which sets it equal to win. Setting both weights and min_periods is not implemented yet.
Notes
As for the LoessDetrend detrending, important boundary effects are to be expected.

### sdba utilities

xclim.sdba.utils.add_cyclic_bounds(da: xr.DataArray, att: str, cyclic_coords: bool = True) → xr.DataArray | xr.Dataset[source]
Reindex an array to include the last slice at the beginning and the first at the end. This is done to allow interpolation near the end-points.
Parameters
• da (Union[xr.DataArray, xr.Dataset]) – An array.
• att (str) – The name of the coordinate to make cyclic.
• cyclic_coords (bool) – If True, the coordinates are made cyclic as well; if False, the new values are guessed using the same step as their neighbour.
Returns
Union[xr.DataArray, xr.Dataset] – da but with the last element along att prepended and the first one appended.

xclim.sdba.utils.apply_correction(x: xr.DataArray, factor: xr.DataArray, kind: str | None = None) → xr.DataArray[source]
If kind is not given, default to the one stored in the “kind” attribute of factor.

xclim.sdba.utils.best_pc_orientation_full(R: numpy.ndarray, Hinv: numpy.ndarray, Rmean: numpy.ndarray, Hmean: numpy.ndarray, hist: numpy.ndarray)[source]
Return the best orientation vector according to the method of Alavoine et al. (2021, preprint).
Eigenvectors returned by pc_matrix do not have a defined orientation. Given an inverse transform Hinv, a transform R, the actual and target origins Hmean and Rmean and the matrix of training observations hist, this computes a scenario for all possible orientations and return the orientation that maximizes the Spearman correlation coefficient of all variables. The correlation is computed for each variable individually, then averaged. This trick is explained in [alavoine2021]. See documentation of sdba.adjustment.PrincipalComponentAdjustment(). Parameters • R (np.ndarray) – MxM Matrix defining the final transformation. • Hinv (np.ndarray) – MxM Matrix defining the (inverse) first transformation. • Rmean (np.ndarray) – M vector defining the target distribution center point. • Hmean (np.ndarray) – M vector defining the original distribution center point. • hist (np.ndarray) – MxN matrix of all training observations of the M variables/sites. Returns np.ndarray – M vector of orientation correction (1 or -1). References alavoine2021 Alavoine, M., & Grenier, P. (2021). The distinct problems of physical inconsistency and of multivariate bias potentially involved in the statistical adjustment of climate simulations. https://eartharxiv.org/repository/view/2876/ xclim.sdba.utils.best_pc_orientation_simple(R: numpy.ndarray, Hinv: numpy.ndarray, val: float = 1000) [source] Return best orientation vector according to a simple test. Eigenvectors returned by pc_matrix do not have a defined orientation. Given an inverse transform Hinv and a transform R, this returns the orientation minimizing the projected distance for a test point far from the origin. This trick is inspired by the one exposed in [hnilica2017]. For each possible orientation vector, the test point is reprojected and the distance from the original point is computed. The orientation minimizing that distance is chosen. See documentation of sdba.adjustment.PrincipalComponentAdjustment. Parameters • R (np.ndarray) – MxM Matrix defining the final transformation. • Hinv (np.ndarray) – MxM Matrix defining the (inverse) first transformation. • val (float) – The coordinate of the test point (same for all axes). It should be much greater than the largest furthest point in the array used to define B. Returns np.ndarray – Mx1 vector of orientation correction (1 or -1). References hnilica2017 Hnilica, J., Hanel, M. and Pš, V. (2017), Multisite bias correction of precipitation data from regional climate models. Int. J. Climatol., 37: 2934-2946. https://doi.org/10.1002/joc.4890 xclim.sdba.utils.broadcast(grouped: xr.DataArray, x: xr.DataArray, *, group: str | Grouper = 'time', interp: str = 'nearest', sel: Mapping[str, xr.DataArray] | None = None) xr.DataArray[source] Broadcast a grouped array back to the same shape as a given array. Parameters • grouped (xr.DataArray) – The grouped array to broadcast like x. • x (xr.DataArray) – The array to broadcast grouped to. • group (Union[str, Grouper]) – Grouping information. See xclim.sdba.base.Grouper for details. • interp ({‘nearest’, ‘linear’, ‘cubic’}) – The interpolation method to use, • sel (Mapping[str, xr.DataArray]) – Mapping of grouped coordinates to x coordinates (other than the grouping one). Returns xr.DataArray xclim.sdba.utils.copy_all_attrs(ds: xr.Dataset | xr.DataArray, ref: xr.Dataset | xr.DataArray)[source] Copies all attributes of ds to ref, including attributes of shared coordinates, and variables in the case of Datasets. 
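To make the grouped-to-timeseries logic of broadcast() concrete, a small sketch on synthetic data (variable names are illustrative only):

```python
import numpy as np
import xarray as xr
from xclim.sdba.base import Grouper
from xclim.sdba.utils import broadcast

time = xr.cftime_range("2000-01-01", periods=730)
x = xr.DataArray(np.random.rand(time.size), dims=("time",), coords={"time": time})

group = Grouper("time.month")
monthly = group.apply("mean", x)  # one value per month (12 groups)
# Put the 12 monthly values back onto the daily time axis of ``x``.
daily = broadcast(monthly, x, group=group, interp="nearest")
```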
xclim.sdba.utils.ecdf(x: xarray.DataArray, value: float, dim: str = 'time') → xarray.DataArray[source]
Return the empirical CDF of a sample at a given value.
Parameters
• x (array) – Sample.
• value (float) – The value within the support of x for which to compute the CDF value.
• dim (str) – Dimension name.
Returns
xr.DataArray – Empirical CDF.

xclim.sdba.utils.ensure_longest_doy(func: Callable) → Callable[source]
Ensure that the selected day is the longest day of year for x and y dims.

xclim.sdba.utils.equally_spaced_nodes(n: int, eps: float | None = None) → np.array[source]
Return nodes with n equally spaced points within [0, 1], optionally adding two end-points.
Parameters
• n (int) – Number of equally spaced nodes.
• eps (float, optional) – Distance from 0 and 1 of the added end nodes. If None (default), do not add endpoints.
Returns
np.array – Nodes between 0 and 1. Nodes can be seen as the middle points of n equal bins.
Warning
Passing a small eps will effectively clip the scenario to the bounds of the reference on the historical period in most cases. With normal quantile mapping algorithms, this can give strange results when the reference does not show as many extremes as the simulation does.
Notes
For n=4, eps=0 : 0—x——x——x——x—1

xclim.sdba.utils.get_clusters(data: xarray.DataArray, u1, u2, dim: str = 'time') → xarray.Dataset[source]
Get cluster count, maximum and position along a given dim.
Parameters
• data (1D ndarray) – Values to get clusters from.
• u1 (float) – Extreme value threshold, at least one value in the cluster must exceed this.
• u2 (float) – Cluster threshold, values above this can be part of a cluster.
• dim (str) – Dimension name.
Returns
xr.Dataset – With variables:
• nclusters : Number of clusters for each point (with dim reduced), int
• start : First index in the cluster (dim reduced, new cluster), int
• end : Last index in the cluster, inclusive (dim reduced, new cluster), int
• maxpos : Index of the maximal value within the cluster (dim reduced, new cluster), int
• maximum : Maximal value within the cluster (dim reduced, new cluster), same dtype as data.
For start, end and maxpos, -1 means NaN and should always correspond to a NaN in maximum. The length along cluster is half the size of “dim”, the maximal theoretical number of clusters.

xclim.sdba.utils.get_clusters_1d(data: np.ndarray, u1: float, u2: float) → tuple[np.array, np.array, np.array, np.array][source]
Get clusters of a 1D array. A cluster is defined as a sequence of values larger than u2 with at least one value larger than u1.
Parameters
• data (1D ndarray) – Values to get clusters from.
• u1 (float) – Extreme value threshold, at least one value in the cluster must exceed this.
• u2 (float) – Cluster threshold, values above this can be part of a cluster.
Returns
(np.array, np.array, np.array, np.array)
References
getcluster of Extremes.jl (read on 2021-04-20) https://github.com/jojal5/Extremes.jl

xclim.sdba.utils.get_correction(x: xarray.DataArray, y: xarray.DataArray, kind: str) → xarray.DataArray[source]

xclim.sdba.utils.interp_on_quantiles(newx: xr.DataArray, xq: xr.DataArray, yq: xr.DataArray, *, group: str | Grouper = 'time', method: str = 'linear', extrapolation: str = 'constant')[source]
Interpolate values of yq on new values of newx. Interpolate in 2D with griddata() if grouping is used, in 1D otherwise, with interp1d. Any NaNs in xq or yq are removed from the input map. Similarly, NaNs in newx are left as NaNs.
Parameters
• newx (xr.DataArray) – The values at which to evaluate yq.
If group has group information, newx should have a coordinate with the same name as the group name. In that case, 2D interpolation is used.
• xq, yq (xr.DataArray) – Coordinates and values on which to interpolate. The interpolation is done along the “quantiles” dimension if group has no group information. If it does, interpolation is done in 2D on “quantiles” and on the group dimension.
• group (Union[str, Grouper]) – The dimension and grouping information (ex: “time” or “time.month”). Defaults to “time”.
• method ({‘nearest’, ‘linear’, ‘cubic’}) – The interpolation method.
• extrapolation ({‘constant’, ‘nan’}) – The extrapolation method used for values of newx outside the range of xq. See notes.
Notes
Extrapolation methods:
• ‘nan’ : Any value of newx outside the range of xq is set to NaN.
• ‘constant’ : Values of newx smaller than the minimum of xq are set to the first value of yq and those larger than the maximum are set to the last one (first and last non-NaN values along the “quantiles” dimension). When the grouping is “time.month”, these limits are linearly interpolated along the month dimension.

xclim.sdba.utils.invert(x: xr.DataArray, kind: str | None = None) → xr.DataArray[source]
Invert a DataArray either additively (-x) or multiplicatively (1/x). If kind is not given, default to the one stored in the “kind” attribute of x.

xclim.sdba.utils.map_cdf(ds: xarray.Dataset, *, y_value: xarray.DataArray, dim)[source]
Return the value in x with the same CDF as y_value in y. This function is meant to be wrapped in a Grouper.apply.
Parameters
• ds (xr.Dataset) – Variables: x, values from which to pick; y, reference values giving the ranking.
• y_value (float, array) – Value within the support of y.
• dim (str) – Dimension along which to compute the quantile.
Returns
array – Quantile of x with the same CDF as y_value in y.

xclim.sdba.utils.map_cdf_1d(x, y, y_value)[source]
Return the value in x with the same CDF as y_value in y.

xclim.sdba.utils.pc_matrix(arr: np.ndarray | dsk.Array) → np.ndarray | dsk.Array[source]
Construct a Principal Component matrix. This matrix can be used to transform points in arr to principal components coordinates. Note that this function does not manage NaNs; if a single observation is null, all elements of the transformation matrix involving that variable will be NaN.
Parameters
arr (numpy.ndarray or dask.array.Array) – 2D array (M, N) of the M coordinates of N points.
Returns
numpy.ndarray or dask.array.Array – MxM array of the same type as arr.

xclim.sdba.utils.rand_rot_matrix(crd: xr.DataArray, num: int = 1, new_dim: str | None = None) → xr.DataArray[source]
Generate random rotation matrices. Rotation matrices are members of the SO(n) group, where n is the matrix size (crd.size). They can be characterized as orthogonal matrices with determinant 1. A square matrix $$R$$ is a rotation matrix if and only if $$R^t = R^{-1}$$ and $$\mathrm{det}\, R = 1$$.
Parameters
• crd (xr.DataArray) – 1D coordinate DataArray along which the rotation occurs. The output will be square with the same coordinate replicated, the second renamed to new_dim.
• num (int) – If larger than 1, the number of matrices to generate, stacked along a “matrices” dimension. Defaults to 1.
• new_dim (str) – Name of the new “prime” dimension, defaults to the same name as crd + “_prime”.
Returns
xr.DataArray – float, NxN if num = 1, num x N x N otherwise, where N is the length of crd.
References
Mezzadri, F. (2006). How to generate random matrices from the classical compact groups. arXiv preprint math-ph/0609050.
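A small sketch checking the defining properties quoted above; the coordinate values here are arbitrary and only serve to size the matrix:

```python
import numpy as np
import xarray as xr
from xclim.sdba.utils import rand_rot_matrix

# A 1D coordinate along which the rotation acts; values are arbitrary.
crd = xr.DataArray(
    ["tas", "pr", "hurs"], dims=("multivar",),
    coords={"multivar": ["tas", "pr", "hurs"]}, name="multivar",
)
R = rand_rot_matrix(crd, num=1).values  # 3x3 rotation matrix

# R should be orthogonal with determinant 1.
np.testing.assert_allclose(R @ R.T, np.eye(3), atol=1e-12)
np.testing.assert_allclose(np.linalg.det(R), 1.0)
```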
xclim.sdba.utils.rank(da: xarray.DataArray, dim: str = 'time', pct: bool = False) → xarray.DataArray[source]
Rank data along a dimension. Replicates xr.DataArray.rank but as a function usable in a Grouper.apply(). Xarray’s docstring is below: Equal values are assigned a rank that is the average of the ranks that would have been otherwise assigned to all the values within that set. Ranks begin at 1, not 0. If pct, computes percentage ranks.
Parameters
• da (xr.DataArray) – Source array.
• dim (str, hashable) – Dimension over which to compute rank.
• pct (bool, optional) – If True, compute percentage ranks, otherwise compute integer ranks.
Returns
DataArray – DataArray with the same coordinates and dtype ‘float64’.
Notes
The bottleneck library is required. NaNs in the input array are returned as NaNs.

class xclim.sdba.base.Grouper(group: str, window: int = 1, add_dims: Sequence[str] | set[str] | None = None)[source]
Create the Grouper object.
Parameters
• group (str) – The usual grouping name as xarray understands it. Ex: “time.month” or “time”. The dimension name before the dot is the “main dimension” stored in Grouper.dim and the property name after is stored in Grouper.prop.
• window (int) – If larger than 1, a centered rolling window along the main dimension is created when grouping data. Units are the sampling frequency of the data along the main dimension.
• add_dims (Optional[Union[Sequence[str], str]]) – Additional dimensions that should be reduced in grouping operations. This behaviour is also controlled by the main_only parameter of the apply method. If any of these dimensions are absent from the dataarrays, they will be omitted.

apply(func: FunctionType | str, da: xr.DataArray | Mapping[str, xr.DataArray] | xr.Dataset, main_only: bool = False, **kwargs)[source]
Apply a function group-wise on DataArrays.
Parameters
• func (Union[FunctionType, str]) – The function to apply to the groups, either a callable or a xr.core.groupby.GroupBy method name as a string. The function will be called as func(group, dim=dims, **kwargs). See main_only for the behaviour of dims.
• da (Union[xr.DataArray, Mapping[str, xr.DataArray], xr.Dataset]) – The DataArray on which to apply the function. Multiple arrays can be passed through a dictionary. A dataset will be created before grouping.
• main_only (bool) – Whether to call the function with the main dimension only (if True) or with all grouping dims (if False, default), including the window and dimensions given through add_dims. The dimensions used are also written in the “group_compute_dims” attribute. If all the input arrays are missing one of the ‘add_dims’, it is silently omitted.
• kwargs – Other keyword arguments to pass to the function.
Returns
DataArray or Dataset – Attributes “group”, “group_window” and “group_compute_dims” are added. If the function did not reduce the array:
• The output is sorted along the main dimension.
• The output is rechunked to match the chunks on the input. If multiple inputs with differing chunking were given, the chunking with the smallest number of chunks is used.
If the function reduces the array:
• If there is only one group, the singleton dimension is squeezed out of the output.
• The output is rechunked so as to have only 1 chunk along the new dimension.
Notes
For the special case where a Dataset is returned, but only some of its variables were reduced by the grouping, xarray’s GroupBy.map will broadcast everything back to the ungrouped dimensions.
To overcome this issue, the function may add a “_group_apply_reshape” attribute set to True on the variables that should be reduced, and these will be re-grouped by calling da.groupby(self.name).first().

property freq
The frequency string corresponding to the group. For use with xarray’s resampling.

get_coordinate(ds=None)[source]
Return the coordinate as in the output of group.apply. Currently, only implemented for groupings with prop == month or dayofyear. For prop == dayofyear, a ds (Dataset or DataArray) can be passed to infer the max doy from the available years and calendar.

get_index(da: xr.DataArray | xr.Dataset, interp: bool | None = None)[source]
Return the group index of each element along the main dimension.
Parameters
• da (Union[xr.DataArray, xr.Dataset]) – The input array/dataset for which the group index is returned. It must have Grouper.dim as a coordinate.
• interp (bool, optional) – If True, the returned index can be used for interpolation. Only valid for month grouping, where integer values represent the middle of the month; all other days are linearly interpolated in between.
Returns
xr.DataArray – The index of each element along Grouper.dim. If Grouper.dim is time and Grouper.prop is None, a uniform array of True is returned. If Grouper.prop is a time accessor (month, dayofyear, etc.), a numerical array is returned, with a special case of month and interp=True. If Grouper.dim is not time, the dim is simply returned.

group(da: xr.DataArray | xr.Dataset = None, main_only=False, **das: xr.DataArray)[source]
Return a xr.core.groupby.GroupBy object. More than one array can be combined to a dataset before grouping using the das kwargs. A new window dimension is added if self.window is larger than 1. If Grouper.dim is ‘time’, but ‘prop’ is None, the whole array is grouped together. When multiple arrays are passed, some of them can be grouped along the same group as self. They are broadcast, merged to the grouping dataset and regrouped in the output.

property prop_name
A descriptive name of the grouping.

### Numba-accelerated utilities

xclim.sdba.nbutils.quantile(da, q, dim)[source]
Compute the quantiles from a fixed list “q”.

xclim.sdba.nbutils.vecquantiles(da, rnk, dim)[source]
For when the quantile (rnk) is different for each point. da and rnk must share all dimensions but dim.

### LOESS smoothing

xclim.sdba.loess.loess_smoothing(da: xr.DataArray, dim: str = 'time', d: int = 1, f: float = 0.5, niter: int = 2, weights: str | Callable = 'tricube', equal_spacing: bool | None = None, skipna: bool = True)[source]
Locally weighted regression in 1D: fits a nonparametric regression curve to a scatterplot. Returns a smoothed curve along the given dimension. The regression is computed for each point using a subset of neighboring points as given from evaluating the weighting function locally. Follows the procedure of [Cleveland1979].
Parameters
• da (xr.DataArray) – The data to smooth using the loess approach.
• dim (str) – Name of the dimension along which to perform the loess.
• d ([0, 1]) – Degree of the local regression.
• f (float) – Parameter controlling the shape of the weight curve. Behavior depends on the weighting function, but it usually represents the span of the weighting function in reference to x-coordinates normalized from 0 to 1.
• niter (int) – Number of robustness iterations to execute.
• weights ([“tricube”, “gaussian”] or callable) – Shape of the weighting function, see notes. The user can provide a function or a string: “tricube” : a smooth top-hat-like curve.
“gaussian” : a gaussian curve, f gives the span for 95% of the values.
• equal_spacing (bool, optional) – Whether to use the equal spacing optimization. If None (the default), it is activated only if the x-axis is equally spaced. When activated, dx = x[1] - x[0].
• skipna (bool) – If True (default), skip missing values (as marked by NaN). The output will have the same missing values as the input.
Notes
As stated in [Cleveland1979], the weighting function $$W(x)$$ should respect the following conditions:
• $$W(x) > 0$$ for $$|x| < 1$$
• $$W(-x) = W(x)$$
• $$W(x)$$ is non-increasing for $$x \ge 0$$
• $$W(x) = 0$$ for $$|x| \ge 1$$
If a Callable is provided, it should only accept the 1D np.ndarray $$x$$ which is an absolute value function going from 1 to 0 to 1 around $$x_i$$, for all values where $$x - x_i < h_i$$ with $$h_i$$ the distance of the r-th nearest neighbor of $$x_i$$, $$r = f * size(x)$$.
References
Cleveland1979(1,2) Cleveland, W. S., 1979. Robust Locally Weighted Regression and Smoothing Scatterplots, Journal of the American Statistical Association 74, 829–836.

### Properties submodule

SDBA diagnostic tests are made up of statistical properties and measures. Properties are calculated on both simulation and reference datasets. They collapse the time dimension to one value. This framework for the diagnostic tests was inspired by the [VALUE] project. “Statistical properties” is the xclim term for ‘indices’ in the VALUE project.
VALUE http://www.value-cost.eu/

xclim.sdba.properties.STATISTICAL_PROPERTIES: dict[str, Callable] = {'acf': <function acf>, 'annual_cycle_amplitude': <function annual_cycle_amplitude>, 'annual_cycle_phase': <function annual_cycle_phase>, 'corr_btw_var': <function corr_btw_var>, 'mean': <function mean>, 'quantile': <function quantile>, 'relative_frequency': <function relative_frequency>, 'return_value': <function return_value>, 'skewness': <function skewness>, 'spell_length_distribution': <function spell_length_distribution>, 'trend': <function trend>, 'var': <function var>}
Dictionary of all the statistical properties available.

xclim.sdba.properties.acf(da: xr.DataArray, *, lag: int = 1, group: str | Grouper = 'time.season') → xr.DataArray[source]
Autocorrelation function. Autocorrelation with a lag over a time resolution and averaged over all years.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• lag (int) – Lag.
• group ({‘time.season’, ‘time.month’}) – Grouping of the output. E.g. if ‘time.month’, the autocorrelation is calculated over each month separately for all years. Then, the autocorrelation for all Jan/Feb/… is averaged over all years, giving 12 outputs for each grid point.
Returns
xr.DataArray – lag-{lag} autocorrelation of the variable over a {group.prop} and averaged over all years.
References
Alavoine M., and Grenier P. (under review) The distinct problems of physical inconsistency and of multivariate bias potentially involved in the statistical adjustment of climate simulations. International Journal of Climatology, submitted on September 19th 2021. (Preprint: https://doi.org/10.31223/X5C34C)
Examples
>>> from xclim.testing import open_dataset
>>> pr = open_dataset(path_to_pr_file).pr
>>> acf(da=pr, lag=3, group='time.season')

xclim.sdba.properties.annual_cycle_amplitude(da: xr.DataArray, *, amplitude_type: str = 'absolute', group: str | Grouper = 'time') → xr.DataArray[source]
Annual cycle amplitude. The amplitudes of the annual cycle are calculated for each year, then averaged over all years.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• amplitude_type ({‘absolute’, ‘relative’}) – Type of amplitude. ‘absolute’ is the peak-to-peak amplitude (max - min). ‘relative’ is a relative percentage: 100 * (max - min) / mean (recommended for precipitation).
Returns
xr.DataArray – {amplitude_type} amplitude of the annual cycle.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> annual_cycle_amplitude(da=pr, amplitude_type='relative')

xclim.sdba.properties.annual_cycle_phase(da: xr.DataArray, *, group: str | Grouper = 'time') → xr.DataArray[source]
Annual cycle phase. The phases of the annual cycle are calculated for each year, then averaged over all years.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• group ({“time”, ‘time.season’, ‘time.month’}) – Grouping of the output. Default: “time”.
Returns
xr.DataArray – Phase of the annual cycle. The position (day-of-year) of the maximal value.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> annual_cycle_phase(da=pr)

xclim.sdba.properties.corr_btw_var(da1: xr.DataArray, da2: xr.DataArray, *, corr_type: str = 'Spearman', group: str | Grouper = 'time', output: str = 'correlation') → xr.DataArray[source]
Correlation between two variables. Spearman or Pearson correlation coefficient between two variables at the time resolution.
Parameters
• da1 (xr.DataArray) – First variable on which to calculate the diagnostic.
• da2 (xr.DataArray) – Second variable on which to calculate the diagnostic.
• corr_type ({‘Pearson’, ‘Spearman’}) – Type of correlation to calculate.
• output ({‘correlation’, ‘pvalue’}) – Whether to return the correlation coefficient or the p-value.
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. E.g. for ‘time.month’, the correlation would be calculated on each month separately, but with all the years together.
Returns
xr.DataArray – {corr_type} correlation coefficient.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> corr_btw_var(da1=pr, da2=tasmax, group='time.season')

xclim.sdba.properties.mean(da: xr.DataArray, *, group: str | Grouper = 'time') → xr.DataArray[source]
Mean. Mean over all years at the time resolution.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. E.g. if ‘time.month’, the temporal average is performed separately for each month.
Returns
xr.DataArray – Mean of the variable.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> mean(da=pr, group='time.season')

xclim.sdba.properties.quantile(da: xr.DataArray, *, q: float = 0.98, group: str | Grouper = 'time') → xr.DataArray[source]
Quantile. Returns the quantile q of the distribution of the variable over all years at the time resolution.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• q (float) – Quantile to be calculated. Should be between 0 and 1.
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. E.g. if ‘time.month’, the quantile is computed separately for each month.
Returns
xr.DataArray – Quantile {q} of the variable.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> quantile(da=pr, q=0.9, group='time.season')

xclim.sdba.properties.relative_frequency(da: xr.DataArray, *, op: str = '>=', thresh: str = '1mm d-1', group: str | Grouper = 'time') → xr.DataArray[source]
Relative Frequency.
Relative frequency of days with the variable respecting a condition (defined by an operation and a threshold) at the time resolution. The relative frequency is the number of days that satisfy the condition divided by the total number of days.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• op ({“>”, “<”, “>=”, “<=”}) – Operation to verify the condition. The condition is variable {op} threshold.
• thresh (str) – Threshold on which to evaluate the condition.
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. E.g. for ‘time.month’, the relative frequency would be calculated on each month, with all years included.
Returns
xr.DataArray – Relative frequency of the variable.
Examples
>>> tasmax = open_dataset(path_to_tasmax_file).tasmax
>>> relative_frequency(da=tasmax, op='<', thresh='0 degC', group='time.season')

xclim.sdba.properties.return_value(da: xr.DataArray, *, period: int = 20, op: str = 'max', method: str = 'ML', group: str | Grouper = 'time') → xr.DataArray[source]
Return value. Return the value corresponding to a return period. On average, the return value will be exceeded (or not exceeded, for op=’min’) every return period (e.g. 20 years). The return value is computed by first extracting the variable’s annual maxima/minima, fitting a statistical distribution to the maxima/minima, then estimating the percentile associated with the return period (e.g. the 95th percentile, an exceedance probability of 1/20, for 20 years).
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• period (int) – Return period. Number of years over which to check if the value is exceeded (or not, for op=’min’).
• op ({‘max’, ‘min’}) – Whether we are looking for a probability of exceedance (‘max’, right side of the distribution) or a probability of non-exceedance (‘min’, left side of the distribution).
• method ({“ML”, “PWM”}) – Fitting method, either maximum likelihood (ML) or probability weighted moments (PWM), also called L-Moments. The PWM method is usually more robust to outliers. However, it requires the lmoments3 library to be installed from the develop branch. pip install git+https://github.com/OpenHydrology/lmoments3.git@develop#egg=lmoments3
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. A distribution of the extremes is fitted for each group.
Returns
xr.DataArray – {period}-{group} {op} return level of the variable.
Examples
>>> tas = open_dataset(path_to_tas_file).tas
>>> return_value(da=tas, group='time.season')

xclim.sdba.properties.skewness(da: xr.DataArray, *, group: str | Grouper = 'time') → xr.DataArray[source]
Skewness. Skewness of the distribution of the variable over all years at the time resolution.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. E.g. if ‘time.month’, the skewness is computed separately for each month.
Returns
xr.DataArray – Skewness of the variable.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> skewness(da=pr, group='time.season')

xclim.sdba.properties.spell_length_distribution(da: xr.DataArray, *, method: str = 'amount', op: str = '>=', thresh: str | float = '1 mm d-1', stat: str = 'mean', group: str | Grouper = 'time') → xr.DataArray[source]
Spell length distribution. Statistic of the spell length distribution when the variable respects a condition (defined by an operation, a method and a threshold).
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• method ({‘amount’, ‘quantile’}) – Method to choose the threshold. ‘amount’: The threshold is directly the quantity in {thresh}. It needs to have the same units as {da}. ‘quantile’: The threshold is calculated as the quantile {thresh} of the distribution.
• op ({“>”, “<”, “>=”, “<=”}) – Operation to verify the condition for a spell. The condition for a spell is variable {op} threshold.
• thresh (str or float) – Threshold on which to evaluate the condition to have a spell. Str with units if the method is “amount”. Float of the quantile if the method is “quantile”.
• stat ({‘mean’, ‘max’, ‘min’}) – Statistics to apply to the resampled input at the {group} (e.g. 1-31 Jan 1980) and then over all years (e.g. Jan 1980-2010).
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. E.g. if ‘time.month’, the spell lengths are computed separately for each month.
Returns
xr.DataArray – {stat} of the spell length distribution when the variable is {op} the {method} {thresh}.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> spell_length_distribution(da=pr, op='<', thresh='1 mm d-1', group='time.season')

xclim.sdba.properties.trend(da: xr.DataArray, *, group: str | Grouper = 'time', output: str = 'slope') → xr.DataArray[source]
Linear Trend. The data is averaged over each time resolution and the interannual trend is returned.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• output ({‘slope’, ‘pvalue’}) – Attributes of the linear regression to return. ‘slope’ is the slope of the regression line. ‘pvalue’ is for a hypothesis test whose null hypothesis is that the slope is zero, using the Wald Test with t-distribution of the test statistic.
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output.
Returns
xr.DataArray – Trend of the variable.
Examples
>>> tas = open_dataset(path_to_tas_file).tas
>>> trend(da=tas, group='time.season')

xclim.sdba.properties.var(da: xr.DataArray, *, group: str | Grouper = 'time') → xr.DataArray[source]
Variance. Variance of the variable over all years at the time resolution.
Parameters
• da (xr.DataArray) – Variable on which to calculate the diagnostic.
• group ({‘time’, ‘time.season’, ‘time.month’}) – Grouping of the output. E.g. if ‘time.month’, the variance is computed separately for each month.
Returns
xr.DataArray – Variance of the variable.
Examples
>>> pr = open_dataset(path_to_pr_file).pr
>>> var(da=pr, group='time.season')

### Measures submodule

SDBA diagnostic tests are made up of properties and measures. Measures compare adjusted simulations to a reference, through statistical properties or directly. This framework for the diagnostic tests was inspired by the [VALUE] project.

xclim.sdba.measures.annual_cycle_correlation(sim, ref, window: int = 15)[source]
Annual cycle correlation. Pearson correlation coefficient between the smooth day-of-year averaged annual cycles of the simulation and the reference. In the smooth day-of-year averaged annual cycles, each day-of-year is averaged over all years and over a window of days around that day.
Parameters
• sim (xr.DataArray) – data from the simulation (a time-series for each grid-point)
• ref (xr.DataArray) – data from the reference (observations) (a time-series for each grid-point)
• window (int) – Size of the window around each day of year over which to take the mean. E.g. if window=31, Jan 1st is averaged over December 17th to January 16th.
Returns
xr.DataArray – Annual cycle correlation between the simulation and the reference.

xclim.sdba.measures.bias(sim: xarray.DataArray, ref: xarray.DataArray) → xarray.DataArray[source]
Bias. The bias is the simulation minus the reference.
Parameters
• sim (xr.DataArray) – data from the simulation (one value for each grid-point)
• ref (xr.DataArray) – data from the reference (observations) (one value for each grid-point)
Returns
xr.DataArray – Bias between the simulation and the reference.

xclim.sdba.measures.circular_bias(sim: xarray.DataArray, ref: xarray.DataArray) → xarray.DataArray[source]
Circular bias. Bias considering circular time series. E.g. the bias between day-of-year 365 and day-of-year 1 is 364, but the circular bias is -1.
Parameters
• sim (xr.DataArray) – data from the simulation (one value for each grid-point)
• ref (xr.DataArray) – data from the reference (observations) (one value for each grid-point)
Returns
xr.DataArray – Circular bias between the simulation and the reference.

xclim.sdba.measures.mae(sim: xarray.DataArray, ref: xarray.DataArray) → xarray.DataArray[source]
Mean absolute error. The mean absolute error on the time dimension between the simulation and the reference.
Parameters
• sim (xr.DataArray) – data from the simulation (a time-series for each grid-point)
• ref (xr.DataArray) – data from the reference (observations) (a time-series for each grid-point)
Returns
xr.DataArray – Mean absolute error between the simulation and the reference.

xclim.sdba.measures.ratio(sim: xarray.DataArray, ref: xarray.DataArray) → xarray.DataArray[source]
Ratio. The ratio is the quotient of the simulation over the reference.
Parameters
• sim (xr.DataArray) – data from the simulation (one value for each grid-point)
• ref (xr.DataArray) – data from the reference (observations) (one value for each grid-point)
Returns
xr.DataArray – Ratio between the simulation and the reference.

xclim.sdba.measures.relative_bias(sim: xarray.DataArray, ref: xarray.DataArray) → xarray.DataArray[source]
Relative Bias. The relative bias is the simulation minus the reference, divided by the reference.
Parameters
• sim (xr.DataArray) – data from the simulation (one value for each grid-point)
• ref (xr.DataArray) – data from the reference (observations) (one value for each grid-point)
Returns
xr.DataArray – Relative bias between the simulation and the reference.

xclim.sdba.measures.rmse(sim: xarray.DataArray, ref: xarray.DataArray) → xarray.DataArray[source]
Root mean square error. The root mean square error on the time dimension between the simulation and the reference.
Parameters
• sim (xr.DataArray) – Data from the simulation (a time-series for each grid-point)
• ref (xr.DataArray) – Data from the reference (observations) (a time-series for each grid-point)
Returns
xr.DataArray – Root mean square error between the simulation and the reference.

## Developer tools

### Base classes and developer tools

class xclim.sdba.base.Parametrizable[source]
Bases: dict
Helper base class resembling a dictionary. This object is _completely_ defined by the content of its internal dictionary, accessible through item access (self[‘attr’]) or in self.parameters. When serializing and restoring this object, only members of that internal dict are preserved. All other attributes set directly with self.attr = value will not be preserved upon serialization and restoration of the object with [json]pickle; variables set with self.var = data will be lost in the serialization process.
This class is best serialized and restored with jsonpickle.

property parameters
All parameters as a dictionary. Read-only.

class xclim.sdba.base.ParametrizableWithDataset[source]
Parametrizable class that also has a ds attribute storing a dataset.

classmethod from_dataset(ds: xarray.Dataset)[source]
Create an instance from a dataset. The dataset must have a global attribute with a name corresponding to cls._attribute, and that attribute must be the result of jsonpickle.encode(object) where object is of the same type as this object.

set_dataset(ds: xarray.Dataset)[source]
Stores an xarray dataset in the ds attribute. Useful with custom object initialization or if some external processing was performed.

xclim.sdba.base.duck_empty(dims, sizes, dtype='float64', chunks=None)[source]
Return an empty DataArray based on a numpy or dask backend, depending on the chunks argument.

xclim.sdba.base.map_blocks(reduces: Optional[Sequence[str]] = None, **outvars)[source]
Decorator for declaring functions and wrapping them into a map_blocks. It takes care of constructing the template dataset. Dimension order is not preserved. The decorated function must always have the signature: func(ds, **kwargs), where ds is a DataArray or a Dataset. It must always output a dataset matching the mapping passed to the decorator.
Parameters
• reduces (sequence of strings) – Name of the dimensions that are removed by the function.
• outvars – Mapping from variable names in the output to their new dimensions. The placeholders Grouper.PROP, Grouper.DIM and Grouper.ADD_DIMS can be used to signify group.prop, group.dim and group.add_dims, respectively. If an output keeps a dimension that another loses, that dimension name must be given in reduces and in the list of new dimensions of the first output.

xclim.sdba.base.map_groups(reduces: Optional[Sequence[str]] = None, main_only: bool = False, **outvars)[source]
Decorator for declaring functions acting only on groups and wrapping them into a map_blocks. See map_blocks(). This is the same as map_blocks but adds a call to group.apply() in the mapped func and the default value of reduces is changed. The decorated function must have the signature: func(ds, dim, **kwargs), where ds is a DataArray or Dataset, dim is the group.dim (and add_dims). The group argument is stripped from the kwargs, but must evidently be provided in the call.
Parameters
• reduces (sequence of str) – Dimensions that are removed from the inputs by the function. Defaults to [Grouper.DIM, Grouper.ADD_DIMS] if main_only is False, and [Grouper.DIM] if main_only is True. See map_blocks().
• main_only (bool) – Same as for Grouper.apply().
• outvars – Mapping from variable names in the output to their new dimensions. The placeholders Grouper.PROP, Grouper.DIM and Grouper.ADD_DIMS can be used to signify group.prop, group.dim and group.add_dims, respectively. If an output keeps a dimension that another loses, that dimension name must be given in reduces and in the list of new dimensions of the first output.

xclim.sdba.base.parse_group(func: Callable, kwargs=None, allow_only=None) → Callable[source]
Parse the kwargs given to a function to set the group arg with a Grouper object. This function can be used as a decorator, in which case the parsing and updating of the kwargs is done at call time. It can also be called with a function from which to extract the default group and the kwargs to update, in which case it returns the updated kwargs. If allow_only is given, an exception is raised when the parsed group is not within that list.
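For illustration, a minimal sketch of parse_group used as a decorator; the function body is made up for the example:

```python
import xarray as xr
from xclim.sdba.base import Grouper, parse_group

@parse_group
def group_mean(da, *, group="time.month"):
    # By the time the body runs, ``group`` has been parsed into a Grouper.
    assert isinstance(group, Grouper)
    return group.apply("mean", da)

# Callers may still pass a plain string:
# group_mean(da, group="time.season")
```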
class xclim.sdba.detrending.BaseDetrend(*, group: Grouper | str = 'time', kind: str = '+', **kwargs)[source]
Base class for detrending objects. Defines three methods: fit(da) : Compute the trend from da and return a new _fitted_ Detrend object. detrend(da) : Return the detrended array. retrend(da) : Put the trend back on da. A fitted Detrend object is unique to the trend coordinate of the object used in fit (usually ‘time’). The computed trend is stored in Detrend.ds.trend. Subclasses should implement _get_trend_group() or _get_trend(). The first will be called in a group.apply(..., main_only=True) and should return a single DataArray. The second allows the use of functions wrapped in map_groups() and should also return a single DataArray. The subclasses may reimplement _detrend and _retrend.

detrend(da: xarray.DataArray)[source]
Remove the previously fitted trend from a DataArray.

fit(da: xarray.DataArray)[source]
Extract the trend of a DataArray along a specific dimension. Returns a new object that can be used for detrending and retrending. Fitted objects are unique to the fitted coordinate used.

retrend(da: xarray.DataArray)[source]
Put the previously fitted trend back on a DataArray.

class xclim.sdba.adjustment.TrainAdjust[source]
Adjustment with a distinct training step. Child classes should implement these methods:
• _train(ref, hist, **kwargs), classmethod receiving the training target and data, returning a training dataset and parameters to store in the object.
• _adjust(sim, **kwargs), receiving the projected data and some arguments, returning the scen dataarray.

adjust(sim: DataArray, *args, **kwargs)[source]
Return bias-adjusted data. Refer to the class documentation for the algorithm details.
Parameters
• sim (DataArray) – Time series to be bias-adjusted, usually a model output.
• args (xr.DataArray) – Other DataArrays needed for the adjustment (usually none).
• kwargs – Algorithm-specific keyword arguments, see class doc.

set_dataset(ds: xarray.Dataset)[source]
Stores an xarray dataset in the ds attribute. Useful with custom object initialization or if some external processing was performed.

classmethod train(ref: xarray.DataArray, hist: xarray.DataArray, **kwargs)[source]
Train the adjustment object. Refer to the class documentation for the algorithm details.
Parameters
• ref (DataArray) – Training target, usually a reference time series drawn from observations.
• hist (DataArray) – Training data, usually a model output whose biases are to be adjusted.

class xclim.sdba.adjustment.Adjust[source]
Adjustment with no intermediate trained object. Child classes should implement an _adjust classmethod taking as input the three DataArrays and returning the scen dataset/array.

classmethod adjust(ref: xarray.DataArray, hist: xarray.DataArray, sim: xarray.DataArray, **kwargs)[source]
Return bias-adjusted data. Refer to the class documentation for the algorithm details.
Parameters
• ref (DataArray) – Training target, usually a reference time series drawn from observations.
• hist (DataArray) – Training data, usually a model output whose biases are to be adjusted.
• sim (DataArray) – Time series to be bias-adjusted, usually a model output.
• kwargs – Algorithm-specific keyword arguments, see class doc.

xclim.sdba.properties.register_statistical_properties(aspect: str, seasonal: bool, annual: bool) → Callable[source]
Register statistical properties in the STATISTICAL_PROPERTIES dictionary with their aspect and time resolutions.

xclim.sdba.measures.check_same_units_and_convert(func) → Callable[source]
Verify that the simulation and the reference have the same units. If not, the simulation is converted to the units of the reference.
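As a sketch of the train/adjust workflow these base classes define, here is xclim.sdba's EmpiricalQuantileMapping on synthetic data (all values and parameters below are arbitrary choices for the example):

```python
import numpy as np
import xarray as xr
from xclim import sdba

time = xr.cftime_range("2000-01-01", periods=3 * 365)
ref = xr.DataArray(
    np.random.gamma(2.0, 1.0, time.size),
    dims=("time",), coords={"time": time}, attrs={"units": "mm/d"},
)
hist = ref * 1.25  # a crudely biased "model" series
hist.attrs["units"] = "mm/d"

# Train on the overlap period, then adjust the simulated series.
eqm = sdba.EmpiricalQuantileMapping.train(
    ref, hist, nquantiles=15, kind="*", group="time"
)
scen = eqm.adjust(hist, interp="linear")
```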
# How Often Does This Happen?

A man was born on Jan 1st in some year in the first half of the nineteenth century. On Jan 1st of the year $$x^2$$, it turns out that the man was $$x$$ years old. When was he born?
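A short worked sketch (not part of the original problem statement): if the man was born in year $$b$$, then on Jan 1st of year $$x^2$$ he is $$x^2 - b = x$$ years old, so $$b = x^2 - x = x(x-1)$$. Requiring $$1800 \leq b \leq 1850$$ forces $$x = 43$$: indeed $$42 \cdot 41 = 1722$$, $$43 \cdot 42 = 1806$$, and $$44 \cdot 43 = 1892$$. So he was born on Jan 1st, 1806, and turned $$43$$ on Jan 1st of $$1849 = 43^2$$.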
## Thinking Mathematically (6th Edition) The odds against E are found by taking the probability that E will not occur and dividing by the probability that E will occur. Odds against E = $\frac{P(not E)}{P(E)}$ We find the odds that you will win a boat if you purchased 20 out of 1000 raffle tickets. P(E) = $\frac{20}{1000}$ = $\frac{1}{50}$ P(not E)= 1- P(E) =1 - $\frac{1}{50}$ =$\frac{50 - 1}{50}$ =$\frac{49}{50}$ Odds against E = $\frac{\frac{49}{50}}{\frac{1}{50}}$ = $\frac{49}{1}$
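As a quick numeric check of the arithmetic above, here is a minimal sketch using only the Python standard library (not part of the original solution):

    from fractions import Fraction

    p = Fraction(20, 1000)           # P(E), probability of winning
    odds_against = (1 - p) / p       # P(not E) / P(E)
    print(odds_against)              # 49, i.e. odds against are 49 to 1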
# Coefficient bounds of an inequality

Hello. Given positive integers $k$ and $n$, are there upper bounds on coefficients $A$ and $B$ such that they depend only on $k$ (e.g., $2 k^k$) and, for all non-negative integer sequences $(a_i)_{1}^n, (b_i)_{1}^n$ and every non-negative increasing real sequence $(p_i)_{1}^n$, the following inequality holds?
$$\sum_{i=1}^n b_i \left(\sum_{j=1}^i a_j p_j \right)^k \leq A \sum_{i=1}^n a_i \left(\sum_{j=1}^i a_j p_j \right)^k + B \sum_{i=1}^n b_i \left(\sum_{j=1}^i b_j p_j \right)^k$$
Do you know any result or reference related to the question?

Edit: 11/4 Due to the asymmetry of the left-hand side, we can prove the inequality for $A = k/(k+1)$ and $B = \Theta(k^k)$. Is the same kind of $A, B$ (up to a constant) possible such that
\begin{eqnarray*} \sum_{i=1}^n &b_i& (a_1p_1 + \ldots + a_{i-1}p_{i-1} + (a_i + \ldots + a_n) p_i)^k \\ &\leq& A \sum_{i=1}^n a_i (a_1p_1 + \ldots + a_{i-1}p_{i-1} + (a_i + \ldots + a_n) p_i)^k \\ &+& B \sum_{i=1}^n b_i(b_1p_1 + \ldots + b_{i-1}p_{i-1} + (b_i + \ldots + b_n) p_i)^k \end{eqnarray*}
The difficulty is due to the tail $(a_{i+1} + \ldots + a_n)p_i$ (idem for $b$).

• Context? Motivation? Indication of the cases or similar versions that you have already tried? – Yemon Choi Mar 20 '11 at 21:23

## 1 Answer

This is true. I prefer to denote $q_i=p_i^{-1}$, $\alpha_i=a_ip_i$, $\beta_i=b_ip_i$, $A_i=\sum_{j=1}^i\alpha_j$, $B_i=\sum_{j=1}^i\beta_j$. Now we have to check that $$\sum_i q_i\beta_iA_i^k\le C\sum_i q_i\alpha_iA_i^k+C\sum_i q_i\beta_iB_i^k$$ This is linear in $q_i$, so we just need to check that $$\sum_{i=1}^n\beta_iA_i^k\le C\sum_{i=1}^n\alpha_iA_i^k+C\sum_{i=1}^n\beta_iB_i^k$$ for all $n\ge 1$. But $xX^k$ is comparable with $X^{k+1}-(X-x)^{k+1}$ for $0\le x\le X$, so the right-hand side is essentially $A_n^{k+1}+B_n^{k+1}$ and the left-hand side is dominated by $(A_n+B_n)^{k+1}$. The rest should be clear.

Edit: To cover your second inequality, let's show that the "missing part" $$\sum_{i=1}^n p_i^k b_i\left(\sum_{j=i}^n a_j\right)^k\le C\sum_{i=1}^n p_i^k a_i\left(\sum_{j=i}^n a_j\right)^k+ C\sum_{i=1}^n p_i^k b_i\left(\sum_{j=i}^n b_j\right)^k$$ holds. By now it shouldn't be surprising that it suffices to check it for a sequence $p_i$ consisting of several zeroes followed by several ones, in which case it is exactly the same story as before but written backwards (with summations starting at $n$ and going down).

• Hello, I am slow, so I don't understand why the first inequality being linear in $q_i$ means we just need to check the second inequality. Please explain. – ogn Mar 21 '11 at 17:59
• Sure. Any decreasing positive real sequence is a linear combination of "elementary" sequences 1,0,0,0... ; 1,1,0,0...; 1,1,1,0,... etc. with positive coefficients, like (4,2,1,0.5)=0.5(1,1,1,1)+0.5(1,1,1,0)+1(1,1,0,0)+2(1,0,0,0). So, if you want to show that $\sum_j q_j Q_j\ge 0$ for all decreasing positive sequences $q_j$, it suffices to check that all partial sums of $Q_j$ are non-negative. – fedja Mar 21 '11 at 18:35
• Thank you very much! I understand now; I learned a new trick, that's nice. – ogn Mar 22 '11 at 10:45
• I play with this kind of inequality. I am wondering if the following inequality holds for some C depending only on k (always with the same assumptions on the sequences).
\begin{align*} \sum_{i=1}^n & b_i(a_1p_1+ \ldots + a_ip_i+ (a_{i+1}+ \ldots + a_n)p_i)^k \\ \leq{}& C\sum_{i=1}^n a_i(a_1p_1+ \ldots + a_ip_i+(a_{i+1} +\ldots +a_n)p_i)^k \\ &+ C\sum_{i=1}^n b_i(b_1p_1+\ldots+b_ip_i+(b_{i+1}+\ldots+b_n)p_i)^k \end{align*} I think that is true but the technique above does not give a proof. – ogn Mar 30 '11 at 17:50
• See the edit :). – fedja Mar 31 '11 at 4:09
Mathematics

# Mark the correct alternative of the following. Two complementary angles are in the ratio $2:3$. The measure of the larger angle is?

$54^\circ$

##### SOLUTION

Two angles are complementary when they add up to $90^\circ$.

Let $A$ be the smaller angle and $B$ be the larger angle.

$\therefore \angle A+\angle B=90^\circ$        ...$(1)$

According to the condition, $\dfrac{\angle A}{\angle B}=\dfrac2{3}$

$\therefore \angle A=\dfrac2{3}\angle B$

Putting this in equation $(1)$:

$\dfrac2{3}\angle B+\angle B=90^\circ$

$\therefore \dfrac5{3}\angle B=90^\circ$

$\therefore \angle B=54^\circ$

$\therefore$ The measure of the larger angle is $54^\circ$.
Function to design variable treatments for binary prediction of a categorical outcome. Data frame is assumed to have only atomic columns except for dates (which are converted to numeric). Note: re-encoding high cardinality categorical variables can introduce undesirable nested model bias; for such data consider using mkCrossFrameCExperiment.

designTreatmentsC(
  dframe,
  varlist,
  outcomename,
  outcometarget = TRUE,
  ...,
  weights = c(),
  minFraction = 0.02,
  smFactor = 0,
  rareCount = 0,
  rareSig = NULL,
  collarProb = 0,
  codeRestriction = NULL,
  customCoders = NULL,
  splitFunction = NULL,
  ncross = 3,
  forceSplit = FALSE,
  catScaling = TRUE,
  verbose = TRUE,
  parallelCluster = NULL,
  use_parallel = TRUE,
  missingness_imputation = NULL,
  imputation_map = NULL
)

## Arguments

dframe
Data frame to learn treatments from (training data), must have at least 1 row.

varlist
Names of columns to treat (effective variables).

outcomename
Name of column holding outcome variable. dframe[[outcomename]] must be only finite non-missing values.

outcometarget
Value/level of outcome to be considered "success", and there must be a cut such that dframe[[outcomename]]==outcometarget at least twice and dframe[[outcomename]]!=outcometarget at least twice.

...
No additional arguments; declared to force named binding of later arguments.

weights
Optional training weights for each row.

minFraction
Optional minimum frequency a categorical level must have to be converted to an indicator column.

smFactor
Optional smoothing factor for impact coding models.

rareCount
Optional integer; allow levels with this count or below to be pooled into a shared rare-level. Defaults to 0 or off.

rareSig
Optional numeric; suppress levels from pooling at this significance value or greater. Defaults to NULL or off.

collarProb
What fraction of the data (pseudo-probability) to collar data at if doCollar is set during prepare.treatmentplan.

codeRestriction
What types of variables to produce (character array of level codes, NULL means no restriction).

customCoders
Map from code names to custom categorical variable encoding functions (please see https://github.com/WinVector/vtreat/blob/main/extras/CustomLevelCoders.md).

splitFunction
(optional) see vtreat::buildEvalSets.

ncross
Optional scalar >= 2; number of cross-validation splits used in rescoring complex variables.

forceSplit
Logical; if TRUE force cross-validated significance calculations on all variables.

catScaling
Optional; if TRUE use glm() linkspace, if FALSE use lm() for scaling.

verbose
If TRUE print progress.

parallelCluster
(optional) a cluster object created by package parallel or package snow.

use_parallel
Logical; if TRUE use parallel methods (when parallel cluster is set).

missingness_imputation
Function of signature f(values: numeric, weights: numeric), simple missing value imputer.

imputation_map
Map from column names to functions of signature f(values: numeric, weights: numeric), simple missing value imputers.

## Value

treatment plan (for use with prepare)

## Details

The main fields are mostly vectors with names (all with the same names in the same order):

- vars : (character array without names) names of variables (in same order as names on the other diagnostic vectors)
- varMoves : logical TRUE if the variable varied during hold out scoring; only variables that move will be in the treated frame
- sig : an estimated significance of effect

See the vtreat vignette for a bit more detail and a worked example. Columns that do not vary are not passed through. Note: re-encoding high cardinality on training data can introduce nested model bias; consider using mkCrossFrameCExperiment instead.
## See also

prepare.treatmentplan, designTreatmentsN, designTreatmentsZ, mkCrossFrameCExperiment

## Examples

dTrainC <- data.frame(x=c('a','a','a','b','b','b'),
   z=c(1,2,3,4,5,6),
   y=c(FALSE,FALSE,TRUE,FALSE,TRUE,TRUE))
dTestC <- data.frame(x=c('a','b','c',NA),
   z=c(10,20,30,NA))
treatmentsC <- designTreatmentsC(dTrainC,colnames(dTrainC),'y',TRUE)
#> [1] "vtreat 1.6.3 inspecting inputs Fri Jun 11 07:01:19 2021"
#> [1] "designing treatments Fri Jun 11 07:01:19 2021"
#> [1] " have initial level statistics Fri Jun 11 07:01:19 2021"
#> [1] " scoring treatments Fri Jun 11 07:01:19 2021"
#> [1] "have treatment plan Fri Jun 11 07:01:19 2021"
dTestCTreated <- prepare(treatmentsC,dTestC,pruneSig=0.99)
Get intersection of features occurring in at least 2 files?

vctrm67 ▴ 20 • 21 months ago

Is there an easy way to use bedtools (or any other software) to get the features of the intersection of multiple bed files, for features/intervals that occur in at least 2 bed files?

bedtools

Answer (21 months ago):

Here is a general approach with BEDOPS bedmap --count, which generalizes to N internally-disjoint (and sort-bed-sorted) input files:

    $ bedops --everything 1.bed 2.bed 3.bed ... N.bed \
        | bedmap --count --echo --delim '\t' - \
        | uniq \
        | awk -v OVERLAPS=2 '$1 >= OVERLAPS' \
        | cut -f2- \
        > common.bed

The bedmap step uses, by default, overlap of one or more bases for inclusion. You can modify this threshold to be more stringent. The readthedocs site has details, or run bedmap --help for a shorter overview.

By changing the test in the awk statement, this approach can be modified to return other subsets of the input's power set, e.g., all elements common to exactly, less than, or greater than N inputs.

Once you have the elements which meet your overlap threshold, you can then process those elements to get their overlapping regions via a final operation:

    $ bedops --partition common.bed \
        | bedmap --count --echo --delim '\t' - common.bed \
        | awk '$1 >= 2' \
        | cut -f2- \
        > overlaps.bed

If your inputs are not internally disjoint (that is, if some elements may overlap within any one of the N input files), you might instead apply some ID-based tricks I describe in my answer over here: A: Intersect multiple BED files

Comment: Hi, thanks for the answer. I am a bit confused when you say "internally disjoint". Are you implying that for any region in any bed file, it must not overlap with any other region in any other bed file? If so, I think this would not help me; I am looking to preserve the regions that overlap in at least 2 bed files. Or by "internally disjoint" do you refer to any region within a single bed file? As in, no region within any given bed file can overlap with any other region in that same bed file?

Reply: What I mean is the case where any two regions within a single BED file overlap. For instance, sequencing reads converted from BAM or SAM to BED will almost certainly overlap within the same file, because reads typically pile up that way. Other types of annotations could also be problematic, in that they can overlap: genes, TF binding sites, DHSs. If this is the case, then you need to do a little more work to add (unique) per-file identifiers to each BED file, so that when you find overlaps, you also test if they have different per-file identifiers. Otherwise, without that step (or some similar approach) you would get overlapping regions that are internal to one file. This does not appear to be what you want. It sounds like you want overlaps only from at least two or more files.

Reply: I was looking at the bedmap manual, and I read this line:

Looking back at the Map and Reference datasets, let's say we want to count the number of elements in Map that overlap a given Reference element, as well as the extent of that overlap as measured by the total number of overlapping bases from mapped elements.
For this, we use the --count and --bases flags, respectively:

    $ bedmap --echo --count --bases reference.bed map.bed
    chr21   33031200    33032400    ref-1|61|1200
    chr21   33031400    33031800    ref-2|21|400
    chr21   33031900    33032000    ref-3|6|100

This result tells us that there are 61 elements in Map that overlap ref-1, and 1200 total bases from the 61 elements overlap bases of ref-1. Similarly, 21 elements overlap ref-2, and 400 total bases from the 21 elements overlap bases of ref-2, etc.

If I utilize more than 2 bed files A, B, C... etc., will this get all the common elements in bed files B, C, D... compared to A, or will it compare among all bed files (such as A, C, D, E... to B) as well?

Reply: The bedops tool allows more than two files (A, B, C, etc.). Conversely, the bedmap tool only takes two files as arguments (reference and map). However, if you're using bedmap, you can use a trick called a "process substitution" to replace the reference.bed or map.bed files. A process substitution is a fancy term for building a BED file on the fly. We can use it as input to bedmap. Here's an example:

    $ bedmap --echo --count --bases reference.bed <(bedops --everything A.bed B.bed ... N.bed)

In this example, the former map.bed input gets replaced with what would come out of bedops --everything A.bed B.bed ... N.bed (the set union of files A, B, C, etc.). This is functionally equivalent to:

    $ bedops --everything A.bed ... N.bed | bedmap --echo --count --bases reference.bed -

Any command you put into the ... in <(...) returns a file. You can specify this as input to bedmap, bedops, and other tools. You could do <(bedops --merge A.bed ... N.bed), or any other operations on any other files. It just depends on what kinds of elements you want to map.

Comment: Also, if you could explain the purpose of the second command, that would be helpful. I ran these commands, but in the overlaps.bed file I see that the intervals are not contiguous and are only 1 base long. Is there a reason for this?

Reply: Can you post the command you're running and perhaps provide access to your input files (or a small sample snippet)? I can run that locally and help answer your question with more specifics.
# What is the arclength of (t-3,t+4) on t in [2,4]? Apr 2, 2018 $A = 2 \sqrt{2}$ #### Explanation: The formula for parametric arc length is: $A = {\int}_{a}^{b} \sqrt{{\left(\frac{\mathrm{dx}}{\mathrm{dt}}\right)}^{2} + {\left(\frac{\mathrm{dy}}{\mathrm{dt}}\right)}^{2}} \setminus \mathrm{dt}$ We begin by finding the two derivatives: $\frac{\mathrm{dx}}{\mathrm{dt}} = 1$ and $\frac{\mathrm{dy}}{\mathrm{dt}} = 1$ This gives that the arc length is: $A = {\int}_{2}^{4} \sqrt{{1}^{2} + {1}^{2}} \setminus \mathrm{dt} = {\int}_{2}^{4} \sqrt{2} \setminus \mathrm{dt} = {\left[\sqrt{2} t\right]}_{2}^{4} = 4 \sqrt{2} - 2 \sqrt{2} = 2 \sqrt{2}$ In fact, since the parametric function is so simple (it is a straight line), we don't even need the integral formula. If we plot the function in a graph, we can just use the regular distance formula: $A = \sqrt{{\left({x}_{1} - {x}_{2}\right)}^{2} + {\left({y}_{1} - {y}_{2}\right)}^{2}} = \sqrt{4 + 4} = \sqrt{8} = \sqrt{4 \cdot 2} = 2 \sqrt{2}$ This gives us the same result as the integral, showing that either method works, although in this case, I'd recommend the graphical method because it is simpler.
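As a quick symbolic cross-check of both methods, a minimal sketch using sympy (not part of the original answer):

    import sympy as sp

    t = sp.symbols('t')
    x, y = t - 3, t + 4
    # parametric arc length on [2, 4]
    L = sp.integrate(sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2), (t, 2, 4))
    print(L)  # 2*sqrt(2)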
enrichKEGG and ridgeplot from differential analysis gene data

17 months ago

I have done the differential analysis using DESeq2 and I would like to plot the ridgeplot and enrichKEGG using the clusterProfiler package. The package requires the Entrez gene ID and my result object contains gene symbols. I have tried the following:

    # this translates the gene symbols from MSFragger to something enrichGO can read
    gene.df <- bitr(gene, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)
    geneList <- gene.df$ENTREZID
    names(geneList) <- as.character(gene.df$SYMBOL)
    geneList <- sort(geneList, decreasing = TRUE)

When I use the above code I get an error:

    --> No gene can be mapped....
    --> Expected input gene ID: 14187,74551,13118,56451,52538,93747
    --> return NULL...

Is there a way in which this can be done better?

RNA-Seq
Volume 56, issue 3–4 (2004)

Unification of some concepts similar to the Lindelöf property 63–76
T. Hatice Yalvaç

Abstract: In this paper the $\varphi_{1,2}$-Lindelöf property is defined and studied with the aim of unifying various concepts related to the Lindelöf property in General Topology.
Keywords: Lindelöf property, operation, unification, filter, convergence, supratopology.
MSC: 54D20, 54A20

Semiregular properties and generalized Lindelöf spaces 77–80
A. J. Fawakhreh, A. Kılıçman

Abstract: Let $(X, \tau)$ be a topological space and $(X, \tau^{\ast})$ its semiregularization. Then a topological property ${\Cal P}$ is semiregular provided that $\tau$ has property ${\Cal P}$ if and only if $\tau^{\ast}$ has the same property. In this work we study the semiregular property of almost Lindelöf, weakly Lindelöf, nearly regular-Lindelöf, almost regular-Lindelöf and weakly regular-Lindelöf spaces. We prove that all these topological properties, contrary to the Lindelöf property, are semiregular properties.
Keywords: Semiregular property, semiregularization topology, nearly Lindelöf, almost Lindelöf, weakly Lindelöf, nearly regular-Lindelöf, almost regular-Lindelöf and weakly regular-Lindelöf spaces.
MSC: 54A05, 54A20, 54F45

Contrôlabilité d'un problème non-linéaire inverse de la théorie de transport 81–84
S. Lahrech

Abstract: In this note controllability of an inverse problem of the transport theory is investigated. The results are based on an a priori estimate of the problem from our earlier paper [1].
Keywords: Controllability, inverse problem, transport theory.
MSC: 49N50

Renormalizing iterated repelling germs of $C^2$ 85–90
Claudio Meneghini

Abstract: We find bounded-degree renormalizing polynomial families for iterated repelling germs of $(C^2,0)$. These families consist of contracting mappings and yield germs whose differentials have rank two.
Keywords: Renormalizing families, holomorphic mappings, repelling fixed point.
MSC: 32H50

On uniform convergence of spectral expansions and their derivatives for functions from $W_p^1$ 91–104
Nebojša L. Lažetić

Abstract: We consider the global uniform convergence of spectral expansions and their derivatives, $\sum_{n=1}^{\infty}f_n\,u_n^{(j)}(x)$, $(j=0,1,2)$, arising from an arbitrary one-dimensional self-adjoint Schrödinger operator, defined on a bounded interval $G\subset\Bbb R$. We establish the absolute and uniform convergence on $\overline G$ of the series, supposing that $f$ belongs to suitably defined subclasses of $W_p^{(1+j)}(G)$ $(1
Keywords: Spectral expansion, uniform convergence, Schrödinger operator
MSC: 34L10, 47E05

The representations of finite reflection groups 105–114
Muhittin Başer, Sait Halıcıoğlu

Abstract: The construction of all irreducible modules of the symmetric groups over an arbitrary field which reduce to Specht modules in the case of fields of characteristic zero is given by G. D. James. Halıcıoğlu and Morris describe a possible extension of James' work for Weyl groups in general, where Young tableaux are interpreted in terms of root systems. In this paper, we further develop the theory and give a possible extension of this construction for finite reflection groups, which cover the Weyl groups.
Keywords: Specht module, tableau, tabloid, finite reflection group.
MSC: 20C33, 20F55, 22E45

On the convergence of finite difference scheme for elliptic equation with coefficients containing Dirac distribution 115–123
Boško S. Jovanović, Lubin G. Vulkov

Abstract: The first boundary value problem for an elliptic equation with youngest coefficient containing a Dirac distribution concentrated on a smooth curve is considered. For this problem a finite difference scheme on a special quasiregular grid is constructed. The finite difference scheme converges in the discrete $W_2^1$ norm with the rate $O(h^{3/2})$. The convergence rate is compatible with the smoothness of the input data.
Keywords: Boundary value problem, generalized solution, finite differences, rate of convergence.
MSC: 65N15
Thread: Function finding

1. Function finding

Determine all functions $f$ defined on the set of real numbers and taking real values such that for any real numbers $x, y$ the following equation holds:
$f(x+f(x+y))-f(x-y)-f(x)^2=0$
It works supposing $f(x)=0$ and $f(y)=0$, but I'm clueless on the next step. Could you please help?

2. Re: Function finding

How do you know that $f(x)=0$ and $f(y)=0$ are all the functions that satisfy this equation?

3. Re: Function finding

I don't. It was just an example set of correct data. As I stated, I don't know how to acquire the other ones...

4. Re: Function finding

Originally Posted by GGPaltrow (the problem quoted above)

Asked here: http://www.mathhelpforum.com/math-he...on-188990.html
Thread closed.
# A uniform circular disc of radius R, lying on a frictionless horizontal plane, is rotating with an angular velocity $\omega$ about its own axis. Another identical circular disc is gently placed on top of the first disc, coaxially. The loss in rotational kinetic energy due to friction between the two discs, as they acquire a common angular velocity, is (I is the moment of inertia of each disc)

$(a)\;\frac{1}{8} I \omega^2 \quad (b)\;\frac{1}{4} I \omega^2 \quad (c)\;\frac{1}{2}I \omega ^2 \quad (d)\;I \omega ^2$

$(b)\;\frac{1}{4} I \omega^2$
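A short worked check of the stated answer (a sketch using conservation of angular momentum): the two discs share the angular momentum $I\omega$, so the common angular velocity is $\omega' = \frac{I\omega}{2I} = \frac{\omega}{2}$, and the loss in rotational kinetic energy is
$$\Delta K = \frac{1}{2}I\omega^2 - \frac{1}{2}(2I)\left(\frac{\omega}{2}\right)^2 = \frac{1}{2}I\omega^2 - \frac{1}{4}I\omega^2 = \frac{1}{4}I\omega^2.$$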
# networkx.algorithms.graph_hashing.weisfeiler_lehman_subgraph_hashes

weisfeiler_lehman_subgraph_hashes(G, edge_attr=None, node_attr=None, iterations=3, digest_size=16)[source]

Return a dictionary of subgraph hashes by node. The dictionary is keyed by node to a list of hashes in increasingly sized induced subgraphs containing the nodes within 2*k edges of the key node for increasing integer k until all nodes are included.

The function iteratively aggregates and hashes neighbourhoods of each node. This is achieved for each step by replacing for each node its label from the previous iteration with its hashed 1-hop neighborhood aggregate. The new node label is then appended to a list of node labels for each node.

To aggregate neighborhoods at each step for a node $$n$$, all labels of nodes adjacent to $$n$$ are concatenated. If the edge_attr parameter is set, labels for each neighboring node are prefixed with the value of this attribute along the connecting edge from this neighbor to node $$n$$. The resulting string is then hashed to compress this information into a fixed digest size. Thus, at the $$i$$-th iteration, nodes within $$2i$$ distance influence any given hashed node label. We can therefore say that at depth $$i$$ for node $$n$$ we have a hash for a subgraph induced by the $$2i$$-hop neighborhood of $$n$$.

Can be used to create general Weisfeiler-Lehman graph kernels, or generate features for graphs or nodes, for example to generate 'words' in a graph as seen in the 'graph2vec' algorithm. See [1] & [2] respectively for details.

Hashes are identical for isomorphic subgraphs and there exist strong guarantees that non-isomorphic graphs will get different hashes. See [1] for details.

If no node or edge attributes are provided, the degree of each node is used as its initial label. Otherwise, node and/or edge labels are used to compute the hash.

Parameters

G: graph
The graph to be hashed. Can have node and/or edge attributes. Can also have no attributes.

edge_attr: string, default=None
The key in edge attribute dictionary to be used for hashing. If None, edge labels are ignored.

node_attr: string, default=None
The key in node attribute dictionary to be used for hashing. If None, and no edge_attr given, use the degrees of the nodes as labels.

iterations: int, default=3
Number of neighbor aggregations to perform. Should be larger for larger graphs.

digest_size: int, default=16
Size (in bits) of blake2b hash digest to use for hashing node labels. The default size is 16 bits.

Returns

node_subgraph_hashes : dict
A dictionary with each key given by a node in G, and each value given by the subgraph hashes in order of depth from the key node.

Notes

To hash the full graph when subgraph hashes are not needed, use weisfeiler_lehman_graph_hash for efficiency.

Similarity between hashes does not imply similarity between graphs.

References

1 (1,2) Shervashidze, Nino, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler Lehman Graph Kernels. Journal of Machine Learning Research. 2011. http://www.jmlr.org/papers/volume12/shervashidze11a/shervashidze11a.pdf

2 Annamalai Narayanan, Mahinthan Chandramohan, Rajasekar Venkatesan, Lihui Chen, Yang Liu and Shantanu Jaiswa. graph2vec: Learning Distributed Representations of Graphs. arXiv, 2017. https://arxiv.org/pdf/1707.05005.pdf
Examples

Finding similar nodes in different graphs:

>>> G1 = nx.Graph()
>>> G1.add_edges_from([
...     (1, 2), (2, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7)
... ])
>>> G2 = nx.Graph()
>>> G2.add_edges_from([
...     (1, 3), (2, 3), (1, 6), (1, 5), (4, 6)
... ])
>>> g1_hashes = nx.weisfeiler_lehman_subgraph_hashes(G1, iterations=3, digest_size=8)
>>> g2_hashes = nx.weisfeiler_lehman_subgraph_hashes(G2, iterations=3, digest_size=8)

Even though G1 and G2 are not isomorphic (they have different numbers of edges), the hash sequences of depth 3 for node 1 in G1 and node 5 in G2 are similar:

>>> g1_hashes[1]
['a93b64973cfc8897', 'db1b43ae35a1878f', '57872a7d2059c1c0']
>>> g2_hashes[5]
['a93b64973cfc8897', 'db1b43ae35a1878f', '1716d2a4012fa4bc']

The first 2 WL subgraph hashes match. From this we can conclude that it's very likely the neighborhood of 4 hops around these nodes are isomorphic: each iteration aggregates 1-hop neighbourhoods, meaning hashes at depth $$n$$ are influenced by every node within $$2n$$ hops. However the neighborhood of 6 hops is no longer isomorphic since their 3rd hash does not match. These nodes may be candidates to be classified together since their local topology is similar.
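For whole-graph comparison, a minimal sketch using the companion function named in the Notes, weisfeiler_lehman_graph_hash (the printed result is what one would expect here, since the two graphs have different degree sequences):

    import networkx as nx

    G1 = nx.Graph([(1, 2), (2, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7)])
    G2 = nx.Graph([(1, 3), (2, 3), (1, 6), (1, 5), (4, 6)])

    h1 = nx.weisfeiler_lehman_graph_hash(G1, iterations=3, digest_size=8)
    h2 = nx.weisfeiler_lehman_graph_hash(G2, iterations=3, digest_size=8)
    print(h1 == h2)  # False: differing hashes certify the graphs are not isomorphic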
# How to apply the SVD based approach to sample from a matrix-variate normal with given correlation matrices and standard deviations?

A sample from a multivariate normal distribution $X$ can be constructed to have a covariance $C$ even for positive semi-definite covariances according to this technique involving an SVD. Furthermore, $C$ can be constructed from a correlation matrix $R$ and a diagonal matrix with the corresponding standard deviations in the diagonal $Q$ according to $C = QRQ$.

For the matrix-variate case this technique should be easily extendable, e.g. the Cholesky decompositions of the sample covariance $S = AA^{T}$ and the feature covariance $F = B^{T}B$ are both applied to yield a sample $Y = AXB$. Extrapolating this to the SVD based approach mentioned above and keeping its notation gives something along the lines of $Y = (\mathbb{V}_{F}\mathbb{D}_{F})\ X\ (\mathbb{V}_{S}\mathbb{D}_{S})$?

However, when estimating and checking the standard deviations of each component, they are not close to those specified in $Q_{F}$ or $Q_{S}$ for the SVD based approach. They are fine for the multivariate normal case, e.g. $\mathbb{V}_{F}\mathbb{D}_{F}X$.

Example:

    import numpy as np

    stdsA = np.diag([4.0, 2.5, 1.5, 1.0])
    stdsB = np.diag([1.0, 0.9, 0.8, 0.7, 1.0])
    corrA = np.asarray([[1.0, -1, 0, 0],
                        [-1, 1, 0, 0],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])
    corrB = np.asarray([[1.0, 1, 1, 0, 0],
                        [1, 1, 1, 0, 0],
                        [1, 1, 1, 0, 0],
                        [0, 0, 0, 1, -1],
                        [0, 0, 0, -1, 1]])

    AAt = np.dot(np.dot(stdsA, corrA), stdsA)
    BtB = np.dot(np.dot(stdsB, corrB), stdsB)

    uA, sA, vtA = np.linalg.svd(AAt)
    uB, sB, vtB = np.linalg.svd(BtB)
    sA[sA < 1.0e-8] = 0.0
    sB[sB < 1.0e-8] = 0.0

    Y1_, Y2_ = [], []
    for i in range(10000):
        X = np.random.standard_normal((4, 5))
        Y1 = np.dot(np.dot(uA, np.diag(np.sqrt(sA))), X)
        Y2 = np.dot(Y1, np.dot(uB, np.diag(np.sqrt(sB))))
        Y1_.append(Y1)
        Y2_.append(Y2)
    Y1 = np.asarray(Y1_)
    Y2 = np.asarray(Y2_)

    est_stdsA_Y1 = np.asarray([np.std(Y1[:, 0, :].ravel()), np.std(Y1[:, 1, :].ravel()),
                               np.std(Y1[:, 2, :].ravel()), np.std(Y1[:, 3, :].ravel())])
    est_stdsA_Y2 = np.asarray([np.std(Y2[:, 0, :].ravel()), np.std(Y2[:, 1, :].ravel()),
                               np.std(Y2[:, 2, :].ravel()), np.std(Y2[:, 3, :].ravel())])
    est_stdsB_Y2 = np.asarray([np.std(Y2[:, :, 0].ravel()), np.std(Y2[:, :, 1].ravel()),
                               np.std(Y2[:, :, 2].ravel()), np.std(Y2[:, :, 3].ravel()),
                               np.std(Y2[:, :, 4].ravel())])

    print(est_stdsA_Y1)
    print(np.diag(stdsA))
    print()
    print(est_stdsA_Y2)
    print(np.diag(stdsA))
    print()
    print(est_stdsB_Y2)
    print(np.diag(stdsB))

Output:

    [ 3.99356564 2.49597853 1.50897736 1.00598491]
    [ 4. 2.5 1.5 1. ]

    [ 6.16868733 3.85542958 0.84674523 0.56449682]
    [ 4. 2.5 1.5 1. ]

    [ 8.21223336 8.16629974 0.00000000 0.00000000 0.00000000]
    [ 1. 0.9 0.8 0.7 1. ]

I am not exactly sure what your desired properties are. However, below is an outline of how to get specified expected inner and outer products. From your Wikipedia link, we have $$X\sim\mathrm{MN}_{ n\times p}[0,A,B]\implies \mathbb{E}\left[XX^T\right]=\beta A \text{ , } \mathbb{E}\left[X^TX\right]=\alpha B \text{ , } \alpha=\mathrm{tr}[A] \text{ , } \beta=\mathrm{tr}[B]$$ where $\mathrm{tr}[]$ is the trace operator, which transforms a matrix to a scalar.
So if $X$ has a (reduced) SVD of $$X=USV^T \text{ , } U^TU=V^TV=I \text{ , } S=\mathrm{diag}[\sigma]$$ then $$\mathbb{E}\left[US^2U^T\right]=\beta A \text{ , } \mathbb{E}\left[VS^2V^T\right]=\alpha B$$ If we fix the SVD first, and define $$A=\tfrac{1}{\|\sigma\|}US^2U^T \text{ , } B=\tfrac{1}{\|\sigma\|}VS^2V^T$$ then by design we have $$\alpha=\beta=\|\sigma\|$$ and $$X=USV^T \implies XX^T=\beta A \text{ , } X^TX=\alpha B$$ For the matrix square roots, we have $$\mathsf{A}=U\mathsf{S} \text{ , } \mathsf{B}=\mathsf{S}V^T \text{ , } \mathsf{S}=\tfrac{1}{\sqrt{\|\sigma\|}}S \implies A=\mathsf{A}\mathsf{A}^T \text{ , } B=\mathsf{B}^T\mathsf{B}$$ So in terms of your formula for generating samples $$X=\mathsf{A}Z\mathsf{B} \text{ , }Z\sim\mathrm{MN}_{ n\times p}[0,I,I] \implies X\sim\mathrm{MN}_{ n\times p}[0,A,B]$$ Note that from the above, the SVD's of $A$ and $B$ are given by $$A \equiv U_A S_A V_A^T = U\mathsf{S}^2U^T \text{ , } B \equiv U_B S_B V_B^T = V\mathsf{S}^2V^T$$ In particular, note that • both decompositions are symmetric: $V_A=U_A=U$ and $V_B=U_B=V$ • their singular values are identical: $S_A=S_B=\mathsf{S}^2$ So, to the extent that I understand your desired properties, I believe you may have less degrees of freedom than you think you do. • I'll have to think about your post. See here for a somewhat more concise exposition of the problem: math.stackexchange.com/questions/1958330/… – baf84b4c Oct 7 '16 at 18:08 • I may be wrong, as I played a little loose with the expectations in my answer. However at the very least, from the Wikipedia article you link to, the expected $X$ covariances are not the same as the "covariance matrices" that parameterize the PDF. Note the trace factors that multiply the $X$ covariances. – GeoMatt22 Oct 7 '16 at 18:18 This seems to be expected behavior. As mentioned by @GeoMatt22, due to inherent constraints, the expected standard deviations are not necessarily equal to those put in via $R$. See this section on the wiki article about the matrix-variate normal, e.g. specifically the product with the trace($S$) and trace($F$) respectively.
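To see the trace factors concretely, here is a small Monte Carlo sketch (not the poster's code; the matrices are random stand-ins) checking that $\mathbb{E}[XX^T] = \mathrm{tr}[B]\,A$ under this parameterization:

    import numpy as np

    rng = np.random.default_rng(0)
    A2 = rng.standard_normal((4, 4))   # "square root" of A: A = A2 @ A2.T
    B2 = rng.standard_normal((5, 5))   # "square root" of B: B = B2.T @ B2
    A = A2 @ A2.T
    B = B2.T @ B2

    n = 20000
    acc = np.zeros((4, 4))
    for _ in range(n):
        Z = rng.standard_normal((4, 5))
        X = A2 @ Z @ B2                # X ~ MN(0, A, B)
        acc += X @ X.T

    est = acc / n
    err = np.linalg.norm(est - np.trace(B) * A) / np.linalg.norm(np.trace(B) * A)
    print(err)  # relative error should be small (a few percent)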
# The Ratio Between the Curved Surface Area and the Total Surface Area of a Cylinder is 1 : 2. Find the Ratio Between the Height and the Radius of the Cylinder. - Mathematics

Sum

The ratio between the curved surface area and the total surface area of a cylinder is 1 : 2. Find the ratio between the height and the radius of the cylinder.

#### Solution

Let r be the radius and h be the height of a right circular cylinder. Then

Curved surface area = 2πrh

and total surface area = 2πrh + 2πr² = 2πr(h + r)

But their ratio is 1 : 2, therefore

(2pirh)/(2pir(h + r)) = 1/2

⇒ h/(h + r) = 1/2

⇒ 2h = h + r

⇒ 2h - h = r

⇒ h = r, i.e. h : r = 1 : 1

Hence the radius and height are equal.

#### APPEARS IN

Selina Concise Mathematics Class 8 ICSE
Chapter 21 Surface Area, Volume and Capacity
Exercise 21 (D) | Q 5 | Page 242
# Trig equations • Oct 23rd 2008, 02:52 PM IDontunderstand Trig equations How many solutions exist for the equation 3Cos 2x=-3 in the interval 0<=x<=2Pi The 2 confuses me. How do I adjust for that? Thanks • Oct 23rd 2008, 03:14 PM skeeter $3\cos(2x) = -3$ $\cos(2x) = -1$ since $0 \leq x \leq 2\pi$ , then $0 \leq 2x \leq 4\pi$ $2x = \pi$ $x = \frac{\pi}{2}$ $2x = 3\pi$ $x = \frac{3\pi}{2}$
# Zero Angular Momentum at a Non-Zero Distance in Quantum Mechanics

In Bohr's atomic theory we learn that only those orbits are possible for which the angular momentum is an integer multiple of $$\hbar$$, i.e. $$mvr=\frac{nh}{2\pi}.$$ Here the quantum number $$n$$ can take the values $$n=1,2,3,\ldots$$. Now in quantum theory we learn that for the hydrogen atom the ground state has $$n=1$$, $$l=0$$ and $$m=0$$, with the wave function given by $$\psi_{100}=Ce^{-r/a_0}$$ where $$C$$ is some constant. Here $$l$$ is the angular quantum number. The probability for the particle to be found at a distance between $$r$$ and $$r+dr$$ is, apart from some constant, $$P(r)\,dr=|\psi_{100}|^2r^2\,dr=Cr^2e^{-2r/a_0}\,dr.$$ That suggests that the particle is most likely to be found around $$a_0$$. So how is it possible that the particle is at a non-zero distance from the origin (not doing a radial motion) but has zero angular momentum? How does Bohr's theory differ from this result?

For a particle to have zero angular momentum means that it's in an eigenstate of the operator $${\bf L}^2 = ({\bf r} \times {\bf p})^2 = -\hbar^2 ({\bf r}\times\nabla)^2$$ with eigenvalue 0. That is $$({\bf r}\times\nabla)^2 \psi({\bf r}) = 0$$ Solving this equation in spherical coordinates leads to a solution $$\psi(r,\theta,\phi) = f(r)$$ where $$f$$ is an arbitrary function (smooth enough). In conclusion, a particle whose wavefunction is spherically symmetric will have zero angular momentum, regardless of its expected distance from the center of the coordinate system.

It is easy: just recall the classical-mechanics definition of angular momentum: $$\bf L = \bf r \times p$$ If $$\bf p$$ is along the direction of $$\bf r$$, $$\bf L$$ is certainly zero, even if the magnitudes of both quantities are not zero.

• A particle doing a circular motion does not have zero angular momentum; that is what I show from the Bohr model. – Young Kindaichi Feb 13 at 14:27
• It's not necessarily a circular motion; it can also be an elliptical motion, and mathematically it's possible to have zero angular momentum in elliptical motions: physics.stackexchange.com/questions/455066/… That's why there was further refinement of Bohr's model, namely the Bohr–Sommerfeld atomic model; just look for it! – rnels12 Feb 13 at 15:32
• Also, I think you confuse the definition of "radial motion" in your original question; a radial motion means a motion along the radius vector. Perhaps you meant circular motion? If so, again, my answer is above; the motions are not necessarily circular! You cannot expect the original Bohr model to fully agree with quantum mechanics without modifying it to take into account the possibility of elliptical motions. – rnels12 Feb 13 at 15:44
• You have written that p is along r, but this is not the case; p and r are certainly perpendicular to each other classically. – Young Kindaichi Feb 13 at 17:40
• Again, $\bf r$ doesn't have to be perpendicular to $\bf p$, because the motion doesn't have to be circular. In elliptical motions, $\bf p$ also has a component along $\bf r$. Have you ever learnt classical mechanics before? If not, I suggest you learn it first; Bohr's model is also sometimes called a solar-system model because it tries to mimic planetary motions in the solar system, i.e. elliptical motions. The case where $\bf p$ has only the component along $\bf r$, without the perpendicular component, refers to zero angular momentum.
– rnels12 Feb 14 at 6:39

Even in classical mechanics, a particle can have zero angular momentum at a non-zero distance from the origin if its velocity is purely radial. You have to remember that the angular momentum norm is $$||\overrightarrow{L}|| = ||r||\, ||p|| \, |\sin(\theta)|$$, where $$\theta$$ is the angle between $$\overrightarrow{r}$$ and $$\overrightarrow{p}$$.

In quantum mechanics, the picture is slightly different, because the particle does not have a definite position nor a definite momentum, and is instead described by a wavefunction $$\psi(\overrightarrow{r})$$ such that the probability density for the particle to be at $$\overrightarrow{r}$$ is $$|\psi(\overrightarrow{r})|^2$$. What about $$\overrightarrow{p}$$? This is where the correspondence with classical mechanics breaks down, as $$\overrightarrow{p}$$ cannot be determined independently of $$\overrightarrow{r}$$. Instead, it can be shown from the postulates of quantum mechanics that $$\overrightarrow{p}$$ should be replaced by $$-i \hbar \overrightarrow{\nabla}$$ when we integrate it against something over $$\overrightarrow{r}$$. This allows us to calculate the mean angular momentum of any state described by a wavefunction $$\psi$$: $$\left\langle \overrightarrow{L} \right\rangle = \int \psi^*(\overrightarrow{r}) (\overrightarrow{r} \times \overrightarrow{p})\psi(\overrightarrow{r})\overrightarrow{dr} = - i \hbar \int \psi^*(\overrightarrow{r}) \overrightarrow{r} \times (\overrightarrow{\nabla}\psi(\overrightarrow{r}))\overrightarrow{dr}$$ But in the case of a rotationally symmetric wavefunction $$\psi(\overrightarrow{r})$$ (such as any wavefunction with $$l = 0$$ in the Bohr theory), $$\overrightarrow{\nabla}\psi(\overrightarrow{r})$$ is directed along $$\overrightarrow{r}$$, so that the cross-product in the integral is $$0$$. This was of course to be expected: $$\left\langle \overrightarrow{L} \right\rangle$$ is a vector, but since the wavefunction describing the state is rotationally symmetric, any "choice" of direction for $$\left\langle \overrightarrow{L} \right\rangle$$ would be completely arbitrary, so it must be that $$\left\langle \overrightarrow{L} \right\rangle = \overrightarrow{0}$$.

The wave function can be interpreted as describing an oscillating movement instead of an orbital one. But it is odd that the probability of finding the electron at $$r = 0$$ is zero. It only suggests that it is frustrating to carry the intuition of our macroscopic world to the quantum level. It is better to learn how to calculate.
# Week 2 Notes • Lecture 3: • 22:00 • Extra examples of things that are not monoidal posets: • $$(\mathbb{N}, \mid, 1, \vee)$$ where $$\vee = \text{join} = \text{LCM}$$ is not a monoidal poset since $$2 \vee 9 = 18 \not \leq 9 = 3 \vee 9$$ • $$(\mathbb{N}, \mid, 1, \wedge)$$ where $$\wedge = \text{meet} = \text{GCD}$$ is not a monoidal poset since $$3 \wedge 9 = 3 \not \leq 1 = 4 \wedge 9$$ • 30:52 • How does the trace of a matrix correspond to a loop-arrow in the category of trace matrices? • 48:30 • Discard: $$\forall a \in M, a \leq e$$ • Copy: $$\forall a \in M, a \leq aa$$ • Find monoidal poset that does not satisfy: • discard: $$(\mathbb{R}^+ \setminus \mathopen[0,1\mathclose), \leq, 1, \times)$$ because $$\mathbb{R}^+ \setminus \mathopen[0,1\mathclose) \not \leq 1$$, but satisfies copy since $$r \leq r \times r$$ for all $$r \in \mathbb{R}^+ \setminus \mathopen[0,1\mathclose)$$. • copy: $$(\mathopen[0,1\mathclose], \leq, 1, \times)$$ because $$0.5 \not \leq 0.25 = 0.5 \times 0.5$$, but satisfies discard since $$\mathopen[0,1\mathclose] \leq 1$$. • either: $$(\mathbb{R}^+, \leq, 1, \times)$$. • EX 2.61: Where $$\mathbf{NMY} := (P,\leq,\texttt{yes},\text{min})$$, recall from D 2.46 that $$\mathbf{NMY}$$-category $$\mathcal{C}$$ consists of 1. a set $$\text{Ob}(\mathcal{C})$$ of objects 2. a hom-object $$\mathcal{C}(x, y) \in \{\texttt{no}, \texttt{maybe}, \texttt{yes}\}$$ for $$x, y \in \text{Ob}(\mathcal{C})$$ satisfying 1. $$\texttt{yes} \leq \mathcal{C}(x, x)$$ for $$x \in \text{Ob}(\mathcal{C})$$, in other words $$\texttt{yes} = \mathcal{C}(x, x)$$ 2. $$\text{min}(\mathcal{C}(x, y), \mathcal{C}(y, z)) \leq \mathcal{C}(x, z)$$ for $$x, y, z \in \text{Ob}(\mathcal{C})$$ Condition (a) means all reflexive morphisms have the valuation $$\texttt{yes}$$. Condition (b) means for two valued consecutive morphisms, we can determine the value of the transitive morphism. $$p \rightarrow q$$ $$q \rightarrow r$$ $$\text{min}(p \rightarrow q, q \rightarrow r)$$ possible values of $$p \rightarrow r$$ $$\texttt{no}$$ $$\texttt{no}$$ $$\texttt{no}$$ $$\{\texttt{no},\texttt{maybe},\texttt{yes}\}$$ $$\texttt{no}$$ $$\texttt{maybe}$$ $$\texttt{no}$$ $$\{\texttt{no},\texttt{maybe},\texttt{yes}\}$$ $$\texttt{no}$$ $$\texttt{yes}$$ $$\texttt{no}$$ $$\{\texttt{no},\texttt{maybe},\texttt{yes}\}$$ $$\texttt{maybe}$$ $$\texttt{no}$$ $$\texttt{no}$$ $$\{\texttt{no},\texttt{maybe},\texttt{yes}\}$$ $$\texttt{maybe}$$ $$\texttt{maybe}$$ $$\texttt{maybe}$$ $$\{\texttt{maybe},\texttt{yes}\}$$ $$\texttt{maybe}$$ $$\texttt{yes}$$ $$\texttt{maybe}$$ $$\{\texttt{maybe},\texttt{yes}\}$$ $$\texttt{yes}$$ $$\texttt{no}$$ $$\texttt{no}$$ $$\{\texttt{no},\texttt{maybe},\texttt{yes}\}$$ $$\texttt{yes}$$ $$\texttt{maybe}$$ $$\texttt{maybe}$$ $$\{\texttt{maybe},\texttt{yes}\}$$ $$\texttt{yes}$$ $$\texttt{yes}$$ $$\texttt{yes}$$ $$\{\texttt{yes}\}$$ Interpretation: the $$\mathbf{NMY}$$-category is where the possible values of $$p \rightarrow r$$ are the upper set (EG 1.54) of $$\text{min}(p \rightarrow q, q \rightarrow r)$$. • EX 2.75 1. For every $$(x,y) \in \text{Ob}(\mathcal{C} \times \mathcal{D})$$ $$$\begin{split} & &&I &&& \\ &= &&I \otimes I &&&\text{(D 2.2 (b))} \\ &\leq &&\mathcal{C}(x,x) \otimes \mathcal{D}(y,y) &&&\text{(D 2.46 (a))} \\ &= &&(\mathcal{C} \times \mathcal{D})((x,y),(x,y)) &&&\text{(D 2.74 (ii))} \\ \end{split}$$$ 2. 
For every $$(x_1,y_1), (x_2,y_2), (x_3,y_3) \in \text{Ob}(\mathcal{C} \times \mathcal{D})$$ $$$\begin{split} & &&(\mathcal{C} \times \mathcal{D})((x_1,y_1),(x_2,y_2)) \otimes (\mathcal{C} \times \mathcal{D})((x_2,y_2),(x_3,y_3)) \\ &= &&(\mathcal{C}(x_1,x_2) \otimes \mathcal{D}(y_1,y_2)) \otimes (\mathcal{C}(x_2,x_3) \otimes \mathcal{D}(y_2,y_3)) &&&\text{(D 2.74 (ii))} \\ &= &&(\mathcal{C}(x_1,x_2) \otimes \mathcal{C}(x_2,x_3)) \otimes (\mathcal{D}(y_1,y_2) \otimes \mathcal{D}(y_2,y_3)) &&&\text{(D 2.2 (b,c))} \\ &\leq &&\mathcal{C}(x_1,x_3) \otimes \mathcal{D}(y_1,y_3) &&&\text{(D 2.46 (b))} \\ &= &&(\mathcal{C} \times \mathcal{D})((x_1,y_1),(x_3,y_3)) &&&\text{(D 2.74 (ii))} \\ \end{split}$$$
• EX 2.94: $$(\text{P}(S), \subseteq, S, \cap)$$ is a quantale, since it has all joins: for any set of subsets $$A \subseteq \text{P}(S)$$, we define $$\bigvee A := \bigcup A$$, which satisfies
1. $$a \subseteq \bigvee A$$ for all $$a \in A$$
2. if $$b \in \text{P}(S)$$ is any element s.t. $$a \subseteq b$$ for all $$a \in A$$, then $$\bigvee A \subseteq b$$.
For any two subsets of $$\text{P}(S)$$, say $$X$$ and $$Y$$, $$\bigvee \{X,Y\} = X \cup Y$$, which satisfies both (1) and (2). For the empty set, (2) requires $$\bigvee \varnothing \subseteq b$$ for all $$b \in \text{P}(S)$$, so $$\bigvee \varnothing = \varnothing$$.
• Ex 2.104:
1. For any sets $$X, Y$$ and $$\mathcal{V}$$-matrix $$M : X \times Y \rightarrow V$$ $$$\begin{split} & &&I_X M (x,y) &&& \\ &= &&\bigvee_{x' \in X} I(x,x') \otimes M(x',y) &&&\text{(EQ 2.99)} \\ &= &&I \otimes M(x,y) &&&\text{(def. of identity \mathcal{V}-matrix)} \\ &= &&M(x,y) &&&\text{(D 2.2 (b))} \\ \end{split}$$$
2. For any $$\mathcal{V}$$-matrices $$M : W \times X \rightarrow V$$, $$N : X \times Y \rightarrow V$$, $$P : Y \times Z \rightarrow V$$ $$$\begin{split} & &&(MN) P (w,z) &&& \\ &= &&\bigvee_{y \in Y} (MN)(w,y) \otimes P(y,z) &&&\text{(EQ 2.101)} \\ &= &&\bigvee_{y \in Y} (\bigvee_{x \in X} M(w,x) \otimes N(x,y)) \otimes P(y,z) &&&\text{(EQ 2.101)} \\ &= &&\bigvee_{x \in X} M(w,x) \otimes (\bigvee_{y \in Y} N(x,y) \otimes P(y,z)) &&&\text{(D 2.2 (c,d))} \\ &= &&M (NP) (w,z) &&&\text{(EQ 2.101)} \\ \end{split}$$$
# Boundary Value Problems/Introduction

An example of a boundary value problem in one dimension is the second order linear differential equation: ${\displaystyle ay''+by'+cy=0}$ with the end conditions of ${\displaystyle y(0)=0}$ and ${\displaystyle y(L)=0}$.

For a simple problem let a=1, b=0, c=1 and ${\displaystyle L=\pi }$. The resulting differential equation is ${\displaystyle y''+y=0}$ with boundary conditions ${\displaystyle y(0)=0}$ and ${\displaystyle y(\pi )=0}$. A solution is ${\displaystyle y(x)=\sin({{\pi x} \over L})}$. A plot of this solution is shown below. Note that the solution satisfies the boundary conditions.

For more information about ordinary differential equations and methods for solving them, use the following link: ODE
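A minimal numerical sketch (assuming SciPy is available; not part of the original page) of solving this boundary value problem with scipy.integrate.solve_bvp. Because the problem is homogeneous, a nonzero initial guess is needed to keep the solver away from the trivial solution y = 0:

    import numpy as np
    from scipy.integrate import solve_bvp

    def fun(x, y):
        # first-order system for y'' + y = 0: y[0] = y, y[1] = y'
        return np.vstack([y[1], -y[0]])

    def bc(ya, yb):
        # boundary conditions y(0) = 0 and y(pi) = 0
        return np.array([ya[0], yb[0]])

    x = np.linspace(0.0, np.pi, 50)
    y_guess = np.zeros((2, x.size))
    y_guess[0] = np.sin(x)  # nonzero guess; an all-zero guess returns y = 0

    sol = solve_bvp(fun, bc, x, y_guess)
    print(sol.success)   # True
    print(sol.y[0][:3])  # proportional to sin(x) on the mesh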
# I am boxing sequences

Algebra Level 5

Let $$\{ x _n \}$$ be a sequence defined by $${ x }_{ k+1 }={ { x_{ k } }^{ 2 } } +{ x }_{ k }$$ with $$x_1 = \frac 1 2$$. Find the greatest integer less than or equal to the expression below.

$\large \frac { 1 }{ { x }_{ 1 }+1 } +\frac { 1 }{ { x }_{ 2 }+1 } + \ldots + \frac { 1 }{ { x }_{ 100 }+1 }$
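A worked sketch of the standard route (not part of the original problem statement): the recurrence telescopes. Since $$x_{k+1} = x_k^2 + x_k = x_k(x_k+1)$$, we get $$\frac{1}{x_{k+1}} = \frac{1}{x_k(x_k+1)} = \frac{1}{x_k} - \frac{1}{x_k+1},$$ i.e. $$\frac{1}{x_k+1} = \frac{1}{x_k} - \frac{1}{x_{k+1}}$$. The sum therefore collapses to $$\frac{1}{x_1} - \frac{1}{x_{101}} = 2 - \frac{1}{x_{101}}$$. The sequence is increasing with $$x_3 = \frac{21}{16} > 1$$, so $$0 < \frac{1}{x_{101}} < 1$$ and the greatest integer is $$1$$.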
MathSciNet bibliographic data
MR859505 (88a:28003) 28A05 (28A12 28C15)
Haase, H. Non-$\sigma$-finite sets for packing measure. Mathematika 33 (1986), no. 1, 129–136.
## Tuesday, September 24, 2013

### *sighs*

ye gods, i hate asking for money.

it's clearer to me now that there is a "rat race" to academia in general and to the sciences in particular. more and more i envision a future where i'll never stop writing grants and there will always be another meeting to sit in, another memorandum that i should have read (but have skimmed over, at best).

for a while i've wondered if i was cut out to be a mathematician, but i've made my peace with it now. it's been long enough that, if i wasn't going to cut it, then i would probably be doing something else by now.

i'm starting to wonder, though, if i'm cut out to be a professional mathematician. the research is fine and the teaching, though time-consuming, is also fine and often enough fulfilling (if not enjoyable). as for the grants .. and the applications .. and the meetings, and so on; i can see why many faculty "give up" upon earning tenure.

these professional aspects of the job were never advertised to me, as a ph.d. student; maybe the advisor was deliberately putting it in the background, if only so that we could have a greater focus, when working together. as a postdoc there seemed more and more of it, when discussing the nature of work with my colleagues.

who knows? maybe i've just always been naive; my colleagues, near and far, seem quite able to maintain research as their primary focus and if anything, shape their other duties to complement this one singular priority. more and more i find this admirable.

maybe i'm just too new to this position, that these are all just growing pains, and that these shall pass with time and enough patience and a little humor. i don't know and it's hard to say.

i'm not giving up. it's just that i can see why others do.

## Friday, September 20, 2013

### ANH: from end to start, for now.

so it feels like ages since i last thought about a blog post of any kind. it seems like there's so much to say, but at the same time, none of it is really worth mentioning. that's always the difficulty of beginning a story at the beginning ..

.. so, being lazy at the moment, i'll not. i'll begin at the ending instead, which is today.

so today i gave a lecture about metric spaces to my students. it's a first course in analysis and the textbook [1] happens to cover the topic, which to me sounds like a license to expound on it for 75 minutes. so i showed them the discrete metric on any set, and how the unit circle would look if the set were the euclidean plane. i showed them the L-infinity norm, how the unit circle looks like the usual unit square, and how short the proof is for its triangle inequality. this is in contrast to how the proof of the triangle inequality goes for the usual L-2 distance, which uses Cauchy-Schwarz and in turn, a nod to Pythagoras's theorem.

i thought it was cool. it would be the kind of lecture that would have inspired me as a student .. but i don't know. i'm getting to know the students in my class, but i'm still learning all the time.

[1] we're using baby Rudin.

## Thursday, September 19, 2013

### ARR! more machine now, than man .. twisted and evil.

"On the one hand, today's computers feature programming and writing tools more powerful than anything available in the twentieth century. But, in a different way, each of these tasks would be much harder: on a modern machine, each man would face a more challenging battle with distraction ...
Kafka, Kerouac, and Wozniak had one advantage over us: they worked on machines that did not readily do more than one thing at a time, easily yielding to our conflicting desires. And, while distraction was surely available—say, by reading the newspaper, or chatting with friends—there was a crucial difference. Today’s machines don’t just allow distraction; they promote it. The Web calls us constantly, like a carnival barker, and the machines, instead of keeping us on task, make it easy to get drawn in—and even add their own distractions to the mix. In short: we have built a generation of “distraction machines” that make great feats of concentrated effort harder instead of easier." ~ from "HOW TODAY'S COMPUTERS WEAKEN OUR BRAIN @newyorker " ## Sunday, September 15, 2013 ### on how our choices can haunt us later. if i sit and think about it, then it feels i have a lot to say about the last two weeks, of this new job, at this university. i don't know where to begin, though; if i start now, then it will all come out as chaos. maybe i've been writing too many lectures lately, and habit urges me to put some order or narrative into it. after all, life is simply a sequence of events; any additional order or structure on it is an inherently human contribution. my guess is that it will take months for me to make sense of it all: these experiences, mistakes, small joys, and frequent setbacks. (i don't know.) as for something small to share .. .. during the first lecture of multivariable calculus, on a whim i decided to pronounce the letter z as zed, just like how they seem to do in europe and the u.k. as a result, now i feel compelled to be consistent and remember, from now on, to refer to the vertical axis (in 3 dimensions) as the 'zed-axis" .. or else risk being caught as a pretentious snob! ## Saturday, September 14, 2013 ### ARR!.. apparently i still have more trigοnometry to learn. well, i learned something new today: It sounds cumbersome now, but doing multiplication by hand requires a lot more operations than addition does. When each operation takes a nontrivial amount of time (and is prone to a nontrivial amount of error), a procedure that lets you convert multiplication into addition is a real time-saver, and it can help increase accuracy. The secret trig functions, like logarithms, made computations easier. Versine and haversine [1] were used the most often. Near the angle $\theta = 0$, $\cos(\theta)$ is very close to $1$. If you were doing a computation that had $1-\cos(\theta)$ in it, your computation might be ruined if your cosine table didn’t have enough significant figures. To illustrate, the cosine of $5$ degrees is $0.996194698$, and the cosine of $1$ degree is $0.999847695$. The difference $\cos(1^o)-\cos(5^o)$ is $0.003652997$. If you had three significant figures in your cosine table, you would only get 1 significant figure of precision in your answer, due to the leading zeroes in the difference. And a table with only three significant figures of precision would not be able to distinguish between 0 degree and 1 degree angles. In many cases, this wouldn’t matter, but it could be a problem if the errors built up over the course of a computation. ~ from "10 Secret Trig Functions Your Math Teachers Never Taught You" @sciam in other news: it's been more than two weeks into this new job, and i still feel disoriented. often i feel exhausted, too. on the bright side: i finally found an expensive apartment and signed a lease .. 
after a month of searching (and simultaneously teaching, for the last 2 1/2 weeks).

[1] these are defined, respectively, as $\textrm{versin}(\theta) = 1-\cos(\theta)$ and $\textrm{haversin}(\theta) = \frac{1}{2}\textrm{versin}(\theta)$. suggestively, "ha" means half.
# Convex analysis: possible misunderstanding about dual space From Wikipedia:https://en.wikipedia.org/wiki/Convex_conjugate Let ${\displaystyle X}$ be a real topological vector space, and let ${\displaystyle X^{*}}$ be the dual space to ${\displaystyle X}$. Denote the dual pairing by ${\displaystyle \langle \cdot ,\cdot \rangle :X^{*}\times X\to \mathbb {R} .}$ For a functional ${\displaystyle f:X\to \mathbb {R} \cup \{+\infty \}}$ taking values on the extended real number line, the convex conjugate ${\displaystyle f^{\star }:X^{*}\to \mathbb {R} \cup \{+\infty \}}$ is defined in terms of the supremum by ${\displaystyle f^{\star }\left(x^{*}\right):=\sup \left\{\left.\left\langle x^{*},x\right\rangle -f\left(x\right)\right|x\in X\right\}}$ I am still bothered by the idea of a "dual space", because from the point of view of convex analysis, it seems that the idea of a dual space is tied in with the idea of a convex conjugate, whereas the dual space seems to be a much more general concept (i.e. the space of linear functional). For instance, we know that $\mathbb{R}^n$ is a Hilbert space, hence $(\mathbb{R}^n, \|\cdot\|_2)$ is the dual of itself. However, convex analysis is giving me some un-intuitive results. For instance, in example 5.1 of http://www.doc.ic.ac.uk/~ahanda/lfreport.pdf We are given $f(y) = \|y\|$, presumably the $2$-norm and the function is defined over all of $\mathbb{R}^n$, we find the conjugate to be $f^*(z) = 0, \|z\|\leq 1$. Now if you had told me that the dual space to $X = (\mathbb{R}^n, \|\cdot\|_2)$ is the set $X^* = \{z\in \mathbb{R}^n | \|z\|_2\leq 1\}$, I think I will have a difficulty understanding you, but isn't this what the example above is showing? Can someone please help me understand the concept of a dual space? • Are you sure that you are not confusing the terms "dual space" and "convex conjugate"? They are completely different. The superscript star on a convex function $f$ does NOT mean that $f^\star$ is a dual vector. Instead, it is a function defined on the dual space (or on a subset of it). – Giuseppe Negro Mar 22 '17 at 17:38 • @GiuseppeNegro In wikipedia's definition, it says $X^*$ is the dual space of $X$. I am confused as to why those $X^*$ in the examples are the dual spaces of $X$. – Carlos - the Mongoose - Danger Mar 22 '17 at 17:40 • $X^*$ is the dual space of $X$. In your examples, $f^\star$ is given by a certain formula on a certain subset of $X^*$, and is $+\infty$ on the complement of that subset. – Robert Israel Mar 22 '17 at 17:43 ## 1 Answer In your first example, $$f^*(z) = \cases{0 & if \|z\| \le 1\cr +\infty & otherwise}$$ That doesn't mean $\{z: \|z\| \le 1\} = X^*$. It is just the subset of $X^*$ on which this particular conjugate functional is finite.
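For reference, here is the computation behind that example, sketched with $\|\cdot\|_*$ denoting the dual norm (for the Euclidean norm, $\|\cdot\|_* = \|\cdot\|_2$): $$f^{\star}(z) = \sup_{x \in X}\left(\langle z, x\rangle - \|x\|\right).$$ If $\|z\|_* \leq 1$, then $\langle z, x\rangle \leq \|z\|_*\|x\| \leq \|x\|$ for every $x$, so the supremum is $0$, attained at $x = 0$. If $\|z\|_* > 1$, choose $x_0$ with $\langle z, x_0\rangle > \|x_0\|$ and scale $x = t x_0$ with $t \to \infty$; the objective grows without bound, so $f^{\star}(z) = +\infty$. The unit ball $\{z : \|z\|_* \leq 1\}$ is thus the effective domain of $f^{\star}$ (the set where it is finite), not the dual space $X^*$ itself.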
# Sam and Chris leave City A for City B simultaneously at 6 A.M.

Intern
Joined: 24 Jan 2014
Posts: 10
Schools: SMU Singapore

### Show Tags

Updated on: 08 Apr 2014, 03:35

Sam and Chris leave City A for City B simultaneously at 6 A.M., driving two cars at speeds of 60 mph and 80 mph respectively. As soon as Chris reaches City B, he turns back toward City A along the same route and meets Sam on the way back. If the distance between the two cities is 210 miles, how far from City A did Sam and Chris meet?

A. 60 miles
B. 120 miles
C. 30 miles
D. 180 miles
E. 150 miles

Originally posted by Safal on 08 Apr 2014, 03:13. Last edited by MacFauz on 08 Apr 2014, 03:35, edited 1 time in total.

Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 9442
Location: Pune, India

### Show Tags

08 Apr 2014, 04:58

You can use ratios to solve it easily. Say they meet x miles away from City A. Then Sam has covered a distance of x miles, and Chris has covered a distance of 210 + (210 - x) = (420 - x) miles. The speeds of Sam and Chris are in the ratio 60:80, i.e. 3:4, so in the same time the distances they cover are in the ratio 3:4 too.

x/(420 - x) = 3/4
7x = 1260
x = 180 miles

For more on ratios used in TSD, check: http://www.veritasprep.com/blog/2011/03 ... os-in-tsd/

Karishma
Veritas Prep GMAT Instructor

VP
Joined: 02 Jul 2012
Posts: 1153
Location: India

### Show Tags

08 Apr 2014, 03:37

Time taken by Chris to reach City B = 210/80 hours, which is more than 2.5 hours. In 2.5 hours, Sam travels 60 * 2.5 = 150 miles, so the distance at which they meet must be greater than 150 miles. Only D satisfies this.

SVP
Joined: 24 Jul 2011
Posts: 1791
### Show Tags

08 Apr 2014, 12:35

Equate the times taken, which are equal for the two drivers up to the meeting point:

x/60 = [210 + (210 - x)]/80
=> x = 180

Option (D).

SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1787
Location: India

### Show Tags

08 Apr 2014, 20:25

Say they meet at point P, a distance x from the start. Both take the same time to reach point P. Setting up the equation:

$$\frac{x}{60} = \frac{210}{80} + \frac{210-x}{80}$$

$$\frac{x}{3} = \frac{420-x}{4}$$

x = 180

(The original post attached a diagram, dis.jpg, sketching the route and the meeting point P.)

SVP
Joined: 24 Jul 2011
Posts: 1791

### Show Tags

11 Apr 2014, 20:48

When Sam and Chris meet, they have collectively travelled 210 * 2 = 420 miles, so the distance from City A at which they meet = 60 * (420/140) = 180 miles.

Option (D).
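As a quick cross-check of the algebra in the solutions above, here is a tiny script (my own, not from the thread) that computes the meeting point directly:

```python
# Direct computation (not from the thread) cross-checking the answer.
D = 210.0                     # miles between City A and City B
v_sam, v_chris = 60.0, 80.0   # mph

t_turn = D / v_chris          # Chris reaches B at t = 2.625 h
# After turning, Chris's position from A is D - v_chris*(t - t_turn);
# Sam's position is v_sam*t. Setting them equal gives the meeting time:
#   v_sam*t = D - v_chris*(t - t_turn)  =>  t = (D + v_chris*t_turn)/(v_sam + v_chris)
t_meet = (D + v_chris * t_turn) / (v_sam + v_chris)   # = 3.0 h
print(v_sam * t_meet)         # 180.0 miles from City A -> option (D)
```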
MathSciNet bibliographic data

MR2193436 35J15 (47F05)
Brusentsev, A. G. Self-adjointness of elliptic differential operators in $L_2(G)$ and correcting potentials. (Russian) Tr. Mosk. Mat. Obs. 65 (2004), 35-68; translation in Trans. Moscow Math. Soc. 2004, 31-61.

For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
# ISO 286-1

## Geometrical product specifications (GPS) - ISO code system for tolerances on linear sizes - Part 1: Basis of tolerances, deviations and fits

Organization: ISO
Publication Date: 15 April 2010
Status: active (most current)
Page Count: 46
ICS Code (Limits and fits): 17.040.10

##### scope:

This part of ISO 286 establishes the ISO code system for tolerances to be used for linear sizes of features of the following types: a) cylinder; b) two parallel opposite surfaces. It defines the basic concepts and the related terminology for this code system. It provides a standardized selection of tolerance classes for general purposes from amongst the numerous possibilities. Additionally, it defines the basic terminology for fits between two features of size without constraints of orientation and location and explains the principles of "basic hole" and "basic shaft".

### Document History

August 1, 2013 - Geometrical product specifications (GPS) - ISO code system for tolerances on linear sizes - Part 1: Basis of tolerances, deviations and fits; TECHNICAL CORRIGENDUM 1.

April 15, 2010 - ISO 286-1, Geometrical product specifications (GPS) - ISO code system for tolerances on linear sizes - Part 1: Basis of tolerances, deviations and fits (the current edition; scope as above).

January 1, 1988 - ISO System of Limits and Fits - Part 1: Bases of Tolerances, Deviations and Fits. This part of ISO 286 gives the bases of the ISO system of limits and fits together with the calculated values of the standard tolerances and fundamental deviations. These values shall be taken as...
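To make the "basic hole" principle concrete, here is a small illustrative computation (my own, not part of the standard's text). The deviation values used are the commonly quoted table values for a 25 mm H7/g6 clearance fit; treat them as assumed example inputs and verify against the ISO 286-2 tables before relying on them.

```python
# Illustrative hole-basis fit computation (not from the standard).
# Assumed example deviations for a 25 mm H7 hole and g6 shaft:
hole_lo, hole_hi = 0.000, 0.021      # H7: EI = 0 (basic hole), ES = +21 um
shaft_lo, shaft_hi = -0.020, -0.007  # g6: ei = -20 um, es = -7 um

basic = 25.0  # nominal size in mm

# Clearance bounds: smallest hole vs largest shaft, largest hole vs smallest shaft.
min_clearance = (basic + hole_lo) - (basic + shaft_hi)   # 0.007 mm
max_clearance = (basic + hole_hi) - (basic + shaft_lo)   # 0.041 mm

print(f"25 H7/g6: clearance from {min_clearance:.3f} to {max_clearance:.3f} mm")
```

Because the hole's lower deviation is zero, the hole is the "basic" feature and the fit character is controlled entirely by the shaft's tolerance class, which is the point of the hole-basis system.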
# When the length of a simple pendulum is doubled, find the ratio of the new frequency to the old frequency.

Aug 5, 2018

The answer is $\frac{1}{\sqrt{2}}$.

#### Explanation:

The period $T$ of a simple pendulum is

$T = 2\pi\sqrt{\frac{l}{g}}$

where $l$ is the length of the pendulum and $g$ is the acceleration due to gravity. The frequency is

$f = \frac{1}{T} = \frac{1}{2\pi}\sqrt{\frac{g}{l}}$

Squaring both sides,

$f^2 = \frac{1}{4\pi^2}\cdot\frac{g}{l}$

Suppose $l_1 = 2l$. Then

$f_1^2 = \frac{1}{4\pi^2}\cdot\frac{g}{2l}$

so

$\frac{f_1^2}{f^2} = \frac{\frac{1}{4\pi^2}\cdot\frac{g}{2l}}{\frac{1}{4\pi^2}\cdot\frac{g}{l}} = \frac{1}{2}$

$\frac{f_1}{f} = \frac{1}{\sqrt{2}}$
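A quick numerical check of the ratio (my addition, not part of the original answer); any values of $g$ and $l$ work, since the ratio is universal:

```python
import math

# Frequency ratio when l -> 2l; the specific g and l cancel in the ratio.
g, l = 9.81, 1.0
f = lambda length: math.sqrt(g / length) / (2 * math.pi)
print(f(2 * l) / f(l), 1 / math.sqrt(2))  # both ~0.70710678
```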
# Figure 7

Near-side jet-like peak yield as a function of $p^{\rm LP}_{\rm T,min}$ (left) and $p^{\rm jet}_{\rm T,min}$ (right). The filled circles show the measurement with ALICE. The statistical and systematic uncertainties are shown as vertical bars and boxes, respectively. The measurements are compared with model descriptions from PYTHIA, PYTHIA with shoving, and EPOS for both selections. The total uncertainty of the ratio is represented by the gray band centered around unity.
### Presentation evaluation

Presenter: _____________________________________________

1. What were the main ideas of the paper?
2. Your questions for the presenter:
3. Did you understand the algorithm?
4. Could you implement the idea?
5. Were the slides clear and understandable?
   Poor 1 ..... 2 ..... 3 ..... 4 ..... 5 Good
6. Was the presenter's speaking clear and understandable?
   Poor 1 ..... 2 ..... 3 ..... 4 ..... 5 Good
7. How was the presenter's eye contact with the audience?
   Poor 1 ..... 2 ..... 3 ..... 4 ..... 5 Good
8. Rate the overall quality of the presentation:
   Poor 1 ..... 2 ..... 3 ..... 4 ..... 5 Good
9. Was the presenter able to answer any questions clearly?
10. What could improve the presentation?
# Sfondi Neutri

We share a collection of sfondi neutri (neutral backgrounds) suited to many purposes. You can use them as your wall background or as device screen wallpaper.

Sfondi Neutri

Whether you are looking for a background idea for your wall, mobile phone, or computer, our sfondi neutri will inspire you. All of the background ideas available here look beautiful, and they come in high resolutions. You may already be familiar with neutral colors, but do you know what they actually are? In this article we discuss them briefly, and at the end we provide a collection of the best neutral-themed background ideas.

What Are Sfondi Neutri?

Neutral colors are muted colors that lack a strong hue, such as white, gray, beige, and taupe. There are a few reasons why neutral colors work well as a background. First, neutral colors are calming. They also suit any style and can be matched with almost any pattern.

Read: Sfondi Hype

Download Sfondi Neutri

If you are interested in our collections, feel free to pick your desired background. We list the 20 best backgrounds with neutral colors below.

1. Neutral Colors Background. Image source: wallpaperaccess.com
2. Neutral Soft Abstract Watercolor Background. Image source: www.rawpixel.com
3. Neutral Background Picture. Image source: unsplash.com
4. Neutral Background Vector Art. Image source: www.vecteezy.com
5. Horizontal Neutral Background with Abstract Shapes. Image source: www.istockphoto.com
6. Abstract Wallpaper Design for iPhone. Image source: www.pinterest.com
7. Neutral Computer Wallpaper. Image source: wallpaperaccess.com
8. Neutral Black Background. Image source: unsplash.com
9. Neutral Aesthetic Background. Image source: www.pinterest.com
10. Neutral Colored Home Design Background. Image source: www.ppt-backgrounds.net
11. Mosaic Ceramic Tiles Neutral Background. Image source: www.dreamstime.com
12. Neutral Background Illustration. Image source: www.istockphoto.com
13. Abstract Stylish Marble Neutral Background. Image source: www.123rf.com
14. Neutral Abstract Texture Minimal Background. Image source: wallpaperaccess.com
15. Neutral Curvy Lines Modern Zen Garden Wallpaper. Image source: wallpapersafari.com
16. Seamless Rainbow Gender Neutral Background. Image source: www.vectorstock.com
17. Neutral Geometric Seamless Pattern Background. Image source: www.dreamstime.com
18. Neutral Peach Solid Color Background. Image source: www.123freevectors.com
19. Gray Neutral Background. Image source: www.dreamstime.com
20. Royalty-free Neutral Background. Image source: www.pxfuel.com

Those are the best options for anyone searching for neutral-colored backgrounds. You can download them for free and use them anytime you want.
Just a simple question #2

Given that $x$ and $y$ are nonzero real numbers satisfying

$$xy = \frac{x}{y} = x - y,$$

what is the value of $x + y$?
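The puzzle page gives no solution, so here is one quick way to see the answer (my own working). From $xy = x/y$ with $x \neq 0$ we get $y^2 = 1$. Taking $y = 1$ forces $x = x - 1$, which is impossible, so $y = -1$; then $xy = x - y$ becomes $-x = x + 1$, giving $x = -\tfrac{1}{2}$ and hence $x + y = -\tfrac{3}{2}$. A symbolic check:

```python
from sympy import symbols, solve, Eq

# My own check of the puzzle (the page itself gives no solution).
x, y = symbols('x y', real=True, nonzero=True)
sols = solve([Eq(x*y, x/y), Eq(x/y, x - y)], [x, y], dict=True)
print(sols)                         # expected: [{x: -1/2, y: -1}]
print([s[x] + s[y] for s in sols])  # expected: [-3/2]
```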
NA Digest, V. 17, # 19

## NA Digest Sunday, July 30, 2017 Volume 17 : Issue 19

Today's Editor:
Daniel M. Dunlavy
Sandia National Labs
[email protected]

Today's Topics:

Subscribe, unsubscribe, change address, or for na-digest archives: http://www.netlib.org/na-digest-html/faq.html

### Submissions for NA Digest: http://icl.utk.edu/na-digest/

Date: July 30, 2017
Subject: IMPORTANT NOTE: Recent issues with NA Digest delivery

Recent changes to the email servers used for the NA Digest have led to increased message header sizes. For many subscribers, this has led to messages being rejected by their email servers and then the subscription being deleted. We are currently addressing this issue, and we request your patience as we try to find a long-term solution that will be acceptable. For the next several issues of the Digest, messages will be sent out using my email address. Until we have settled on a long-term solution, the processes for submissions and archived materials will remain unchanged. Submissions will still be accepted using the existing web form at http://icl.utk.edu/na-digest/websubmit.html. Archives will still be posted at http://www.netlib.org/na-digest-html/. Subscriptions will still be handled using the current mechanisms as well until further notice.

Best regards, Danny Dunlavy (NA Digest Editor)

From: Michael Heath [email protected]
Date: July 23, 2017
Subject: Textbook Availability, Scientific Computing: An Introductory Survey

For instructors planning to use my textbook Scientific Computing: An Introductory Survey for classes this Fall, copies of the domestic hardbound edition are no longer available from the publisher's warehouse, but it remains available from McGraw-Hill's print-on-demand and ebook service through the following portal: http://create.mheducation.com, where it is preloaded and can be found by a simple search. If there are any problems accessing or ordering the book, please contact me.

From: Michael C. Grant, Ph.D. [email protected]
Date: July 22, 2017
Subject: Call for Nominations, 2018 Beale-Orchard-Hays Prize

The Beale-Orchard-Hays Prize for Excellence in Computational Mathematical Programming is sponsored by the Mathematical Optimization Society, in memory of Martin Beale and William Orchard-Hays, pioneers in computational mathematical programming. Nominated works must have been published between Jan 1, 2012 and Dec 31, 2017, and demonstrate excellence in any aspect of computational mathematical programming. "Computational mathematical programming" includes the development of high-quality mathematical programming algorithms and software, the experimental evaluation of mathematical programming algorithms, and the development of new methods for the empirical testing of mathematical programming techniques.

The members of the 2018 Beale-Orchard-Hays Prize committee are:
Michael Grant (Chair), CVX Research
Tobias Achterberg, Gurobi Optimization
Jeff Linderoth, University of Wisconsin
Petra Mutzel, University of Dortmund
Ted Ralphs, Lehigh University

Nominations should include detailed publication details of the nominated work, and should include an attachment with the final published version of the work. Supporting justification and any supplementary material are strongly encouraged but not mandatory. The prize committee reserves the right to request further supporting material and justification from the nominees. The deadline for nominations is January 15, 2018, and nominations should be submitted to Dr. Michael Grant at [email protected].
Full details of prize rules and eligibility requirements can be found at http://www.mathopt.org/?nav=boh.

From: Dirk Laurie [email protected]
Date: July 19, 2017
Subject: Diagnosis of the source of the error in certain published tables

Three months ago Walter Gautschi alerted me to the fact that the tables published in

@article{LR79, author="D. P. Laurie and L. Rolfes", title="Computation of {G}aussian Quadrature Rules from Modified Moments", journal=JCAM, volume=5, year=1979, pages="235 - 242"}

are not accurate to the 25 digits given, but in fact to only about 14 digits (single precision on the CDC Cyber on which the calculation was done). Briefly, the algorithm given in the paper is correct, but was given inaccurate input. Neither the original hardware with its compiler, nor machine-readable versions of the source code, are still available nearly forty years after the original computation. We therefore need to resort to indirect methods. I have verified that the modified moments reconstructed from the formulas agree with each other to 24 to 25 digits across the n-point formulas as given for n=5,10,20,30, but agree with the correct modified moments to only about 14 digits. The original modified moments were generated by a very simple linear inhomogeneous recursion relation, but the reconstructed modified moments satisfy that recursion relation to only about 14 digits. The obvious explanation is that one or more of the numeric constants in the recursion relation was overlooked in the conversion to double precision, thus introducing an unpredictable perturbation after about 14 digits each time that constant was used. A 4.0 instead of a 4D0, that's all it takes. A Pari-GP routine for the generation of the Jacobi matrix from the modified moments by the algorithm of the paper is available on request.

From: Irina Sokolova (Intel) [email protected]
Date: July 25, 2017
Subject: New Eigensolver for Hermitian matrices

Intel(R) Math Kernel Library (Intel(R) MKL) Extended Eigensolver is based on the accelerated subspace iteration FEAST algorithm [1]. This eigensolver calculates all of the eigenvalues, and optionally all the associated eigenvectors, within a specified interval. The Intel MKL team has implemented new preprocessing functions that return the interval containing the k largest or smallest eigenvalues, using techniques described in [2]. These functions are now available for early evaluation as part of a special library that must be linked against an existing Intel MKL library. Go to [3] to read more about this new library and how to request an evaluation copy.

[1] Intel MKL Extended Eigensolver: https://software.intel.com/en-us/node/521730
[2] E. Di Napoli, E. Polizzi, Y. Saad, Efficient Estimation of Eigenvalue Counts in an Interval
[3] https://software.intel.com/en-us/articles/intel-mkl-support-for-extrenal-eigenvalue-problem

From: Bruce Bailey [email protected]
Date: July 19, 2017
Subject: New Book, An Introduction to Data Analysis and UQ for Inverse Problems

An Introduction to Data Analysis and Uncertainty Quantification for Inverse Problems, Luis Tenorio, Mathematics in Industry 03

To solve an inverse problem is to recover an object from noisy, usually indirect observations.
Solutions to inverse problems are subject to many potential sources of error introduced by approximate mathematical models, regularization methods, numerical approximations for efficient computations, noisy data, and limitations in the number of observations; thus it is important to include an assessment of the uncertainties as part of the solution. Such assessment is interdisciplinary by nature, as it requires, in addition to knowledge of the particular application, methods from applied mathematics, probability, and statistics. This book bridges applied mathematics and statistics by providing a basic introduction to probability and statistics for uncertainty quantification in the context of inverse problems, as well as an introduction to statistical regularization of inverse problems. The author covers basic statistical inference, introduces the framework of ill-posed inverse problems, and explains statistical questions that arise in their applications. For the table of contents, preface, and index, please visit http://bookstore.siam.org/MN03/.

From: Bruce Bailey [email protected]
Date: July 19, 2017
Subject: New Book, Formulation & Numerical Solution of Quantum Control Problems

Formulation and Numerical Solution of Quantum Control Problems, Alfio Borzi, Gabriele Ciaramella, and Martin Sprengel, Computational Science and Engineering 16
x + 390 pages / Hardcover / ISBN 978-1-611974-83-6
List Price $99.00 / SIAM Member Price $69.30 / Order Code CS16

This book provides an introduction to representative nonrelativistic quantum control problems and their theoretical analysis and solution via modern computational techniques. The quantum theory framework is based on the Schrodinger picture, and the optimization theory, which focuses on functional spaces, is based on the Lagrange formalism. The computational techniques represent recent developments that have resulted from combining modern numerical techniques for quantum evolutionary equations with sophisticated optimization schemes. Both finite and infinite-dimensional models are discussed, including the three-level Lambda system arising in quantum optics, multispin systems in NMR, a charged particle in a well potential, Bose-Einstein condensates, multiparticle spin systems, and multiparticle models in the time-dependent density functional framework. This self-contained book covers the formulation, analysis, and numerical solution of quantum control problems and bridges scientific computing, optimal control and exact controllability, optimization with differential models, and the sciences and engineering that require quantum control methods. For the table of contents, preface, and index, please visit http://bookstore.siam.org/CS16/.

From: Bruce Bailey [email protected]
Date: July 19, 2017
Subject: New Book, Foundations of Applied Mathematics, Vol. 1: Mathematical Analysis

Foundations of Applied Mathematics, Volume 1: Mathematical Analysis, Jeffrey Humpherys, Tyler J. Jarvis, and Emily J. Evans
xx + 689 pages / Hardcover / ISBN 978-1-611974-89-8
List Price $89.00 / SIAM Member Price $62.30 / Order Code OT152

"Humpherys, Jarvis, and their collaborators are in the process of achieving something extraordinary: the creation of an entire curriculum of rigorous graduate-level applied mathematics with a four-volume series of first-rate books to support it." -Lloyd N. Trefethen, University of Oxford

This book provides the foundations of both linear and nonlinear analysis necessary for understanding and working in twenty-first century applied and computational mathematics.
In addition to the standard topics, this text includes several key concepts of modern applied mathematical analysis that should be, but typically are not, included in curricula. When used in concert with the free supplemental lab materials, this text teaches students both the theory and the computational practice of modern mathematical analysis. Carefully thought out exercises and examples are built on each other to reinforce and retain concepts and ideas and to achieve greater depth. The text and labs combine to make students technically proficient and to answer the age-old question, "When am I going to use this?" For the table of contents, preface, and index, please visit http://bookstore.siam.org/OT152/.

Date: July 11, 2017
Subject: New Book, Grid Generation Methods

Springer has published the third edition of the book "Grid Generation Methods" by V. D. Liseikin. This new edition provides a description of current developments relating to grid methods, grid codes, and their applications to actual problems. Adaptive grid-mapping techniques, in particular, are the main focus and represent a promising tool to deal with systems with singularities. This 3rd edition includes three new chapters on numerical implementations (10), control of grid properties (11), and applications to mechanical, fluid, and plasma-related problems (13). The other chapters have also been updated to include new topics, such as curvatures of discrete surfaces (3). Concise descriptions of unstructured and hybrid mesh generation, drag and sweeping methods, and parallel algorithms for mesh generation have been included too. The book is addressed to researchers and practitioners in applied mathematics, mechanics, and engineering. http://www.springer.com/gp/book/9783319578453

From: Bruce Bailey [email protected]
Date: July 19, 2017
Subject: New Book, Model Reduction and Approximation: Theory and Algorithms

Model Reduction and Approximation: Theory and Algorithms, edited by Peter Benner, Albert Cohen, Mario Ohlberger, and Karen Willcox, Computational Science and Engineering 15
xxii + 412 pages / Softcover / ISBN 978-1-611974-81-2
List Price $99.00 / SIAM Member Price $69.30 / Order Code CS15

Many physical, chemical, biomedical, and technical processes can be described by partial differential equations or dynamical systems. In spite of increasing computational capacities, many problems are of such high complexity that they are solvable only with severe simplifications, and the design of efficient numerical schemes remains a central research challenge. This book presents a tutorial introduction to recent developments in mathematical methods for model reduction and approximation of complex systems. It contains three parts that cover (I) sampling-based methods, such as the reduced basis method and proper orthogonal decomposition, (II) approximation of high-dimensional problems by low-rank tensor techniques, and (III) system-theoretic methods, such as balanced truncation, interpolatory methods, and the Loewner framework. For the table of contents, preface, and index, please visit http://bookstore.siam.org/CS15/.

From: Are Magnus Bruaset [email protected]
Date: July 11, 2017
Subject: CSE, In Memory of Hans Petter Langtangen, Norway, Oct 2017

On October 23-25, Simula Research Laboratory hosts an international CSE conference in Oslo, Norway, in memory of our much missed colleague Hans Petter Langtangen. The program is now finalized and the registration is open; see cseconf2017.simula.no. Participation is free of charge (except travel and accommodation).
Date: July 17, 2017
Subject: Approximation and Computation, Serbia, Nov-Dec 2017

International Conference Approximation and Computation - Theory and Applications (dedicated to Professor Walter Gautschi on the occasion of his 90th anniversary), November 30 - December 2, 2017. http://easychair.org/smart-program/ACTA2017/Home.html

From: Manolo Venturin [email protected]
Date: July 27, 2017
Subject: CAE Simulation Poster Award, Italy, Nov 2017

I am pleased to inform you that the call for the "Poster Award" 2017 edition is open: the competition has become a regular event for promoting the culture of simulation and growing the community of CAE analysts. New in this edition is a contribution, as partial coverage of expenses (travel, board, and lodging), awarded to the authors of some of the finalist posters. The 5 winners will also receive a cash prize. The prize-giving ceremony will take place on 6 November in Vicenza, within the International CAE Conference (November 6-7), one of the most important European events on numerical simulation. Participation is free and is open to students, graduates, researchers, and faculty members from universities and research centers. The deadline for submission is 30 September 2017. I invite you to attend this year as well and to spread the news to your colleagues. More details on the contest: www.caeconference.com/call_posters.html

From: Krassimir Georgiev [email protected]
Date: July 18, 2017
Subject: BGSIAM Annual Meeting, Bulgaria, Dec 2017

The 12th Annual Meeting of the Bulgarian SIAM Section (BGSIAM) will take place in December 2017. Further information on this event can be found at http://www.math.bas.bg/IMIdocs/BGSIAM/bgsiam17_announcement.htm

There are no conference fees for 2017 SIAM members. We kindly invite you to participate and give a talk during this meeting. A booklet with the extended abstracts will be available before the meeting. Proceedings of refereed and presented papers will be published as a special volume of Studies in Computational Intelligence, Springer. You are welcome to invite your colleagues and students to this meeting!

From: John Butcher [email protected]
Date: July 21, 2017
Subject: Numerical ODE Conference, New Zealand, Feb 2018

These popular ANODE conferences have been held from time to time during the last 20 years, the most recent being in 2013. In February 2018, ANODE will be revived, with a very simple format in which participants will be allotted generous speaking times, with everyone on an equal basis. It will be held in the beautiful city of Auckland towards the end of what is expected to be a mild and pleasant summer. This preliminary announcement is a call for expressions of interest from possible participants. For a copy of an expanded version of this announcement, and further details and updates as they become available, see http://tinyurl.com/ANODE2018. If you are interested and wish to be on the mailing list for further information as it comes to hand, please write to John Butcher at [email protected]. Please tell John, if you can, how likely you are to take part in ANODE 2018 and, if so, whether you will wish to give a talk.

Organizers:
John Butcher, [email protected]
Nicolette Rattenbury, [email protected]
Shixiao Wang, [email protected]

From: Bernard Haasdonk [email protected]
Date: July 14, 2017
Subject: Model Reduction of Coupled Systems, Germany, May 2018

We announce the symposium MORCOS 2018 on "Model Reduction of Coupled Systems", which will take place at the University of Stuttgart from 22-25 May 2018. The chairman of the event is Jun.-Prof.
Jorg Fehr. The workshop has been accepted as an official IUTAM Symposium. The call for papers is now open; see the corresponding website for information on the invited speakers, the program format, abstract templates, and further details: http://www.itm.uni-stuttgart.de/iutam2018

From: Gabriel R. Barrenechea [email protected]
Date: July 19, 2017
Subject: BAIL 2018 Conference, UK, Jun 2018

We are pleased to announce that the forthcoming International Conference on Boundary and Interior Layers (BAIL) will take place at the University of Strathclyde (Glasgow, UK) during June 18-22, 2018. This conference continues a long series of meetings devoted to the numerical, and asymptotic, analysis of problems presenting sharp layers, with an emphasis on the development and analysis of novel and robust methods. The following are the confirmed plenary speakers:
- Victor Calo (Curtin University, Perth, Australia)
- Alexandre Ern (ENPC, Paris, France)
- Emmanuil Georgoulis (Leicester, UK, and Athens, Greece)
- Volker John (WIAS, and Free University, Berlin, Germany)
- Frederic Valentin (LNCC, Petropolis, Brazil)

The timeline for the conference is as follows:
- Early September: web page with all the information for the conference will be live (this will include registration fees and all the practical information);
- November 1/2017 - March 1/2018: reception of minisymposia proposals;
- November 1/2017 - March 30/2018: reception of abstracts for contributed talks;
- April 15/2018: acceptance of abstracts for contributed talks;
- May 15/2018: final registration and payment deadline.

From: Christian Clason [email protected]
Date: July 17, 2017
Subject: System Modeling and Optimization, Germany, Jul 2018

We would like to draw your attention to the 28th IFIP TC7 Conference on System Modeling and Optimization, to be held July 23-27, 2018, in Essen, Germany. You can find a preliminary web page with some important dates at https://udue.de/ifip2018. The IFIP TC7 conference series addresses a broad range of topics of applied optimization, such as optimal control of ordinary and partial differential equations, modeling and simulation, inverse problems, nonlinear, discrete, and stochastic optimization, and industrial applications. In particular, submissions of minisymposium proposals on one of these or related topics are welcome (the deadline is November 2017). We hope to see you next year in Essen!

From: Heike Fassbender [email protected]
Date: July 30, 2017
Subject: Householder Symposium XXI, June 2020

The Householder Symposium XXI on Numerical Linear Algebra will be held at Hotel Sierra Silvana, Selva di Fasano (Br), Italy, 14-19 June 2020. Preliminary information can be found at http://users.ba.cnr.it/iac/irmanm21/HHXXI/index.html

This symposium is the twenty-first in a series, previously called the Gatlinburg Symposia, and will be organized in cooperation with the Society for Industrial and Applied Mathematics (SIAM) and the SIAM Activity Group on Linear Algebra. The Symposium is very informal, with the intermingling of young and established researchers a priority. Participants are expected to attend the entire meeting. The seventeenth Householder Award for the best thesis in numerical linear algebra since 1 January 2017 will be presented. Attendance at the meeting is by invitation only. Applications will be solicited from researchers in numerical linear algebra, matrix theory, and related areas such as optimization, differential equations, signal processing, and control.
Each attendee will be given the opportunity to present a talk or a poster. Some talks will be plenary lectures, while others will be shorter presentations arranged in parallel sessions. The application deadline will be some time in Fall 2019. It is expected that partial support will be available for some students, early career participants, and participants from countries with limited resources. From: Matthew Saltzman [email protected] Date: July 24, 2017 Subject: Department Chair Position, Mathematical Sciences, Clemson Univ The Department of Mathematical Sciences at Clemson University invites applications and nominations for the position of Department Chair. Candidates are expected to hold the rank of Full Professor, or department is the largest unit within the College of Science and offers B.A., B.S., M.S., and Ph.D. programs. It houses 52 tenured/tenure-track faculty members, 28 lecturers, 8 full-time staff students. Research areas of the faculty include algebra, discrete mathematics, applied analysis, bioinformatics, computational mathematics, operations research, probability, and pure and applied statistics. consideration, but later applications may be considered until the position is filled. For the complete job posting and more details, From: Ruediger Verfuerth [email protected] Date: July 26, 2017 Subject: Professorship Positions, Numerical Analysis, RUB Ruhr-Universitat Bochum (RUB) is one of Germany's leading research universities. The University draws its strengths from both the diversity and the proximity of scientific and engineering disciplines on a single, coherent campus. This highly dynamic setting enables students and researchers to work across traditional boundaries of The Department of Mathematics at Ruhr-University Bochum, invites applications for the position of a PROFESSOR FOR NUMERICS (SALARY SCALE W2) and a position of a a PROFESSOR FOR NUMERICS (SALARY SCALE W3) to start as soon as possible. We are looking for scientists with a sustainable research program, make significant scholarly contributions to the subject of Numerics, showing great promise to applications, and be an effective teacher and scientists with an internationally visible research profile that complements the existing research expertise of the department and of the interdisciplinary collaborative research expertise at the Ruhr-University Bochum and the University Alliance Ruhr. Teaching responsibilities will include service teaching duties of the Department of Mathematics. Positive evaluation as a junior professor or equivalent academic achievement (e.g. Habilitation) and evidence of special aptitude are as much required as the willingness to participate in the self-governing bodies of the RUB. We expect for both professorships: strong commitment to academic in interdisciplinary research; willingness and ability to attract external funding; readiness to contribute to joint research projects of the department. The Ruhr-University Bochum is an equal opportunity employer. Complete applications including CV, copies of academic certificates, lists of publications and research funding, list of self-raised third-party funds, teaching record, and a statement of research interests as well as a concept for gender equality should be sent by email to the Dean of the Department of Mathematics, Prof. Dr. Peter Eichelsbacher; mathe- [email protected] not later than October 15th 2017. Further information can be obtained at our website at http://www.ruhr-uni-bochum.de/ffm/. 
From: Carola-Bibiane Schönlieb [email protected]
Date: July 12, 2017
Subject: Research Associate/Senior Research Associate Position, Cambridge, UK

Research Associate/Senior Research Associate in the Mathematics of Measurement (Fixed Term)

The Cantab Capital Institute for the Mathematics of Information (CCIMI), in collaboration with the National Physical Laboratory (NPL), is seeking strong candidates for a Research Associate / Senior Research Associate position to work on a collaborative project in the Mathematics of Measurement. The postholder will be employed by the University of Cambridge and affiliated with both CCIMI and NPL, with part of their time spent in the CCIMI, based in the Centre for Mathematical Sciences in Cambridge, and the other part in the NPL offices, based in the Maxwell Centre in Cambridge or at the NPL main site in Teddington, Southwest London. The exact percentage of time spent at each institution will vary as appropriate over the course of the project, with the expectation of a minimum of 20% of time spent at each of NPL (either at the Maxwell Centre in Cambridge or the main site in Teddington) and the University of Cambridge throughout. Applicants must have a PhD degree in mathematics or statistics (or a closely related discipline), and have a demonstrably excellent research record and future research potential. Interviews are expected to be held in the week beginning 18 September. Informal enquiries about the position may be made to the coordinator for this recruitment at: [email protected]. Further information can be found here: http://www.jobs.cam.ac.uk/job/14148/

CLOSING DATE: 15 August 2017

From: Thomas Slawig [email protected]
Date: July 25, 2017
Subject: Research Position, Optimization of Climate Models, Kiel Univ

The Research Group "Algorithmic Optimal Control - Oceanic CO2 Uptake" in the Department of Computer Science at Kiel University (CAU), Germany, is offering a researcher position starting as soon as possible. The position is initially limited until August 31, 2019. The salary corresponds to 50% of a full position (19.35 hours per week) at the level of TV-L E13 of the German public service salary scale. We offer the opportunity to undertake doctoral research. The position is part of the research project "Optimization of quality and performance" in the German national climate modeling initiative "From the Last Interglacial to the Anthropocene: Modeling a Complete Glacial Cycle" (PalMod, www.palmod.de), which aims at simulating the climate from the peak of the last interglacial up to the present using comprehensive Earth System Models. A project extension beyond 2019 is planned. The task of the researcher is performance optimization of a high-resolution climate model, beginning with the realization of a version of the atmospheric radiation module in single precision arithmetic. Requirements are: Diploma or M.Sc. in computer science, mathematics, climate sciences, or related fields; proficiency in procedural programming and handling Unix/Linux systems; experience (or the willingness to familiarize oneself) with FORTRAN-based simulation programs on high-performance computers. Interested candidates should send an application letter including a curriculum vitae and copies of transcripts via e-mail to Prof. Thomas Slawig, [email protected]. Please refrain from submitting application photos. Closing date for applications: August 15th, 2017.
From: Daniel C Reuman [email protected]
Date: July 20, 2017
Subject: Postdoc Position, Population Modelling

Dr Daniel Reuman is recruiting into his lab in the University of Kansas Department of Ecology and Evolutionary Biology (EEB). At least 3 years of funding are available to carry out modelling and numerical analysis of spatial population dynamics. The postdoc will join an interdisciplinary team consisting of Reuman, three postdocs and one student currently in the Reuman lab, collaborators in EEB and in the Math Department at KU, and collaborators at several institutions in the USA and UK. Funding is from the NSF Mathematical Biology program and the James S McDonnell Foundation. We seek individuals from biological or physical-science backgrounds with skills and demonstrable interests in modelling and related areas. Experience with stochastic process modelling and Fourier or wavelet approaches is a plus. Experience with population models and linear and nonlinear dynamical systems is a plus, as are computational skills, particularly if applied in a statistical or modelling context. A PhD or ABD in a related field is required. Applicants from underrepresented groups are encouraged.

The University of Kansas (KU) is a major research university with special strength in quantitative ecology and evolutionary biology. KU is located in Lawrence, Kansas, about 30 miles from Kansas City. Lawrence is a progressive and cosmopolitan university town with vibrant art, music, and sports scenes that has been ranked among the top ten college towns in the country for liveability. See http://www.reumanlab.res.ku.edu/ for further information about the Reuman lab and links to past publications. See position-available-Reuman-lab.pdf for more details of the position. Email [email protected] or call 785 864 1542 with questions. A start date during or before autumn/winter 2017 is preferred. To apply, please send a CV, a cover letter of up to two pages, the names and contact information of two references, and one publication to [email protected]. Position open until filled.

From: Milan Mihajlovic [email protected]
Date: July 20, 2017
Subject: Postdoc Position, Scientific Computing, Univ of Manchester

The University seeks to appoint a research associate in the School of Computer Science for a period of 12 months to work on an interdisciplinary project related to the development and implementation of a simulation tool for the analysis of thermal stability in emerging 3-D Integrated Circuit technologies with liquid cooling. The specific tasks within the simulator design related to this post include development of the internal liquid cooling and the external free and forced convection cooling models, and development of parallel solvers. Specific skills: specialist knowledge in computational science and engineering, strong mathematical skills, knowledge of the finite element method (experience with deal.II desirable), solution of initial value problems, Krylov methods, multigrid, and strong C++ programming skills. Further details about the post can be found at: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=13711

From: Noemi Petra [email protected]
Date: July 17, 2017
Subject: Postdoc Position, Univ of California, Merced

There is an opening for a postdoc position in Professor Noemi Petra's research group in the School of Natural Sciences at the University of California, Merced.
The postdoctoral researcher will work under an NSF-funded Collaborative Research project (with UC Merced, UT Austin, and MIT) entitled "Integrating Data with Complex Predictive Models under Uncertainty: An Extensible Software Framework for Large-Scale Bayesian Inversion" (see https://hippylib.github.io for a general overview of the project). The postdoctoral researcher will perform research in the field of large-scale Bayesian inverse problems and on the implementation of new features in hIPPYlib. The postdoc will contribute to the research dissemination and will also help build the user/developer community by attending and speaking at conferences, workshops, and summer schools at local and international events. Interested candidates should contact Noemi Petra at [email protected] and apply at: https://aprecruit.ucmerced.edu/apply/JPF00505.

From: Valeriya Naumova [email protected]
Date: July 27, 2017
Subject: Senior Postdoc Position, Machine Learning, Simula Research Lab, Norway

We are searching for candidates for a senior postdoctoral position in data science at Simula Research Laboratory, Norway. The position is part of a recently established strategic initiative addressing Data Science with emphasis on Machine Learning. The successful candidate will play a vital role in this initiative and is expected to participate both scientifically and strategically. The project has initial funding for four years, and it is expected to spin out sustainable research activities in the field. The goal is to advance theoretical and methodological frameworks for complex high-dimensional data and systems analysis, while simultaneously addressing applied problems encountered in biomedicine, image and video processing, communication systems, and software engineering. The project will also play an important overarching role for machine learning at Simula. In particular, the project will provide a venue for researchers across both applied and theoretically oriented research projects. Prospective applicants should have a PhD in machine learning, signal processing, applied mathematics, statistics, or related fields. Applicants with 2+ years of postdoctoral experience in machine learning, data analysis, or related fields are particularly encouraged to apply. The successful candidate will have flexibility in the choice of research topics, given alignment with project goals. For more information, contact [email protected].

From: Thomas Richter [email protected]
Date: July 18, 2017
Subject: PhD / Post-Doc Position, Numerical Analysis

The group of Thomas Richter at the Otto-von-Guericke University of Magdeburg is looking for a PhD candidate (Post-Doc possible) in Numerical Analysis. The position is available from 01.10.2017 and limited to three years. Candidates must bring an above-average degree (Masters or Diploma) in mathematics and a focus on numerical mathematics for partial differential equations, preferably finite elements. Good knowledge of German is required. For details see: https://www.math.uni-magdeburg.de/~richter/pdf-files/138_2017.pdf

From: Jichao Zhao [email protected]
Date: July 11, 2017
Subject: Postdoc/PhD Positions, Univ of Auckland, New Zealand

Improving our understanding of human atrial ionic mechanisms of arrhythmogenesis (postdoc position and funded PhD opportunities available)

About the project: Atrial fibrillation (AF), characterized by rapid or irregular electrical activity in the upper chambers of the heart, is the most common heart rhythm disturbance. AF is associated with substantial morbidity and mortality.
Anti-arrhythmic drug therapy, which targets specific ionic channels, is a frontline treatment for AF, but it has variable success rates due to our incomplete understanding of the basic mechanisms of atrial cellular behaviours in patients under normal and diseased conditions. Computer activation models provide a powerful framework for investigating the exact mechanisms behind AF and testing the effectiveness of proposed novel treatment strategies, since they offer a flexible way to dissect highly intertwined contributing factors and to set up series of controlled studies. The potential projects for one postdoc and a couple of PhD students are: 1) to develop biophysics-based human atrial cellular models to understand the pathophysiology of AF associated with gene mutations and to identify a potential effective, targeted treatment; 2) to develop biophysics-based human sino-atrial node (SAN) models to determine the contribution of heart-failure-induced structural and molecular substrates to SAN dysfunction, and potential targeted treatment; 3) to develop biophysics-based human atrial cellular models to understand the precise ionic mechanism of new-onset AF under diabetic conditions and to identify a potential upstream therapy. Contact: Jichao Zhao ([email protected]), Auckland Bioengineering Institute, The University of Auckland, New Zealand.

From: Lubomir Banas [email protected]
Date: July 25, 2017
Subject: PhD Position, SPDEs and Stochastic Games, Bielefeld Univ

Applications are invited for a PhD position in Numerical Analysis of stochastic PDEs and Stochastic Games. The position is associated with the newly established CRC 1283 (www.sfb1283.uni-bielefeld.de). The deadline for applications is 17th August 2017. https://www.uni-bielefeld.de/Universitaet/Aktuelles/Stellenausschreibungen/Anzeigen/Wiss/wiss17183-englisch.pdf

From: Marco Donatelli [email protected]
Date: July 18, 2017
Subject: PhD Positions, Univ of Insubria (Varese-Como), Italy

Outstanding students are sought for 7 Ph.D. fellowships in "Computer Science and Computational Mathematics" at the Department of Science and High Technology, University of Insubria, Varese-Como, Italy (http://www.uninsubria.it). The call for applications is available at the website http://www4.uninsubria.it/on-line/home/naviga-per-tema/didattica/post-lauream/dottorati-di-ricerca/articolo14089.html

The application deadline is 1st September 2017. The applicants will be interviewed on 12th or 13th September 2017. The successful candidates will work in a multidisciplinary environment, developing skills in Computer Science and Scientific Computing. During the Ph.D. program, the students will attend 4 courses with final exams, and summer schools. Short visits abroad will also be supported. The students will carry out original research in Computational Mathematics, and the results of their work will be reported in their Ph.D. theses. For further information the applicants may contact the Coordinator, Prof. Marco Donatelli ([email protected]).

Date: July 23, 2017
Subject: CFP, Continuous Optim and Appl in ML and Data Analytics

On the occasion of the 15th EUROPT Workshop on Advances in Continuous Optimization, the first edition of this workshop to be held in Canada, INFOR: Information Systems and Operational Research, the journal of the Canadian Operations Research Society, will publish a peer-reviewed special issue entitled "Continuous Optimization and Applications in Machine Learning and Data Analytics". INFOR is published by Taylor & Francis and is included in the ISI Web of Science (now Clarivate Analytics).
You can read the call for papers here: http://www.tandfonline.com/toc/tinf20/current or directly at: http://www.tandf.co.uk/journals/cfp/tinf-infor-cfp.pdf. The deadline for submissions is 31 January 2018.

Sebastien Le Digabel, on behalf of the special issue's editors: Miguel Anjos, Fabian Bastin, Sebastien Le Digabel, Andrea Lodi

From: Edward Saff [email protected]
Date: July 14, 2017
Subject: Contents, Constructive Approximation 46 (1)

Constructive Approximation, Volume 46, Number 1

Spectrally Optimized Pointset Configurations, Braxton Osting, Jeremy Marzuola
How Many Zolotarev Fractions are There? A. B. Bogatyrev
On the Order Derivatives of Bessel Functions, T. M. Dunster
Change of Variable in Spaces of Mixed Smoothness and Numerical Integration of Multivariate Functions on the Unit Cube, Van Kien Nguyen, Mario Ullrich, Tino Ullrich
Orthogonal Polynomials for a Class of Measures with Discrete Rotational Symmetries in the Complex Plane, F. Balogh, T. Grava, D. Merzi
Orthogonal Polynomial Projection Error Measured in Sobolev Norms in the Unit Disk, Leonardo E. Figueroa
Bounds on Order of Indeterminate Moment Sequences, Raphael Pruckner, Roman Romanov, Harald Woracek

Constructive Approximation: An International Journal for Approximations and Expansions

From: Raimondas Ciegis [email protected]
Date: July 15, 2017
Subject: Contents, Mathematical Modelling and Analysis, 22 (4)

MATHEMATICAL MODELLING AND ANALYSIS, The Baltic Journal on Mathematical Applications, Numerical Analysis and Differential Equations
ISSN 1392-6292, ISSN 1648-3510 online. Electronic edition: http://www.tandfonline.com/TMMA
Volume 22, Number 4, July 2017

CONTENTS
Harijs Kalis, Andris Buikis, Aivars Aboltins and Ilmars Kangro, Special Splines of Hyperbolic Type for the Solutions of Heat and Mass Transfer 3-D Problems in Porous Multi-Layered Axial Symmetry Domain
Amin Esfahani and Hamideh B. Mohammadi, Global Existence and Asymptotic Behavior of Solutions for the Cauchy Problem of a Dissipative Boussinesq-Type Equation
Rehana Naz and Azam Chaudhry, Comparison of Closed-Form Solutions for the Lucas-Uzawa Model via the Partial Hamiltonian Approach and the Classical Approach
Pengyan Liu, Liang Zhang, Shitao Liu and Lifei Zheng, Global Exponential Stability of Almost Periodic Solutions for Nicholson's Blowflies System with Nonlinear Density-Dependent Mortality Terms and Patch Structure
Fei Wang and Yongqing Yang, Fractional Order Barbalat's Lemma and Its Applications in the Stability of Fractional Order Nonlinear Systems
Evely Kirsiaed, Peeter Oja and Gul Wali Shah, Cubic Spline Histopolation
Pedro Almenar and Lucas Jódar, Solvability of a Class of N-th Order Linear Focal Problems
Urve Kangro, Cordial Volterra Integral Equations and Singular Fractional Integro-Differential Equations in Spaces of Analytic Functions

From: Claude Brezinski [email protected]
Date: July 26, 2017
Subject: Contents, Numerical Algorithms, 75 (4)

Numerical Algorithms, Vol. 75, No. 4

Hong-lin Liao, Pin Lyu, Seakweng Vong, Ying Zhao, Stability of fully discrete schemes with interpolation-type fractional formulas for distributed-order subdiffusion equations
M. Chau, A. Laouar, T. Garcia, P.
Spiteri, Grid solution of problem with unilateral constraints
Xianyang Zeng, Hongli Yang, Xinyuan Wu, An improved tri-coloured rooted-tree theory and order conditions for ERKN methods for general multi-frequency oscillatory systems
Prabir Daripa, Aditi Ghosh, The FFTRR-based fast direct algorithms for complex inhomogeneous biharmonic problems with applications to incompressible flows
M.J. Senosiain, A. Tocino, Two-step strong order 1.5 schemes for stochastic differential equations
Fu-jun Liu, Hui-yuan Li, Zhong-qing Wang, Spectral methods using generalized Laguerre functions for second and fourth order problems
M. Jani, E. Babolian, S. Javadi, D. Bhatta, Banded operational matrices for Bernstein polynomials and application to the fractional advection-dispersion equation
Wataru Takahashi, The split common null point problem for generalized resolvents in two Banach spaces
Kanae Akaiwa, Yoshimasa Nakamura, Masashi Iwasaki, Akira Yoshida, Koichi Kondo, An arbitrary band structure construction of totally nonnegative matrices with prescribed eigenvalues
Yi-Fen Ke, Chang-Feng Ma, An inexact modified relaxed splitting preconditioner for the generalized saddle point problems from the incompressible Navier-Stokes equations
Rui-Ping Wen, Fu-Jiao Ren, Guo-Yan Meng, Modified quasi-Chebyshev acceleration to nonoverlapping parallel multisplitting method
Oliver J. Sutton, The virtual element method in 50 lines of MATLAB
Zheng-Ge Huang, Li-Gong Wang, Zhong Xu, Jing-Jing Cui, A generalized variant of the deteriorated PSS preconditioner for nonsymmetric saddle point problems
Stoil I. Ivanov, A unified semilocal convergence analysis of a family of iterative algorithms for computing all zeros of a polynomial simultaneously

End of Digest
**************************