url
stringlengths 15
1.13k
| text
stringlengths 100
1.04M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
http://pari.math.u-bordeaux1.fr/dochtml/html/_L_minusfunctions.html | ## L-functions
This section describes routines related to L-functions. We first introduce the basic concept and notations, then explain how to represent them in GP. Let Γ_ℝ(s) = π-s/2Γ(s/2), where Γ is Euler's gamma function. Given d ≥ 1 and a d-tuple A = [α_1,...,α_d] of complex numbers, we let γ_A(s) = ∏α ∈ A Γ_ℝ(s + α).
Given a sequence a = (a_n)n ≥ 1 of complex numbers (such that a_1 = 1), a positive conductor N ∈ ℤ, and a gamma factor γ_A as above, we consider the Dirichlet series L(a,s) = ∑n ≥ 1 a_n n-s and the attached completed function Λ(a,s) = Ns/2γ_A(s).L(a,s).
Such a datum defines an L-function if it satisfies the three following assumptions:
* [Convergence] The a_n = O_ε(nk_1+ε) have polynomial growth, equivalently L(s) converges absolutely in some right half-plane Re(s) > k_1 + 1.
* [Analytic continuation] L(s) has a meromorphic continuation to the whole complex plane with finitely many poles.
* [Functional equation] There exist an integer k, a complex number ε (usually of modulus 1), and an attached sequence a^* defining both an L-function L(a^*,s) satisfying the above two assumptions and a completed function Λ(a^*,s) = Ns/2γ_A(s). L(a^*,s), such that Λ(a,k-s) = ε Λ(a^*,s) for all regular points.
More often than not in number theory we have a^ *= a (which forces |ε |= 1), but this needs not be the case. If a is a real sequence and a = a^*, we say that L is self-dual. We do not assume that the a_n are multiplicative, nor equivalently that L(s) has an Euler product.
Remark. Of course, a determines the L-function, but the (redundant) datum a,a^*, A, N, k, ε describes the situation in a form more suitable for fast computations; knowing the polar part r of Λ(s) (a rational function such that Λ-r is holomorphic) is also useful. A subset of these, including only finitely many a_n-values will still completely determine L (in suitable families), and we provide routines to try and compute missing invariants from whatever information is available.
Important Caveat. The implementation assumes that the implied constants in the O_ε are small. In our generic framework, it is impossible to return proven results without more detailed information about the L function. The intended use of the L-function package is not to prove theorems, but to experiment and formulate conjectures, so all numerical results should be taken with a grain of salt. One can always increase `realbitprecision` and recompute: the difference estimates the actual absolute error in the original output.
Note. The requested precision has a major impact on runtimes. Because of this, most L-function routines, in particular `lfun` itself, specify the requested precision in bits, not in decimal digits. This is transparent for the user once `realprecision` or `realbitprecision` are set. We advise to manipulate precision via `realbitprecision` as it allows finer granularity: `realprecision` increases by increments of 64 bits, i.e. 19 decimal digits at a time.
#### Theta functions
Given an L-function as above, we define an attached theta function via Mellin inversion: for any positive real t > 0, we let θ(a,t) := (1)/(2π i)∫Re(s) = c t-s Λ(s) ds where c is any positive real number c > k_1+1 such that c + Re(a) > 0 for all a ∈ A. In fact, we have θ(a,t) = ∑n ≥ 1 a_n K(nt/N1/2) where K(t) := (1)/(2π i)∫Re(s) = c t-s γ_A(s) ds. Note that this function is analytic and actually makes sense for complex t, such that Re(t2/d) > 0, i.e. in a cone containing the positive real half-line. The functional equation for Λ translates into θ(a,1/t) - ε t^kθ(a^*,t) = P_Λ(t), where P_Λ is an explicit polynomial in t and log t given by the Taylor development of the polar part of Λ: there are no log's if all poles are simple, and P = 0 if Λ is entire. The values θ(t) are generally easier to compute than the L(s), and this functional equation provides a fast way to guess possible values for missing invariants in the L-function definition.
#### Data structures describing L and theta functions
We have 3 levels of description:
* an `Lmath` is an arbitrary description of the underlying mathematical situation (to which e.g., we associate the a_p as traces of Frobenius elements); this is done via constructors to be described in the subsections below.
* an `Ldata` is a computational description of situation, containing the complete datum (a,a^*,A,k,N,ε,r). Where a and a^* describe the coefficients (given n,b we must be able to compute [a_1,...,a_n] with bit accuracy b), A describes the Euler factor, the (classical) weight is k, N is the conductor, and r describes the polar part of L(s). This is obtained via the function `lfuncreate`. N.B. For motivic L-functions, the motivic weight w is w = k-1; but we also support non-motivic L-functions.
Design problem. All components of an `Ldata` should be given exactly since the accuracy to which they must be computed is not bounded a priori; but this is not always possible, in particular for ε and r.
* an `Linit` contains an `Ldata` and everything needed for fast numerical computations. It specifies the functions to be considered (either L(j)(s) or θ(j)(t) for derivatives of order j ≤ m, where m is now fixed) and specifies a domain which limits the range of arguments (t or s, respectively to certain cones and rectangular regions) and the output accuracy. This is obtained via the functions `lfuninit` or `lfunthetainit`.
All the functions which are specific to L or theta functions share the prefix `lfun`. They take as first argument either an `Lmath`, an `Ldata`, or an `Linit`. If a single value is to be computed, this makes no difference, but when many values are needed (e.g. for plots or when searching for zeros), one should first construct an `Linit` attached to the search range and use it in all subsequent calls. If you attempt to use an `Linit` outside the range for which it was initialized, a warning is issued, because the initialization is performed again, a major inefficiency:
``` ? Z = lfuncreate(1); \\ Riemann zeta
? L = lfuninit(Z, [1/2, 0, 100]); \\ zeta(1/2+it), |t| < 100
? lfun(L, 1/2) \\ OK, within domain
%3 = -1.4603545088095868128894991525152980125
? lfun(L, 0) \\ not on critical line !
*** lfun: Warning: lfuninit: insufficient initialization.
%4 = -0.50000000000000000000000000000000000000
? lfun(L, 1/2, 1) \\ attempt first derivative !
*** lfun: Warning: lfuninit: insufficient initialization.
%5 = -3.9226461392091517274715314467145995137
```
For many L-functions, passing from `Lmath` to an `Ldata` is inexpensive: in that case one may use `lfuninit` directly from the `Lmath` even when evaluations in different domains are needed. The above example could equally have skipped the `lfuncreate`:
``` ? L = lfuninit(1, [1/2, 0, 100]); \\ zeta(1/2+it), |t| < 100
```
In fact, when computing a single value, you can even skip `lfuninit`:
``` ? L = lfun(1, 1/2, 1); \\ zeta'(1/2)
? L = lfun(1, 1+x+O(x^5)); \\ first 5 terms of Taylor development at 1
```
Both give the desired results with no warning.
Complexity. The implementation requires O(N(|t|+1))1/2 coefficients a_n to evaluate L of conductor N at s = σ + i t.
We now describe the available high-level constructors, for built-in L functions.
#### Dirichlet L-functions
Given a Dirichlet character χ:(ℤ/Nℤ)^* → ℂ, we let L(χ, s) = ∑n ≥ 1 χ(n) n-s. Only primitive characters are supported. Given a fundamental discriminant D, the function L((D/.), s), for the quadratic Kronecker symbol, is encoded by the `t_INT` D. This includes Riemann ζ function via the special case D = 1.
More general characters can be represented in a variety of ways:
* via Conrey notation (see `znconreychar`): χ_N(m,.) is given as the `t_INTMOD` `Mod(m,N)`.
* via a znstar structure describing the abelian group (ℤ/Nℤ)^*, where the character is given in terms of the znstar generators:
``` ? G = znstar(100, 1); \\ (Z/100Z)^*
? G.cyc \\ ~ Z/20 . g1 + Z/2 . g2 for some generators g1 and g2
%2 = [20, 2]
? G.gen
%3 = [77, 51]
? chi = [a, b] \\ maps g1 to e(a/20) and g2 to e(b/2); e(x) = exp(2ipi x)
```
More generally, let (ℤ/Nℤ)^ *= ⨁ (ℤ/d_iℤ) g_i be given via a znstar structure G (`G.cyc` gives the d_i and `G.gen` the g_i). A character χ on G is given by a row vector v = [a_1,...,a_n] such that χ(∏ g_in_i) = exp(2π i∑ a_i n_i / d_i). The pair [G, v] encodes the primitive character attached to χ.
* in fact, this construction [G, m] describing a character is more general: m is also allowed to be a Conrey index as seen above, or a Conrey logarithm (see `znconreylog`), and the latter format is actually the fastest one.
* it is also possible to view Dirichlet characters as Hecke characters over K = ℚ (see below), for a modulus [N, [1]] but this is both more complicated and less efficient.
In all cases, a non-primitive character is replaced by the attached primitive character.
#### Hecke L-functions
The Dedekind zeta function of a number field K = ℚ[X]/(T) is encoded either by the defining polynomial T, or any absolute number fields structure (preferably at least a bnf).
Given a finite order Hecke character χ: Cl_f(K) → ℂ, we let L(χ, s) = ∑A ⊂ O_K χ(A) (NK/ℚA)-s.
Let Cl_f(K) = ⨁ (ℤ/d_iℤ) g_i given by a bnr structure with generators: the d_i are given by `K.cyc` and the g_i by `K.gen`. A character χ on the ray class group is given by a row vector v = [a_1,...,a_n] such that χ(∏ g_in_i) = exp(2π i∑ a_i n_i / d_i). The pair [bnr, v] encodes the primitive character attached to χ.
``` ? K = bnfinit(x^2-60);
? Cf = bnrinit(K, [7, [1,1]], 1); \\ f = 7 oo_1 oo_2
? Cf.cyc
%3 = [6, 2, 2]
? Cf.gen
%4 = [[2, 1; 0, 1], [22, 9; 0, 1], [-6, 7]~]
? lfuncreate([Cf, [1,0,0]]); \\ χ(g_1) = ζ_6, χ(g_2) = χ(g_3) = 1
```
Dirichlet characters on (ℤ/Nℤ)^* are a special case, where K = ℚ:
``` ? Q = bnfinit(x);
? Cf = bnrinit(Q, [100, [1]]); \\ for odd characters on (Z/100Z)*
```
For even characters, replace by `bnrinit(K, N)`. Note that the simpler direct construction in the previous section will be more efficient.
#### Artin L functions
Given a Galois number field N/ℚ with group G = `galoisinit`(N), a representation ρ of G over the cyclotomic field ℚ(ζ_n) is specified by the matrices giving the images of `G.gen` by ρ. The corresponding Artin L function is created using `lfunartin`.
``` P = quadhilbert(-47); \\ degree 5, Galois group D_5
N = nfinit(nfsplitting(P)); \\ Galois closure
G = galoisinit(N);
[s,t] = G.gen; \\ order 5 and 2
L = lfunartin(N,G, [[a,0;0,a^-1],[0,1;1,0]], 5); \\ irr. degree 2
```
In the above, the polynomial variable (here `a`) represents ζ_5 := exp(2iπ/5) and the two matrices give the images of s and t. Here, priority of `a` must be lower than the priority of `x`.
#### L-functions of algebraic varieties
L-function of elliptic curves over number fields are supported.
``` ? E = ellinit([1,1]);
? L = lfuncreate(E); \\ L-function of E/Q
? E2 = ellinit([1,a], nfinit(a^2-2));
? L2 = lfuncreate(E2); \\ L-function of E/Q(sqrt(2))
```
L-function of hyperelliptic genus-2 curve can be created with `lfungenus2`. To create the L function of the curve y^2+(x^3+x^2+1)y = x^2+x:
``` ? L = lfungenus2([x^2+x, x^3+x^2+1]);
```
Currently, the model needs to be minimal at 2, and if the conductor is even, its valuation at 2 might be incorrect (a warning is issued).
#### Eta quotients / Modular forms
An eta quotient is created by applying `lfunetaquo` to a matrix with 2 columns [m, r_m] representing f(τ) := ∏_m η(mτ)r_m. It is currently assumed that f is a self-dual cuspidal form on Γ_0(N) for some N. For instance, the L-function ∑ τ(n) n-s attached to Ramanujan's Δ function is encoded as follows
``` ? L = lfunetaquo(Mat([1,24]));
? lfunan(L, 100) \\ first 100 values of tau(n)
```
More general modular forms defined by modular symbols will be added later.
#### Low-level Ldata format
When no direct constructor is available, you can still input an L function directly by supplying [a, a^*,A, k, N, ε, r] to `lfuncreate` (see `??lfuncreate` for details).
It is strongly suggested to first check consistency of the created L-function:
``` ? L = lfuncreate([a, as, A, k, N, eps, r]);
? lfuncheckfeq(L) \\ check functional equation
```
#### lfun(L, s, {D = 0})
Compute the L-function value L(s), or if `D` is set, the derivative of order `D` at s. The parameter `L` is either an Lmath, an Ldata (created by `lfuncreate`, or an Linit (created by `lfuninit`), preferrably the latter if many values are to be computed.
The argument s is also allowed to be a power series; for instance, if s = α + x + O(x^n), the function returns the Taylor expansion of order n around α. The result is given with absolute error less than 2-B, where B = realbitprecision.
Caveat. The requested precision has a major impact on runtimes. It is advised to manipulate precision via `realbitprecision` as explained above instead of `realprecision` as the latter allows less granularity: `realprecision` increases by increments of 64 bits, i.e. 19 decimal digits at a time.
``` ? lfun(x^2+1, 2) \\ Lmath: Dedekind zeta for Q(i) at 2
%1 = 1.5067030099229850308865650481820713960
? L = lfuncreate(ellinit("5077a1")); \\ Ldata: Hasse-Weil zeta function
? lfun(L, 1+x+O(x^4)) \\ zero of order 3 at the central point
%3 = 0.E-58 - 5.[...] E-40*x + 9.[...] E-40*x^2 + 1.7318[...]*x^3 + O(x^4)
\\ Linit: zeta(1/2+it), |t| < 100, and derivative
? L = lfuninit(1, [100], 1);
? T = lfunzeros(L, [1,25]);
%5 = [14.134725[...], 21.022039[...]]
? z = 1/2 + I*T[1];
? abs( lfun(L, z) )
%7 = 8.7066865533412207420780392991125136196 E-39
? abs( lfun(L, z, 1) )
%8 = 0.79316043335650611601389756527435211412 \\ simple zero
```
The library syntax is `GEN lfun0(GEN L, GEN s, long D, long bitprec)`.
#### lfunabelianrelinit(bnfL, bnfK, polrel, sdom, {der = 0})
Returns the `Linit` structure attached to the Dedekind zeta function of the number field L (see `lfuninit`), given a subfield K such that L/K is abelian. Here `polrel` defines L over K, as usual with the priority of the variable of `bnfK` lower than that of `polrel`. `sdom` and `der` are as in `lfuninit`.
``` ? D = -47; K = bnfinit(y^2-D);
? rel = quadhilbert(D); T = rnfequation(K.pol, rel); \\ degree 10
? L = lfunabelianrelinit(T,K,rel, [2,0,0]); \\ at 2
time = 84 ms.
? lfun(L, 2)
%4 = 1.0154213394402443929880666894468182650
? lfun(T, 2) \\ using parisize > 300MB
time = 652 ms.
%5 = 1.0154213394402443929880666894468182656
```
As the example shows, using the (abelian) relative structure is more efficient than a direct computation. The difference becomes drastic as the absolute degree increases while the subfield degree remains constant.
The library syntax is `GEN lfunabelianrelinit(GEN bnfL, GEN bnfK, GEN polrel, GEN sdom, long der, long bitprec)`.
#### lfunan(L, n)
Compute the first n terms of the Dirichlet series attached to the L-function given by `L` (`Lmath`, `Ldata` or `Linit`).
``` ? lfunan(1, 10) \\ Riemann zeta
%1 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
? lfunan(5, 10) \\ Dirichlet L-function for kronecker(5,.)
%2 = [1, -1, -1, 1, 0, 1, -1, -1, 1, 0]
```
The library syntax is `GEN lfunan(GEN L, long n, long prec)`.
#### lfunartin(nf, gal, rho, n)
Returns the `Ldata` structure attached to the Artin L-function attached to the representation ρ of the Galois group of the extension K/ℚ, defined over the cyclotomic field ℚ(ζ_n), where nf is the nfinit structure attached to K, gal is the galoisinit structure attached to K/ℚ, and rho is given either by the value its character on the conjugacy classes (see `galoisconjclasses` and `galoischartable`) or by the matrices that are the images of the generators `gal.gen`. Cyclotomic numbers in rho are represented by polynomials, whose variable is understood as the complex number exp(2 i π/n).
In the following example we build the Artin L-functions attached to the two irreducible degree 2 representations of the dihedral group D10 defined over ℚ(ζ_5), for the extension H/ℚ where H is the Hilbert class field of ℚ(sqrt{-47}). We show numerically some identities involving Dedekind ζ functions and Hecke L series.
``` ? P = quadhilbert(-47);
? N = nfinit(nfsplitting(P));
? G = galoisinit(N);
? [T,n] = galoischartable(G);
? L1 = lfunartin(N,G, T[,3], n);
? L2 = lfunartin(N,G, T[,4], n);
? s = 1 + x + O(x^4);
? lfun(1,s)*lfun(-47,s)*lfun(L1,s)^2*lfun(L2,s)^2 - lfun(N,s)
%6 ~ 0
? lfun(1,s)*lfun(L1,s)*lfun(L2,s) - lfun(P,s)
%7 ~ 0
? bnr = bnrinit(bnfinit(x^2+47),1,1);
? lfun([bnr,[2]], s) - lfun(L1, s)
%9 ~ 0
? lfun([bnr,[1]], s) - lfun(L2, s)
%10 ~ 0
```
The first identity is the factorisation of the regular representation of D10, the second the factorisation of the natural representation of D10 ⊂ S_5, the next two are the expressions of the degree 2 representations as induced from degree 1 representations.
The library syntax is `GEN lfunartin(GEN nf, GEN gal, GEN rho, long n, long bitprec)`.
#### lfuncheckfeq(L, {t})
Given the data attached to an L-function (`Lmath`, `Ldata` or `Linit`), check whether the functional equation is satisfied. This is most useful for an `Ldata` constructed "by hand", via `lfuncreate`, to detect mistakes.
If the function has poles, the polar part must be specified. The routine returns a bit accuracy b such that |w - w| < 2b, where w is the root number contained in `data`, and w is a computed value derived from θ(t) and θ(1/t) at some t != 0 and the assumed functional equation. Of course, a large negative value of the order of `realbitprecision` is expected.
If t is given, it should be close to the unit disc for efficiency and such that θ(t) != 0. We then check the functional equation at that t.
``` ? \pb 128 \\ 128 bits of accuracy
? default(realbitprecision)
%1 = 128
? L = lfuncreate(1); \\ Riemann zeta
? lfuncheckfeq(L)
%3 = -124
```
i.e. the given data is consistent to within 4 bits for the particular check consisting of estimating the root number from all other given quantities. Checking away from the unit disc will either fail with a precision error, or give disappointing results (if θ(1/t) is large it will be computed with a large absolute error)
``` ? lfuncheckfeq(L, 2+I)
%4 = -115
? lfuncheckfeq(L,10)
*** at top-level: lfuncheckfeq(L,10)
*** ^------------------
*** lfuncheckfeq: precision too low in lfuncheckfeq.
```
The library syntax is `long lfuncheckfeq(GEN L, GEN t = NULL, long bitprec)`.
#### lfunconductor(L, {ab = [1, {10000}]}, {flag = 0})
Compute the conductor of the given L-function (if the structure contains a conductor, it is ignored); `ab` = [a,b] is the interval where we expect to find the conductor; it may be given as a single scalar b, in which case we look in [1,b]. Increasing `ab` slows down the program but gives better accuracy for the result.
If `flag` is 0 (default), give either the conductor found as an integer, or a vector (possibly empty) of conductors found. If `flag` is 1, same but give the computed floating point approximations to the conductors found, without rounding to integers. It `flag` is 2, give all the conductors found, even those far from integers.
Caveat. This is a heuristic program and the result is not proven in any way:
``` ? L = lfuncreate(857); \\ Dirichlet L function for kronecker(857,.)
? \p19
realprecision = 19 significant digits
? lfunconductor(L)
%2 = [17, 857]
? lfunconductor(L,,1) \\ don't round
%3 = [16.99999999999999999, 857.0000000000000000]
? \p38
realprecision = 38 significant digits
? lfunconductor(L)
%4 = 857
```
Note. This program should only be used if the primes dividing the conductor are unknown, which is rare. If they are known, a direct search through possible prime exponents using `lfuncheckfeq` will be more efficient and rigorous:
``` ? E = ellinit([0,0,0,4,0]); /* Elliptic curve y^2 = x^3+4x */
? E.disc \\ |disc E| = 2^12
%2 = -4096
\\ create Ldata by hand. Guess that root number is 1 and conductor N
? L(N) = lfuncreate([n->ellan(E,n), 0, [0,1], 2, N, 1]);
? fordiv(E.disc, d, print(d,": ",lfuncheckfeq(L(d))))
1: 0
2: 0
4: -1
8: -2
16: -3
32: -127
64: -3
128: -2
256: -2
512: -1
1024: -1
2048: 0
4096: 0
? lfunconductor(L(1)) \\ lfunconductor ignores conductor = 1 in Ldata !
%5 = 32
```
The above code assumed that root number was 1; had we set it to -1, none of the `lfuncheckfeq` values would have been acceptable:
``` ? L2(N) = lfuncreate([n->ellan(E,n), 0, [0,1], 2, N, -1]);
? [ lfuncheckfeq(L2(d)) | d<-divisors(E.disc) ]
%7 = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, -1, -1]
```
The library syntax is `GEN lfunconductor(GEN L, GEN ab = NULL, long 10000], long bitprec)`.
#### lfuncost(L, {sdom}, {der = 0})
Estimate the cost of running `lfuninit(L,sdom,der)` at current bit precision. Returns [t,b], to indicate that t coefficients a_n will be computed, as well as t values of `gammamellininv`, all at bit accuracy b. A subsequent call to `lfun` at s evaluates a polynomial of degree t at exp(h s) for some real parameter h, at the same bit accuracy b. If L is already an `Linit`, then sdom and der are ignored and are best left omitted; the bit accuracy is also inferred from L: in short we get an estimate of the cost of using that particular `Linit`.
``` ? \pb 128
? lfuncost(1, [100]) \\ for zeta(1/2+I*t), |t| < 100
%1 = [7, 242] \\ 7 coefficients, 242 bits
? lfuncost(1, [1/2, 100]) \\ for zeta(s) in the critical strip, |Im s| < 100
%2 = [7, 246] \\ now 246 bits
? lfuncost(1, [100], 10) \\ for zeta(1/2+I*t), |t| < 100
%3 = [8, 263] \\ 10th derivative increases the cost by a small amount
? lfuncost(1, [10^5])
%3 = [158, 113438] \\ larger imaginary part: huge accuracy increase
? L = lfuncreate(polcyclo(5)); \\ Dedekind zeta for Q(zeta_5)
? lfuncost(L, [100]) \\ at s = 1/2+I*t), |t| < 100
%5 = [11457, 582]
? lfuncost(L, [200]) \\ twice higher
%6 = [36294, 1035]
? lfuncost(L, [10^4]) \\ much higher: very costly !
%7 = [70256473, 45452]
? \pb 256
? lfuncost(L, [100]); \\ doubling bit accuracy
%8 = [17080, 710]
```
In fact, some L functions can be factorized algebraically by the `lfuninit` call, e.g. the Dedekind zeta function of abelian fields, leading to much faster evaluations than the above upper bounds. In that case, the function returns a vector of costs as above for each individual function in the product actually evaluated:
``` ? L = lfuncreate(polcyclo(5)); \\ Dedekind zeta for Q(zeta_5)
? lfuncost(L, [100]) \\ a priori cost
%2 = [11457, 582]
? L = lfuninit(L, [100]); \\ actually perform all initializations
? lfuncost(L)
%4 = [[16, 242], [16, 242], [7, 242]]
```
The Dedekind function of this abelian quartic field is the product of four Dirichlet L-functions attached to the trivial character, a non-trivial real character and two complex conjugate characters. The non-trivial characters happen to have the same conductor (hence same evaluation costs), and correspond to two evaluations only since the two conjugate characters are evaluated simultaneously. For a total of three L-functions evaluations, which explains the three components above. Note that the actual cost is much lower than the a priori cost in this case.
The library syntax is `GEN lfuncost0(GEN L, GEN sdom = NULL, long der, long bitprec)`. Also available is `GEN lfuncost(GEN L, GEN dom, long der, long bitprec)` when L is not an `Linit`; the return value is a `t_VECSMALL` in this case.
#### lfuncreate(obj)
This low-level routine creates `Ldata` structures, needed by lfun functions, describing an L-function and its functional equation. You are urged to use a high-level constructor when one is available, and this function accepts them, see `??lfun`:
``` ? L = lfuncreate(1); \\ Riemann zeta
? L = lfuncreate(5); \\ Dirichlet L-function for quadratic character (5/.)
? L = lfuncreate(x^2+1); \\ Dedekind zeta for Q(i)
? L = lfuncreate(ellinit([0,1])); \\ L-function of E/Q: y^2=x^3+1
```
One can then use, e.g., `Lfun(L,s)` to directly evaluate the respective L-functions at s, or `lfuninit(L, [c,w,h]` to initialize computations in the rectangular box Re(s-c) ≤ w, Im(s) ≤ h.
We now describe the low-level interface, used to input non-builtin L-functions. The input is now a 6 or 7 component vector V = [a, astar, Vga, k, N, eps, poles], whose components are as follows:
* `V[1] = a` encodes the Dirichlet series coefficients (a_n). The preferred format is a closure of arity 1: `n- > vector(n,i,a(i))` giving the vector of the first n coefficients. The closure is allowed to return a vector of more than n coefficients (only the first n will be considered) or even less than n, in which case loss of accuracy will occur and a warning that `#an` is less than expected is issued. This allows to precompute and store a fixed large number of Dirichlet coefficients in a vector v and use the closure `n- > v`, which does not depend on n. As a shorthand for this latter case, you can input the vector v itself instead of the closure.
``` ? z = lfuncreate([n->vector(n,i,1), 1, [0], 1, 1, 1, 1]); \\ Riemann zeta
? lfun(z,2) - Pi^2/6
%2 = -5.877471754111437540 E-39
```
A second format is limited to L-functions affording an Euler product. It is a closure of arity 2 `(p,d)- > F(p)` giving the local factor L_p(X) at p as a rational function, to be evaluated at p-s as in `direuler`; d is set to `logint`(n,p) + 1, where n is the total number of Dirichlet coefficients (a_1,...,a_n) that will be computed. In other words, the smallest integer d such that p^d > n. This parameter d allows to compute only part of L_p when p is large and L_p expensive to compute: any polynomial (or `t_SER`) congruent to L_p modulo X^d is acceptable since only the coefficients of X^0,..., Xd-1 are needed to expand the Dirichlet series. The closure can of course ignore this parameter:
``` ? z = lfuncreate([(p,d)->1/(1-x), 1, [0], 1, 1, 1, 1]); \\ Riemann zeta
? lfun(z,2) - Pi^2/6
%4 = -5.877471754111437540 E-39
```
One can describe separately the generic local factors coefficients and the bad local factors by setting `dir` = [F, Lbad], were Lbad = [[p_1,Lp_1],...,[p_k,Lp_k]], where F describes the generic local factors as above, except that when p = p_i for some i ≤ k, the coefficient a_p is directly set to Lp_i instead of calling F.
``` N = 15;
E = ellinit([1, 1, 1, -10, -10]); \\ = "15a1"
F(p,d) = 1 / (1 - ellap(E,p)*'x + p*'x^2);
Lbad = [[3, 1/(1+'x)], [5, 1/(1-'x)]];
L = lfuncreate([[F,Lbad], 0, [0,1], 2, N, ellrootno(E)]);
```
Of course, in this case, `lfuncreate(E)` is preferable!
* `V[2] = astar` is the Dirichlet series coefficients of the dual function, encoded as `a` above. The sentinel values 0 and 1 may be used for the special cases where a = a^* and a = a^*, respectively.
* `V[3] = Vga` is the vector of α_j such that the gamma factor of the L-function is equal to γ_A(s) = ∏1 ≤ j ≤ dΓ(s+α_j), where Γ(s) = π-s/2Γ(s/2). This same syntax is used in the `gammamellininv` functions. In particular the length d of `Vga` is the degree of the L-function. In the present implementation, the α_j are assumed to be exact rational numbers. However when calling theta functions with complex (as opposed to real) arguments, determination problems occur which may give wrong results when the α_j are not integral.
* `V[4] = k` is a positive integer k. The functional equation relates values at s and k-s. For instance, for an Artin L-series such as a Dedekind zeta function we have k = 1, for an elliptic curve k = 2, and for a modular form, k is its weight. For motivic L-functions, the motivic weight w is w = k-1.
By default we assume that a_n = O_ε(nk_1+ε), where k_1 = w and even k_1 = w/2 when the L function has no pole (Ramanujan-Petersson). If this is not the case, you can replace the k argument by a vector [k,k_1], where k_1 is the upper bound you can assume.
* `V[5] = N` is the conductor, an integer N ≥ 1, such that Λ(s) = Ns/2γ_A(s)L(s) with γ_A(s) as above.
* `V[6] = eps` is the root number ϵ, i.e., the complex number (usually of modulus 1) such that Λ(a, k-s) = ϵ Λ(a^*, s).
* The last optional component `V[7] = poles` encodes the poles of the L or Λ-functions, and is omitted if they have no poles. A polar part is given by a list of 2-component vectors [β,Pβ(x)], where β is a pole and the power series Pβ(x) describes the attached polar part, such that L(s) - P_β(s-β) is holomorphic in a neighbourhood of β. For instance P_β = r/x+O(1) for a simple pole at β or r_1/x^2+r_2/x+O(1) for a double pole. The type of the list describing the polar part allows to distinguish between L and Λ: a `t_VEC` is attached to L, and a `t_COL` is attached to Λ. Unless a = a^* (coded by `astar` equal to 0 or 1), it is mandatory to specify the polar part of Λ rather than those of L since the poles of L^* cannot be infered from the latter ! Whereas the functional equation allows to deduce the polar part of Λ^* from the polar part of Λ.
Finally, if a = a^*, we allow a shortcut to describe the frequent situation where L has at most simple pole, at s = k, with residue r a complex scalar: you may then input `poles` = r. This value r can be set to 0 if unknown and it will be computed.
The library syntax is `GEN lfuncreate(GEN obj)`.
#### lfundiv(L1, L2)
Creates the `Ldata` structure (without initialization) corresponding to the quotient of the Dirichlet series L_1 and L_2 given by `L1` and `L2`. Assume that v_z(L_1) ≥ v_z(L_2) at all complex numbers z: the construction may not create new poles, nor increase the order of existing ones.
The library syntax is `GEN lfundiv(GEN L1, GEN L2, long bitprec)`.
#### lfunetaquo(M)
Returns the `Ldata` structure attached to the L function attached to the modular form z` ⟼ `i = 1^n η(Mi,1 z)Mi,2 It is currently assumed that f is a self-dual cuspidal form on Γ_0(N) for some N. For instance, the L-function ∑ τ(n) n-s attached to Ramanujan's Δ function is encoded as follows
``` ? L = lfunetaquo(Mat([1,24]));
? lfunan(L, 100) \\ first 100 values of tau(n)
```
The library syntax is `GEN lfunetaquo(GEN M)`.
#### lfungenus2(F)
Returns the `Ldata` structure attached to the L function attached to the genus-2 curve defined by y^2 = F(x) or y^2+Q(x) y = P(x) if F = [P,Q]. Currently, the model needs to be minimal at 2, and if the conductor is even, its valuation at 2 might be incorrect (a warning is issued).
The library syntax is `GEN lfungenus2(GEN F)`.
#### lfunhardy(L, t)
Variant of the Hardy Z-function given by `L`, used for plotting or locating zeros of L(k/2+it) on the critical line. The precise definition is as follows: if as usual k/2 is the center of the critical strip, d is the degree, α_j the entries of `Vga` giving the gamma factors, and ϵ the root number, then if we set s = k/2+it = ρ e and E = (d(k/2-1)+∑1 ≤ j ≤ dα_j)/2, the computed function at t is equal to Z(t) = ϵ-1/2Λ(s).|s|-Eedtθ/2 , which is a real function of t for self-dual Λ, vanishing exactly when L(k/2+it) does on the critical line. The normalizing factor |s|-Eedtθ/2 compensates the exponential decrease of γ_A(s) as t → oo so that Z(t) ~ 1.
``` ? T = 100; \\ maximal height
? L = lfuninit(1, [T]); \\ initialize for zeta(1/2+it), |t|<T
? \p19 \\ no need for large accuracy
? ploth(t = 0, T, lfunhardy(L,t))
```
Using `lfuninit` is critical for this particular applications since thousands of values are computed. Make sure to initialize up to the maximal t needed: otherwise expect to see many warnings for unsufficient initialization and suffer major slowdowns.
The library syntax is `GEN lfunhardy(GEN L, GEN t, long bitprec)`.
#### lfuninit(L, sdom, {der = 0})
Initalization function for all functions linked to the computation of the L-function L(s) encoded by `L`, where s belongs to the rectangular domain `sdom` = [center,w,h] centered on the real axis, |Re(s)-center| ≤ w, |Im(s)| ≤ h, where all three components of `sdom` are real and w, h are non-negative. `der` is the maximum order of derivation that will be used. The subdomain [k/2, 0, h] on the critical line (up to height h) can be encoded as [h] for brevity. The subdomain [k/2, w, h] centered on the critical line can be encoded as [w, h] for brevity.
The argument `L` is an `Lmath`, an `Ldata` or an `Linit`. See `??Ldata` and `??lfuncreate` for how to create it.
The height h of the domain is a crucial parameter: if you only need L(s) for real s, set h to 0. The running time is roughly proportional to (B / d+π h/4)d/2+3N1/2, where B is the default bit accuracy, d is the degree of the L-function, and N is the conductor (the exponent d/2+3 is reduced to d/2+2 when d = 1 and d = 2). There is also a dependency on w, which is less crucial, but make sure to use the smallest rectangular domain that you need.
``` ? L0 = lfuncreate(1); \\ Riemann zeta
? L = lfuninit(L0, [1/2, 0, 100]); \\ for zeta(1/2+it), |t| < 100
? lfun(L, 1/2 + I)
? L = lfuninit(L0, [100]); \\ same as above !
```
The library syntax is `GEN lfuninit0(GEN L, GEN sdom, long der, long bitprec)`.
#### lfunlambda(L, s, {D = 0})
Compute the completed L-function Λ(s) = Ns/2γ(s)L(s), or if `D` is set, the derivative of order `D` at s. The parameter `L` is either an `Lmath`, an `Ldata` (created by `lfuncreate`, or an `Linit` (created by `lfuninit`), preferrably the latter if many values are to be computed.
The result is given with absolute error less than 2-B|γ(s)Ns/2|, where B = realbitprecision.
The library syntax is `GEN lfunlambda0(GEN L, GEN s, long D, long bitprec)`.
#### lfunmfspec(L)
Returns `[valeven,valodd,omminus,omplus]`, where `valeven` (resp., `valodd`) is the vector of even (resp., odd) periods of the modular form given by `L`, and `omminus` and `omplus` the corresponding real numbers ω^- and ω^+ normalized in a noncanonical way. For the moment, only for modular forms of even weight.
The library syntax is `GEN lfunmfspec(GEN L, long bitprec)`.
#### lfunmul(L1, L2)
Creates the `Ldata` structure (without initialization) corresponding to the product of the Dirichlet series given by `L1` and `L2`.
The library syntax is `GEN lfunmul(GEN L1, GEN L2, long bitprec)`.
#### lfunorderzero(L, {m = -1})
Computes the order of the possible zero of the L-function at the center k/2 of the critical strip; return 0 if L(k/2) does not vanish.
If m is given and has a non-negative value, assumes the order is at most m. Otherwise, the algorithm chooses a sensible default:
* if the L argument is an `Linit`, assume that a multiple zero at s = k / 2 has order less than or equal to the maximal allowed derivation order.
* else assume the order is less than 4.
You may explicitly increase this value using optional argument m; this overrides the default value above. (Possibly forcing a recomputation of the `Linit`.)
The library syntax is `long lfunorderzero(GEN L, long m, long bitprec)`.
#### lfunqf(Q)
Returns the `Ldata` structure attached to the Θ function of the lattice attached to the definite positive quadratic form Q.
``` ? L = lfunqf(matid(2));
? lfunqf(L,2)
%2 = 6.0268120396919401235462601927282855839
? lfun(x^2+1,2)*4
%3 = 6.0268120396919401235462601927282855839
```
The library syntax is `GEN lfunqf(GEN Q, long prec)`.
#### lfunrootres(data)
Given the `Ldata` attached to an L-function (or the output of `lfunthetainit`), compute the root number and the residues. The output is a 3-component vector [r,R,w], where r is the residue of L(s) at the unique pole, R is the residue of Λ(s), and w is the root number. In the present implementation,
* either the polar part must be completely known (and is then arbitrary): the function determines the root number,
``` ? L = lfunmul(1,1); \\ zeta^2
? [r,R,w] = lfunrootres(L);
? r \\ single pole at 1, double
%3 = [[1, 1.[...]*x^-2 + 1.1544[...]*x^-1 + O(x^0)]]
? w
%4 = 1
? R \\ double pole at 0 and 1
%5 = [[1,[...]], [0,[...]]
```
* or at most a single pole is allowed: the function computes both the root number and the residue (0 if no pole).
The library syntax is `GEN lfunrootres(GEN data, long bitprec)`.
#### lfuntheta(data, t, {m = 0})
Compute the value of the m-th derivative at t of the theta function attached to the L-function given by `data`. `data` can be either the standard L-function data, or the output of `lfunthetainit`. The theta function is defined by the formula Θ(t) = ∑n ≥ 1a(n)K(nt/sqrt(N)), where a(n) are the coefficients of the Dirichlet series, N is the conductor, and K is the inverse Mellin transform of the gamma product defined by the `Vga` component. Its Mellin transform is equal to Λ(s)-P(s), where Λ(s) is the completed L-function and the rational function P(s) its polar part. In particular, if the L-function is the L-function of a modular form f(τ) = ∑n ≥ 0a(n)q^n with q = exp(2π iτ), we have Θ(t) = 2(f(it/sqrt{N})-a(0)). Note that an easy theorem on modular forms implies that a(0) can be recovered by the formula a(0) = -L(f,0).
The library syntax is `GEN lfuntheta(GEN data, GEN t, long m, long bitprec)`.
#### lfunthetacost(L, {tdom}, {m = 0})
This function estimates the cost of running `lfunthetainit(L,tdom,m)` at current bit precision. Returns the number of coefficients a_n that would be computed. This also estimates the cost of a subsequent evaluation `lfuntheta`, which must compute that many values of `gammamellininv` at the current bit precision. If L is already an `Linit`, then tdom and m are ignored and are best left omitted: we get an estimate of the cost of using that particular `Linit`.
``` ? \pb 1000
? L = lfuncreate(1); \\ Riemann zeta
? lfunthetacost(L); \\ cost for theta(t), t real >= 1
%1 = 15
? lfunthetacost(L, 1 + I); \\ cost for theta(1+I). Domain error !
*** at top-level: lfunthetacost(1,1+I)
*** ^--------------------
*** lfunthetacost: domain error in lfunthetaneed: arg t > 0.785
? lfunthetacost(L, 1 + I/2) \\ for theta(1+I/2).
%2 = 23
? lfunthetacost(L, 1 + I/2, 10) \\ for theta^((10))(1+I/2).
%3 = 24
? lfunthetacost(L, [2, 1/10]) \\ cost for theta(t), |t| >= 2, |arg(t)| < 1/10
%4 = 8
? L = lfuncreate( ellinit([1,1]) );
? lfunthetacost(L) \\ for t >= 1
%6 = 2471
```
The library syntax is `long lfunthetacost0(GEN L, GEN tdom = NULL, long m, long bitprec)`.
#### lfunthetainit(L, {tdom}, {m = 0})
Initalization function for evaluating the m-th derivative of theta functions with argument t in domain tdom. By default (tdom omitted), t is real, t ≥ 1. Otherwise, tdom may be
* a positive real scalar ρ: t is real, t ≥ ρ.
* a non-real complex number: compute at this particular t; this allows to compute θ(z) for any complex z satisfying |z| ≥ |t| and |arg z| ≤ |arg t|; we must have |2 arg z / d| < π/2, where d is the degree of the Γ factor.
* a pair [ρ,α]: assume that |t| ≥ ρ and |arg t| ≤ α; we must have |2α / d| < π/2, where d is the degree of the Γ factor.
``` ? \p500
? L = lfuncreate(1); \\ Riemann zeta
? t = 1+I/2;
? lfuntheta(L, t); \\ direct computation
time = 30 ms.
? T = lfunthetainit(L, 1+I/2);
time = 30 ms.
? lfuntheta(T, t); \\ instantaneous
```
The T structure would allow to quickly compute θ(z) for any z in the cone delimited by t as explained above. On the other hand
``` ? lfuntheta(T,I)
*** at top-level: lfuntheta(T,I)
*** ^--------------
*** lfuntheta: domain error in lfunthetaneed: arg t > 0.785398163397448
```
The initialization is equivalent to
``` ? lfunthetainit(L, [abs(t), arg(t)])
```
The library syntax is `GEN lfunthetainit(GEN L, GEN tdom = NULL, long m, long bitprec)`.
#### lfunzeros(L, lim, {divz = 8})
`lim` being either a positive upper limit or a non-empty real interval inside [0,+ oo [, computes an ordered list of zeros of L(s) on the critical line up to the given upper limit or in the given interval. Use a naive algorithm which may miss some zeros: it assumes that two consecutive zeros at height T ≥ 1 differ at least by 2π/ω, where ω := `divz`.(dlog(T/2π) +d+ 2log(N/(π/2)^d)). To use a finer search mesh, set divz to some integral value larger than the default ( = 8).
``` ? lfunzeros(1, 30) \\ zeros of Rieman zeta up to height 30
%1 = [14.134[...], 21.022[...], 25.010[...]]
? #lfunzeros(1, [100,110]) \\ count zeros with 100 <= Im(s) <= 110
%2 = 4
```
The algorithm also assumes that all zeros are simple except possibly on the real axis at s = k/2 and that there are no poles in the search interval. (The possible zero at s = k/2 is repeated according to its multiplicity.)
Should you pass an `Linit` argument to the function, beware that the algorithm needs at least
``` L = lfuninit(Ldata, [T+1])
```
where T is the upper bound of the interval defined by `lim`: this allows to detect zeros near T. Make sure that your `Linit` domain contains this one, i.e. a domain [1,T+1] is fine but [0, T] is not! The algorithm assumes that a multiple zero at s = k / 2 has order less than or equal to the maximal derivation order allowed by the `Linit`. You may increase that value in the `Linit` but this is costly: only do it for zeros of low height or in `lfunorderzero` instead.
The library syntax is `GEN lfunzeros(GEN L, GEN lim, long divz, long bitprec)`. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216838479042053, "perplexity": 1423.1210320616042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807044.45/warc/CC-MAIN-20171123233821-20171124013821-00309.warc.gz"} |
http://physicistfarmer.blogspot.com/2012/11/the-internal-energy-of-air.html | ## Wednesday, November 21, 2012
### The Internal Energy of Air
You know how sometimes, as you're going about your daily life, a completely random thought leads to you suddenly being intensely curious about something and unable to rest until your curiosity has been sated? This weekend I was thinking about nothing in particular while setting up for the morning at work, when I got the burning desire to know how much internal energy a cubic meter of air contained.
The nice part of being a physicist is that I can satisfy these urges, and the nice part of having a blog is that I can share it with other people! So without further ado, let's attempt to calculate the internal energy of 1 cubic meter of air at standard atmospheric pressure and room temperature (around 80 °F, or specifically for ease of calcuation, 300 kelvin).
This is actually fairly simple in theory. There exists a simple equation in thermodynamics for the internal energy of an ideal gas: $$U=\frac{N}{2}nRT\tag{1}$$ In this equation, U is the internal energy locked up in each of the N degrees of freedom of the gas, n is the number of moles of gas, the constant R is the ideal gas constant with value $$8.3144621\ \text{J}/(\text{mol}\cdot\text{K})$$, and T is the absolute temperature in kelvins.
Now, at this point I should probably elaborate on what internal energy is and what it has to do with degrees of freedom. As I'm sure you know, the temperature of a system is merely a measure of the average energy of its constituent particles. This energy is called the internal energy of the system and can be stored in several different ways, each of which is known as a degree of freedom: in the motions of particles (atoms or molecules), in the rotation or vibration of molecules, or in the excitation and relaxation of electrons in the atoms (not all of these actually apply to all systems, as we shall see).
In thermodynamics there is a theorem known as the equipartition theorem that states that the available internal energy of a system is equally divided among all of its degrees of freedom. If we look at an ideal monatomic gas (such as any of the noble gases), it has only three degrees of freedom, corresponding to the three ways the particles making up the gas can move in three dimensions. Technically, the energy stored by electrons being excited in the atoms could count as another degree of freedom, but at the relatively low temperatures we're considering for this problem there is very little excitation going on and we are free to ignore this effect.
This would be fine if we were considering a monatomic gas, but we are interested in air, which is primarily composed of two diatomic gases: nitrogen (78%) and oxygen (21%). (The monatomic gas argon makes up about 90% of the remaining ~1% of the atmosphere, so we will simply assume that it is all argon for simplicity.) Diatomic molecules bring a new factor into the equation, as they can rotate in two dimensions around their long axis, and since energy can be stored in their rotational motion, this gives them another two degrees of freedom. Diatomic molecules can also store energy in the bond between them, and while this could count as another degree of freedom, in practice it takes temperatures much higher than we are considering here for this to be a significant effect.
So, in summary, monatomic gases have 3 degrees of freedom, representing the kinetic energy associated with their motion through three-dimensional space; diatomic gases – at the temperatures we are interested in – have 5 degrees of freedom since they have the ability to rotate as well. (If I was looking at much higher temperatures I'd have to take into account that vibrational mode I neglect here.)
The upshot of that lengthy diversion is that equation ($$1$$) above looks like $$\frac{3}{2}nRT$$ for monatomic gases and $$\frac{5}{2}nRT$$ for diatomic ones. Since R is a constant and we have a temperature in mind already, all that remains is to find n, the amount of each type of gas.
The n in that equation refers to moles of gas. The mole (abbreviated mol) is a unit used in chemistry and physics to represent a quantity of substance in terms of the number of particles (atoms or molecules) that make it up. A closely related concept is that of Avogadro's Number, $$6.022\times10^{23}$$ (named after the Italian scientist Amadeo Avogadro). One mole of a substance is simply the amount of that substance that contains Avogadro's number of particles in it (it's slightly more complicated than that, but this will suffice for our purposes). Avogadro's number may seem arbitrary, but it is actually measured and defined such that if you have an amount of a substance in grams equal to its mean atomic mass in daltons, then you have one mole of that substance. (The name dalton is given to the unit of mass formerly known as the atomic mass unit, a handy measure for measuring the weight of atoms. It is roughly equivalent to the mass of a nucleon.)
For example: a hydrogen atom has a mean atomic mass of $$1.01$$ daltons. Hydrogen typically combines with itself to form dihydrogen gas, H$$_2$$. Thus dihydrogen gas has a mean molecular mass of $$2.02$$ daltons. If you have $$2.02$$ grams of dihydrogen gas, you then have one mole ($$6.022\times10^{23}$$) of dihydrogen gas molecules. Oxygen (mean atomic mass $$16.00$$ daltons) likewise combines to form dioxygen (O$$_2$$) with a mean atomic mass of $$32.00$$ daltons. If you have $$32.00$$ grams of dioxygen gas, you then have one mole of dioxygen molecules. Combining the two to make water, H$$_2$$O, gives water a mean atomic mass of $$18.02$$ ($$2\times1.01+16.00$$), so if you have $$18.02$$ grams of water, you have one mole of water molecules.
Anyway, this lengthy preface should hopefully enable you to follow what should be a fairly straight-forward calculation, which we are finally ready to begin.
First off, we need to find the number of moles of oxygen, nitrogen, and argon in one cubic meter of our theoretical approximation of air. We can do that by first finding the density of air at our specified conditions (300 K, ~80 °F and 1 standard atmosphere of pressure, 101.325 kPa), then multiplying by the fractions established before to find out how much mass of each gas exists, before converting that mass into moles of each gas to fit the equation.
There is a equation for the density of dry air (which we are assuming it is) given by $\rho=\frac{P}{R_{\text{specific}}T}$ In this equation, $$\rho$$ (the Greek letter rho) stands for density (in kg/m$$^3$$), P stands for pressure, R$$_{\text{specific}}$$ is a version of the ideal gas constant specifically for dry air equal to 287.058 J/(kg$$\cdot$$K), and T is again the temperature in kelvins.
Putting in the numbers and doing the math, we get:
\begin{align}\rho&=\frac{101,325 \frac{\text{N}}{\text{m}^2} }{287.058 \frac{\text{N}\cdot\text{m}}{\text{kg}\cdot\text{K}} \cdot300.00\ \text{K}}\\
&=1.1766\frac{\text{kg}}{\text{m}^3}
\end{align}
Since we are assuming a single cubic meter of air, our mass of air consists of 1.1766 kg (about 2.6 pounds of air). Multiplying by the fractions we assumed for each of the ingredients, we get:
\begin{align}
m_{\text{N}_2}&=0.78\cdot1.1766\ \text{kg}=0.9177\ \text{kg}=917.7\ \text{g}\\
m_{\text{O}_2}&=0.21\cdot1.1766\ \text{kg}=0.2471\ \text{kg}=247.1\ \text{g}\\
m_{\text{Ar}}&=0.01\cdot1.1766\ \text{kg}= 0.0118\ \text{kg}=11.8\ \text{g}
\end{align}
Now that we have the masses involved, we can convert to moles using their mean atomic masses:
\begin{align}
n_{\text{N}_2}&=917.7\ \text{g}/28.013\frac{\text{g}}{\text{mol}}=32.762\ \text{moles}\\
n_{\text{O}_2}&=247.1\ \text{g}/31.9988\frac{\text{g}}{\text{mol}}=7.7222\ \text{moles}\\
n_{\text{Ar}}&= 11.8 \ \text{g}/39.948\frac{\text{g}}{\text{mol}}=0.29538 \ \text{moles}
\end{align}
Having now obtained the number of moles of each gas in our hypothetical approximation to air, we can now use equation ($$1$$) to calculate the amount of internal energy each gas contributes to the whole.
\begin{align}
U _{\text{N}_2}&=\frac{5}{2}\cdot32.762\ \text{mol}\cdot8.314\frac{\text{J}}{\text{K}\cdot\text{mol}}\cdot300.00\ \text{K}=204.3\ \text{kJ}\\
U_{\text{O}_2}&= \frac{5}{2}\cdot7.7222\ \text{mol}\cdot8.314\frac{\text{J}}{\text{K}\cdot\text{mol}}\cdot300.00\ \text{K}=48.12\ \text{kJ} \\
U_{\text{Ar}}&=\frac{3}{2}\cdot0.29538\ \text{mol}\cdot8.314\frac{\text{J}}{\text{K}\cdot\text{mol}}\cdot300.00\ \text{K}=1.105\ \text{kJ}
\end{align}
This gives us a total of
$U _{\text{N}_2}+ U_{\text{O}_2}+ U_{\text{Ar}}=253.5\ \text{kJ}$
That...actually turns out to be a bit more than I was expecting. That's a quarter of a million joules of energy contained in the motion and rotation of the air molecule in a single cubic meter of air.
To put this number in perspective, let's do some conversions to units you may be more familiar with. That many kilojoules is almost exactly 60 kilocalories (or Calories), the unit the energy in food is measured in. Put another way, the normal energy needs of an adult human are typically pegged at around 2,000 Calories per day. If you could somehow extract the energy from air, you'd need only about 33 cubic meters of air per day to survive, a volume smaller than the amount of air in most average-sized homes. Alternatively, the average amount of solar power over a 1 square meter area at the Earth's surface is about 1 kilojoule per second (1 kilowatt), so the amount of energy we calculated is equivalent to the amount hitting an area of 253,500 square meters (a quarter of a square kilometer) every second during full daylight. There's a lot of energy locked up in the air around you.
In a sense, though, I suppose I really shouldn't be too surprised. Gas molecules in the air whiz about at great speed, and this speed comes from the kinetic energy they have. In fact, we can estimate the root-mean-square speed of a typical nitrogen molecule fairly easily (the M is the molar-mass of the gas, in kg/mol):
\begin{align}v_{\text{rms}}&=\sqrt{\frac{3RT}{M}}\\
&=\sqrt{\frac{3\cdot 8.314\frac{\text{J}}{\text{K}\cdot\text{mol}}\cdot300.00\ \text{K}}{0.028013\frac{\text{kg}}{\text{mol}}}}\\
&=516.8\frac{\text{m}}{\text{s}}
\end{align}In case you're wondering, that a whopping 1,156 miles per hour. Those nitrogen molecules are, on average, moving about that fast (oxygen and argon move a bit slower, since they're more massive). So I guess when you consider billions upon billions of tiny atoms all zooming around at speeds comparable to this, it makes sense that there's a lot of energy tied up in their motion as kinetic energy. Wow. Amazing stuff. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9038228988647461, "perplexity": 489.11369670414257}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812579.21/warc/CC-MAIN-20180219091902-20180219111902-00059.warc.gz"} |
https://en.wikipedia.org/wiki/Frobenius_algebra | # Frobenius algebra
In mathematics, especially in the fields of representation theory and module theory, a Frobenius algebra is a finite-dimensional unital associative algebra with a special kind of bilinear form which gives the algebras particularly nice duality theories. Frobenius algebras began to be studied in the 1930s by Richard Brauer and Cecil Nesbitt and were named after Ferdinand Frobenius. Tadashi Nakayama discovered the beginnings of a rich duality theory (Nakayama 1939), (Nakayama 1941). Jean Dieudonné used this to characterize Frobenius algebras (Dieudonné 1958). Frobenius algebras were generalized to quasi-Frobenius rings, those Noetherian rings whose right regular representation is injective. In recent times, interest has been renewed in Frobenius algebras due to connections to topological quantum field theory.
## Definition
A finite-dimensional, unital, associative algebra A defined over a field k is said to be a Frobenius algebra if A is equipped with a nondegenerate bilinear form σ : A × Ak that satisfies the following equation: σ(a·b, c) = σ(a, b·c). This bilinear form is called the Frobenius form of the algebra.
Equivalently, one may equip A with a linear functional λ : Ak such that the kernel of λ contains no nonzero left ideal of A.
A Frobenius algebra is called symmetric if σ is symmetric, or equivalently λ satisfies λ(a·b) = λ(b·a).
There is also a different, mostly unrelated notion of the symmetric algebra of a vector space.
## Examples
1. Any matrix algebra defined over a field k is a Frobenius algebra with Frobenius form σ(a,b)=tr(a·b) where tr denotes the trace.
2. Any finite-dimensional unital associative algebra A has a natural homomorphism to its own endomorphism ring End(A). A bilinear form can be defined on A in the sense of the previous example. If this bilinear form is nondegenerate, then it equips A with the structure of a Frobenius algebra.
3. Every group ring of a finite group over a field is a Frobenius algebra, with Frobenius form σ(a,b) the coefficient of the identity element in a·b. This is a special case of example 2.
4. For a field k, the four-dimensional k-algebra k[x,y]/ (x2, y2) is a Frobenius algebra. This follows from the characterization of commutative local Frobenius rings below, since this ring is a local ring with its maximal ideal generated by x and y, and unique minimal ideal generated by xy.
5. For a field k, the three-dimensional k-algebra A=k[x,y]/ (x, y)2 is not a Frobenius algebra. The A homomorphism from xA into A induced by xy cannot be extended to an A homomorphism from A into A, showing that the ring is not self-injective, thus not Frobenius.
6. Any finite-dimensional Hopf algebra, by a 1969 theorem of Larson-Sweedler on Hopf modules and integrals.
## Properties
• The direct product and tensor product of Frobenius algebras are Frobenius algebras.
• A finite-dimensional commutative local algebra over a field is Frobenius if and only if the right regular module is injective, if and only if the algebra has a unique minimal ideal.
• Commutative, local Frobenius algebras are precisely the zero-dimensional local Gorenstein rings containing their residue field and finite-dimensional over it.
• Frobenius algebras are quasi-Frobenius rings, and in particular, they are left and right Artinian and left and right self-injective.
• For a field k, a finite-dimensional, unital, associative algebra is Frobenius if and only if the injective right A-module Homk(A,k) is isomorphic to the right regular representation of A.
• For an infinite field k, a finite-dimensional, unitial, associative k-algebra is a Frobenius algebra if it has only finitely many minimal right ideals.
• If F is a finite-dimensional extension field of k, then a finite-dimensional F-algebra is naturally a finite-dimensional k-algebra via restriction of scalars, and is a Frobenius F-algebra if and only if it is a Frobenius k-algebra. In other words, the Frobenius property does not depend on the field, as long as the algebra remains a finite-dimensional algebra.
• Similarly, if F is a finite-dimensional extension field of k, then every k-algebra A gives rise naturally to a F algebra, Fk A, and A is a Frobenius k-algebra if and only if Fk A is a Frobenius F-algebra.
• Amongst those finite-dimensional, unital, associative algebras whose right regular representation is injective, the Frobenius algebras A are precisely those whose simple modules M have the same dimension as their A-duals, HomA(M,A). Amongst these algebras, the A-duals of simple modules are always simple.
## Category-theoretical definition
In category theory, the notion of Frobenius object is an abstract definition of a Frobenius algebra in a category. A Frobenius object ${\displaystyle (A,\mu ,\eta ,\delta ,\varepsilon )}$ in a monoidal category ${\displaystyle (C,\otimes ,I)}$ consists of an object A of C together with four morphisms
${\displaystyle \mu :A\otimes A\to A,\qquad \eta :I\to A,\qquad \delta :A\to A\otimes A\qquad \mathrm {and} \qquad \varepsilon :A\to I}$
such that
• ${\displaystyle (A,\mu ,\eta )\,}$ is a monoid object in C,
• ${\displaystyle (A,\delta ,\varepsilon )}$ is a comonoid object in C,
• the diagrams
and
commute (for simplicity the diagrams are given here in the case where the monoidal category C is strict) and are known as Frobenius conditions.[1]
More compactly, a Frobenius algebra in C is a so-called Frobenius monoidal functor A:1C, where 1 is the category consisting of one object and one arrow.
A Frobenius algebra is called isometric or special if ${\displaystyle \mu \circ \delta =\mathrm {Id} _{A}}$.
## Applications
Frobenius algebras originally were studied as part of an investigation into the representation theory of finite groups, and have contributed to the study of number theory, algebraic geometry, and combinatorics. They have been used to study Hopf algebras, coding theory, and cohomology rings of compact oriented manifolds.
### Topological quantum field theories
The product and coproduct on a Frobenius algebra can be interpreted as the functor of a (1+1)-dimensional topological quantum field theory, applied to a pair of pants.
Recently, it has been seen that they play an important role in the algebraic treatment and axiomatic foundation of topological quantum field theory. A commutative Frobenius algebra determines uniquely (up to isomorphism) a (1+1)-dimensional TQFT. More precisely, the category of commutative Frobenius K-algebras is equivalent to the category of symmetric strong monoidal functors from 2-Cob (the category of 2-dimensional cobordisms between 1-dimensional manifolds) to VectK (the category of vector spaces over K).
The correspondence between TQFTs and Frobenius algebras is given as follows:
• 1-dimensional manifolds are disjoint unions of circles: a TQFT associates a vector space with a circle, and the tensor product of vector spaces with a disjoint union of circles,
• a TQFT associates (functorially) to each cobordism between manifolds a map between vector spaces,
• the map associated with a pair of pants (a cobordism between 1 circle and 2 circles) gives a product map VVV or a coproduct map VVV, depending on how the boundary components are grouped – which is commutative or cocommutative, and
• the map associated with a disk gives a counit (trace) or unit (scalars), depending on grouping of boundary.
This relation between Frobenius-algebras and (1+1)-dimensional TQFTs can be used to explain the Khovanov's categorification of the Jones polynomial.[2][3]
## Generalization: Frobenius extension
Let B be a subring sharing the identity element of a unital associative ring A. This is also known as ring extension A | B. Such a ring extension is called Frobenius if
• There is a linear mapping E: AB satisfying the bimodule condition E(bac) = bE(a)c for all b,cB and aA.
• There are elements in A denoted ${\displaystyle \{x_{i}\}_{i=1}^{n}}$ and ${\displaystyle \{y_{i}\}_{i=1}^{n}}$ such that for all aA we have:
${\displaystyle \sum _{i=1}^{n}E(ax_{i})y_{i}=a=\sum _{i=1}^{n}x_{i}E(y_{i}a)}$
The map E is sometimes referred to as a Frobenius homomorphism and the elements ${\displaystyle x_{i},y_{i}}$ as dual bases. (As an exercise it is possible to give an equivalent definition of Frobenius extension as a Frobenius algebra-coalgebra object in the category of B-B-bimodules, where the equations just given become the counit equations for the counit E.)
For example, a Frobenius algebra A over a commutative ring K, with associative nondegenerate bilinear form (-,-) and projective K-bases ${\displaystyle x_{i},y_{i}}$ is a Frobenius extension A | K with E(a) = (a,1). Other examples of Frobenius extensions are pairs of group algebras associated to a subgroup of finite index, Hopf subalgebras of a semisimple Hopf algebra, Galois extensions and certain von Neumann algebra subfactors of finite index. Another source of examples of Frobenius extensions (and twisted versions) are certain subalgebra pairs of Frobenius algebras, where the subalgebra is stabilized by the symmetrizing automorphism of the overalgebra.
The details of the group ring example are the following application of elementary notions in group theory. Let G be a group and H a subgroup of finite index n in G; let g1, ..., gn. be left coset representatives, so that G is a disjoint union of the cosets g1H, ..., gnH. Over any commutative base ring k define the group algebras A = k[G] and B = k[H], so B is a subalgebra of A. Define a Frobenius homomorphism E: AB by letting E(h) = h for all h in H, and E(g) = 0 for g not in H : extend this linearly from the basis group elements to all of A, so one obtains the B-B-bimodule projection
${\displaystyle E\left(\sum _{g\in G}n_{g}g\right)=\sum _{h\in H}n_{h}h\ \ \ {\text{ for }}n_{g}\in k}$
(The orthonormality condition ${\displaystyle E(g_{i}^{-1}g_{j})=\delta _{ij}1}$ follows.) The dual base is given by ${\displaystyle x_{i}=g_{i},y_{i}=g_{i}^{-1}}$, since
${\displaystyle \sum _{i=1}^{n}g_{i}E\left(g_{i}^{-1}\sum _{g\in G}n_{g}g\right)=\sum _{i}\sum _{h\in H}n_{g_{i}h}g_{i}h=\sum _{g\in G}n_{g}g}$
The other dual base equation may be derived from the observation that G is also a disjoint union of the right cosets ${\displaystyle Hg_{1}^{-1},\ldots ,Hg_{n}^{-1}}$.
Also Hopf-Galois extensions are Frobenius extensions by a theorem of Kreimer and Takeuchi from 1989. A simple example of this is a finite group G acting by automorphisms on an algebra A with subalgebra of invariants:
${\displaystyle B=\{x\in A\mid \forall g\in G,g(x)=x\}.}$
By DeMeyer's criterion A is G-Galois over B if there are elements ${\displaystyle \{a_{i}\}_{i=1}^{n},\{b_{i}\}_{i=1}^{n}}$ in A satisfying:
${\displaystyle \forall g\in G:\ \ \sum _{i=1}^{n}a_{i}g(b_{i})=\delta _{g,1_{G}}1_{A}}$
whence also
${\displaystyle \forall g\in G:\ \ \sum _{i=1}^{n}g(a_{i})b_{i}=\delta _{g,1_{G}}1_{A}.}$
Then A is a Frobenius extension of B with E: AB defined by
${\displaystyle E(a)=\sum _{g\in G}g(a)}$
which satisfies
${\displaystyle \forall x\in A:\ \ \sum _{i=1}^{n}E(xa_{i})b_{i}=x=\sum _{i=1}^{n}a_{i}E(b_{i}x).}$
(Furthermore, an example of a separable algebra extension since ${\textstyle e=\sum _{i=1}^{n}a_{i}\otimes _{B}b_{i}}$ is a separability element satisfying ea = ae for all a in A as well as ${\textstyle \sum _{i=1}^{n}a_{i}b_{i}=1}$. Also an example of a depth two subring (B in A) since
${\displaystyle a\otimes _{B}1=\sum _{g\in G}t_{g}g(a)}$
where
${\displaystyle t_{g}=\sum _{i=1}^{n}a_{i}\otimes _{B}g(b_{i})}$
for each g in G and a in A.)
Frobenius extensions have a well-developed theory of induced representations investigated in papers by Kasch and Pareigis, Nakayama and Tzuzuku in the 1950s and 1960s. For example, for each B-module M, the induced module AB M (if M is a left module) and co-induced module HomB(A, M) are naturally isomorphic as A-modules (as an exercise one defines the isomorphism given E and dual bases). The endomorphism ring theorem of Kasch from 1960 states that if A | B is a Frobenius extension, then so is A → End(AB) where the mapping is given by aλa(x) and λa(x) = ax for each a,xA. Endomorphism ring theorems and converses were investigated later by Mueller, Morita, Onodera and others. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 26, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342717528343201, "perplexity": 547.9414843432178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989856.11/warc/CC-MAIN-20210511184216-20210511214216-00608.warc.gz"} |
http://math.stackexchange.com/users/774/neil-g?tab=activity&sort=comments | Neil G
Reputation
788
Top tag
Next privilege 1,000 Rep.
Create new tags
Jun2 comment Intuitive explanation of a definition of the Fisher information @bcf: You're right. And yeah, it is a really well-written answer. All the more reason to remove minor mistakes :) Jun2 comment Intuitive explanation of a definition of the Fisher information @bcf no that doesn't make sense either. I'm going to edit it. Oct31 comment Computing Gradient and Hessian of a vector function @SeeBees: Take a look at Pearlmutter and Schraudolph's work on the R technique that approximates exactly that in linear time. Sep3 comment Intuitive explanation of a definition of the Fisher information Is there a derivative missing in your definition of the score? Apr10 comment Beta function derivation @EricAuld: Bayes rule is much easier to think about without the denominator, which is just a constant given your observations. So: $P(q\mid a) \propto P(a\mid q) P(q)$. (These terms are called posterior, likelihood, and prior respectively.) Then, you can fill it in: $\int_0^x f(q \mid a) dq \propto \int_0^x f(q)f(a \mid q)dq \implies f(q \mid a) \propto f(q)f(a \mid q)$. And, the likelihood $f(a \mid q)$ is just $q$ for heads and $1-q$ for tails. Feel free to ask a question on stats.stackexchange. There are a lot of very helpful people there. Apr9 comment Beta function derivation @EricAuld: Suppose I give you a coin whose bias $p$ you don't know. You flip the coin $n$ times and get heads $a$ times. Then the likelihood induced on $p$ is a Beta distribution. You can work out the Beta distribution up to normalization by writing out the product of the Bernoulli mass function $p^x(1-p)^{1-x}$ for each realization $x_i \in \{0,1\}$ (where $\sum_i x_i = a$ and $\sum_i 1-x_i = n-a$). Dec1 comment What is the purpose of the first test in an inductive proof? IH: All sets of $n \ge 4$ lines on the plane intersect at a single point. If it holds true for $n\ge 4$ points, then it holds true for $n+1$ points since the first $n\ge 3$ points intersect at a point and so too do the last $n$ lines and the two intersection points must be the same point. Therefore, all lines on the plane intersect at a single point. Aug14 comment Reasoning that $\sin2x=2 \sin x \cos x$ +1: This method also gives all of the double-angle (and triple-angle, etc.) formulas. Aug11 comment An interesting puzzle I think a rough sketch might be that in order to create any significant probability mass in $\vert X-Y \vert$ between 1 and 2, you have to also create mass between 0 and 1. So, $P{|X−Y|≤1}$ is no less than half the LHS? Apr15 comment Log likelihood of a realization of a Poisson process? Hi David, do you have a citation for the likelihood? I arrived at the same likelihood by reason directly from the entropy of a Poisson process given by McFadden. Mar21 comment Graphically, what is positive semidefinite-ness? Thanks, I think I see it now. Essentially, Newton's method is looking for points with zero slope, and decides for each eigenvector whether to go towards a local maximum or minimum based on the sign of the eigenvalue. Is that right? Mar21 comment Graphically, what is positive semidefinite-ness? Thanks for taking the time to answer. You're right that Newton's method will only give a local minimum (and will stop at a local maximum) since it's looking for a point where the gradient is zero. I am still having trouble visualizing the second part of your answer. Feb4 comment Probability problem of dice game Feb4 comment Probability problem of dice game @joriki: Yes, it's solving a system of linear equations. You could have written it as a dynamic program that memoizes the transition probabilities and the probability of return for every state, which would be more efficient for a sparse transition matrix. Feb3 comment Probability problem of dice game @joriki: I think you could turn all of the loops into self-loops first, and then you would have a traversal order. Anyway, I calculated the transition matrix. Perhaps you can finish it from there? Feb3 comment Probability problem of dice game @joriki: I was thinking of doing something like this: stats.stackexchange.com/questions/48396/… I see your point that you can get the transition matrix and decompose it, which might give an easier solution. I have some time now to give it a shot. Feb3 comment Probability problem of dice game It's not hard to use dynamic programming to find an exact solution. The state space is the last number rolled cross the run length (1 or 2). So, there are 22 states plus 2 finish states. The start space can be $(12, 1)$. What happens if they both simultaneously win? Feb2 comment Trace of a matrix times outer product Thanks for your answer so far. Is there an intuitive explanation of why someone would prefer to write the first expression? Does it reveal some mathematical structure? The second one seems clearer to me. Nov24 comment Convex function plus $v e^{-x}$ @coffeemath: I meant $w$, but I don't think it makes any difference? Nov24 comment Convex function plus $v e^{-x}$ Yes, thanks for the correction. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9052323698997498, "perplexity": 372.2286439267274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098987.83/warc/CC-MAIN-20150627031818-00169-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://xianblog.wordpress.com/tag/speciation-rate/ | ## Naturally amazed at non-identifiability
Posted in Books, Statistics, University life with tags , , , , , , , , , , , on May 27, 2020 by xi'an
A Nature paper by Stilianos Louca and Matthew W. Pennell, Extant time trees are consistent with a myriad of diversification histories, comes to the extraordinary conclusion that birth-&-death evolutionary models cannot distinguish between several scenarios given the available data! Namely, stem ages and daughter lineage ages cannot identify the speciation rate function λ(.), the extinction rate function μ(.) and the sampling fraction ρ inherently defining the deterministic ODE leading to the number of species predicted at any point τ in time, N(τ). The Nature paper does not seem to make a point beyond the obvious and I am rather perplexed at why it got published [and even highlighted]. A while ago, under the leadership of Steve, PNAS decided to include statistician reviewers for papers relying on statistical arguments. It could time for Nature to move there as well.
“We thus conclude that two birth-death models are congruent if and only if they have the same rp and the same λp at some time point in the present or past.” [S.1.1, p.4]
Or, stated otherwise, that a tree structured dataset made of branch lengths are not enough to identify two functions that parameterise the model. The likelihood looks like
$\frac{\rho^{n-1}\Psi(\tau_1,\tau_0)}{1-E(\tau)}\prod_{i=1}^n \lambda(\tau_i)\Psi(s_{i,1},\tau_i)\Psi(s_{i,2},\tau_i)$\$
where E(.) is the probability to survive to the present and ψ(s,t) the probability to survive and be sampled between times s and t. Sort of. Both functions depending on functions λ(.) and μ(.). (When the stem age is unknown, the likelihood changes a wee bit, but with no changes in the qualitative conclusions. Another way to write this likelihood is in term of the speciation rate λp
$e^{-\Lambda_p(\tau_0)}\prod_{i=1}^n\lambda_p(\tau_I)e^{-\Lambda_p(\tau_i)}$
where Λp is the integrated rate, but which shares the same characteristic of being unable to identify the functions λ(.) and μ(.). While this sounds quite obvious the paper (or rather the supplementary material) goes into fairly extensive mode, including “abstract” algebra to define congruence.
“…we explain why model selection methods based on parsimony or “Occam’s razor”, such as the Akaike Information Criterion and the Bayesian Information Criterion that penalize excessive parameters, generally cannot resolve the identifiability issue…” [S.2, p15]
As illustrated by the above quote, the supplementary material also includes a section about statistical model selections techniques failing to capture the issue, section that seems superfluous or even absurd once the fact that the likelihood is constant across a congruence class has been stated. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.856747031211853, "perplexity": 1756.697111903125}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585186.33/warc/CC-MAIN-20211018000838-20211018030838-00479.warc.gz"} |
http://xrpp.iucr.org/Da/ch1o1v0001/sec1o1o2/ | International
Tables for
Crystallography
Volume D
Physical properties of crystals
Edited by A. Authier
International Tables for Crystallography (2006). Vol. D, ch. 1.1, pp. 5-7
## Section 1.1.2. Basic properties of vector spaces
A. Authiera*
aInstitut de Minéralogie et de la Physique des Milieux Condensés, Bâtiment 7, 140 rue de Lourmel, 75015 Paris, France
### 1.1.2. Basic properties of vector spaces
| top | pdf |
[The reader may also refer to Section 1.1.4 of Volume B of International Tables for Crystallography (2001).]
#### 1.1.2.1. Change of basis
| top | pdf |
Let us consider a vector space spanned by the set of n basis vectors , , . The decomposition of a vector using this basis is written using the Einstein convention. The interpretation of the position of the indices is given below. For the present, we shall use the simple rules:
(i) the index is a subscript when attached to basis vectors; (ii) the index is a superscript when attached to the components. The components are numerical coordinates and are therefore dimensionless numbers.
Let us now consider a second basis, . The vector x is independent of the choice of basis and it can be decomposed also in the second basis:
If and are the transformation matrices between the bases and , the following relations hold between the two bases: (summations over j and i, respectively). The matrices and are inverse matrices: (Kronecker symbol: if if ).
Important Remark. The behaviour of the basis vectors and of the components of the vectors in a transformation are different. The roles of the matrices and are opposite in each case. The components are said to be contravariant. Everything that transforms like a basis vector is covariant and is characterized by an inferior index. Everything that transforms like a component is contravariant and is characterized by a superior index. The property describing the way a mathematical body transforms under a change of basis is called variance.
#### 1.1.2.2. Metric tensor
| top | pdf |
We shall limit ourselves to a Euclidean space for which we have defined the scalar product. The analytical expression of the scalar product of two vectors and is Let us put The nine components are called the components of the metric tensor. Its tensor nature will be shown in Section 1.1.3.6.1. Owing to the commutativity of the scalar product, we have
The table of the components is therefore symmetrical. One of the definition properties of the scalar product is that if for all x, then . This is translated as
In order that only the trivial solution exists, it is necessary that the determinant constructed from the 's is different from zero: This important property will be used in Section 1.1.2.4.1.
#### 1.1.2.3. Orthonormal frames of coordinates – rotation matrix
| top | pdf |
An orthonormal coordinate frame is characterized by the fact that One deduces from this that the scalar product is written simply as
Let us consider a change of basis between two orthonormal systems of coordinates: Multiplying the two sides of this relation by , it follows that which can also be written, if one notes that variance is not apparent in an orthonormal frame of coordinates and that the position of indices is therefore not important, as
The matrix coefficients, , are the direction cosines of with respect to the basis vectors. Similarly, we have so that where T indicates transpose. It follows that so that The matrices A and B are unitary matrices or matrices of rotation and
If the senses of the axes are not changed – proper rotation. If the senses of the axes are changed – improper rotation. (The right hand is transformed into a left hand.)
One can write for the coefficients giving six relations between the nine coefficients . There are thus three independent coefficients of the matrix A.
| top | pdf |
#### 1.1.2.4.1. Covariant coordinates
| top | pdf |
Using the developments (1.1.2.1) and (1.1.2.5), the scalar products of a vector x and of the basis vectors can be written The n quantities are called covariant components, and we shall see the reason for this a little later. The relations (1.1.2.9) can be considered as a system of equations of which the components are the unknowns. One can solve it since (see the end of Section 1.1.2.2). It follows that with
The table of the 's is the inverse of the table of the 's. Let us now take up the development of x with respect to the basis :
Let us replace by the expression (1.1.2.10): and let us introduce the set of n vectors which span the space . This set of n vectors forms a basis since (1.1.2.12) can be written with the aid of (1.1.2.13) as
The 's are the components of x in the basis . This basis is called the dual basis. By using (1.1.2.11) and (1.1.2.13), one can show in the same way that
It can be shown that the basis vectors transform in a change of basis like the components of the physical space. They are therefore contravariant. In a similar way, the components of a vector x with respect to the basis transform in a change of basis like the basis vectors in direct space, ; they are therefore covariant:
#### 1.1.2.4.2. Reciprocal space
| top | pdf |
Let us take the scalar products of a covariant vector and a contravariant vector : [using expressions (1.1.2.5), (1.1.2.11) and (1.1.2.13)].
The relation we obtain, , is identical to the relations defining the reciprocal lattice in crystallography; the reciprocal basis then is identical to the dual basis .
#### 1.1.2.4.3. Properties of the metric tensor
| top | pdf |
In a change of basis, following (1.1.2.3) and (1.1.2.5), the 's transform according to Let us now consider the scalar products, , of two contravariant basis vectors. Using (1.1.2.11) and (1.1.2.13), it can be shown that
In a change of basis, following (1.1.2.16), the 's transform according to The volumes V ′ and V of the cells built on the basis vectors and , respectively, are given by the triple scalar products of these two sets of basis vectors and are related by where is the determinant associated with the transformation matrix between the two bases. From (1.1.2.17) and (1.1.2.20), we can write
If the basis is orthonormal, and V are equal to one, is equal to the volume V ′ of the cell built on the basis vectors and This relation is actually general and one can remove the prime index:
In the same way, we have for the corresponding reciprocal basiswhere is the volume of the reciprocal cell. Since the tables of the 's and of the 's are inverse, so are their determinants, and therefore the volumes of the unit cells of the direct and reciprocal spaces are also inverse, which is a very well known result in crystallography.
### References
International Tables for Crystallography (2001). Vol. B. Reciprocal space, edited by U. Shmueli. Dordrecht: Kluwer Academic Publishers. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9681640863418579, "perplexity": 481.0557950721154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512592.60/warc/CC-MAIN-20181020055317-20181020080817-00235.warc.gz"} |
https://www.repository.cam.ac.uk/browse?type=author&sort_by=1&order=ASC&rpp=20&etal=-1&value=Chan%2C+Juancarlos&starts_with=A | Now showing items 1-1 of 1
• Toward an interactive article: integrating journals and biological databases
(2011-05-19)
Abstract Background Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. ... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9007898569107056, "perplexity": 3616.913855880111}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647660.83/warc/CC-MAIN-20180321141313-20180321161313-00531.warc.gz"} |
https://asmedigitalcollection.asme.org/IDETC-CIE/proceedings-abstract/IDETC-CIE2005/47381/33/316490 | A normal-mode representation of waveguide behavior is recapitulated in terms of inner and outer responses in order to enhance the analogy with quasi-static beam solutions. This setting is employed to assist in the extension of the classical Saint-Venant’s principle to dynamic problems. The surface load parameters which determine the inner solution in a waveguide with free surfaces, are identified as the frequency and the average power of the load, provided the frequency is low enough and for purely symmetric (or purely antisymmetric) loads. A quantitative estimate of the extension of end effects is given. The one to one correspondence between the equivalent loads and the inner solution is used to reformulate the dynamic analogue of Saint-Venant’s principle.
This content is only available via PDF. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8461015224456787, "perplexity": 473.5738866365881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00429.warc.gz"} |
http://math.stackexchange.com/questions/625473/if-a-number-is-irrational-in-base-10-is-it-irrational-in-other-bases | # If a number is irrational in base 10, is it irrational in other bases?
If a number is irrational in base 10, is it necessarily irrational in all other bases? Or is it possible for a number to be irrational in only a few bases?
-
if you allow irrational bases – Memming Jan 3 '14 at 2:23
FYI: mathworld.wolfram.com/Base.html irrational bases are weird. – Memming Jan 3 '14 at 2:25
Strange question, because irrationality is not a property depending on bases. I think you mean something slightly different, no? – Ian Mateus Jan 3 '14 at 2:29
Irrationality is independent on the base in which you write a number. The definition of irrational is being the quotient of two integers. – Mlazhinka Shung Gronzalez LeWy Jan 3 '14 at 2:29
@ABC ... is not being ... – Andres Caicedo Jan 3 '14 at 2:32
You don't understand what "irrational" means. You have probably been told that an irrational number is one whose decimal expansion does not repeat. Although this is the case, it is a secondary property. An irrational number is one that cannot be written in the form $$a\over b$$ where $a$ and $b$ are integers; a rational number is what that can be written in that form. This definition has nothing to do with the base in which the numbers are written, and this is why your question, as you phrased it, does not really make sense. It's like asking if a verse of a song will still rhyme even if it is printed in colored ink.
Now it is the case that a number has a repeating base-10 representation if, and only if, it is a rational number, that is if it can be written as a fraction $\frac ab$. An irrational number always has a non-repeating base-10 representation.
And it is also the case that a number has a repeating base-$n$ expansion, for any base $n$, if, and only if, it is a rational number; an irrational number has a non-repeating representation in every base. This is probably the question you meant to ask.
Suppose a number $x$ has a base-$n$ expansion that begins with some sequence of digits $a_1a_2a_3\ldots a_i = a$, and then follows with $b_1b_2b_3\ldots b_j = b$ repeated forever. Then it turns out that $x$ is a rational number, and we can even find a fraction for it; the fraction is $$\frac{a}{n^i} + \frac1{n^i}\frac{b}{n^j-1}.$$
For example suppose we are working in base 8, and we want to find a fraction for the number 0.13456456456… where the digits are understood base 8. Then $i=2$, and $a_1a_2 =$ 13; and $j=3$, and $b_1b_2b_3 =$ 456. Then we can calculate that \begin{align}x & = \frac{13_{8}}{8^2} + \frac1{8^2}\frac{456_{8}}{8^3-1} \\ & = \frac{11}{64} + \frac1{64}\frac{302}{511} \\ &=\frac{5621}{32704} + \frac{302}{32704} \\ & = \frac{5923}{32704}\end{align}
and since this is a quotient of two integers, it is rational, because that is what a rational number is.
Its base-8 expansion is of course 0.13456456456456…, because that was how we constructed it, but it also repeats when written in any other base; for example in base 10 it is written $$0.181109\ 344422700587084148727984\ 34442270058708414872798\ \ldots.$$
Similarly, the base-10 decimal 0.13456456456… is equal to the rational number $$\frac{13}{10^2} + \frac1{10^2}\frac{456}{10^3-1} = \frac{13443}{99900} = \frac{4481}{33300}.$$
-
an irrational number has a non-repeating expansion in every rational base. – qaphla Jan 3 '14 at 2:50
"base $b$" as normally understood means a certain family of representations of the form $x= \sum_{i=-k}^\infty w_ib^{-i}$ where $b$ is an integer greater than 1 and $0\le w_i \lt b$. There are many other ways to represent numbers, but they are not part of the normal meaning of "any base". Noninteger bases are not part of the normal meaning; factorial bases are not part of the normal meaning; symmetric ternary is not part of the normal meaning, etc. – MJD Jan 3 '14 at 3:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9555941820144653, "perplexity": 232.63154219524446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646242843.97/warc/CC-MAIN-20150827033042-00031-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/15692-de-question.html | Math Help - DE question
1. DE question
im doing my DE HW and i dont understand how the book came out with an answer.. This is separable variables with an initial value.
(dx/dt) = 4(x^2+1) the intial value is: x((pi)/4)=1
the answer is: y= sin ((1/2)x^2+c)
and i have no idea where the sin came from...i know im probaly missing a rule somewhere, thanks for any help to get the jucices going!
2. Originally Posted by neven87
im doing my DE HW and i dont understand how the book came out with an answer.. This is separable variables with an initial value.
(dx/dt) = 4(x^2+1) the intial value is: x((pi)/4)=1
the answer is: y= sin ((1/2)x^2+c)
and i have no idea where the sin came from...i know im probaly missing a rule somewhere, thanks for any help to get the jucices going!
be sure you don't have a typo here, or that you are looking at the right question. because as the question is stated, your answer should be of the form x = f(t) not y = f(x)
the answer i got is $x = \tan \left( 4t - \frac {3 \pi}{4} \right)$
3. And what's that c in the answer?
Thought it was an IVP...
I get the same answer as jhevon.
4. Originally Posted by neven87
(dx/dt) = 4(x^2+1) the intial value is: x((pi)/4)=1
$\frac{1}{1+x^2} \cdot x' = 4$
$\int \frac{1}{1+x^2} x' \ dt = \int 4 dt$
$\tan^{-1} x = 4t+C$
$x = \tan (4t+C)$
5. It was a misprint the answer is x= tan(4t-3/4(pi))
6. AH! missed the inverse tangent...thanks! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9283460974693298, "perplexity": 1424.857046464844}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678704624/warc/CC-MAIN-20140313024504-00065-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/23389/question-about-uncertainty | Are $3.43\pm 0.04$ $\frac{\mathrm{m}}{\mathrm{s}}$ and $3.48$ $\frac{\mathrm{m}}{\mathrm{s}}$ within expected range of values?
The answer is yes, but I do not clearly see why this is so. I appreciate if you can give me a hint on this.
This is what I could think of:
3.48 could be either $3.47\fbox{5-9}\cdots$ or it could be $3.48\fbox{0-4}\cdots$, 3.43 could be either $3.42\fbox{5-9}\cdots$ or it could be $3.43\fbox{0-4}\cdots$, and 0.04 could be either $0.03\fbox{5-9}\cdots$ or it could be $0.04\fbox{0-4}\cdots$. So if for example, we say that 3.48 was $3.476101$ and 3.43 was $3.43404$ and 0.04 was 0.04304, then $3.43+0.04$ would be 3.47708 and $3.43-0.04$ would be 3.391. So 3.476101 lies between 3.391 and 3.47708. So it could be possible that $3.43\pm 0.04$ and 3.48 be within expected range of values.
• Please consider writing descriptive question titles with appropriate punctuation, grammar, and formatting. See this meta post: How do we write good question titles?. – user191954 Oct 30 '18 at 14:11
When you see a value like $3.43\pm 0.04$ (I will omit units for brevity), in many cases, it actually represents a normal probability distribution with a mean of $3.43$ and a standard deviation of $0.04$. If the $3.43\pm 0.04$ is the result of an experiment, for example, then the probability distribution applies to the true value of the thing the experiment was trying to measure. In particular, the experimenters are saying there is a 68% chance that the true value of the quantity is between $3.39$ and $3.47$, the so-called $1\sigma$ range. There is a 95% chance that the true value is between $3.35$ and $3.51$, the $2\sigma$ range. And so on.
Since the result with uncertainty really defines a probability distribution, when you want to compare another number to this result, the question you need to ask is not "is this number within the acceptable range?" (because there is no hard boundary to the range), but rather "what is the probability that the experiment would be at least this far off?" In the example in your question, the difference between the two values, $3.43$ and $3.48$, is $1.25\sigma$ (that's $1.25\times 0.04$). You can calculate that the probability of being off by $1.25\sigma$ or more is 21%; conversely, the probability of being within $1.25\sigma$ is 79%. So you could say there is a 79% probability that the two values are compatible. That's a fairly large probability, so it seems reasonable to say that these two values are, in fact, compatible (which is the more precise version of saying $3.48$ is within the "expected range"). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583407044410706, "perplexity": 139.71598782399587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319724.97/warc/CC-MAIN-20190824041053-20190824063053-00173.warc.gz"} |
http://mathhelpforum.com/math-topics/32809-coordinate-geometry.html | 1. ## Coordinate geometry
Dear forum members,
I have a following problem
A circle touches both the x-axis and the y-axis and passes through the point (1,2). Find the equation of the circle. How many possibilities are there?
My solution
If the circle touches the x-axis and the y-axis, it means that both the x-axis and the y-axis are tangents to the circle, meaning that the distance from the centre to these tangents equals the radius, right? And the distance from the point (1,2) to the centre equals the radius as well.
Using a distance from a line formula I get that the distance from the center to the y-axis =(y)(Please pretend the parentheses are absolute value bars). The distance from the center to the x-axis is (x), pretend the parentheses are absolute value bars again.
that means (y)=(x)
if the center is marked as (x,y)
the distance from the point (1,2) to (x,y) is
$\displaystyle y^2=x^2-2x-4y+5=0$ (I plugged in y instead of the radius)
My question is, is the above argument (y)=(x) good enough, for me to be able to substitute x to the place of y into the above equation?
2. Originally Posted by Coach
A circle touches both the x-axis and the y-axis and passes through the point (1,2). Find the equation of the circle. How many possibilities are there?
...
$\displaystyle (1-r)^2+(2-r)^2 = r^2$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155418038368225, "perplexity": 284.22217757951313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648594.80/warc/CC-MAIN-20180323200519-20180323220519-00760.warc.gz"} |
https://math.au.dk/en/currently/activities/event/item/5901/ | # Semiclassical measure for Schrödinger operators with homogeneous potentials of order zero
Keita Mikami (University of Tokyo)
Math/Phys Seminar
Thursday, 4 October, 2018, at 15:15-16:00, in Aud. D2 (1531-119)
Description:
In 1991, Herbst proved that the time evolution for Schrödinger operators with homogeneous potentials of order zero localizes to the direction of the critical point of the potential(localization in direction).
In this talk, we introduce the analysis of Weyl sequences of the Schrödinger operators with homogeneous potentials of order zero. We employ semiclassical analysis, especially semiclassical measure to obtain several results which can be regarded as a semiclassical analogue of the localization in direction.
As an application, we obtain a necessary condition of the observability for Schrödinger operators with homogeneous potentials of order zero.
Contact person: Erik Skibsted | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8774203658103943, "perplexity": 1218.8405924121007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385529.97/warc/CC-MAIN-20210308205020-20210308235020-00538.warc.gz"} |
http://mathoverflow.net/questions/31020/trapezoidal-rule-error-approximation-what-if-fx-12n2-doesnt-work | # trapezoidal rule error approximation. What if f''(x)/12n^2 doesn't work?
Which method would you recommend for error estimation of the following approximation? $$\frac{1}{K} \sum_{j=0}^{K-1}\frac{cos(2\pi\frac{j}{K}u)}{P_{n}(\cos[\pi\frac{j}{K}])}\approx\int_{0}^{1}\frac{cos(2\pi xu)}{P_{n}(\cos[\pi x])}dx$$ Here $P_{n}$ some polynomial $u=1,2...K/2$
$\frac{1}{12k^2}f''(\psi)$ is a very bad estimator
-
Perhaps explain why the second derivative one is a bad estimator? – Willie Wong Jul 8 '10 at 12:39
I think the function is not smooth enough. Specially for u close to K/2. But even for u=1 the real error is a lot smaller then f''(x) – vilvarin Jul 8 '10 at 13:17
maybe I should look at it like at fourier series? – vilvarin Jul 8 '10 at 14:27
Do you have an explicit formula for the Polynomial? Or at least do you know where and to what order the roots are? – Willie Wong Jul 8 '10 at 17:21
Function $P_n(cos(\pi x))$ depends on some parameter . It can be whether convex, bounded with 0.5 and 1 (this case is more interesting for me) or concave, bounded with 1 and some A>1(depends on the parameter) – vilvarin Jul 9 '10 at 23:03
Added: After thinking about it a bit more, I'm wondering about some things. Firstly, the formula given in the question is not the trapezoidal rule (as promised in the title and suggested by the result for the error), but it is the rectangle rule which is only first order. Secondly, if the integrand has poles in [0,1] (that is, if $P_n(\cos(\pi x))=0$ for some $x\in[0,1]$), then the error estimate becomes meaningless; in this case you probably need different techniques like complex analysis to prove anything. A final remark: perhaps you can use the elementary techniques explained in: Weideman, "Numerical integration of periodic functions: a few examples", Amer. Math. Monthly 109 (2002), no. 1, 21-36 (MathSciNet).
I think I need some more background in order to have further help. In particular, do you know anything about the polynomials $P_n$, and what kind of result do you hope to get? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218793511390686, "perplexity": 423.12540967925486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450767.7/warc/CC-MAIN-20141017005730-00366-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://plainmath.net/81166/when-is-a-morphism-between-curves-a-galo | When is a Morphism between Curves a Galois Extension of Function Fields It is known that the catego
Nylah Hendrix 2022-07-05 Answered
When is a Morphism between Curves a Galois Extension of Function Fields
It is known that the category of normal projective curves and dominant morphisms between them is equivalent to the opposite category of fields of transcendence degree 1 over C and algebraic extensions, so that birational invariants of curves are actually invariants of the function fields, etc.
In algebraic geometry, we have nice interpretations of what it means for an extension of function fields to be separable or purely inseparable (even if these ideas are difficult to visualize since they only occur in prime characteristic). Is there a similar geometric way to view Galois extensions? (Or, I suppose, normal extensions?). Does the fact that a Galois extension of degree n is a splitting field of a degree n polynomial have any geometric meaning?
You can still ask an expert for help
• Live experts 24/7
• Questions are typically answered in as fast as 30 minutes
• Personalized clear answers
Solve your problem for the price of one coffee
• Math expert for every subject
• Pay only if we can solve it
Kaya Kemp
I hope that the below is not too long to be useful. I just thought that I would expand upon my above comment. But, then I realized that the literal words weren't helpful without some explanation of what they themselves mean geometrically. Things spiraled out of control, and I ended up writing a novella.
Some background on étale morphisms:
A morphism of schemes (or varieties if you'd prefer) $f:X\to Y$ is étale if any of the following equivalent things are satisfied:
$f$ is locally of finite presentation, flat, and unramified.
$f$ is smooth of relative dimension 0.
$f$ is locally of finite presentation, flat, and for all geometric points $\overline{y}$of y, the map ${X}_{\overline{y}}\to \overline{y}$ is isomorphic (as $\overline{y}$schemes) to a finite disjoint union of $\overline{y}$
$f$ is locally of finite presentation and is formally étale.
Now, I think the one with the easiest geometric content to describe is 1. So, let me say some words about what each of the the three terms 'locally of finite presentation', 'flat', and 'unramified' mean.
First, locally of finite presentation means geometrically (ignoring Noetherian complications) that the fibers of f (as varieties) are finite dimensional. So, you can't get something like $\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(k\left[{x}_{1},{x}_{2},\dots \right]\right)={\mathbb{A}}_{k}^{\mathrm{\infty }}$ as a fiber of f.
Flatness is an algebraic condition, but can be largely intuited as the statement that 'the fibers of f vary continuously'. As a good example of this, one might imagine that if $f:X\to Y$ has continuously varying fibers then this should mean, in particular, that the fibers ${f}^{-1}\left(y\right)=:{X}_{y}$ have constant dimensions (fibers shouldn't vary continuously if they jump dimensions). This is roughly true, and under some mild conditions on X and Y, the converse is also true.
Finally, we need to talk about the geometric content of unramifiedness. But, before I say something about that, let me cut to the punchline (since this will help us intuit unramifiednes). What are étale morphisms supposed to do? Namely, what is their intuitive geometric content. Well, the rough slogan is
étale morphisms are like local isomorphism or, said differently, covering maps.
The usefulness of having such a notion is clear. Not only do we often times want to talk about covering maps in any geometric setting, but if we could have a reasonable theory of covering maps for a scheme X we might be able to define a notion of, say, the étale fundamental group of X. Indeed, in topology π1 is the group which controls covers. So, in algebraic geometry we might hope that 'π1' (in étale terms) is the group which controls 'covers' (i.e. étale maps).
So, back to the geometric content of unramifiedness. Now, as of now, neither of the conditions that we have imposed (locally of finite presentation and flat) discount the algebro-geometric analogue of a ramified covering of curves $C\to {\mathbb{P}}^{1}$ (in complex geometry). Indeed, every non-constant map of (smooth projective) curves is automatically of finite presentation and flat!
And, recall that a ramified covering is a covering map outside of the finitely many ramified points. So, our goal now should be to think of an algebro-geometric notion which discounts these 'ramification points'.
So, let's imagine what a ramified covering is to look like. It should like like a couple of sheets (what should be the sheets of the covering map), but that they come together at these ramification points (stopping the map from being an actual covering map). The following image borrowed from Wikipedia nicely illustrates the concept:
enter image description here
(I can't figure out how to center it!)
So, what is a geometric condition which we can impose on $f$ which will discount these ramification points, these points where the sheets collide. Well, there are several equivalent possibilities:
$f$ is unramified if ${\mathrm{\Omega }}_{X/Y}^{1}=0$
$f$ is unramified if for all $x\in X$ and $y=f\left(x\right)$ we have that $k\left(x\right)/k\left(y\right)$, and $k\left(x\right)/k\left(y\right)$ is finite separable.
$f$ is unramified if the diagonal map $\mathrm{\Delta }$ is an open embedding.
If you are coming from complex curve theory, then it is 2. that you are most likely familiar with.
Namely, in complex curve theory we don't have any extension of residue fields (they're both C for closed points!) but, in general, ${\mathcal{m}}_{y}{\mathcal{O}}_{X,x}$ will be a power of ${\mathcal{m}}_{x}$. This power is the integer ${e}_{x}$ (this is the same ${e}_{x}$ as in Dr. Lubin's post). Thus, f should be unramified if and only if ${e}_{x}$=1 for all x.
That said, I would like to quickly say why 3. (since it's the simplest) makes geometric sense. Look at the picture above of a ramified map. I want to explain why this map does not have an open diagonal map. Namely, let's look, in X, at the first (from the left) ramification point ${x}_{0}$. And, let's choose a sequence of points $\left({x}_{n},{y}_{n}\right)$ (this is all intuition) on the top two sheets coming together at ${x}_{0}$, subject to the condition that $f\left({x}_{n}\right)=f\left({y}_{n}\right)$ (i.e. that they are in the same vertical position relative to Y). Note then that we have chosen a sequence of points $\left({x}_{n},{y}_{n}\right)\in X{×}_{Y}X$ which are NOT in the diagonal $\mathrm{\Delta }$. That said, $\left({x}_{n},{y}_{n}\right)$ converges to $\left({x}_{0},{x}_{0}\right)$, a point IN the diagonal. Thus, $X{×}_{Y}X-\mathrm{\Delta }$ is NOT closed and so $\mathrm{\Delta }$ is NOT open. This is the geometric intuition for 3.
Let me end this section with two examples:
Let me first state rigorously this complex geometric analogue I've been mentioning. Namely, let's assume that X and Y are varieties over C. Then, $f:X\to Y$ is étale if and only if ${f}^{\mathrm{a}\mathrm{n}}:{X}^{\mathrm{a}\mathrm{n}}\to {Y}^{\mathrm{a}\mathrm{n}}$ (here I am thinking of the analytifications) is a local biholomorphism. A simpler case shows that if X and Y are (smooth projective) curves over C, then $f:X\to Y$ (non-constant) is étale if and only if the map ${f}^{\mathrm{a}\mathrm{n}}:{X}^{\mathrm{a}\mathrm{n}}\to {Y}^{\mathrm{a}\mathrm{n}}$ is a covering map.
Suppose now that X And Y are just spectra of fields, and so we have a map $f:\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(K\right)$) correspond to an extension of fields L/K. I claim that f is unramified if and only if L/K is finite separable. Indeed, it's clear that if f is étale then L/K is finite separable (look at condition 2. in the unramified condition). Conversely, if L/K is finite separable, then 2. it is clearly flat, of finite presentation, and satisfies 2. of the unramifiedness condition.But, let me give some intuition about why this second bullet should should make sense in more simple terms. If $\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(K\right)$ is going to be a 'covering map' then it should just be a finite disjoint union of points—it's a covering map of a point! But, as all things in algebraic geometry, the 'obvious' geometry only happens over $\overline{K}$. So, really what we should say is that if L/K corresponds to a covering map, then we'd expect $\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L{\otimes }_{K}\overline{K}\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(\overline{K}\right)$ to be just a finite trivial covering of points. And, indeed, for an extension L/K of fields, it's étale if and only if $L{\otimes }_{K}\overline{K}$is isomorphic to ${\overline{K}}^{\left[L:K\right]}$ (which, geometrically, is a disjoint union of [L:K] points).
If you want to learn more about flat or unramified morphisms, I'll give a shameless plug for my blog. This post and this post in particular are relevant to the above.
Finite Galois covers
So, now we have given a geometric face to a finite separable extension of fields, but you actually asked about Galois, not just separable. So, let's now discuss the geometric content of a Galois cover.
Let's first mention that we only ever talk about finite étale covers when we're trying to think about covering spaces. Why? Well, in simple terms, it's because they're the algebraic ones. Namely, consider the complex manifold ${\mathbb{C}}^{×}$. This has a geometric analogue, namely ${\mathbf{G}}_{m}={\mathbb{A}}_{\mathbb{C}}^{1}-\left\{0\right\}$. So, we might hope to glean some of the 'coverings' of ${\mathbf{G}}_{m}$ from ${\mathbb{C}}^{×}$. This is totally fine for all finite coverings of ${\mathbb{C}}^{×}$ which, essentially, are just the maps ${\mathbb{C}}^{×}\to {\mathbb{C}}^{×}$ given by $z↦{z}^{n}$. These are algebraic, and so we can translate them over to the algebraic world of ${\mathbf{G}}_{m}$. But, the infinite covers, like the universal cover $\mathrm{exp}:\mathbb{C}\to {\mathbb{C}}^{×}$ are now no longer algebraic, and so can't be described in the world of varieties/schemes. So, this is why we only discuss the finite covers.
So, what is our goal? We want to define what it means for a morphism $f:X\to Y$ (where, for comfort, we assume that X and Y are both connected) to be a finite étale Galois cover. What should the topological analogue of this be? Well, what we're after is trying to get the right analogy of a (finite) Galois cover of topological spaces.
Recall that a covering space $f:X\to Y$ is called Galois if the group of deck transformations Aut(X/Y) acts (simply) transitively on the fibers ${f}^{-1}\left(y\right)$ for all y. These are the 'homogenous covers' where all the points are permuted by the action of Aut(X/Y). They are also the covers which correspond to normal subgroups of ${\pi }_{1}\left(X\right)=\mathrm{A}\mathrm{u}\mathrm{t}\left(\stackrel{~}{X}/X\right)$.
So, a naive guess would be that a morphism of varieties/schemes$f:X\to Y$ should be a finite Galois cover if for all points $y\in Y$ we have that Aut(X/Y) acts transitively on ${f}^{-1}\left(y\right)$). But, this is bad for the same reason as why a separable extension L/K is not a trivial cover at face value—real geometry only occurs over algebraically closed fields.
For this reason, we want to replace literal fibers with so-called geometric fibers. Namely, recall that a geometric point of Y is a morphism $\overline{y}:\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(\mathrm{\Omega }\right)\to Y$, where $\mathrm{\Omega }$ is an algebraically closed field. We then define, for any geometric point $\overline{y}$ of Y the geometric fiber $\overline{y}$ to be all geometric points lifting $\overline{y}$. Namely, all geometric points $\overline{x}:\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left({\mathrm{\Omega }}^{\prime }\right)\to X$ on X such that $f\circ \overline{x}=\overline{y}$.
We then have the following definition of a finite Galois cover $f:X\to Y$. It is a finite surjective étale morphism such that Gal(X/Y):=Aut(X/Y) acts transitively on any (equiv. one) geometric fiber of Y.
Let me make two comments. First, Gal(X/Y) already acts faithfully on a geometric fiber by an analogue of the 'rigidity of covering maps', and so we can replace 'transitively' by 'simply transitively'. Second, note that we require finite in the second of scheme theory here. Namely, we require that f is an affine map. Why should this make sense? Well, if we're trying to capture the geometry of a finite covering map, then we'd hope (since this is true for finite covering maps) that f is proper and quasi-finite (finite fibers), but these two things (by Zariski's main theorem) actually imply (scheme-theoretic) finiteness.
There is another useful characterization of when a finite étale map f:$X\to Y$ is Galois. Namely, one can show that G:=Aut(X/Y) acts so-called 'admissibly' (under very mild assumptions) on X, and thus it make sense to define the variety/scheme X/G. One can then show that $X\to Y$ is a finite Galois cover if and only if the induced map $X/G\to Y$ is an isomorphism. In fact, one then has a 'Galois correspondence' between the intermediary étale maps $X\to Z\to Y$ and the subgroups of G, the Galois ones corresponding to normal subgroups.
So, let us now give the two examples which, at this point, are probably pretty obvious.
Let X and Y be (smooth projective) curves over C. Then, a map $f:X\to Y$ is a finite étale Galois cover if
${f}^{\mathrm{a}\mathrm{n}}:{X}^{\mathrm{a}\mathrm{n}}\to {Y}^{\mathrm{a}\mathrm{n}}$ is a covering space. And, f is Galois if and only if the covering space ${X}^{\mathrm{a}\mathrm{n}}\to {Y}^{\mathrm{a}\mathrm{n}}$ is a Galois covering space.Let $\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(K\right)$ be an extension of fields. Then, it is a finite étale cover if and only if L/K is finite separable, and it is a Galois cover if and only if L/K is finite Galois.Since it was the main question of your post, let me explain in more detail why the second bullet is true.
First, let us figure out what this geometric fiber means. Namely, we said we can check for any geometric fiber on Spec(K). But, there is an obvious one. Namely, choose an embedding $K↪\overline{K}$, which gives a geometric point
$\overline{y}\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(\overline{K}\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(K\right)$. What then is the geometric fiber of
$f:\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(K\right)$ over this geometric point? Well, it is:
$\begin{array}{rl}\left\{\overline{x}:\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(\overline{K}\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L\right):f\circ \overline{x}=\overline{y}\right\}& =\left\{L\to \overline{K}:K\to L\to \overline{X}=\overline{y}\right\}\\ & ={\mathrm{H}\mathrm{o}\mathrm{m}}_{K}\left(L,\overline{K}\right)\end{array}$
The action of $\mathrm{A}\mathrm{u}\mathrm{t}\left(\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L\right)/\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(K\right)\right)=\mathrm{A}\mathrm{u}\mathrm{t}\left(L/K\right)$ on ${\mathrm{H}\mathrm{o}\mathrm{m}}_{K}\left(L,\overline{K}\right)$ is the obvious one. And, as is classical, L/K should Galois if and only if this action is transitive.
For example, $\mathbb{Q}\left(\sqrt[3]{2}\right)/\mathbb{Q}$ is finite separable, and so $\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(\mathbb{Q}\left(\sqrt[3]{2}\right)\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(\mathbb{Q}\right)$ is a finite étale cover. But, it's NOT Galois since its automorphism group (the trivial group) does not act transitively on the geometric fiber
${\mathrm{H}\mathrm{o}\mathrm{m}}_{\mathbb{Q}}\left(\mathbb{Q}\left(\sqrt[3]{2}\right),\overline{\mathbb{Q}}\right)=\left\{\sqrt[3]{2}↦\sqrt[3]{2},\sqrt[3]{2}↦{\zeta }_{3}\sqrt[3]{2},\sqrt[3]{2}↦{\zeta }_{3}^{2}\sqrt[3]{2}\right\}$
In fact, the example shows you that if $\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(L\right)\to \mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left(K\right)$ is a finite étale cover, so L=K($\alpha$) (by the primitive element theorem), that the geometric fiber corresponds to the roots of the minimal polynomial of $\alpha$, and transitivity of the action corresponds precisely to all of these roots being in L—in other words, L/K being normal.
Also, one can interpret this in terms of the X/G=Y construction. Namely, if G is a finite group acting on an affine scheme Spec(A) ,then $\mathrm{S}\mathrm{p}\mathrm{e}\mathrm{c}\left({A}^{G}\right)$ is just Spec(AG). Thus, for Spec(L)/G to be equal to Spec(K) is just the statement that ${L}^{G}=K$, as per usual. Also, the Galois correspondence mentioned above is the usual Galois correspondence.
Bonus: étale fundamental group
Having written all of the above, I would feel remiss to not give the 'punchline' of the above.
Namely, once one has gone through all the effort of defining the notion of a 'connected Galois cover' $X\to Y$, which is supposed to be like finite Galois covering spaces, one better use it to define a notion of the fundamental group. Namely, one can define the étale fundamental group of $\left(X,\overline{x}\right)$ (i.e. we need to choose a geometric 'base point') to be
${\pi }_{1}^{\stackrel{´}{\mathrm{e}}\mathrm{t}}\left(X,\overline{x}\right)=\underset{←}{\mathrm{lim}}\mathrm{G}\mathrm{a}\mathrm{l}\left({X}^{\prime }/X\right)$where $\left({X}^{\prime },\overline{{x}^{\prime }}\right)$ runs over the 'pointed' connected finite Galois covers of X. You can largely ignore the pointing.
This is analogous to the usual fundamental group in two ways. Namely, one can define ${\pi }_{1}\left(X,x\right)$ to be the projective limit over Aut(X′/X), connected Galois covering spaces. But, in the topological world, this projective system has a final object, namely, the universal cover $\overline{X}\to X$ and so this inverse limit is just Aut$\mathrm{A}\mathrm{u}\mathrm{t}\left(\overline{X}/X\right)$ which is classically known to be π1(X,x). For reasons mentioned above, in the algebraic world we don't have access to a 'universal cover' (it won't be given by an algebraic map) and so we can't simplify this inverse limit in the same way..
We have step-by-step solutions for your answer!
Agostarawz
Also, one can define a fancier notion of a 'fiber functor' which associates to any finite connected covering spaces $f:{X}^{\prime }\to X$ the fiber ${f}^{-1}\left(x\right)$ (note the base point has a role here). This then defines a functor $\mathrm{C}\mathrm{o}\mathrm{v}\left(X\right)\to \mathsf{S}\mathsf{e}\mathsf{t}$ is the category of finite covering spaces of X). One can then prove that ${\pi }_{1}\left(X,x\right)$ is the automorphism group of this functor. If one makes the exact analogous definitions in the étale world, one recovers the étale fundamental group as the automorphisms of the geometric fiber functor.
Let me end by giving the same two examples:
If X/C is a variety, then the Riemann Existence Theorem implies that ${\pi }_{1}^{\stackrel{´}{\mathrm{e}}\mathrm{t}}\left(X,\overline{x}\right)$ is $\stackrel{^}{{\pi }_{1}\left({X}^{\mathrm{a}\mathrm{n}}\right)}$ the profinite completion of the topological fundamental group of the analytification. This makes sense since the étale fundamental group is the inverse limit over Aut's of finite covers, which corresponds to taking the direct limit (on the topological side) of the Aut's of finite covers which are the finite quotients of the topological fundamental group—thus the profinite completion.For a field X=Spec(K), choosing a geometric base point $\overline{x}$ is the same thing as choosing an algebraic closure $\overline{K}$. Then, one can check (it's a nice exercise in the definitions) that ${\pi }_{1}^{\stackrel{´}{\mathrm{e}}\mathrm{t}}\left(X,\overline{x}\right)=\mathrm{G}\mathrm{a}\mathrm{l}\left({K}^{\mathrm{s}\mathrm{e}\mathrm{p}}/K\right)$. Thus, in some sense, we can think of the separable closure ${K}^{\mathrm{s}\mathrm{e}\mathrm{p}}$ of K as being the 'universal cover' of K.EDIT: This is a cursory answer to Dorebell's question in the comments. I'll come back later to answer more in-depth.
Namely, when is a map of, say, compact Riemann surfaces $X\to Y$ correspond to a Galois cover K(X)/K(Y). For me, a Riemann surface is automatically connected.
Well, the obvious guess is the correct one, once one thinks about what K(X)/K(Y) is geometrically. Namely, K(X) is the stalk of OX,x at the generic point of X, and similarly for Y. So, we'd expect that the data which is going on in K(X)/K(Y) should be something like 'what happens generically'.
So, for example, we'd expect that K(X)/K(Y) is separable since generically (say, on dense opens) the map $f:X\to Y$ is étale (away from the finitely many ramification points). Of course, this is a silly observation since K(Y) is characteristic 0, and so perfect. But, it's a good illustration of the concept and, in fact, can be used to prove the other direction in our thought experiment—that since K(X)/K(Y) is separable, that $f:X\to Y$ should be generically étale.
So, we expect to be able to say that K(X)/K(Y) is Galois if and only if $f:X\to Y$ is 'generically Galois. It's a little less clear what this means, but a little thought sorts it out. Namely, let's make our statement that K(X)/K(Y) being separable means that $f:X\to Y$ is generically separable more concrete.
Namely, we can take the finite set ${S}_{Y}\subseteq Y$ of ramification values and ${S}_{X}={f}^{-1}\left(S\right)\subseteq X$ and consider the holmorphic map ${f}^{\prime }:{X}^{\prime }\to {Y}^{\prime }$′ where ${X}^{\prime }=X-{S}_{X}$, ${Y}^{\prime }=Y-{Y}_{S}$, and ${f}^{\prime }=f{\mid }_{{X}^{\prime }}$. Now, $f$′ is an honest to god covering map, opposed to a 'branched cover'. Moreover, ${f}^{\prime }$′ determines $f$ in the usual way—explicitly, this generically defined $f$′ still captures the generic data of K(X)/K(Y) which, by the standard theory, is all one needs to capture $f:X\to Y$.
One then expects that K(X)/K(Y) is Galois if and only if ${f}^{\prime }$ is a covering map. And, indeed, this is the case as one can easily prove.
One thing to note that is that we chose the sets ${S}_{Y}$ and ${S}_{X}$'s sort of minimally. Since K(X)/K(Y) is only capturing generic data, we should be able to choose any sets ${T}_{Y}\supseteq {S}_{Y}$ and ${T}_{X}={f}^{-1}\left({T}_{Y}\right)$ and have the same discussion. But, there is no difference. Namely, one can check the Galoisness of a cover at ONE point (much like one can check normality at one generator), and so the choice really doesn't matter.
For anyone who would like to see proof of the above statements about complex curves, I highly recommend Szamuely's book Galois Groups and Fundamental Groups
We have step-by-step solutions for your answer! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 174, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9535912871360779, "perplexity": 295.2974535104547}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336880.89/warc/CC-MAIN-20221001163826-20221001193826-00268.warc.gz"} |
https://blog.maisnam.com/archives/set_paragraph_i/ | Set paragraph indentations in Powerpoint 2003
I was having a hard time figuring out how to set paragraph indentations in Powerpoint. For some reason, Powerpoint does not have the “Format > Paragraph > Special” option like Word does. I was working on a Powerpoint template which had the first paragraph set to “Hanging” (i.e. the first line of the paragraph is more left indented than the other lines in the paragraph). The trick is to set the indents using the ruler (View > Ruler).
More details are here:
http://office.microsoft.com/en-us/powerpoint/HP051952391033.aspx?pid=CH063500521033
The only drawback with the above approach is that when you want to make the line a bullet, the bullet is too near the text. In such a case, you can adjust the bullet settings by dragging the ruler indent marker again. The other way (which is quicker) is to click once on the numbered list icon and then again on the bulleted list icon. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8829159736633301, "perplexity": 1215.6972644493112}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00790.warc.gz"} |
https://www.jmp.com/en_in/statistics-knowledge-portal/t-test/one-sample-t-test.html | One-Sample t-Test | Introduction to Statistics | JMP
### Statistics Knowledge Portal
A free online introduction to statistics
# The One-Sample t-Test
## What is the one-sample t-test?
The one-sample t-test is a statistical hypothesis test used to determine whether an unknown population mean is different from a specific value.
## When can I use the test?
You can use the test for continuous data. Your data should be a random sample from a normal population.
## What if my data isn’t nearly normally distributed?
If your sample sizes are very small, you might not be able to test for normality. You might need to rely on your understanding of the data. When you cannot safely assume normality, you can perform a nonparametric test that doesn’t assume normality.
## Using the one-sample t-test
The sections below discuss what we need for the test, checking our data, performing the test, understanding test results and statistical details.
### What do we need?
For the one-sample t-test, we need one variable.
We also have an idea, or hypothesis, that the mean of the population has some value. Here are two examples:
• A hospital has a random sample of cholesterol measurements for men. These patients were seen for issues other than cholesterol. They were not taking any medications for high cholesterol. The hospital wants to know if the unknown mean cholesterol for patients is different from a goal level of 200 mg.
• We measure the grams of protein for a sample of energy bars. The label claims that the bars have 20 grams of protein. We want to know if the labels are correct or not.
#### One-sample t-test assumptions
For a valid test, we need data values that are:
• Independent (values are not related to one another).
• Continuous.
• Obtained via a simple random sample from the population.
Also, the population is assumed to be normally distributed.
## One-sample t-test example
Imagine we have collected a random sample of 31 energy bars from a number of different stores to represent the population of energy bars available to the general consumer. The labels on the bars claim that each bar contains 20 grams of protein.
#### Table 1: Grams of protein in random sample of energy bars
Energy Bar - Grams of Protein
20.7027.4622.1519.8521.2924.75
20.7522.9125.3420.3321.5421.08
22.1419.5621.1018.0424.1219.95
19.7218.2816.2617.4620.5322.12
25.0622.4419.0819.8821.3922.3325.79
If you look at the table above, you see that some bars have less than 20 grams of protein. Other bars have more. You might think that the data support the idea that the labels are correct. Others might disagree. The statistical test provides a sound method to make a decision, so that everyone makes the same decision on the same set of data values.
### Checking the data
Let’s start by answering: Is the t-test an appropriate method to test that the energy bars have 20 grams of protein ? The list below checks the requirements for the test.
• The data values are independent. The grams of protein in one energy bar do not depend on the grams in any other energy bar. An example of dependent values would be if you collected energy bars from a single production lot. A sample from a single lot is representative of that lot, not energy bars in general.
• The data values are grams of protein. The measurements are continuous.
• We assume the energy bars are a simple random sample from the population of energy bars available to the general consumer (i.e., a mix of lots of bars).
• We assume the population from which we are collecting our sample is normally distributed, and for large samples, we can check this assumption.
We decide that the t-test is an appropriate method.
Before jumping into analysis, we should take a quick look at the data. The figure below shows a histogram and summary statistics for the energy bars.
Figure 1: Histogram and summary statistics for the grams of protein in energy bars
From a quick look at the histogram, we see that there are no unusual points, or outliers. The data look roughly bell-shaped, so our assumption of a normal distribution seems reasonable.
From a quick look at the statistics, we see that the average is 21.40, above 20. Does this average from our sample of 31 bars invalidate the label's claim of 20 grams of protein for the unknown entire population mean? Or not?
### How to perform the one-sample t-test
For the t-test calculations we need the average, standard deviation and sample size. These are shown in the summary statistics section of Figure 1 above.
We round the statistics to two decimal places. Software will show more decimal places, and use them in calculations. (Note that Table 1 shows only two decimal places; the actual data used to calculate the summary statistics has more.)
We start by finding the difference between the sample average and 20:
$21.40-20\ =\ 1.40$
Next, we calculate the standard error for the mean. The calculation is:
Standard Error for the mean = $\frac{s}{\sqrt{n}}= \frac{2.54}{\sqrt{31}}=0.456$
This matches the value in Figure 1 above.
We now have the pieces for our test statistic. We calculate our test statistic as:
$t = \frac{\text{Difference}}{\text{Standard Error}}= \frac{1.40}{0.456}=3.07$
To make our decision, we compare the test statistic to a value from the t-distribution. This activity involves four steps.
1. We calculate a test statistic. Our test statistic is 3.07.
2. We decide on the risk we are willing to take for declaring a difference when there is not a difference. For the energy bar data, we decide that we are willing to take a 5% risk of saying that the unknown population mean is different from 20 when in fact it is not. In statistics-speak, we set α = 0.05. In practice, setting your risk level (α) should be made before collecting the data.
3. We find the value from the t-distribution based on our decision. For a t-test, we need the degrees of freedom to find this value. The degrees of freedom are based on the sample size. For the energy bar data:
degrees of freedom = $n - 1 = 31 - 1 = 30$
The critical value of t with α = 0.05 and 30 degrees of freedom is +/- 2.043. Most statistics books have look-up tables for the distribution. You can also find tables online. The most likely situation is that you will use software and will not use printed tables.
4. We compare the value of our statistic (3.07) to the t value. Since 3.07 > 2.043, we reject the null hypothesis that the mean grams of protein is equal to 20. We make a practical conclusion that the labels are incorrect, and the population mean grams of protein is greater than 20.
### Statistical details
Let’s look at the energy bar data and the 1-sample t-test using statistical terms.
Our null hypothesis is that the underlying population mean is equal to 20. The null hypothesis is written as:
$H_o: \mathrm{\mu} = 20$
The alternative hypothesis is that the underlying population mean is not equal to 20. The labels claiming 20 grams of protein would be incorrect. This is written as:
$H_a: \mathrm{\mu} ≠ 20$
This is a two-sided test. We are testing if the population mean is different from 20 grams in either direction. If we can reject the null hypothesis that the mean is equal to 20 grams, then we make a practical conclusion that the labels for the bars are incorrect. If we cannot reject the null hypothesis, then we make a practical conclusion that the labels for the bars may be correct.
We calculate the average for the sample and then calculate the difference with the population mean, mu:
$\overline{x} - \mathrm{\mu}$
We calculate the standard error as:
$\frac{s}{ \sqrt{n}}$
The formula shows the sample standard deviation as s and the sample size as n.
The test statistic uses the formula shown below:
$\dfrac{\overline{x} - \mathrm{\mu}} {s / \sqrt{n}}$
We compare the test statistic to a t value with our chosen alpha value and the degrees of freedom for our data. Using the energy bar data as an example, we set a = 0.05. The degrees of freedom (df) are based on the sample size and are calculated as:
$df = n - 1 = 31 - 1 = 30$
Statisticians write the t value with α = 0.05 and 30 degrees of freedom as:
$t_{0.05,30}$
The t value for a two-sided test with α = 0.05 and 30 degrees of freedom is +/- 2.042. There are two possible results from our comparison:
• The test statistic is less extreme than the critical t values; in other words, the test statistic is not less than -2.042, or is not greater than +2.042. You fail to reject the null hypothesis that the mean is equal to the specified value. In our example, you would be unable to conclude that the label for the protein bars should be changed.
• The test statistic is more extreme than the critical t values; in other words, the test statistic is less than -2.042, or is greater than +2.042. You reject the null hypothesis that the mean is equal to the specified value. In our example, you conclude that either the label should be updated or the production process should be improved to produce, on average, bars with 20 grams of protein.
### Testing for normality
The normality assumption is more important for small sample sizes than for larger sample sizes.
Normal distributions are symmetric, which means they are “even” on both sides of the center. Normal distributions do not have extreme values, or outliers. You can check these two features of a normal distribution with graphs. Earlier, we decided that the energy bar data was “close enough” to normal to go ahead with the assumption of normality. The figure below shows a normal quantile plot for the data, and supports our decision.
Figure 4: Normal quantile plot for energy bar data
You can also perform a formal test for normality using software. The figure below shows results of testing for normality with JMP software. We cannot reject the hypothesis of a normal distribution.
Figure 5: Testing for normality using JMP software
We can go ahead with the assumption that the energy bar data is normally distributed.
#### What if my data are not from a Normal distribution?
If your sample size is very small, it is hard to test for normality. In this situation, you might need to use your understanding of the measurements. For example, for the energy bar data, the company knows that the underlying distribution of grams of protein is normally distributed. Even for a very small sample, the company would likely go ahead with the t-test and assume normality.
What if you know the underlying measurements are not normally distributed? Or what if your sample size is large and the test for normality is rejected? In this situation, you can use a nonparametric test. Nonparametric analyses do not depend on an assumption that the data values are from a specific distribution. For the one-sample t-test, the one possible nonparametric test is the Wilcoxon Signed Rank test.
### Understanding p-values
Using a visual, you can check to see if your test statistic is more extreme than a specified value in the distribution. The figure below shows a t-distribution with 30 degrees of freedom.
Figure 6: t-distribution with 30 degrees of freedom and α = 0.05
Since our test is two-sided and we set α = 0.05, the figure shows that the value of 2.042 “cuts off” 5% of the data in the tails combined.
The next figure shows our results. You can see the test statistic falls above the specified critical value. It is far enough “out in the tail” to reject the hypothesis that the mean is equal to 20.
Figure 7: Our results displayed in a t-distribution with 30 degrees of freedom
#### Putting it all together with Software
You are likely to use software to perform a t-test. The figure below shows results for the 1-sample t-test for the energy bar data from JMP software.
Figure 8: One-sample t-test results for energy bar data using JMP software
The software shows the null hypothesis value of 20 and the average and standard deviation from the data. The test statistic is 3.07. This matches the calculations above.
The software shows results for a two-sided test and for one-sided tests. We want the two-sided test. Our null hypothesis is that the mean grams of protein is equal to 20. Our alternative hypothesis is that the mean grams of protein is not equal to 20. The software shows a p-value of 0.0046 for the two-sided test. This p-value describes the likelihood of seeing a sample average as extreme as 21.4, or more extreme, when the underlying population mean is actually 20; in other words, the probability of observing a sample mean as different, or even more different from 20, than the mean we observed in our sample. A p-value of 0.0046 means there is about 46 chances out of 10,000. We feel confident in rejecting the null hypothesis that the population mean is equal to 20. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.817405104637146, "perplexity": 450.70705569074084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488257796.77/warc/CC-MAIN-20210620205203-20210620235203-00162.warc.gz"} |
http://math.stackexchange.com/questions/246988/prove-sum-n-1-inftyf-nx-infty-on-0-1-when-non-negative-and-incr | # Prove $\sum_{n=1}^{\infty}f_n'(x)<\infty$ on $(0,1)$, when non-negative and increasing function $\lim_{x\to \infty}\sum_{n=1}^{\infty}f_n(x)<\infty$
When $f_n$ if non-negative and increasing on $(0,\ \infty)$
$$\lim_{x\to \infty}\sum_{n=1}^{\infty}f_n(x)<\infty$$
Prove that $$\sum_{n=1}^{\infty}f_n'(x)<\infty$$ on $(0,\ 1)$ a.e $[m]$.
Is there the question means $f$ is differentiable? If so I will try mean value theorem.
If not, I am totally stuck at the beginning, since $f$ is not mentioned absolutely continuous or f' belong to $L^1(m)$, I have no idea how to connect $f'$ and $f$ here.
-
monotonic functions are differentiable almost everywhere. – Alex R. Nov 29 '12 at 3:00
What is $f_n$? $\$ – Martin Argerami Nov 29 '12 at 3:22
This result, when stated in full generality, is known as Fubini's Theorem on the Termwise-Differentiation of Series with Monotone Terms.
A proof that uses the theory of Lebesgue integration may be found in the following document: www.math.sc.edu/~howard/Notes/fubini.ps.gz.
To see an elementary proof that does not rely on integration theory at all, please consult the classic text Functional Analysis by Frigyes Riesz and Béla Nagy. For your convenience, I shall provide a link to the relevant page of the text:
-
Hint: there is a positive measure $\mu_n$ on $(0,\infty)$ such that $\mu_n((0, x)) = f_n(x) - f_n(0)$. Decompose $\mu_n$ into its singular and absolutely continuous components.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9248631000518799, "perplexity": 166.1418575273138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246647589.15/warc/CC-MAIN-20150417045727-00117-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://astarmathsandphysics.com/university-maths-notes/topology/2368-proof-that-the-union-of-a-set-and-its-boundary-equals-the-closure-of-the-set.html | ## Proof That the Union of a Set and its Boundary Equals the Closure of the Set
Theorem
Ifis a set (open or closed) andis the boundary of the set then the closure oflabelled is equal to the union ofand the boundary oflabelledMore concisely,
Proof
SupposethenhenceSinceand
Suppose now thatIfthen good. Suppose then thatbut Each neighbourhood ofintersectsat a point distinct fromhencetherefore
Hence | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9228266477584839, "perplexity": 2049.19011953501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861980.33/warc/CC-MAIN-20180619060647-20180619080647-00010.warc.gz"} |
https://eguruchela.com/math/Calculator/intersection-of-set | # Intersection of Set(A Intersect B) or A∩B Calculation
Set is the relation of some given data or say set contains the elements of similar type or category. A set is defined as the collection of elements, such as numbers or other objects, which are arranged in a group. The set with any numbers can be denoted in the symbol braces { }. The intersection of the sets A and set B is represented by A ∩ B and it is pronounced as A intersection B.
for example: The two sets of events A={1, 2, 3,4} and B={3,4, 6, 7, 8}
the intersection of the sets we get A ∩ B = {3, 4}
Enter the values in the set1(seperated by comma) Enter the values in the set2(seperated by comma) Intersection of sets | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868257999420166, "perplexity": 545.4817870963561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.65/warc/CC-MAIN-20210510003422-20210510033422-00061.warc.gz"} |
https://www.math-only-math.com/decimal-as-fraction.html | # Decimal as Fraction
We will discuss how to express decimal as fraction.
0.5 = $$\frac{5}{10}$$
0.05 = $$\frac{5}{100}$$
0.005 = $$\frac{5}{1000}$$
2.5 = $$\frac{25}{10}$$
2.25 = $$\frac{225}{100}$$
2.275 = $$\frac{2275}{1000}$$
To convert a decimal into a fraction, remember the following steps.
Step I: Write the number as the numerator omitting the decimal point.
Step II: Write 1 in the denominator and add zeroes to it equal to the number of decimal places.
Note: When a decimal is read, each digit of the decimal part is read separately.
Let us consider some of the following examples on expressing a decimal as a fraction.
1. Convert 2.12 into a fraction.
Solution:
2.12 = 2 + 1 tenth + 2 hundredths = 2 + $$\frac{1}{10}$$ + $$\frac{2}{100}$$ = 2 + $$\frac{1 × 10}{10 × 10}$$ + $$\frac{2}{100}$$ = 2 + $$\frac{10}{100}$$ + $$\frac{2}{100}$$ = 2 + $$\frac{10 + 2}{100}$$ = 2 + $$\frac{12}{100}$$ = 2 + $$\frac{3}{25}$$ = 2$$\frac{3}{25}$$ We write the place value of digits of decimal and then add as usual.
2. Convert 5.125 into a fraction.
Solution:
5.125 = 5 + 1 tenth + 2 hundredths + 5 thousandths = 5 + $$\frac{1}{10}$$ + $$\frac{2}{100}$$ + $$\frac{5}{1000}$$ = 5 + $$\frac{1 × 100}{10 × 100}$$ + $$\frac{2 × 10}{100 × 10}$$ + $$\frac{5}{1000}$$ = 5 + $$\frac{1 × 100}{10 × 100}$$ + $$\frac{2 × 10}{100 × 10}$$ + $$\frac{5}{1000}$$ = 5 + $$\frac{100}{1000}$$ + $$\frac{20}{1000}$$ + $$\frac{5}{1000}$$ = 5 + $$\frac{100 + 20 + 5}{1000}$$ = 5 + $$\frac{125}{1000}$$ = 5 + $$\frac{1}{8}$$ = 5$$\frac{1}{8}$$ We write the place value of digits of decimal and then add as usual.
Express the following decimals in expanded form:
3.62 = 3 × 1 + $$\frac{6}{10}$$ + $$\frac{2}{10}$$
75.86 = 7 × 10 + 5 × 1 + $$\frac{8}{10}$$ + $$\frac{6}{10}$$
216.894 = 2 × 100 + 1 × 10 + 6 × 1 + $$\frac{8}{10}$$ + $$\frac{9}{100}$$ + $$\frac{4}{1000}$$
0.562 = $$\frac{5}{10}$$ + $$\frac{6}{100}$$ + $$\frac{2}{1000}$$
Express the following as decimal numbers:
For examples:
$$\frac{6}{10}$$ + $$\frac{3}{100}$$ = 0.63
$$\frac{6}{10}$$ + $$\frac{3}{100}$$ + $$\frac{5}{1000}$$ = 0.635
4 × 1 + $$\frac{3}{10}$$ + $$\frac{2}{100}$$ = 4.32
7 × 10 + 2 × 1 + $$\frac{8}{10}$$ + $$\frac{9}{100}$$ = 72.89
Convert the following decimals to fractions in their lowest terms.
For examples:
0.36 = $$\frac{36}{100}$$ = $$\frac{9}{25}$$ [$$\frac{36 ÷ 4}{100 ÷ 4}$$ = $$\frac{9}{25}$$]
5.65 = 5 + 0.65 = 5 + $$\frac{65}{100}$$ = 5$$\frac{65}{100}$$ = 5$$\frac{13}{20}$$]
14.05 = 14 + 0.05 = 14 + $$\frac{5}{100}$$ = 14$$\frac{5}{100}$$ = 14$$\frac{1}{20}$$]
3.004 = 3 + 0.004 = 3 + $$\frac{4}{1000}$$ = 3$$\frac{4}{1000}$$ = 3$$\frac{1}{250}$$]
Note: We always reduce the fraction converted from a decimal to its lowest form.
Questions and Answers on Conversion of a Decimals to a Fractions:
I. Convert the following decimals as fractions or mixed numerals:
(i) 0.6
(ii) 0.09
(iii) 3.65
(iv) 12.132
(v) 16.5
(vi) 5.46
(vii) 12.29
(viii) 0.008
(ix) 8.08
(x) 162.434
II. Express the following in the expanded form.
(i) 46.25
(ii) 115.32
(iii) 14.568
(iv) 19.005
(v) 77.777
III. Write as decimals:
(i) 2 × 1 + $$\frac{7}{10}$$ + $$\frac{4}{100}$$
(ii) 3 × 10 + 5 × 1 + $$\frac{8}{10}$$ + $$\frac{3}{1000}$$
(iii) 7 × 100 + 4 × 10 + 5 × 1 + $$\frac{4}{1000}$$
(iv) 9 × 100 + $$\frac{7}{10}$$
(v) $$\frac{5}{100}$$ + $$\frac{8}{1000}$$
## You might like these
• ### Like Decimal Fractions | Decimal Places | Decimal Fractions|Definition
Like Decimal Fractions are discussed here. Two or more decimal fractions are called like decimals if they have equal number of decimal places. However the number of digits in the integral part does not matter. 0.43, 10.41, 183.42, 1.81, 0.31 are all like fractions
• ### Expanded form of Decimal Fractions |How to Write a Decimal in Expanded
Decimal numbers can be expressed in expanded form using the place-value chart. In expanded form of decimal fractions we will learn how to read and write the decimal numbers. Note: When a decimal is missing either in the integral part or decimal part, substitute with 0.
• ### Multiplication of Decimal Numbers | Multiplying Decimals | Decimals
The rules of multiplying decimals are: (i) Take the two numbers as whole numbers (remove the decimal) and multiply. (ii) In the product, place the decimal point after leaving digits equal to the total number of decimal places in both numbers.
• ### Division of a Decimal by a Whole Number | Rules of Dividing Decimals
To divide a decimal number by a whole number the division is performed in the same way as in the whole numbers. We first divide the two numbers ignoring the decimal point and then place the decimal point in the quotient in the same position as in the dividend.
• ### Multiplication of a Decimal by a Decimal |Multiplying Decimals Example
To multiply a decimal number by a decimal number, we first multiply the two numbers ignoring the decimal points and then place the decimal point in the product in such a way that decimal places in the product is equal to the sum of the decimal places in the given numbers.
• ### Unlike Decimal Fractions | Unlike Decimals | Number of Decimal Places
Unlike decimal fractions are discussed here. Two or more decimal fractions are called unlike decimals if they have unequal numbers of decimal places. Let us consider some of the unlike decimals; (i) 8.4, 8.41, 8.412 In 8.4, 8.41, 8.412 the number of decimal places are 1, 2
• ### Equivalent Decimal Fractions | Like Decimal Fraction | Unlike Decimal
Equivalent decimal fractions are unlike fractions which are equal in value. Numbers obtained by inserting zeros after the extreme right digit in the decimal part of a decimal number are known as equivalent decimals.
• ### Comparison of Decimal Fractions | Comparing Decimals Numbers | Decimal
While comparing natural numbers we first compare total number of digits in both the numbers and if they are equal then we compare the digit at the extreme left. If they also equal then we compare the next digit and so on. We follow the same pattern while comparing the
Addition of decimal numbers are similar to addition of whole numbers. We convert them to like decimals and place the numbers vertically one below the other in such a way that the decimal point lies exactly on the vertical line. Add as usual as we learnt in the case of whole
• ### Subtraction of Decimal Fractions |Rules of Subtracting Decimal Numbers
The rules of subtracting decimal numbers are: (i) Write the digits of the given numbers one below the other such that the decimal points are in the same vertical line. (ii) Subtract as we subtract whole numbers. Let us consider some of the examples on subtraction | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8929338455200195, "perplexity": 834.8105539372806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00537.warc.gz"} |
https://www.physicsforums.com/threads/how-stellar-aberration-was-quantified-in-the-early-days.950193/ | # I How stellar aberration was quantified in the early days
1. Jun 23, 2018
I am working through "Spacetime Physics" and encountered exercise 3-9, which concerns aberration of starlight. They ask the following question: "Since the background of stars also shifts due to aberration, how can the effect be measured at all?"
I got part of the answer. You measure the angle between the north celestial pole and the position of the star. It shifts depending on what time of year it is. That takes care of one component (I think). But I am puzzled as to how you quantify the aberration in the direction perpendicular to this.
Last edited: Jun 23, 2018
2. Jun 25, 2018
### Staff: Mentor
The whole sky "moves" towards the direction Earth is moving to (as seen from the Sun), and this direction changes over time. This distorts the pattern of the stars with a yearly cycle.
3. Jun 26, 2018
The motion in the north-south direction can be detected by reference to celestial north. How is the motion in the East-West direction detected. What is the absolute reference for east-west direction?
4. Jun 26, 2018
The aberration affects the stars around the star you are observing, so you cannot simply compare the star to the stars around it.
5. Jun 26, 2018
### Staff: Mentor
With stars nearby you won't see a strong effect, with stars at larger angles you will see one. The angle between stars at 90 degree angles (e.g. one in "forward"/"backward" direction, one in "outwards"/
inwards" or "upwards" or "downwards" direction 6 months apart) varies by up to 40 arcseconds over a year.
6. Jun 26, 2018 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8480580449104309, "perplexity": 1564.4709358771986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592778.82/warc/CC-MAIN-20180721203722-20180721223722-00485.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/209661-i-have-2-numerical-analysis-problems-i-need-help.html | # Math Help - I have 2 " Numerical analysis problems that I need help with
1. ## I have 2 " Numerical analysis problems that I need help with
I am currently working on these 2 problem on the picture but I am not sure if I am doing it right. Any help will be very appreciated.... Thanks
2. ## Re: I have 2 " Numerical analysis problems that I need help with
For the second one, merely check that the $k+1$st Newton iterate is $x_{k+1}=\frac{1}{2}\left(x_k+\frac{5}{x_k}\right)$. Now, I'm sure that you can prove that whatever your initial guess $x_0$ is, that this sequence will be monotone and bounded and so convergent. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9522842764854431, "perplexity": 287.5994086499583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00183-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://concretenonsense.wordpress.com/2009/04/09/a-useful-symmetric-function-identity/ | Posted by: yanzhang | April 9, 2009
## A Useful Symmetric Function Identity
A not often-mentioned skill that I have been trying to develop is the evaluation of the “usefulness” of statements. Given my horrible memory and degrading mental RAM, I must prioritize which facts to consciously absorb and which ones to skim, since I’m sure I forget at least one theorem every time I learn a new one. Luckily, books aid to an extent by labelling the important things “Theorems” and “Lemmas,” though sometimes the latter are just as important as (if not much more than) the former, and sometimes even more important items appear in the Exercises section. Something about Exercise 7.70 in Richard Stanley’s Enumerative Combinatorics 2 really struck me as “useful.” The statement is:
$\sum_{\lambda \vdash n} H_\lambda^{k-2} \prod_i^k s_\lambda(x^{(i)}) = \frac{1}{n!}\sum_{\prod w_i = 1} \prod_i^k p_{\rho(w_i)}(x^{(i)}),$ where the $x^{(i)}$ are sets of variables, $H_\lambda$ the product of the hook-lengths, $\rho(w)$ gives the cycle structure, and $w_i$ are permutations of size $n$.
Two things make the intuition of “useful” more concrete:
One, the statement has a lot of flexibility; By the symmetry of the symmetric group and the fact that cycles are preserved under inverses, you can sum over, say, products of all ordered $(k-1)$-tuples of $w_i$ by rewriting the sum on the right as over $w_1 = w_2\ldots w_k$, for example, or sum over pairs which give the same product, say $w_1w_2 = w_3w_4$. Furthermore, having the choice over the variables makes a lot of specialization techniques useful. For example, you can “pick out” cycles by specializing $x^{(i)} = (1, \zeta, \ldots, \zeta^{n-1}, 0, 0, \ldots)$, where $\zeta$ is an $n$-th root of unity, because the power sums then give $p_c(x^{(i)}) = 0$ unless $c = n$.
Two, the statement has a lot of power; the proof of this fact has a “Stanley difficulty” of [3] (out of 5) means it contains some nontrivial usage of the symmetric function machinery already. Thus, by using it we know we are backed with some complex stuff in the background. Note when $k = 1$, we get the amazing Hook-length formula directly; when $k = 2$, by observing that summing over $w_1w_2 = 1$ is really summing over a single $w$ (a special case of our discussion above on “flexibility”), we get the familiar “inner product” identity$\sum_{\lambda \vdash n} s_\lambda(x) s_\lambda(y) = \frac{1}{n!}\sum_w p_{\rho(w)}(x) p_{\rho(w)}(y)$. These are two nontrivial statements without an obvious connection with each other, a pleasant surprise.
This warm “finding a hidden gem” feeling is very nice.
-Y
P.S. Well, there is a third (and less glamorous) cue, which is that at least two problems in the problem sets came with the hint: “Look at Exercise 7.70.” The power of hindsight in all its glory, I suppose. However, I liked this statement too much that I needed an excuse to write it down, so shhh. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8762066960334778, "perplexity": 595.8431331103616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320023.23/warc/CC-MAIN-20170623063716-20170623083716-00437.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/33795-factor-group-order-question.html | # Math Help - Factor group order question
1. ## Factor group order question
$ \cong C_9 \cong $ and $G = \times $.
What is the factor group $G/$ order and write it as the direct product of cyclic groups.
Isn't $G/$ has order 3(correct me if I'm wrong) and I need some help on this the second part "write it as the direct product of cyclic groups".
$ \cong C_9 \cong $ and $G = \times $.
What is the factor group $G/$ order and write it as the direct product of cyclic groups.
Isn't $G/$ has order 3(correct me if I'm wrong) and I need some help on this the second part "write it as the direct product of cyclic groups".
Instead of $\left< x^3,y^3\right>$ I think you mean $\left \times \left< y^3 \right>$. Then it means, $G/\left \times \left< y^3 \right> = \left< x \right> \times \left< y\right> / \left \times \left< y^3 \right> \simeq \left< x \right> / \left \times \left< y\right> / \left< y^3 \right>\simeq \mathbb{Z}_6\times \mathbb{Z}_6$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580771684646606, "perplexity": 323.7014802930549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861743914.17/warc/CC-MAIN-20160428164223-00029-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Isomap | # Isomap
Isomap is a Nonlinear dimensionality reduction method. It is one of several widely used low-dimensional embedding methods.[1] Isomap is used for computing a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points. The algorithm provides a simple method for estimating the intrinsic geometry of a data manifold based on a rough estimate of each data point’s neighbors on the manifold. Isomap is highly efficient and generally applicable to a broad range of data sources and dimensionalities.
## Introduction
Isomap is one representative of isometric mapping methods, and extends metric multidimensional scaling (MDS) by incorporating the geodesic distances imposed by a weighted graph. To be specific, the classical scaling of metric MDS performs low-dimensional embedding based on the pairwise distance between data points, which is generally measured using straight-line Euclidean distance. Isomap is distinguished by its use of the geodesic distance induced by a neighborhood graph embedded in the classical scaling. This is done to incorporate manifold structure in the resulting embedding. Isomap defines the geodesic distance to be the sum of edge weights along the shortest path between two nodes (computed using Dijkstra's algorithm, for example). The top n eigenvectors of the geodesic distance matrix, represent the coordinates in the new n-dimensional Euclidean space.
## Algorithm
A very high-level description of Isomap algorithm given below.
• Determine the neighbors of each point.
• All points in some fixed radius.
• K nearest neighbors.
• Construct a neighborhood graph.
• Each point is connected to other if it is a K nearest neighbor.
• Edge length equal to Euclidean distance.
• Compute shortest path between two node.
• Compute lower-dimensional embedding.
## Extensions of ISOMAP
• LandMark ISOMAP (L-ISOMAP): Landmark-Isomap is a variant of Isomap which is faster than Isomap. However, the accuracy of the manifold is compromised by a marginal factor. In this algorithm, n << N landmark points are used out of the total N data points and an nxN matrix of the geodesic distance between each data point to the landmark points is computed. Landmark-MDS (LMDS) is then applied on the matrix to find a Euclidean embedding of all the data points.[2]
• C Isomap : C-Isomap involves magnifying the regions of high density and shrink the regions of low density of data points in the manifold. Edge weights that are maximized in Multi-Dimensional Scaling(MDS) are modified, with everything else remaining unaffected.[2]
## Possible Issues
The connectivity of each data point in the neighborhood graph is defined as its nearest k Euclidean neighbors in the high-dimensional space. This step is vulnerable to "short-circuit errors" if k is too large with respect to the manifold structure or if noise in the data moves the points slightly off the manifold.[3] Even a single short-circuit error can alter many entries in the geodesic distance matrix, which in turn can lead to a drastically different (and incorrect) low-dimensional embedding. Conversely, if k is too small, the neighborhood graph may become too sparse to approximate geodesic paths accurately. But improvements have been made to this algorithm to make it work better for sparse and noisy data sets.[4]
## Relationship with other methods
Following the connection between the classical scaling and PCA, metric MDS can be interpreted as kernel PCA. In a similar manner, the geodesic distance matrix in Isomap can be viewed as a kernel matrix. The doubly centered geodesic distance matrix K in Isomap is of the form
$K = \frac{1}{2} HD^2 H\,$
where $D^2 = D^2_{ij}:=(D_{ij})^2$ is the elementwise square of the geodesic distance matrix D = [Dij], H is the centering matrix, given by
$H = I_n-\frac{1}{N} e_N e^T_N, \quad\text{where }e_N= [1\ \dots\ 1]^T \in \mathbb{R}^N.$
However, the kernel matrix K is not always positive semidefinite. The main idea for kernel Isomap is to make this K as a Mercer kernel matrix (that is positive semidefinite) using a constant-shifting method, in order to relate it to kernel PCA such that the generalization property naturally emerges .[5] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8680993914604187, "perplexity": 674.9994715438542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157212.22/warc/CC-MAIN-20160205193917-00325-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/2658072/is-there-a-simple-characterization-of-the-sets-that-admit-a-uniform-probability | # Is there a simple characterization of the sets that admit a uniform probability distribution?
The following subsets of $\mathbb{R}$ (not an exhaustive list) admit a uniform probability distribution:
1. Finite sets
2. Intervals (open or closed) with both endpoints finite
3. Finite unions of the above
Is there a simpler way to characterize the set of subsets $S \subset \mathbb{R}$ that admit a uniform probability distribution?
One naive guess is that $S$ is bounded, but this requirement is neither necessary nor sufficient: e.g. the set $$\left \{ \frac{1}{n} : n \in \mathbb{Z}^+ \right \}$$ is bounded but does not admit a uniform probability distribution, while the set $$\bigcup_{n = 0}^\infty \left[ n - 2^{-n}, n + 2^{-n} \right]$$ is not bounded but does admit a uniform distribution.
My guess is that the answer is no, because by combining together sets of types 1 and 2 in the list above, we are combining "apples and oranges", as discrete and continuous probability distributions are qualitatively different. If we want to extend the support of a discrete random variable taking on a finite number of possible values to the real line, then we shouldn't think of its probability distribution as a probability mass function, but instead as a finite sum of Dirac delta functions, which are a very different beast from continuous probability density functions. Is this correct?
• What is your definition of a uniform probability distribution? I'm also confused regarding your point 3: it seems to me that any finite intersection of finite sets and bounded intervals is either a finite set or a bounded interval. – Anthony Carapetis Feb 20 '18 at 3:48
• @AnthonyCarapetis Oops, I meant to say "finite unions", fixed. – tparker Feb 20 '18 at 4:11
The essence of being a discrete distribution is concentrating all probability at point masses. I.e. we have some random variable (capital) $X$ and there is some set of objects (lower-case $X$) $x$ for which $\Pr(X=x)>0,$ and we have $$\sum_x \Pr(X=x) = 1.$$ We can call such a distribution "uniform" if $\Pr(X=x)$ is the same for all values of $x.$ If, but only if, the aforementioned "set of objects" is finite, a uniform distribution exists.
A "continuous" distribution on $\mathbb R$ assigns probabilities to intervals, and hence to Borel sets (i.e. those sets that can be constructed by starting with intervals and closing under countable unions and complements) and has a continuous c.d.f.; thus does not concentrate positive probability at any point. For every $x\in\mathbb R,$ we have $\Pr(X=x) = 0.$ This is true regardless of whether the distribution is what is conventionally called "uniform". In that case, one calls a distribution "uniform" if it assigns equal probabilities, not to every point, but to intervals of equal lengths. And one can say that the probability density function has the same value everywhere within the support of the distribution. The "support" is the set of all points $x$ for which, for every positive number $\varepsilon,$ no matter how small, $\Pr(x-\varepsilon < X<x+\varepsilon)>0.$ If you want to look at sets other than intervals, you can say that a "uniform" distribution assigns equal probabilities to sets with the same "measure", and for that one must know what "measure" is. The measure of an interval is its length; every Borel set has a measure; if you add the same constant to every member of a set to get a new set, you don't alter the measure; if a sequence of sets is pairwise mutually exclusive, then the measure of their union is the sum of their measures. That's enough to determine what the measure of every Borel set is. And the bottom line will be that there is a "uniform" distribution on a Borel set if and only if its measure is finite. For that, it need not be bounded. For example, the set $\displaystyle \bigcup_{n=1}^\infty [n, n + 1/2^n)$ has finite measure but is not bounded.
• I actually edited my question right before reading your answer in a way that addresses your last point already. I was (loosely) also considering "hybrid" distributions consisting of both intervals and discrete points, like $$p(x) = \begin{cases} 2/3 & \text{if } x \in [1, 1/2] \text{ or } x = 1 \\ 0 & \text{otherwise} \end{cases}.$$ I think the fact that such hybrid distributions are a bit contrived to formalize reflects the fact that the answer to my question is no. – tparker Feb 20 '18 at 4:24
• @tparker : I diagree with the punctuation in your comment. I might have written something like this: $$p(x) = \begin{cases} 2/3 & \text{if } x \in [1, 1/2] \text{ or } x = 1, \\ 0 & \text{otherwise}. \end{cases}$$ – Michael Hardy Feb 20 '18 at 15:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645436406135559, "perplexity": 178.73823520716857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391309.4/warc/CC-MAIN-20200526191453-20200526221453-00215.warc.gz"} |
http://www.physicsforums.com/showthread.php?p=3892331 | # Critical point exponents inequalities - The Rushbrooke inequality
by LagrangeEuler
Tags: critical, exponents, inequalities, inequality, point, rushbrooke
P: 276 The Rushbrooke inequality: $$H=0, T\rightarrow T_c^-$$ $$C_H \geq \frac{T\{(\frac{\partial M}{\partial T})_H\}^2}{\chi_T}$$ $$\epsilon=\frac{T-T_c}{T_c}$$ $$C_H \sim (-\epsilon)^{-\alpha'}$$ $$\chi_T \sim (-\epsilon)^{-\gamma'}$$ $$M \sim (-\epsilon)^{\beta}$$ $$(\frac{\partial M}{\partial T})_H \sim (-\epsilon)^{\beta-1}$$ $$(-\epsilon)^{-\alpha'} \geq \frac{(-\epsilon)^{2\beta-2}}{(-\epsilon)^{-\gamma'}}$$ and we get Rushbrooke inequality $$\alpha'+2\beta+\gamma' \geq 2$$ My only problem here is first step $$C_H \geq \frac{T\{(\frac{\partial M}{\partial T})_H\}^2}{\chi_T}$$ we get this from identity $$\chi_T(C_H-C_M)=T\alpha_H^2$$ But I don't know how?
P: 276 Any idea? Problem is with $$\chi_T(C_H-C_M)=T\alpha^2_H$$ from that relation we obtain $$C_H=\frac{T\alpha^2_H}{\chi_T}+C_M$$ So if $$C_M>0$$ than $$C_H>\frac{T\alpha^2_H}{\chi_T}$$ Equality is in the game if and only if $$C_M=0$$. Or when $$(\frac{\partial^2 F}{\partial T^2})_M=0$$. Is that possible? F is Helmholtz free energy. Can you tell me something more about that physically?
Related Discussions Atomic, Solid State, Comp. Physics 1 Quantum Physics 0 Atomic, Solid State, Comp. Physics 0 Atomic, Solid State, Comp. Physics 2 Advanced Physics Homework 0 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9091686606407166, "perplexity": 1103.8261344889893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://grad.ncbj.gov.pl/2017/11/ | ## Graduate Seminar, 27th Nov. 2017
Speaker: Paweł Kowalski
Title: Characteristics of the J-PET detector simulated using GATE software
Abstract: Novel PET system based on plastic scintillators is developed by the J-PET collaboration. In order to determine performance characteristics of built scanner prototype, advanced computer simulations must be performed. These characteristics are spatial resolution, scatter fraction and sensitivity. Results of simulations of these characteristics will be presented during the lecture.
## Thesis topic proposal
Study of the intrinsic 3D-structure of the proton at CERN
The structure of strongly interacting particles, like nucleons and mesons, is provided by the theory of strong interactions – Quantum Chromo-Dynamics (QCD). In the framework of QCD the internal structure and properties of nucleons and mesons are determined by the interactions between their elementary constituents, quarks and gluons (commonly referred to as ‘partons’).
Speaker: Rahul Nair
Title: Event Shape Engineering Technique in ultra-relativistic nuclear collisions
Abstract: The evolution of the system created in a high energy nuclear collision is very sensitive to the fluctuations in the initial geometry of the system. Utilizing these large fluctuations, one can select events corresponding to a specific initial shape. This method is called Event Shape Engineering. It provides an opportunity for the quantitative test of the theory of high energy nuclear collisions and understanding the properties of high density hot QCD matter. The technique will be illustrated on the grounds lead ion collisions in ALICE at LHC.
Title: Internal dosimetry in nuclear medicine treatment.
Abstract: Internal radiotherapy in oncology is therapeutic method of increasing importance. Despite many technical difficulties, possibilities which are given by this method, makes it extremely interesting. One of the most important tasks is prediction amount of radiation dose that patient will receive during treatment. That’s more important than determining the radiation dose that patient receives post factum. There is no reliable model to allow prediction of radiation dose as precisely as it is necessary for radiotherapy.
Speaker: Viktor Svensson
Title: Hydrodynamization of kinetic theory
Abstract: In ultrarelativistic heavy-ion collisions, hydrodynamics has been successfully applied to describe the evolution of the quark-gluon plasma. The applicability of hydrodynamics implies a significant reduction in the number of degrees of freedom. We study this reduction for a simple kinetic theory undergoing Bjorken flow. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003878593444824, "perplexity": 1546.616855901956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213158.51/warc/CC-MAIN-20180817221817-20180818001817-00293.warc.gz"} |
https://math.stackexchange.com/questions/2583484/is-there-a-name-for-this-type-of-topology | # Is there a name for this type of topology?
Let $\tau$ be a topology on $X$ such that for every $a, b$ in $X$ there exists a bijective function $f$ that is continuous and has a continuous inverse such that $f(a) = b$.
Examples of such a topology:
the topology induced by the euclidean metric on $\mathbb R^n$
the discrete topology, the trivial topology
non examples: $\tau = \{\emptyset,a,\{b,c\},X\}$ where $X = \{a,b,c\}$
Is there a name for these type of topologies?
• It seems that such spaces are called homogeneous. – Sangchul Lee Dec 28 '17 at 20:56
• yes that appears to be what I want, thank you, feel free to write that as an answer to the question – mathew Dec 28 '17 at 21:00
As Sangchul says in the comments, such spaces are called homogeneous. An equivalent way to phrase the definition is that the group of homeomorphisms $X \to X$ acts transitively on $X$. Important examples include
• any topological group $G$
• any quotient $G/H$ of a topological group by a subgroup.
The term "homogeneous space" is also used for something more specific, namely a pair consisting of a space $X$ and a topological (sometimes Lie) group $G$ acting transitively on it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.894073486328125, "perplexity": 97.28712899660331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526408.59/warc/CC-MAIN-20190720024812-20190720050812-00333.warc.gz"} |
https://www.openkb.org/windows-7-bios-set-to-raid-crashes-windows/ | # Windows 7: Bios set to Raid crashes Windows
Posted on January 27, 2012
During a random installation of Windows 7 in a Dell XPS unit. We notice windows kept crashing after rebooting and attempting to complete the installation.
After several attempts we found that we needed to change the hd mode to sata instead of raid.
the windows error was
“Windows is unable to complete due to a hardware error”
Nice right , thats for the info Microsoft.
Found this information on a forum as such and verbatim . It might be helpful to someone else that encounters the issue
Do you have windows installed on your SSD in RAID mode before adding these disks as RAID?
If not you need to, so you will need to go into windows and make the following registry changes first, before setting the BIOS to RAID
mode, if you installed windows in AHCI or IDE mode.
Enable switching between all IDE/AHCI/RAID modes by changing “Start” Values in these keys to 0
Code:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Msahci\Start
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Pciide\Start
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\iaStorV\Start
Another user replied as
A many thanks. Your wisdow is seemingly endless. It was the last entry that was not at zero it was set to 3. I did load windows first
after changing the SATA’s to AHCI mode during the first post. After making sure things looked OK a then tried adding the samsungs in
RAID 0. Thanks again for your help.” | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8435437083244324, "perplexity": 3975.149084364483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00497.warc.gz"} |
http://www.rahimcartoon.com/festivals/1732-international-cartoon-competition-the-great-and-the-little-warsaw | ### Galleries Stats
• All images in Gallery 5350
• All Categories 150
• All hits just in Gallery 721396
### Statistics
• Users : 4
• Articles : 2155
• Articles View Hits : 2474178
You are here: HomeFestivalsFestivalsInternational Cartoon Competition „The Great and the Little Warsaw”/Poland 2013
# Cartoon World (Important News)
## International Cartoon Competition „The Great and the Little Warsaw”/Poland 2013
International Cartoon Competition „The Great and the Little Warsaw”/Poland 2013
The Museum of Caricature and Cartoon Art in Warsaw celebrates its 35th anniversary by inviting artists to take part
in an International Cartoon Competition whose topic is:
The Great and the Little Warsaw
– or whatever good or even better things we can say about our capital
Warsaw – the capital of Poland – has one of the largest populations among European cities. Its long history, dating back to the
9th century, can often prove fascinating. It is a city that was razed to the ground, burnt and gutted during World War II, and still,
rebuilt after 1945, it continues to live, though it was supposed to disappear from the face of the earth. It is a city that received
the Silver Cross of Poland's highest military decoration for heroism and courage – the Virtuti Militari.
Fascinating as the city can be, it is certainly not devoid of problems that may get on its inhabitants’ nerves. Like every great
agglomeration it can delight with life on a grand scale, but also – irritate with its traffic jams. What does the city look like in the eyes
of those who live here, but also – of visitors from other Polish cities and of foreigners? What do they find amusing, and what –
annoying? What delights the onlooker? This is the topic that we ask our astute observers – the cartoonists – to illustrate.
Competition Rules
1. The competition is open to artists from all over the world.
2. Each participant may submit up to 5 works.
3. The cartoons must be original and signed by their Authors; the maximum dimensions are 42 x 30 cm. The following data must
be given on the back: author’s name and surname, address, title of the work, year of creation.
4. Along with the works, Authors are obliged to submit their CVs in Polish or English (incl. publications, exhibitions and awards).
5. Entries must be sent in by 15th May 2013 to the following address:
Muzeum Karykatury
ul. Kozia 11
00-070 Warsaw, Poland
6. Regular prizes and distinctions:
Grand Prix – 8,000 PLN
1st Prize 4000 PLN
2nd Prize 3000 PLN
3rd Prize 2000 PLN
3 honourable mentions - 1000 PLN each.
7. The Jury, chaired by Mr Zygmunt Zaradkiewicz, Director of the Museum of Caricature and Cartoon Art in Warsaw,
will announce its verdict by the end of June 2013
8. The Jury reserves the right to divide the prizes differently than specified above. The Jury’s decisions are final and may not be
the subject of an appeal.
9. The awarded works become the exclusive property of the Museum of Caricature and Cartoon Art in Warsaw, together with
copyright and property rights. The Museum also reserves the right to select and include in its collection one work by each
Author. The other works will be returned to their Authors only at their formal written request, after all the activities related to
the post-competition exhibition have been completed.
10. Entering the competition is tantamount to granting permission for reproducing the works free of charge for the publicity
purposes of the post-competition exhibition in all the fields of exploitation.
11. Authors of all the works selected for the post-competition exhibition will receive the exhibition catalogue free of charge.
-
Warszawa i Warszawka - The Great and the Little Warsaw
Registration form - Karta uczestnictwa
Imię:
Name: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Photo
Nazwisko:
Surname: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Narodowość:
Nationality: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Address: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Kraj:
Country: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Tel./ Fax . . . . . . . . . . . . . . . . . . . . . . . . . . e-mail: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
CV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Akceptuję warunki uczestnictwa / I accept all terms of the participation rules
Data / Date: . . . . . . . . . . . . . . . . . . . . . . . . Podpis / Signature: . . . . . . . . . . . . . . . . . . . . . . . . . . .{jcomments on}
0
terms and condition. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8665680289268494, "perplexity": 156.54741888789925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450097.39/warc/CC-MAIN-20141017005730-00131-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://keplerlounge.com/number-theory/2021/08/13/randomness-prime-encodings.html | I’d like to share a critical observation which occurred to me a couple months ago, which relates the randomness of the primes and the computational complexity of integer factorisation.
So far the hypothesis that prime encodings are random has been carefully evaluated using a variation of Yao’s next-bit test as well as a primality test for all prime numbers under $$10^9$$. For the primality test, given a finite number of integers $$X := [1,N] \subset \mathbb{N}$$ with binary representation mapped to $$Y \in \{0,1\}^*$$, a function is approximated using a feedforward neural network $$f_{\theta}$$:
$$f_{\theta}: X \rightarrow Y$$
whose ability to carefully evaluate the primality of integers is tested on integers $$[N+1,2N]$$. In agreement with the hypothesis, the true positive rate of such a function does not exceed 50%. This experimental verification is significant because I am implicitly using the universal approximation property of deep neural networks.
Now, it is worth pointing out that feedforward neural networks can only simulate computable functions that are polynomial time with respect to their input size. Thus, the hypothesis ought to be refined as follows:
$$\mathbb{E}[K^{\text{poly}}(X_N)] \sim \pi(N) \cdot \ln(N) \sim N$$
where $$K^{\text{poly}}(\cdot)$$ denotes a universal compressor that can only simulate Turing Machines with polynomial time complexity.
In order to understand the importance of integer factorisation, it is worth making two observations. First, a reasonable candidate for one-way functions is $$f(P,Q) = P \cdot Q$$ for randomly chosen primes $$P$$ and $$Q$$, and the existence of one-way functions would imply that P != NP. Second, for a neural network to accurately represent a relation between the integers and the prime numbers with strong generalisation ability, it needs to discover an algorithmic formulation of the unique factorisation theorem. That is to say, an algorithmic version of Euclid’s proposition:
Any number is either prime or measured by some prime number.
Such an algorithm would be nothing less than a method for integer factorisation with polynomial time complexity.
## References:
1. Andrew Chi-Chih Yao. Theory and applications of trapdoor functions. In Proceedings of the 23rd IEEE Symposium on Foundations of Computer Science, 1982.
2. Leonid Levin (2003). “The Tale of One-Way Functions”. ACM. arXiv:cs.CR/0012023
3. Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning. MIT Press
4. Lenstra, Arjen K (1988). “Fast and rigorous factorization under the generalized Riemann hypothesis”. Indagationes Mathematicae. 50 (4): 443–454.
5. Yanyi Liu, Rafael Pass. On One-way Functions and Kolmogorov Complexity. Arxiv. 2020.
6. Hornik, Kurt (1991). “Approximation capabilities of multilayer feedforward networks”. Neural Networks. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9222663044929504, "perplexity": 532.4747695477837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057622.15/warc/CC-MAIN-20210925112158-20210925142158-00023.warc.gz"} |
https://studynova.com/quiz/math-sl/trigonometry/3-1/ | # Solution to #3-1
Paper 2 Difficulty:Medium
In the following diagram, a circle has radius r (metres), and angle $\theta$ (radians), and centre O.
The area of the shaded sector is $\frac{6\pi }{5}$, and the length of the sector AB is $\frac{3\pi }{5}$.
a) Find the value of r.
[Maximum mark:3]
b) Find the value of $\theta$.
[Maximum mark:3] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8540260195732117, "perplexity": 1687.845785636202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00420.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-2x-2y-5-6x-5y-3-1-3x-1y-6-and-write-it-using-only-positive-e | Algebra
Topics
# How do you simplify (2x^2y^-5)(-6x^-5y^3)(1/3x^-1y^6) and write it using only positive exponents?
May 31, 2017
$\frac{- 4 {y}^{4}}{{x}^{4}}$
#### Explanation:
Use the following 3 Laws of Exponents to simplify the expression:
${a}^{- m} = \frac{1}{a} ^ m$
$\left({a}^{m}\right) \left({a}^{n}\right) = {a}^{m + n}$
${a}^{m} / {a}^{n} = {a}^{m - n}$
$\left(2 {x}^{2}\right) \left(\frac{1}{y} ^ 5\right) \left(- 6 {y}^{3}\right) \left(\frac{1}{x} ^ 5\right) \left(\frac{1}{3} {y}^{6}\right) \left(\frac{1}{x}\right)$
$\frac{- 12 {x}^{2} {y}^{3} {y}^{6}}{3 {y}^{5} {x}^{5} x}$
$\frac{- 4 {x}^{2} {y}^{9}}{{y}^{5} {x}^{6}}$
$\frac{- 4 {y}^{4}}{{x}^{4}}$
##### Impact of this question
445 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9209577441215515, "perplexity": 4219.504155808547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540501887.27/warc/CC-MAIN-20191207183439-20191207211439-00270.warc.gz"} |
https://www.sudoedu.com/en/linear-algebra-review-and-summary/how-to-compute-inverse-matrix/ | # How to Compute Inverse Matrix?
This video shows how to compute the inverse matrix quickly. There are several ways to compute inverse matrix, the most common used way is row reduction because it need the least computation. Arranging the matrix and the identity matrix in a row to build a new matrix, then taking row reduction to this new matrix. If the original matrix reduced to be an identity matrix, the identity matrix is transformed to be the inverse matrix of the original matrix. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9483129382133484, "perplexity": 207.8993397896262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00169.warc.gz"} |
http://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-1-section-1-1-four-ways-to-represent-a-function-1-1-exercises-page-21/35 | ## Calculus: Early Transcendentals 8th Edition
$x\lt0$ or $x\gt5$
$\because h(x)$ is defined for all real x where the denominator is not equal to zero $[\sqrt[4] (x^{2}-5x)\ne0, x^{2}-5x\ne0$], and only the fourth root of non-negative numbers is real $[x^{2}-5x\geq0]$ $\therefore x^{2}-5x\gt0,$ $x(x-5)\gt0,$ $x\lt0$ or $x\gt5$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715534448623657, "perplexity": 158.734263934478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944682.35/warc/CC-MAIN-20180420194306-20180420214306-00023.warc.gz"} |
http://bayesiancook.blogspot.com/2016/12/second-order-amino-acid-replacement.html | ## Tuesday, December 13, 2016
### Second-order amino-acid replacement processes
As I mentioned earlier, classical amino-acid replacement matrices indirectly encode site-specific amino-acid preferences in their first-order Markov dependencies, and this is optimal in a low saturation regime. Now, this suggests that we could derive second-order or k-th order Markov processes that would generalize this idea and capture more information about site-specific selection, in particular about the longer-term behavior. How would we do that?
Let us consider the second-order case. Assume that, at time $t$, the current amino-acid at a given site is $b$, and the previous amino-acid state at that site, before the last substitution event, was $a$. We can then characterize the current state of the process by the $(a,b)$ pair of amino-acids.
Then, we can define a Markov process directly on the pairs $(a,b)$, with $a \neq b$, and such that transitions are allowed only between compatible pairs — i.e. such that the current state before the event becomes the previous state after the event. Thus, for instance:
$Q_{(a,b) \to (b,c)}$
is the rate from $(a,b)$ to $(b,c)$, i.e. the rate of substitution from $b$ to $c$, given that the previous amino-acid state (before $b$) was $a$.
All other rates are set equal to 0, i.e.
$Q_{(a,b) \to (c,d)} = 0$
whenever $b \neq c$.
This defines a second order 380x380 amino-acid replacement matrix, which can be exponentiated and then used for likelihood computation. Note that the instant rate matrix Q defined above is sparse, however, this will not be the case for the exponentiated matrix $P = e^{tQ}$.
Concerning the pruning algorithm, the only modification, compared to the first-order version classically implemented, concerns the initialization of the recursion at the tips. If the observed state for a given taxon is b, thern the conditional likelihood should be set equal to 1 for all pairs $(a,b)$.
We can see that, compared to first-order matrices, this second-order model will be in a better position to implement the empirically observed tendency to make repeated substitution events among overlapping subsets of amino-acids — basically, by estimating high rates of transition of the following type:
$Q_{(E,D) \to (D,E)}$
$Q_{(N,D) \to (D,N)}$
etc.
The approach can in principle be generalized to k-th order processes, by defining transition rates between $(x_1,x_2,…x_k)$ and $(x_2,x_3…x_{k+1})$.
Computationally speaking, this will not be super-efficient. The pruning algorithm is quadratic in the number of states, and thus, even for the second-order case, we already end up with a nearly 400-fold increase in computational time.
Still, I could imagine that, compared to simple 20x20 amino-acid replacement matrices, the second-order version could already make quite a difference in terms of phylogenetic accuracy. One advantage of this approach is that it is really a classical parametric likelihood model, without any complicated issue about modeling random-effects across sites, from an unknown, and potentially complex, distribution.
Conceptually speaking, this model also illustrates a fundamental idea about the relation between processes and parameters. Essentially, the parameterization of a model does not necessarily correspond to the actual mechanism. Instead, it already incorporates a level of statistical thinking, about the identifiable signal induced by the process over a large sample of observations. Usually, this signal can be decomposed in terms of frequencies over successive moments — and those frequencies are then captured by a parametric model, in terms of either proportions or rates.
1. Interesting post!
(1) One important feature of the above formulation is that it automatically loses reversibility! Thus, the likelihood depends on the choice of the root node. Is there a formulation of a "second order" process that can maintain reversibility?
(2) Another feature of the above proposal is that it depends on the current and previous amino acid states regardless of how long the current amino acid has been present. Thus a highly conserved site and a purely neutral site have the same rates if they have the same two most recent amino acids.
A different formulation of a second order chain is to have the rates depend on the current amino acid and the amino acid t time units ago.
This can be generalized to a k-th order chain by having the rates depend on the current amino acid and the amino acids i*t time units ago for i=1 to k-1.
This class of models has the nice feature that it approaches the first order model for fixed k as t->0 but can be made to approach the model where the rates depend on the whole substitution history if we let k-> \infty and t->0 in an appropriate manner.
Q: Do there exist non-trivial high-order reversible models in this fixed-lag class? The answer is not obvious to me. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230844378471375, "perplexity": 608.6215507737579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689490.64/warc/CC-MAIN-20170923052100-20170923072100-00227.warc.gz"} |
http://www.talkstats.com/threads/code-tags.20779/#post-63198 | # Code tags
#### Dason
I don't know how easy this would be but... I've seen an option in other forums where when people post code there is a "select all" button which selects all the text within the code tags. This would be nice for quickly grabbing code that is posted without having to manually highlight everything.
Once again - I don't know how easy it is but I was wondering if we could get that implemented. Thanks quark for everything you do. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8037297129631042, "perplexity": 888.6687075741429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362219.5/warc/CC-MAIN-20211202114856-20211202144856-00105.warc.gz"} |
https://www.texasgateway.org/resource/key-terms-30 | ## Key Terms
binomial distribution
a discrete random variable (RV) that arises from Bernoulli trials; there are a fixed number, n, of independent trials
Independent means that the result of any trial (for example, trial 1) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV Χ is defined as the number of successes in n trials. The notation is: X ~ B(n, p) μ = np and the standard deviation is . The probability of exactly x successes in n trials is $P(X=x)=(nx)pxqn−xP(X=x)=(nx)pxqn−x$.
confidence interval (CI)
an interval estimate for an unknown population parameter
This depends on the following:
• the desired confidence level
• information that is known about the distribution (for example, known standard deviation)
• the sample and its size
hypothesis
a statement about the value of a population parameter; in case of two hypotheses, the statement assumed to be true is called the null hypothesis (notation H0) and the contradictory statement is called the alternative hypothesis (notation Ha)
hypothesis testing
based on sample evidence, a procedure for determining whether the hypothesis stated is a reasonable statement and should not be rejected, or is unreasonable and should be rejected
level of significance of the test
probability of a Type I error (reject the null hypothesis when it is true)
Notation: α. In hypothesis testing, the level of significance is called the preconceived α or the preset α.
normal distribution
a bell-shaped continuous random variable X, with center at the mean value (μ) and distance from the center to the inflection points of the bell curve given by the standard deviation (σ)
We write $X~N(μ,σ)X~N(μ,σ)$. If the mean value is 0 and the standard deviation is 1, the random variable is called the standard normal distribution, and it is denoted with the letter Z.
p-value
the probability that an event will happen purely by chance assuming the null hypothesis is true; the smaller the p-value, the stronger the evidence is against the null hypothesis
standard deviation
a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: s for sample standard deviation and σ for population standard deviation
Student's t-distribution
investigated and reported by William S. Gosset in 1908 and published under the pseudonym Student
The major characteristics of the random variable (RV) are as follows:
• It is continuous and assumes any real values.
• The pdf is symmetrical about its mean of zero. However, it is more spread out and flatter at the apex than the normal distribution.
• It approaches the standard normal distribution as n gets larger.
• There is a family of t-distributions: every representative of the family is completely defined by the number of degrees of freedom, which is one less than the number of data items.
Type 1 error
the decision is to reject the null hypothesis when, in fact, the null hypothesis is true
Type 2 error
the decision is not to reject the null hypothesis when, in fact, the null hypothesis is false | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444889426231384, "perplexity": 427.61544600462787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589029.26/warc/CC-MAIN-20180716002413-20180716022413-00235.warc.gz"} |
https://www.physicsforums.com/threads/i-need-help-with-this-mcq-question-pleae.827063/ | # Homework Help: I need help with this MCQ question pleae
1. Aug 10, 2015
### Shiv Narayanan
< Mentor Note -- Poster has been warned to show their work in future schoolwork threads >
Hello! I need help with this question. The answer is D but I don't understand why :/ Thanks!
Last edited by a moderator: Aug 10, 2015
2. Aug 10, 2015
### Qwertywerty
Hint : What is formula for ΔQ ? What is power provided ?
3. Aug 10, 2015
### Shiv Narayanan
#### Attached Files:
• ###### IMG_20150811_001821.jpg
File size:
17.1 KB
Views:
82
4. Aug 10, 2015
### Qwertywerty
Why is Q1 = Q2 ?
Heaters are identical , and so power provided by them will be the same . The heat absorbed by the objects in some time ' t ' will depend on their own properties . It will not necessarily be the same for both .
Try again .
Hope this helps .
5. Aug 10, 2015
### Shiv Narayanan
Oh Yes.. I read the question wrongly. I thought amount of energy provided is the same.
So, if I am not wrong, the gradient of the graph is the specific heat capacity and since mass of N is 2 times the mass of M, in this case, I can just take the mass of N to be 2kg while the mass of M to be 1kg right? (For me to be able to calculate)
I find the gradient of N and divide it by 2 as heat capacity is per kg.Then I have to compare with the gradient of M and derive the ratio.. Am I right?
6. Aug 10, 2015
### Qwertywerty
ΔQ = m*c*ΔT ,
∴ P = ΔQ / ΔT = ?
No , I think it looks more like ( ΔT / Δt ) .
And no , you can't assume the masses as so . Use P1 / P2 ratio .
Last edited: Aug 10, 2015 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8985227346420288, "perplexity": 1772.0188911208966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591831.57/warc/CC-MAIN-20180720193850-20180720213850-00218.warc.gz"} |
http://mathinsight.org/discrete_dynamical_system_elementary_problems_2 | # Math Insight
### Elementary discrete dynamical systems problems, part 2
#### Problem 1
Consider the dynamical system \begin{align*} x_{n+1} &= x_n^2 \quad \text{for $n=0,1,2,3, \ldots$} \end{align*}
1. Find all equilibria.
2. Determine the stability of the equilibria using calculus.
3. Graph the function and confirm the stability of the equilibria using cobwebbing.
#### Problem 2
Consider the dynamical system \begin{align*} y_{t+1} &= y_t^3 \quad \text{for $t=0,1,2,3, \ldots$} \end{align*}
1. Find all equilibria.
2. Determine the stability of the equilibria using calculus.
3. Graph the function and confirm the stability of the equilibria using cobwebbing.
#### Problem 3
Consider the dynamical system \begin{align*} z_{t+1} -z_t &= z_t(1-z_t) \quad \text{for $t=0,1,2,3, \ldots$} \end{align*}
1. Find all equilibria.
2. Determine the stability of the equilibria using calculus.
3. Graph the function and confirm the stability of the equilibria using cobwebbing.
#### Problem 4
Consider the dynamical system \begin{align*} w_{n+1} -w_n &= 3w_n(1-w_n/2) \quad \text{for $n=0,1,2,3, \ldots$} \end{align*}
1. Find all equilibria.
2. Determine the stability of the equilibria.
#### Problem 5
Consider the dynamical system \begin{align*} v_{n+1} -v_n &= 0.9v_n(2-v_n) \quad \text{for $n=0,1,2,3, \ldots$} \end{align*}
1. Find all equilibria.
2. Determine the stability of the equilibria.
#### Problem 6
Consider the dynamical system \begin{align*} u_{t+1} -u_t &= a u_t(1-u_t) \quad \text{for $t=0,1,2,3, \ldots$} \end{align*} where $a$ is a positive parameter.
1. Find all equilibria.
2. For each equilibrium, find the values of $a$ for which you can determine that the equilibrium is stable and the values of $a$ for which you can determine the equilibrium is unstable.
#### Problem 7
Consider the dynamical system \begin{align*} s_{t+1} -s_t &= 0.5s_t(1-s_t/b) \quad \text{for $t=0,1,2,3, \ldots$} \end{align*} where $b$ is a positive parameter.
1. Find all equilibria.
2. Determine the stability of the equilibria.
3. Does the stability of any of the equilibria depend on $b$?
One you have worked on a few problems, you can compare your solutions to the ones we came up with. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992833733558655, "perplexity": 1523.5692545166721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00005-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://eprint.iacr.org/2010/120/20100305:065536 | ## Cryptology ePrint Archive: Report 2010/120
Universal One-Way Hash Functions via Inaccessible Entropy
Iftach Haitner and Thomas Holenstein and Omer Reingold and Salil Vadhan and Hoeteck Wee
Abstract: This paper revisits the construction of Universally One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of *inaccessible entropy* (Haitner et al. STOC 2009). With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: Consider F(x, i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then given a random x and i it would be hard to come up with x' \neq x such that F(x, i) = F(x', i). While this may not be the case, we show (rather easily) that it is hard to sample x' with almost full entropy among all the possible such values of x'. The rest of our construction simply amplifies and exploits this basic property.
With this and other recent works we have that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy.
Category / Keywords: foundations /
Publication Info: longer version of a paper to appear in Eurocrypt 2010
Date: received 4 Mar 2010
Contact author: hoeteck at cs qc cuny edu
Available format(s): PDF | BibTeX Citation
[ Cryptology ePrint archive ] | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9112318754196167, "perplexity": 3255.07653302657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.varsitytutors.com/calculus_3-help/surface-area | Calculus 3 : Surface Area
Example Questions
Example Question #206 : Double Integrals
Find the surface area of the part of the plane in the first octant.
Explanation:
Lets recall the equation of surface area.
Now we need to find all the neccessary equations to be able to evaluate the integral.
We will plug in , into the plane equation in order to get a line that intersects with the z axis.
Now we are going to set , in the previous equation and solve for .
We now have all the bounds for our double integral | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8742595314979553, "perplexity": 408.0371827127253}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948595858.79/warc/CC-MAIN-20171217113308-20171217135308-00752.warc.gz"} |
http://mathhelpforum.com/calculus/33959-advanced-calc-properties-continuous-functions.html | Hint: Define $g(x) = f(x) - x$. Now use intermediate value theorem to show $g$ has a zero on $[a,b]$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9937233328819275, "perplexity": 54.06093747691749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705407338/warc/CC-MAIN-20130516115647-00039-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3220343/is-the-maximum-of-strictly-convex-functions-also-strictly-convex | # Is the maximum of strictly convex functions also strictly convex?
Let $$\{ f_1,f_2,\ldots,f_m \}$$ be a set of convex functions, where $$f_i : C \subset \mathbb{R}^n \to \mathbb{R}$$ and with $$C$$ a convex set. Then,
$$F(x) := \max \{ f_1(x),f_2(x), \ldots, f_m(x) \}$$ is a convex function. What happens if each $$f_i$$ is strictly convex? Is $$F$$ also strictly convex?
I believe that it is not true. If it is the case, a counterexample would be great.
• It is true when your family of convex functions is finite, but when taking the (pointwise) supremum of an infinite family, it may not be. For example, if you consider $f_n(x) = \frac{x^2 - 1}{n}$ over the interval $C = [-1, 1]$, the pointwise supremum is constant. – Theo Bendit May 24 at 2:16
Consider $$C = \mathbf{R}^n$$ for simplicity; let $$k = \arg\max_{i \in [m]} f_i(\lambda x + (1 - \lambda) y)$$. Then
\begin{align} \max_{i=1, \dots, m} f_i(\lambda x + (1 -\lambda)y) &= f_k(\lambda x + (1 - \lambda)y) \overset{(*)}{<} \lambda f_k(x) + (1- \lambda) f_k(y) \\ &\leq \lambda \max_{i} f_i(x) + (1-\lambda) \max_{i} f_i(y), \end{align} where $$(*)$$ used the fact that all the $$f_i$$'s are strictly convex, so $$F$$ is in fact strictly convex. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9991139769554138, "perplexity": 294.62546942414394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670601.75/warc/CC-MAIN-20191120185646-20191120213646-00006.warc.gz"} |
http://www.lifeincode.net/programming/leetcode-permutation-sequence-java/ | # [LeetCode] Permutation Sequence (Java)
The set [1,2,3,…,n] contains a total of n! unique permutations.
By listing and labeling all of the permutations in order,
We get the following sequence (ie, for n = 3):
1. "123"
2. "132"
3. "213"
4. "231"
5. "312"
6. "321"
Given n and k, return the kth permutation sequence.
Note: Given n will be between 1 and 9 inclusive.
## Analysis
We can generate all permutations until we get the kth one. But it costs $O(n!)$ time.
Another way is to calculate every digit. For example, assuming we are going to solve the problem when n = 3 and k = 5. In fact, because k starts from 1, we need to subtract 1 from it to make it starting from 0. So we are going to find the permutation 4 now. To calculate the first digit, we can calculate it by k % (n – 1)! = 4 / 2! = 2, which is the position of 3 in array [1,2,3]. Now we need to delete 3 from the array, so the array becomes [1, 2]. And k should become 4 % 2! = 0. Now we calculate k / (n – 2)! = 0 / 1 = 0, which is the position of 1 in array [1, 2]. So the second digit should be 1. We need to delete 1 from the array. And there is only one entry left in the array. So the final digit should be 2. Finally we get the permutation: 312.
## Code
I use an ArrayList to save the numbers, which can easily be used to fetch the number and delete it from the list.
## Complexity
The complexity of this algorithm is $O(n)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563910722732544, "perplexity": 391.5978767877767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357984.22/warc/CC-MAIN-20210226205107-20210226235107-00000.warc.gz"} |
https://creighton.pure.elsevier.com/en/publications/beam-energy-dependence-of-charge-balance-functions-from-au-au-col | Beam-energy dependence of charge balance functions from Au + Au collisions at energies available at the BNL Relativistic Heavy Ion Collider
STAR Collaboration
Research output: Contribution to journalArticlepeer-review
13 Scopus citations
Abstract
Balance functions have been measured in terms of relative pseudorapidity (Δη) for charged particle pairs at the BNL Relativistic Heavy Ion Collider from Au + Au collisions at sNN=7.7GeV to 200 GeV using the STAR detector. These results are compared with balance functions measured at the CERN Large Hadron Collider from Pb + Pb collisions at sNN=2.76TeV by the ALICE Collaboration. The width of the balance function decreases as the collisions become more central and as the beam energy is increased. In contrast, the widths of the balance functions calculated using shuffled events show little dependence on centrality or beam energy and are larger than the observed widths. Balance function widths calculated using events generated by UrQMD are wider than the measured widths in central collisions and show little centrality dependence. The measured widths of the balance functions in central collisions are consistent with the delayed hadronization of a deconfined quark gluon plasma (QGP). The narrowing of the balance function in central collisions at sNN=7.7 GeV implies that a QGP is still being created at this relatively low energy.
Original language English (US) 024909 Physical Review C 94 2 https://doi.org/10.1103/PhysRevC.94.024909 Published - Aug 16 2016
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
Fingerprint
Dive into the research topics of 'Beam-energy dependence of charge balance functions from Au + Au collisions at energies available at the BNL Relativistic Heavy Ion Collider'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9918310642242432, "perplexity": 3165.1584218550483}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00516.warc.gz"} |
https://ai.stackexchange.com/questions/9812/what-does-it-mean-by-high-dimensional-state-in-dqn | # What does it mean by high dimensional state in DQN?
Going through the DQN paper, it said the state-space is high dimensional. I am a little bit confused here. Suppose my state is a high dimensional vector of N length where N is a huge number. Let's say I solve this task using Q-learning and I fix my state space to 10 vectors each of N dimensions. Q-learning can easily work with these settings as we need only a table of dimensions 10 x number of actions.
Let's say my state space can have an infinite number of vectors each of N dimensions. In these settings Q-learning would fail as we cannot store Q-values in a table for each of these infinite vectors. Meanwhile on the other hand DQN would easily work as neural networks can generalize for any vector in the state-space.
Let's also say I have a state space of infinite vectors but each vector is now of length 2 i.e. very small dimensional vectors. Would it make sense to use DQN in these settings? Should this state-space be called high dimensional or low dimensional?
Usually when people write about having a high-dimensional state space, they are referring to the state space actually used by the algorithm.
Suppose my state is a high dimensional vector of $$N$$ length where $$N$$ is a huge number. Let's say I solve this task using $$Q$$-learning and I fix my state space to $$10$$ vectors each of $$N$$ dimensions. $$Q$$-learning can easily work with these settings as we need only a table of dimensions $$10$$ x number of actions.
In this case, I'd argue that the "feature vectors" of length $$N$$ are quite useless. If there are effectively only $$10$$ unique states (which may each have a very long feature vector of length $$N$$)... well, it seems like a bad idea to make use of those long feature vectors, just using the states as identity (i.e. a tabular RL algorithm) is much more efficient. If you end up using a tabular approach, I wouldn't call that a high-dimensional space. If you end up using function approximation with the feature vectors instead, that would be a high-dimensional space (for large $$N$$).
Let's also say I have a state space of infinite vectors but each vector is now of length $$2$$ i.e. very small dimensional vectors. Would it make sense to use DQN in these settings ? Should this state-space be called high dimensional or low dimensional ?
This would typically be referred to as having a low-dimensional state space. Note that I'm saying low-dimensional. The dimensionality of your state space / input space is low, because it's $$2$$ and that's typically considered to be a low value when talking about dimensionality of input spaces. The state space may still have a large size (that's a different word from dimensionality).
As for whether DQN would make sense in such a setting.. maybe. With such low dimensionality, I'd guess that a linear function approximator would often work just as well (and be much less of a pain to train). But yes, you can use DQN with just 2 input nodes.
• I have seen some examples where people map their low dimensional vectors to high dimensions by using kernel approximation techniques before feeding it to the neural net. Do you think this is a good approach ? – Siddhant Tandon Jan 5 '19 at 13:36
• @SiddhantTandon That kind of stuff can be done in RL yes (see e.g. chapter 9 of 2nd edition of Sutton and Barto's book)... but I don't have a lot of experience with that personally. That should typically only be necessary with linear function approximation though, not with DNNs. DNNs should already be powerful enough to work with raw features, as long as they're large/deep enough. – Dennis Soemers Jan 5 '19 at 14:10
• Dennis, do you know offhand the paper the OP refers to? (I'd like to link it in the question.) – DukeZhou Nov 14 '19 at 21:12
• @DukeZhou "the DQN paper" should usually be this one: nature.com/articles/nature14236 – Dennis Soemers Nov 15 '19 at 9:42
Yes, it makes sense to use DQN in state space with small number of dimensions as well. It doesn't really matter how big your state dimension is, but if you have state with 2 dimensions for instance you wouldn't use convolutional layers in your neural net like its used in the paper you mentioned, you can use ordinary fully connected layers, it depends on the problem. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738980889320374, "perplexity": 446.69619828402534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00557.warc.gz"} |
http://mathoverflow.net/questions/117867/recovering-shared-eigenvector-set | # Recovering Shared Eigenvector Set
Suppose we are given a set of $M$ pairs $\{(\vec{x}^{(i)},\vec{y}^{(i)})\}$, with $\vec{x}^{(i)}\in\mathbb{R}^N$, $\vec{y}^{(i)}\in\mathbb{R}^N$, $M\gg N$ such that
$\vec{y}^{(i)} = Q^{(i)} \vec{x}^{(i)}$,
where $Q^{(i)}$'s are unknown orthogonal matrices. It is known however that they can all share the same set of eigenvectors:
$Q^{(i)} = U D^{(i)} U^{-1}$
How can $U$, and therefore $Q^{(i)}$'s, be recovered just from $\{(\vec{x}^{(i)},\vec{y}^{(i)})\}$?
My thoughts so far:
I'm hoping this can be reduced to an eigenproblem, or something easy like that, but I don't see how it can be done. The best approach I have in mind involves numerical solution of a nonlinear system of equations: e.g. require, for each $i$ that the elements of $U^{-1} \vec{y}^{(i)}$ and the corresponding elements of $U^{-1} \vec{x}^{(i)}$ have the same moduli (using the fact that the eigenvalues of an orthogonal matrix have moduli of $1$)
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9694922566413879, "perplexity": 163.96325703838468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450767.7/warc/CC-MAIN-20141017005730-00115-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.freemathhelp.com/forum/threads/log-linearization-around-steady-state.113605/ | # Log-linearization around steady-state
#### janka
##### New member
Joined
Nov 30, 2018
Messages
1
Hi all,
I have a problem with solving the following equation. I need to log-linearize it around the steady state so I can use it in DSGE model as a linear equation. However, every time I try to solve it, exponentials xi and phi are always there so, in the end, it's not a linear equation at all. Does anybody know how to solve it and what's the final equation?
Thank you!
* B_0, B_1, xi and phi are parameters
* b above lambda and every m and u are not exponents, just labels
Last edited: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9592475295066833, "perplexity": 801.7475282907657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204857.82/warc/CC-MAIN-20190326054828-20190326080828-00292.warc.gz"} |
https://www.askmehelpdesk.com/mathematics/how-calculate-standard-score-447839.html | I have two questions that seem to be the same. One is if the mean is 3.5 and the standard deviation is .5 and Sally's score is 1.5. How many standard deviations is Sally from the mean? The other is if Billy's score is 5, what is his standard score? Don't both of these questions utilize the same formula? Z = x-means divided by the standard deviation? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9672894477844238, "perplexity": 315.92146213225965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00344-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-4-proportions-percents-and-solving-inequalities-4-1-ratios-and-proportions-problem-set-4-1-page-149/28 | ## Elementary Algebra
Published by Cengage Learning
# Chapter 4 - Proportions, Percents, and Solving Inequalities - 4.1 - Ratios and Proportions - Problem Set 4.1 - Page 149: 28
#### Answer
$x=111$
#### Work Step by Step
Using cross-multiplication and the properties of equality, the value of the variable that satisfies the given equation, $\dfrac{x-6}{7}=\dfrac{x+9}{8} ,$ is \begin{array}{l}\require{cancel} 8(x-6)=7(x+9) \\\\ 8x-48=7x+63 \\\\ 8x-7x=63+48 \\\\ x=111 .\end{array}
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8980355858802795, "perplexity": 4330.664964481532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741578.24/warc/CC-MAIN-20181114020650-20181114042650-00399.warc.gz"} |
http://www.sciforums.com/threads/quantification-of-space-time.158481/ | # Quantification of space-time
Discussion in 'Alternative Theories' started by Thomas P, Dec 23, 2016.
1. ### Thomas PRegistered Member
Messages:
8
Hi,
I present below an alternative version (not a questioning) to the theory of general relativity in order to integrate the accelerating expansion of the universe, the dark matter and to solve the problem of quantum time. To do this, quantifying space-time is essential.
It’s not an easy concept to explain. To do so, I will describe my approach below and have written an article for those who will be interested.
There is a strong similarity (which is not obvious) between the energy of an elementary particle and the energy of a fluid in motion (describes the Bernoulli’s principle):
For a moving particle, the energy comes from two sources: having speed and having mass.
Energy of a moving particle: Energy = motion + mass
Bernoulli's principale (venturi effect) : Ptotal = Pdynamic + Pstatic
This theory has two postulates:
1. Space-time is created by the particles composing it (electrons, higgs boson, possible particles still unknown, etc.). The particles are made of material giving them a volumetric density of material. Therefore, space-time is a kind of ocean of elementary particle.
2. Movement takes place from high densities of material to low densities of material (entropy principle). Therefore, the particles exert a pressure between themselves by "diluting". The pressure forces are equivalent to the energy (the energy increases when the pressure increases, and the pressure increases when the volumetric density of the material increases, Pressure = 1 / volume).
When the dynamic pressure of the particles of a fluid increases, the static pressure of the fluid decreases. This causes a pressure drop.
When the energy of the elementary particles increases, space-time bends.
When an elementary particle accelerates, its mass (its volumetric density of matter) increases and consequently it blocks less the other particles because it takes less “space” (indeed, the quantity of matter for a particle is invariant) . This release of space will generate a pressure drop. This fall is at the origin of the gravity! (low pressure are only attractive)
From the same principle, when the energy of the medium increases, the particles contract and block each other less. They free spaces and are less restrained. As a result, the movement is fluidized as the temperature increases. The effects are comparable to those of the Higgs field!
The speed of a particle depends on its energy and the energy of the particles around it. Thus, we can express the velocity of the particles by a ratio integrating all the results of the pressures forces (0 being a particle at rest and 1 the speed of light).
The inert mass and the gravitational mass are completely equivalent in this theory. The inert mass being generated by a high pressure and the low mass by a pressure drop.
The expansion of the universe is due to the fact that the particles are attracted to a "hyper-universe" less dense than our universe. Particles contract themselves (because their energy increases as they move) and “block” themsleves less and less as the expansion continues. Therefore, the expansion of the universe accelerates.
Black holes can be formed via an intense energy difference.
I have absolutely no pretention, although the subject is very ambitious. Get feedback to build an interesting discussion of physics is my goal.
3. ### Xelasnave.1947Valued Senior Member
Messages:
7,568
Is your idea the same as push gravity?
Can you relate the activity you perceive to the field equations of GR?
Alex
5. ### Thomas PRegistered Member
Messages:
8
Hi Alex,
My idea isn't push gravity, it's very different.
I express an idea and not a successful theory because I have not mathematical level for that.
I express the contraction of the particles relative to Lorentz factor in my article.
I propose an idea that seems relevant conceptually. It physicists are interested in developing the idea, I would be very interested.
I'm sorry for the quality of my English
Thomas
7. ### danshawenValued Senior Member
Messages:
3,950
You might be interested to know:
In other words, it is possible to demonstrate mathematically that:
volume of a sphere of radius r = twice the volume of the same sphere
This obviously contradicts basic geometrical intuition, among other things, and so you should be very careful about making generalizations involving geometric volumes.
This paradox was pointed out to me by my PhD math youngest son, who incidentally was briefly an associate of Andre Weil of Fermat's last theorem fame. If he had found a flaw in the logic, he would have told me. Since he didn't, you can be certain that it could take years, perhaps centuries for anyone else to puzzle out any flaws in this logic as well.
Last edited: Dec 26, 2016
8. ### quantum_waveContemplating the "as yet" unknownValued Senior Member
Messages:
6,626
The paradox can be solved by the difference in the porosity of the two reassembled spheres relative to the original. From the Wiki, "The intuition that such operations preserve volumes is not mathematically absurd and it is even included in the formal definition of volumes. However, this is not applicable here, because in this case it is impossible to define the volumes of the considered subsets, as they are chosen with such a large porosity. Reassembling them reproduces a volume, which happens to be different from the volume at the start."
The paradox that two spheres of equal volume to the original can result from the reassembly of the various pieces is true, but the increase in volume occurs as a factor of the increase in porosity.
danshawen likes this.
9. ### danshawenValued Senior Member
Messages:
3,950
If all three dimensions related to volume are light travel time (and in the physical universe they are), then any arbitrary volume will be not only porous, but Lorentz contractable in an infinitude of paired directions as well. Volume is not defined. Even a point that is an origin for a coordinate system has its limits, because it cannot be endowed with infinite inertia, even and especially if it contains all mass/energy in the universe.
So can porosity also explain the same volumetric paradox for a cube? Almost certainly.
All geometry is subject to relativistic effects due to proximity to other bound, unbound energy and relative motion. Even for mathematical ideas hatched entirely from symbolic constructs, the choices are still incomplete or inconsistent. The sphere volume paradox is an example of the latter, resulting from a volume construct derived of points having zero density and no constraints on relative proximity that derives of that property.
10. ### quantum_waveContemplating the "as yet" unknownValued Senior Member
Messages:
6,626
It is true, and there is clearly a parallel that can be drawn with our Big Bang arena. It starts with a tiny volume, and as it expands it could be said that its "porosity" increases. That is too simple in regard to a meaningful explanation for changing volume of the observable universe, but if we were to equate energy density to porosity, then it works for me.
danshawen likes this. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8774091005325317, "perplexity": 798.3054309442033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348502204.93/warc/CC-MAIN-20200605174158-20200605204158-00599.warc.gz"} |
http://math.stackexchange.com/tags/geometric-group-theory/hot | # Tag Info
19
$\langle x,y \; | \; x^2=y^3=1 \rangle \cong \operatorname{PSL}_2(\mathbb Z)$ and this isomorphism identifies G with $\operatorname{PSL}_2/T^7=1$ (where $T:z\mapsto z+1$). Result is the symmetry group of the tiling of the hyperbolic plane. From this description one can see that G is infinite (e.g. because there are infinitely many triangles in the tiling and ...
14
$F_2$ acts on a certain tiling of the hyperbolic plane. It looks sort of like this: The above tiling is acted on by the modular group $\Gamma \cong \text{PSL}_2(\mathbb{Z})$, which naturally sits as a subgroup inside of the full group $\text{PSL}_2(\mathbb{R})$ of isometries of the hyperbolic plane. Abstractly, this group is the free product $C_2 \ast ... 13 Grigory has already answered your particular question. However, I wanted to point out that your question "How do you prove that a group specified by a presentation is infinite?" has no good answer in general. Indeed, in general the question of whether a group presentation defines the trivial group is undecidable. 11 First consider an unpainted$2\times 2\times 2$Rubik's cube: What is the "symmetry group"? Before we can discuss the symmetry group of this cube (or any Rubik's cube), we must be clear about what operations are allowed. There are three pertinent questions: Do rigid rotations of the cube count as "symmetries"?(If no, we must somehow exclude them.) Are ... 10 As I said in the comments above, this has a positive solution due to a recent preprint of Lars Louder, which can be found here. The paper proves that surface groups have a "single Nielsen equivalence class of generating$2g$-tuples". I'll explain what this means, and then I will explain why this solves the problem. (In the comments below, Lars Louder has ... 10 The answer to your question is yes: See Bekka–de la Harpe–Valette, Kazhdan's property$(T)$, page 69 (the standard reference on property (T), freely available on Bekka's homepage): The proof is not very difficult, and it is given in a clear fashion in the book, so it doesn't make much sense to reproduce it here. Note also ... 8 Here is a different way of doing the second problem. How many homomorphisms are there from$G = \langle a,b,c \mid a^3b^3 \rangle$to$C_6$(cyclic group of order$6$)? We can map$a$and$c$to each of the$6$elements independently, and then we must map$b$to an element whose cube is the same as the cube of the image of$a^{-1}$. In$C_6 = \langle x ...
7
Solving the relation for $c$, we conclude that there is a homomorphism $\langle\, a,b,c\mid a^2cb^3\,\rangle\to \langle a,b\rangle$ given by $a\mapsto a$, $b\mapsto b$, $c\mapsto a^{-2}b^{-3}$, which is an isomorphism. There exists a homomorphism $\langle \,a,b,c\mid a^3b^3\rangle \to \mathbb Z/3\mathbb Z\times \mathbb Z/3\mathbb Z$ given by $a\mapsto ... 7 I'm an outsider, so maybe what I'm saying is silly. I would just think about the Tits alternative application since that really gives all the intuition. To apply it---and we'll just look at the complex case---you should produce two matrices$A$and$B$that have the following properties.$A$has a dominant eigenvalue$\lambda$(i.e., the eigenspace of ... 7 This counterexample seems to work for$\frac{1}{2}$, and beyond: $$x_g:=Cr^{\ell(g)}\qquad\forall g\in F_2$$ for the right choices of constants $$C>0\qquad\mbox{and}\qquad 0\leq r<\frac{1}{\sqrt{3}}.$$ Here$\ell(g)$denotes the word length of the element$g$, i.e. the minimal number of letters needed in the alphabet ... 7 This might not be the best argument, but it seems to work. First, no nontrivial finite, connected, vertex-transitive graph has a cutpoint (nontrivial here means something like cardinality at least$3$to avoid having to define cutpoint carefully). If it did, then every vertex would be a cutpoint, so you could inductively build an arbitrarily long finite ... 7 You can do this using metric currents in the sense of Ambrosio-Kirchheim. This is a rather new development of geometric measure theory, triggered by Gromov and really worked out only in the last decade. I should warn you that this is rather technical stuff and nothing for the faint-hearted. Urs Lang has a set of nice lecture notes, where you can find most ... 7 More generally, any group$G$defined by a finite presentation with more generators than relations is infinite - in fact$G/[G,G]$is infinite. That follows from the proof of the fundamental theorem of abelian groups. You can prove it directly, by showing that there is a nontrivial epimorphism$\phi$onto${\mathbb Z}$. Let$\phi:G \to {\mathbb Z}$be any ... 6 Let$z = x^2 = y^3$. This element clearly commutes with$x$and$y$. Therefore$z$lies in the center and$\langle z \rangle$is a normal subgroup. We have: $$G / \langle z \rangle = \langle x, y \mid x^2 = y^3 = 1\rangle.$$ Now the abelianization of this quotient group is$\Bbb Z_2 \times \Bbb Z_3$, which is clearly non-trivial. 6 You need to modify the definition of$\Psi$: put $$\Psi(t^na^xt^{-n})=xk^n$$ (instead of$\Psi(t^na^xt^{-n})=xk^{-n}$). The homomorphic property should follow easily along the lines of the computations given in the original question. Alternatively, you could replace$t$by$t^{-1}$in the definition of the Baumslag-Solitar group, in which case the$\Psi$... 6 If you think of$F$as acting by left multiplication on the vertices of$\Delta$, then the natural interpretation of the quotient graph is as a graph whose vertices are the distinct cosets$Gg$of$G$in$F$, with an edge (labelled$x$) from$Gg \to Ggx$for each$x \in \{a,a^{-1},b,b^{-1}\}$. So, if you ignore the labels, it is still a an infinite ... 5 The amalgamation only identifies two isomorphic subgroups, it doesn't perform any further quotienting. So the group you are amalgamating over has to be isomorphic to a subgroup of both$A$and$B$. Thus your first statement is correct. You cannot form an amalgamated free product of$F_2$and a finite group with non-trivial amalgamation, because$F_2$... 5 Remark: For those unacquainted with Property$(T)$, the standard reference is the freely available book by Bekka-de la Harpe-Valette. Since groups with property$(T)$are finitely generated, we can assume that the rank of the free group$F$is finite. If a group$G$has property$(T)$then so does every quotient(1), recall also my answer to your ... 5 Gromov in his original 1987 book (Section 3.1) wrote a classification for arbitrary isometric group actions on hyperbolic spaces (with no further assumption) into 5 main classes. It goes at follows (the terminology is borrowed from here) 1: bounded: orbits are bounded 2: horocyclic: orbits are unbounded,$G$acts with no hyperbolic isometry (hence there's ... 5 The argument I would like to propose is as follows: Fix a wedge of circles representing the free group F. Consider the cover space X representing the normal subgroup N. This is a regular cover space, which implies that the quotient group F/N acts transitively on X. As N is finitely generated, then the cover space X which is an infinite graph has the ... 5 (Of course the trivial group acts freely on all spheres! Let's assume$G$is finite and nontrivial.) Since$G$is finite, the action is certainly properly discontinuous, and since the action is given to be free, it follows that the quotient$q: X \rightarrow X/G$is a finite covering map. Recalling that in any$n$-sheeted covering map$q: X \rightarrow Y$... 5 Yes, your interpretation is correct. There are no further relations since there are no loops in the Cayley graph except those deducible from the relations$a^2 = b^2 = e$. The group$\langle a, b \mid a^2, b^2 \rangle$is known as the infinite dihedral group. 5 Take$G = S_{3}$,$H = U = \langle (12) \rangle = \{ 1, (12) \}$. As a system of right coset representatives, take$R = \{ 1, (123), (132) \}$. Now choose$u = (12) \in U$, and$s = (123) \ne (132) = s'$. We have $$u s H = (12) (123) H = (13) H = \{ (13), (132) \} = (132) H = s' H.$$ Barring mistakes. 5 This is probably the explanation you don't like, since it involves using the Lie structure, but it seems incredibly simple to me and I don't see what you could ask for that is simpler. Let$G$be a Lie group.$G$acts on itself by conjugation, fixing the identity, and thus$G$acts on$T_e(G)$, the tangent space at the identity. Write$Ad(g)$for the action ... 5 Inspiration: in the free product$H=C_2*C_2=\langle a,b|a^2,b^2\rangle$the element$ab$has infinite order. And furthermore the only nontrivial coset of$\langle ab\rangle$is$b\langle ab\rangle$(every word in$H$is either a product of$ab$s or it is$b$followed by a product of$ab$s), so$\langle ab\rangle$has index two. The given information in this ... 5 Given a group$G$and a topological space$X$, an action of$G$on$X$is, formally, a homomorphism from$G$to the group$\text{Homeo}(X)$of all homeomorphisms from$X$to itself. This can also be expressed with more notation as a function which associates to each$g \in G$and each$x \in X$an element$g \cdot x \in X$subject to various properties: (1) ... 5 If$G,H$are groups then$H$is a double cover of$G$iff$H$has a normal subgroup$K$of order 2 such that$K \leq [H,H] \cap Z(H)$and$H/K \cong G$. In other words,$H$has an element$z$that (1) commutes with every element of$H$, (2) can be written as a product of commutators$z=\left(h_1^{-1} h_2^{-1} h_1 h_2\right)\cdots\left(h_{2n-1}^{-1} ...
5
There is an expression for this, if $r$ is an integer. See projecteuclid. Also, OEIS A016725. I am not a number theorist. Just discovered this while searching for something else.
4
It's open too. Indeed, the question whether $Aut(F_n)$ has the Haagerup Property is open as well (actually for all $n\ge 2$). So this one (for $n\ge 4$) is just intermediate between the two. [For an arbitrary infinite discrete group $G$ we have implications: $G$ has Kazhdan's T $\Rightarrow$ $G$ has an infinite subgroup with Kazhdan's T $\Rightarrow$ $G$ ...
4
Yes if you assume a group has a normal subgroup $Z$ of finite index which is infinite cyclic then it's much easier. If $Z$ is central, then a classical theorem shows that the derived subgroup is finite. So modding out by the derived subgroup, you get an abelian group, and eventually get that the group has a homomorphism onto $\mathbf{Z}$ (with finite kernel, ...
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.997255802154541, "perplexity": 629.9540428785441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736678861.8/warc/CC-MAIN-20151001215758-00231-ip-10-137-6-227.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/211403/vector-project-onto-subspace?answertab=oldest | Vector Project onto Subspace
So the question is:
Let S be the subspace of $\mathbb{R}^3$ spanned by the vectors $u_2 = \begin{pmatrix} \frac{2}{3}\\\frac{2}{3}\\\frac{1}{3}\end{pmatrix} u_3 = \begin{pmatrix} \frac{1}{\sqrt{2}}\\\frac{-1}{\sqrt{2}}\\0\end{pmatrix}$. Let $x=(1,2,2)^T$. Find the projection p of x onto S. Show that $(p-x)\perp u_2$ and $(p-x)\perp u_2$.
I understand how to show that they are perpendicular and I actually found the answer for the projection. It's:
$\begin{pmatrix} \frac{23}{18}\\\frac{41}{18}\\\frac{8}{9} \end{pmatrix}$
The problem is, i have no idea why i am doing what I am doing, I just followed my notes. Can someone explain why I was supposed to do:
$(xu_2)u_2 + (xu_3)u_3$
To find p.
-
So, the orthogonal projection. (This has nothing to do with projective space BTW.) Notice $\mathcal{B}=\{u_2,u_3\}$ forms an orthonormal basis for the subspace spanned by the two vectors, hence if $p$ is in the space it can be written as $p=\alpha u_2+\beta u_3$, and dot-producting this expression with $u_2$ (respectively $u_3$) gives $\alpha=p\cdot u_2$ (respectively $\beta=p\cdot u_3$). Furthermore, the orthogonality conditions tell us that $p\cdot u_2=x\cdot u_2$ and $p\cdot u_3=x\cdot u_3$, so there you have it. – anon Oct 12 '12 at 2:02
@anon i kind of had you until $p=\alpha u_2 + \beta u_3$ and then the whole "dot-producting this expression" lost me. Can you rephrase that please? – Charlie Yabben Oct 12 '12 at 2:08
Sure. Since $u_2,u_3$ are basis vectors for the subspace that $p$ is an element of, $p$ can be written as a linear combination of $u_2$ and $u_3$, say with coefficients $\alpha,\beta$. That is, $p=\alpha u_2+\beta u_3$ for some $\alpha,\beta$. We can solve for the coefficients using dot products and the fact that $u_2,u_3$ are orthonormal. For instance, to solve for $\alpha$, take the dot product with $u_2$: $$p\cdot u_2=(\alpha u_2+\beta u_3)\cdot u_2=\alpha(u_2\cdot u_2)+\beta(u_2\cdot u_3)=\alpha(1)+\beta(0)=\alpha,$$ i.e. $\alpha=p\cdot u_2$ and similarly $\beta=p\cdot u_3$. Anything else? – anon Oct 12 '12 at 2:14
@anon omg that is so freakin easy, i don't know why i didn't get it before, that makes perfect sense. So out of curiosity a vector dotted with itself will always be 1? $u_2 \cdot u_2 =1$? P.S: Please submit an answer, I would love to accept it – Charlie Yabben Oct 12 '12 at 2:19
A vector dotted with itself will be $1$ if and only if it is a "unit" vector, which in Euclidean space means of length one. It just so happens that the $u_2$ and $u_3$ you were given were each unit vectors and were orthogonal to each other; if these facts weren't the case, this computation would be much more involved. Now that you understand the solution (hopefully), you can submit an answer to your own question and have it peer reviewed and graded and constructively criticized and so forth - much better in my opinion. – anon Oct 12 '12 at 2:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9337807893753052, "perplexity": 148.40699525951442}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160918.28/warc/CC-MAIN-20160205193920-00031-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/23988-last-one.html | # Math Help - Last one
1. ## Last one
How do I prove that the variance of alpha in the equation:
y hat= alpha hat+beta hat*x(i)
is:
MSE(1/n+(x bar^2/sum of (x(i)-xbar)^2))
Assuming this is normal how would i got about actually proving that this is the variance?? Do I need to start with the expected value of y(i) and manipulate it?
(MSE is the estimator of sigma^2)
2. Originally Posted by Jar23
How do I prove that the variance of alpha in the equation:
y hat= alpha hat+beta hat*x(i)
is:
MSE(1/n+(x bar^2/sum of (x(i)-xbar)^2))
Assuming this is normal how would i got about actually proving that this is the variance?? Do I need to start with the expected value of y(i) and manipulate it?
(MSE is the estimator of sigma^2)
Write down the equation for $\hat{\alpha}$ then calculate the variance.
RonL | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.918699324131012, "perplexity": 1722.8087017246578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065330.34/warc/CC-MAIN-20150827025425-00009-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://math.stackexchange.com/users/51230/kasper?tab=activity | # Kasper
less info
reputation
1738
bio website location age member for 1 year, 9 months seen 1 hour ago profile views 1,270
# 957 Actions
Sep17 awarded Popular Question Sep15 comment Proving that: 800 + n log n + 200√ n log n = Θ(n log n) You may like a little tool I made, to quickly find and type those symbols: kasperpeulen.github.io/PressAndHold/index.html Sep15 comment Proving that: 800 + n log n + 200√ n log n = Θ(n log n) meta.math.stackexchange.com/questions/5020/… for information how to typeset math formulas here Sep14 revised Find domain of the function $f(x)= (x^3+3x^2-4x)^{-1/4}$ added 2 characters in body; edited title Sep10 revised Show that if $V$ is an irreducible finite dim. representation of $A$, then $z \in Z(A)$ acts in $V$ by multiplication by some scalar $\chi_V(v)$. edited body Sep10 accepted Show that if $V$ is an irreducible finite dim. representation of $A$, then $z \in Z(A)$ acts in $V$ by multiplication by some scalar $\chi_V(v)$. Sep10 accepted What is wrong in this proof where I show that $\text{End}_k(k^2)$ is a division ring? Sep10 comment What is wrong in this proof where I show that $\text{End}_k(k^2)$ is a division ring? I get it thanks ! Sep10 comment What is wrong in this proof where I show that $\text{End}_k(k^2)$ is a division ring? So $AR$ is not a two sided ideal ? Sep10 asked What is wrong in this proof where I show that $\text{End}_k(k^2)$ is a division ring? Sep10 asked Show that if $V$ is an irreducible finite dim. representation of $A$, then $z \in Z(A)$ acts in $V$ by multiplication by some scalar $\chi_V(v)$. Sep7 accepted Show that the homomorphism $\lambda: k[X] \to End_k(V) : p \mapsto p(A)$ corresponding to the $k[X]$-module strucutre of $V$ has a nontrivial kernel. Sep7 asked Show that the homomorphism $\lambda: k[X] \to End_k(V) : p \mapsto p(A)$ corresponding to the $k[X]$-module strucutre of $V$ has a nontrivial kernel. Sep6 awarded Popular Question Sep5 asked Why does Euclid write “Prime numbers are more than any assigned multitude of prime numbers.” Jul18 comment Why is this definition of complex numbers “informal”? So why is what I say close to formal, but not exactly formal ? Jul18 asked Why is this definition of complex numbers “informal”? Jul7 comment How to make a perpendicular construction in 3 moves? @JiK I'm fine with it if you want to post another hint. Jul7 revised How to make a perpendicular construction in 3 moves? added 97 characters in body Jul7 answered How to make a perpendicular construction in 3 moves? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892039656639099, "perplexity": 769.3655777810351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135930.79/warc/CC-MAIN-20140914011215-00181-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://blog.rguha.net/?tag=pig | # Pig and Cheminformatics
Pig is a platform for analyzing large datasets. At its core is a high level language (called Pig Latin), that is focused on specifying a series of data transformations. Scripts written in Pig Latin are executed by the Pig infrastructure either in local or map/reduce modes (the latter making use of Hadoop). Previously I had […] | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8183925747871399, "perplexity": 3156.676504061794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362918.89/warc/CC-MAIN-20211203182358-20211203212358-00526.warc.gz"} |
https://eprint.iacr.org/2016/170 | ## Cryptology ePrint Archive: Report 2016/170
Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning
Ran Raz
Abstract: We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager and shows that for some learning problems a large storage space is crucial.
More formally, in the problem of parity learning, an unknown string $x \in \{0,1\}^n$ was chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2)...$, where each $a_t$ is uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner product of $a_t$ and $x$, modulo 2. We show that any algorithm for parity learning, that uses less than $n^2/25$ bits of memory, requires an exponential number of samples.
Previously, there was no non-trivial lower bound on the number of samples needed, for any learning problem, even if the allowed memory size is $O(n)$ (where $n$ is the space needed to store one sample).
We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length $n$, as well as time complexity of $n$ per encryption/decryption of each bit, and is provenly and unconditionally secure as long as the attacker uses less than $n^2/25$ memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.
Category / Keywords: bounded storage model | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8570608496665955, "perplexity": 401.4480094152269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686077.22/warc/CC-MAIN-20170919235817-20170920015817-00698.warc.gz"} |
http://math.stackexchange.com/questions/204478/generating-series-for-the-set-of-all-compositions-which-have-an-even-number-of-p | # Generating Series for the set of all compositions which have an even number of parts.
I'm having trouble showing that the generating series for all compositions which have an even number of parts.
I'm given that each part congruent to 1 mod 5 is equal to:$$\frac{1-2x^5+x^{10}}{1-x^2-2x^5+x^{10}}$$
If you could help me out that would be great!
-
The function you write is the generating function for compositions with an even number of parts and each part $\equiv1\bmod5$. – joriki Sep 30 '12 at 8:18
The question actually refers to having both an even amount of parts, and the congruency clause. – user43646 Oct 4 '12 at 19:38
The number of compositions of $n$ with exactly $k$ parts is $\dbinom{n-1}{k-1}$, so the generating function for the number of compositions with an even number of parts is
$$g(x)=\sum_{n\ge 0}\left(\sum_{k\ge 0}\binom{n-1}{2k-1}\right)x^n\;.\tag{1}$$
$\displaystyle\sum_{k\ge 0}\binom{n-1}{2k-1}$ is simply the number of subsets of $\{1,\dots,n-1\}$ with an odd number of elements. For $n\le 1$ that’s clearly $0$, so we can rewrite $(1)$ as
$$g(x)=\sum_{n\ge 2}\left(\sum_{k\ge 0}\binom{n-1}{2k-1}\right)x^n=x^2\sum_{n\ge 2}\left(\sum_{k\ge 0}\binom{n-1}{2k-1}\right)x^{n-2}=x^2\sum_{n\ge 0}\left(\sum_{k\ge 0}\binom{n+1}{2k-1}\right)x^n\;.$$
Now $\displaystyle\sum_{k\ge 0}\binom{n+1}{2k-1}$ is the number of subsets of $\{1,\dots,n+1\}$ having an odd number of elements, and since $n+1\ge 1$, this has a simple closed form that you should know. Let’s say that that closed form is $f(n)$. Then you have
$$g(x)=x^2\sum_{n\ge 0}f(n)x^n\;,$$
where you should be able to recognize the generating function for $\displaystyle\sum_{n\ge 0}f(n)x^n$ fairly easily.
Added: If you follow the convention that $0$ has one composition, of size $0$, then $g(x)$ should have a constant term $1$ in addition to the terms given by $(1)$.
-
I think you missed the composition of $0$ into $0$ parts. At least Wikipedia says this is conventionally counted as a composition; in any case $\binom{n-1}{k-1}$ doesn't give the number of compositions with exactly $k$ parts for $n=0$. – joriki Sep 30 '12 at 7:57
@joriki: I didn’t learn it that way, so it didn’t occur to me. But I learned my finite combinatorics late and unsystematically, so that doesn’t mean much. Fortunately, that would just add $1$ to my $g(x)$, so it’s not a major problem. I’ll add a note to that effect in a bit. – Brian M. Scott Sep 30 '12 at 8:05
How come the congruency condition wasn't mentioned anywhere in this answer? – Overload119 Oct 1 '12 at 23:19
@Overload119: You mean the bit about parts congruent to $1\bmod 5$? Because it’s irrelevant to this approach. – Brian M. Scott Oct 2 '12 at 1:14
Here's another method. First, you should convince yourself that the generating function for compositions with $k$ parts is given by
$$(x + x^2 + x^3 + \ldots)^k.$$ This is because choosing a composition $k_1 + k_2 + \ldots + k_m$ corresponds to choosing $x^{k_1}$ in the first factor, $x^{k_2}$ in the second, and so on. This is the same way you would multiply out the power series - choosing every possible $k$-tuple of terms, one from each factor, and multiplying them, and then summing the result.
Now simplify this expression and sum this over all even $k$ - it will become a nice rational generating function.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8611834645271301, "perplexity": 133.0461372070457}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166650.78/warc/CC-MAIN-20160205193926-00199-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/how-to-determine-the-movement-of-electrons-between-atoms.866915/ | # How to determine the movement of electrons between atoms
1. Apr 13, 2016
### gabede
Let's say I want to determine what will happen during a chemical reaction. My current reasoning is that the electrons will be more likely to move from a less electronegative atom first. For instance, cesium is much more reactive than potassium, because its electronegativity is much lower, allowing it to give up electrons more easily. Is it this simple for any chemical reaction between complex molecules and simple atoms, or are there other key factors I should consider? Also, when determining what product a reaction will create, I am guessing that the most stable product will be the one that takes the least energy to make. As an example, if I decompose aluminum hydroxide (Al(OH)3), I believe it will be more likely to decompose into AlHO2 and water rather than aluminum metal, hydrogen gas, and oxygen gas, because it requires the movement of less electrons from the octet-satisfied oxygen ions. All I wish to know is if my understanding is correct, and if there is anything very important I need to know, please do correct me.
2. Apr 14, 2016
### Staff: Mentor
This is not trivial - reactions are not driven just by the electron movements, and electrons don't just move to the higher electronegativity elements.
If it were possible to describe chemistry as a set of such simple rules there would be nothing to study for for years.
3. Apr 14, 2016
### gabede
Could you please give some examples, because if I am wrong I don't want to remain that way. I'm sure you understand that as a fellow scientist, and I believe you agree with the ability of people to spread knowledge freely. You don't have to tell me everything in the book, but it would be nice if you actually tried to point me in the correct direction. I trust this community, but sometimes the responses I get are very discouraging to the gaining of knowledge. You must be intelligent enough to correct me if you are calling out my failures. Thank you for your time.
4. Apr 14, 2016
### ogg
"You must be intelligent enough to correct me if..."?? Intelligence has little correlation with civility. For instance, implying that any post that doesn't satisfy you isn't "intelligent" is plain rude. But you're no doubt intelligent enough to know that. Either you believe that we should spend our time repeating the things that are easily found in textbooks and courses, or you think that a couple of sentences will "correct" you. If the most "stable" (thermodynamically speaking) state was the end state of every chemical reaction, then all carbon existing on Earth would be carbon dioxide, and obviously it is not. Chemistry can't be understood in *detail* without understanding quantum mechanics. Starting with simple molecules (or atoms), we can think of a chemical reaction as progressing along a "reaction coordinate". That is, a graph of "progress" vs energy. (You can think of "progress" as time, to be even more simplistic) So. a graph with reactants on the left and product(s) on the right. Calculate the energy of the reactants and plot that as a point on the left, do the same calculation for the product(s) and put a point on the far right. There's three ways that those 2 pts can be related: they're at the same energy, reactants are higher, or reactants have lower energy. (What would it mean if a reaction occurred leaving the products with more energy?) Exothermic reactions are far more likely (spontaneous)...Anyway, the key concept is "excited state". Between the reactants and products is an excited state configuration. You can put that point anywhere between the other two on the progress axis, and anywhere on the energy axis (within reason) but... The term "excited state" implies that some extra energy has been added to the system, which isn't too far wrong as a starting point, so the ES should be above the reactants. It can be above the products, too, which would mean the reaction can't proceed without getting extra energy from somewhere... Anyway, there are two important cases: one in which the E.S. is below the Products, and one in which it is above. Acid Base reactions in water occur without a E.S. above the Products. Most organic reactions occur with an E.S. above the Product's energy. From the E.S., the final products are determined by both energetic factors AND steric factors. The drop from the E.S. may be less than the energy used from the Reactants, the same, or more. Only if it is more do we consider the reaction to be exothermic. But as already said, many reactions end with products above the reactants in energy. There are plenty of sources to learn more about chemical kinetics (and chemical thermodynamics).
5. Apr 14, 2016
### gabede
Thank you for telling me what I should do. I apologize for being rude, it's just that I am very uncomfortable when I don't know something. I don't think I should have to feel this way, but thank you for telling me that I can find this in textbooks. I was simply unaware, and my fault has been corrected. That's all I wanted.
6. Apr 15, 2016
### HeavyMetal
This is not true. The "E.S." as you call it -- though, transition state is the more appropriate term -- is never lower in energy (more stable) than the products. As a matter of fact, this is completely counterintuitive! One would think that if there were to be progress along a reaction coordinate, and the system encountered some lower energy state on its way to the products, that this conformation would be the true product. In other words, at the geometry of the "products" in the scheme that you mentioned, there would exist a negative gradient along some coordinate $q_i$, i.e. $\frac{\partial^2f}{\partial q_i^2}<0$. This is never the case! In fact, according to the variational principle, there should always be a positive gradient along all coordinates at the best structure for the given model.
Borek is correct. What you are saying is, unfortunately, indeed simplifed. We use supercomputers and high precision experiments to elaborate upon exactly this. The problem is that chemistry is nasty due to the fact that reactivity depends on individual molecules/atoms, which are many-body problems in themselves. When you consider the interaction between just two of these many-body systems, the problem becomes even more challenging. In wave function theory (a 3N-dimensional interacting model), this problem is unwieldy. In density functional theory (a 3-dimensional non-interacting model), we encounter problems such as mixing of quantum states, and necessary treatments of quantum subsystems as being open, fluctuating in their number of electrons. These approximations leads to ugly results -- discontinuities in parts of the framework needed to describe essential parts of the picture, and interpretively challenging physical results such as non-integer numbers of electrons.
The point is, chemistry is often a subjective science (I was a chemistry major at one point, so I empathize with this), and it is most commonly the job of the theoretical or quantum chemist to worry about the technicalities of the aforementioned issues. If you are a motivated chemist who was as bothered with this as I was, I would learn a bit more about quantum chemistry (see Levine for a popular and accessible description, or Szabo and Ostlund for a more rigorous treatment).
Importantly, if you want to understand some of these concepts a bit better at the more fundamental level, you will need to understand the definition of some of these terms such as electronegativity, which has been interpreted as the negative of the chemical potential,1 i.e. $\chi=-\mu=-\Big(\frac{\partial E[\rho]}{\partial N}\Big)_{\nu_{ext}}$
I would urge you to check out these two papers for more info:
1: http://scitation.aip.org/content/aip/journal/jcp/68/8/10.1063/1.436185
2: http://www.pnas.org/content/83/22/8440.full.pdf
7. Apr 15, 2016
### gabede
Obviously, I need to do some research, because I do not understand any of your explanations. Thank you for actually explaining how these processes work, and enlightening me on the fact that I am of a lower faculty of understanding.
Draft saved Draft deleted
Similar Discussions: How to determine the movement of electrons between atoms | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8400560617446899, "perplexity": 603.7062579810721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104204.40/warc/CC-MAIN-20170818005345-20170818025345-00266.warc.gz"} |
http://www.zazzle.com/seamonkey+gifts | Showing All Results
60 results
Related Searches: project, mozilla, browser
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
No matches for
Showing All Results
60 results | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8230992555618286, "perplexity": 4359.52785674619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163053883/warc/CC-MAIN-20131204131733-00074-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://jaac.ijournal.cn/ch/reader/view_abstract.aspx?journal_id=jaac&file_no=201905060000001&flag=3 | ### For REFEREES
Three radial positive solutions for semilinear \\ elliptic problems in \mathbb{R}^N wang su yun,Zhang Yan hong,Ma ru yun Keywords:Semilinear elliptic problem, radial positive solutions, eigenvalue, bifurcation, connected component. Abstract: This paper is concerned with the semilinear elliptic problem \left\{ \aligned &-\Delta u=\lambda h(|x|)f(u) \ \ \ \ \ \ \ \ \ \ \ \text{in}\ \mathbb{R}^N,\& u(x)>0\hskip 3cm \ \text{in}\ \mathbb{R}^N,\&u\to 0 \hskip 3cm \ \ \ \ \text{as}\ |x|\to \infty,\\endaligned \right. where\lambda$is a real parameter and$h\$ is a weight function which is positive. We show the existence of three radial positive solutions under suitable conditions on the nonlinearity. Proofs are mainly based on the bifurcation technique. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976306557655334, "perplexity": 1753.832190862653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653247.25/warc/CC-MAIN-20191014124230-20191014151730-00301.warc.gz"} |
https://proofwiki.org/wiki/Mathematician:Hippocrates_of_Chios | Mathematician:Hippocrates of Chios
Mathematician
Mathematician, geometer and astronomer.
The first to write a systematic textbook on geometry, Elements of Geometry, only a fragment of which survives.
Invented the technique of reduction, that is, transforming a mathematical problem into a more general, easily solvable one.
Demonstrated that the problem of Doubling the Cube is equivalent to finding the Cube Root of 2.
The first geometer to work out the area of a curvilinear figure (that is, the Lune of Hippocrates).
May have been a pupil of Oenopides of Chios.
Not to be confused with his approximate contemporary Hippocrates of Cos, the famous doctor.
Greek
History
• Born: c. 470 BCE, Chios (now Khios), Greece
• Died: c. 410 BCE
Theorems and Definitions
Results named for Hippocrates of Chios can be found here.
Publications
• Elements of Geometry
Critical View
It is well known that people brilliant in one particular field may be quite foolish in most other things. Thus Hippocrates, though skilled in geometry, was so stupid and spineless that he let a tax collector of Byzantium cheat him out of a fortune.
-- Aristotle | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8761769533157349, "perplexity": 4447.9441725555525}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00588.warc.gz"} |
https://paulschreiber.com/blog/2011/02/15/only-if-i-have-an-equation-editor/ | Only if I have an equation editor
Based on the font, that’s obviously LaTeX, not Word Equation editor. The correct answer is $\frac{1}{\lambda}(intC$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99399733543396, "perplexity": 1136.2748247210836}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947033.92/warc/CC-MAIN-20180424174351-20180424194351-00340.warc.gz"} |
http://mathhelpforum.com/math-topics/51206-logical-equivalences.html | # Math Help - Logical equivalences
1. ## Logical equivalences
Use the logical equivalences to show that:
(a) (¬p → (q → r)) ≡ (q → (p V r))
(b) ¬(p → ¬q) & ¬(p V q) is a contradiction (i.e. always false).
(c) (p V q) & (¬p V r) → (q V r) is a tautology (i.e. always true)
2. Hello, captainjapan!
$(a \to b) \;\equiv\: \sim a \vee b$ . I call it ADI (alternate definition of implication).
$(a)\;\;[\sim p \to (q \to r)] \: \equiv\: [q \to (p \vee r)]$
On the left side we have:
. . $\begin{array}{ccc}\sim p \to (q \to r) & & \text{Given} \\ \\ [-4mm]
p \vee (q \to r) & & \text{ADI} \\ \\ [-4mm]
p \vee (\sim q \vee r) & & \text{ADI} \\ \\ [-4mm]
\sim q \vee (p \vee r) & & \text{comm., assoc.} \\ \\[-4mm]
q \to (p \vee r) & & \text{ADI} \end{array}$
$(b)\;\;\sim (p \to \; \sim q)\: \wedge \sim(p \vee q)$ is a contradiction (i.e. always false).
. . $\begin{array}{ccc}
\sim(p \to \:\sim q)\; \wedge \sim(p \vee q) & & \text{Given} \\ \\[-4mm]
\sim(\sim p \:\vee \sim q) \;\wedge \sim(p \vee q) & & \text{ADI} \\ \\[-4mm]
(p \wedge q) \wedge (\sim p \:\wedge \sim q) & & \text{DeMorgan} \\ \\[-4mm]
(p \:\wedge \sim p) \wedge (q \:\wedge \sim q) & & \text{comm, assoc.} \\
f \wedge f \\ \\[-4mm]
f
\end{array}$
3. Originally Posted by Soroban
$(a \to b) \;\equiv\: \sim a \vee b$ . I call it ADI (alternate definition of implication).
FYI: In formal logic it is called Material Implication (Impl) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9311051964759827, "perplexity": 4136.557483955937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://www.robbievanaert.com/publication/ci_gamma/ | # Comparing confidence intervals for Goodman and Kruskal's gamma coefficient
### Abstract
This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal’s coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the profile likelihood CI, and the score CI for Goodman and Kruskal’s gamma, under several conditions. The choice for Goodman and Kruskal’s gamma was based on results of Woods [Consistent small-sample variances for six gamma-family measures of ordinal association. Multivar Behav Res. 2009;44:525-551], who found relatively poor coverage for gamma for very small samples compared to other ordinal association measures. The profile likelihood CI and the score CI had the best coverage, close to the nominal value, but those CIs could often not be computed for sparse tables. The coverage of the Goodman-Kruskal CI and the Cliff-consistent CI was often poor. Computation time was fast to reasonably fast for all types of CI.
Type
Publication
Journal of Statistical Computation and Simulation, 85(12), 2491-2505
Date | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8883011937141418, "perplexity": 3312.4044197155677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578526923.39/warc/CC-MAIN-20190419001419-20190419022527-00042.warc.gz"} |
https://www.physicsforums.com/threads/limit-of-double-integration.628100/ | # Limit of double integration
1. Aug 13, 2012
### trap101
For the following regions in R2 express the double integral in terms of iterated integrals in two different ways:
S = the region between the parabolas y = x2 and y = 6 - 4x -x2
Solution:
Ok I got everything except one limit of integraton in regards to the order dx dy of integration:
for an upper limit of integration the solution said: x = -2 ± (10 - y )1/2. my question is how do they obtain that limit of integration. It's probably more of an algebraic question than calculus.
Also how do you write integrals with iTex code?
2. Aug 13, 2012
### BruceW
look at the equations you have been given. Can you think of how to rearrange one of them to get the limit?
Also, for more information on latex: https://www.physicsforums.com/showthread.php?t=617567 Or google might help. I think I remember the way to write the integral sign is \int
3. Aug 13, 2012
### trap101
That's exactly my problem I can't see how to rearrange them to obtain that solution.
4. Aug 14, 2012
### HallsofIvy
Staff Emeritus
Integrating with respect to x should be straightforward. Looking at a graph indicates that, integrating with respect to y you will need to break this into three separate integrals, from y= 0 to y= 1, from y= 1 to y= 9, and from y= 9 to y= 10. Do you see why?
5. Aug 15, 2012
### trap101
I do somewhat see why, well what I mean by that is I understand exactly the reason for splittiing it up, I think the way I drew the graph doesn't illustrate that. But it is solving for the exact x value that I specified in my first post. I tried to work it backwards and couldn't get anything related to what I have.
6. Aug 15, 2012
### BruceW
You can do it. Get y=6-4x-x2 so that x is the subject of the equation. Also, it is important to draw out the graph, and algebraically work out the points of intersection.
Similar Discussions: Limit of double integration | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580703139305115, "perplexity": 568.1857170705119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822488.34/warc/CC-MAIN-20171017200905-20171017220905-00528.warc.gz"} |
https://physics.stackexchange.com/questions/107713/why-does-principle-for-least-action-hold-for-classical-fields?noredirect=1 | # Why does Principle for least action hold for classical fields [duplicate]
Let $\mathscr L (\phi(\mathbf x), \partial \phi(\mathbf x))$ denote the Lagrangian density of field $\phi(\mathbf x)$. Then then actual value of the field $\phi(\mathbf x)$ can be computed from the principle of least action. In case of motion of particles, I know that principle of least action comes from Newton's second laws. But why does the principle of least action also hold for classical fields like EM field and gravitational field? Is there any deep reason why it holds for both EM and gravitational field? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9652786254882812, "perplexity": 157.44699084383788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517558.8/warc/CC-MAIN-20190418101243-20190418123243-00488.warc.gz"} |
https://doc.cfd.direct/notes/cfd-general-principles/thermal-boundary-layers | ## 7.13Thermal boundary layers
In a turbulent boundary layer, the distribution of temperature is similar to velocity, with the viscous and intertial sub-layers, separated by a buffer layer, as discussed in Sec. 7.4 .
By analogy with , Eq. (7.10 ), we define friction temperature as
(7.49)
The wall layer is then described by a dimensionless temperature
(7.50)
where is the fluid temperature at the wall. Ignoring heat generation by viscous stresses, the profile in the viscous sub-layer is described by the relation
(7.51)
The profile in the inertial sub-layer is commonly described by a log law for
(7.52)
The derivation of Eq. (7.51 ) and Eq. (7.52) assumes a constant heat flux across the profile, equating to at the wall. In the viscous sub-layer, the heat flux is laminar so and
(7.53)
This equation integrates between at a distance from the wall to at the wall, to yield Eq. (7.51). In the inertial layer, the heat flux is assumed turbulent and
(7.54)
Combining Eq. (6.24), Eq. (7.9) and Eq. (7.15 ) yields the ratio . Substituting in Eq. (7.54 ) and integrating then leads to Eq. (7.52 ) where is the constant of integration.
The constant is generally considered to be a function of . A reasonable approximation for this function is20
(7.55)
Another function, commonly used in thermal wall functions, is , where is the function of :21
(7.56)
The expression for uses the coefficient from Eq. (7.11). These constants of integration are sometimes subsumed within the log function as a coefficient “” in the log law expressions, as the footnote on page 483 explains.
20Hermann Schlichting and Klaus Gersten, Boundary-layer theory, 2017.
21Chandra Jayatilleke, The influence of Prandtl number and surface roughness on the resistance of the laminar sub-layer to momentum and heat transfer, 1966.
Notes on CFD: General Principles - 7.13 Thermal boundary layers | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9371998310089111, "perplexity": 2607.5109087317664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949107.48/warc/CC-MAIN-20230330070451-20230330100451-00585.warc.gz"} |
https://www.physicsforums.com/threads/need-help-with-fluid-dynamics.260234/ | # Need help with Fluid Dynamics
1. Sep 29, 2008
### azure2s
1. The problem statement, all variables and given/known data
A container has spouts 10 cm, 20 cm, 30 cm, and 40 cm. The water level is maintained at a 45 cm height by an outside supply. (a) what is the speed of the water out of each hole? (b) which water stream has the greatest range relative to the base of the container?
2. Relevant equations
Bernoulli's Equation: p1 + 1/2desnityv12 + pgy1 = p2 + 1/2densityv22 + pgy2
3. The attempt at a solution
I tried to first solve the the spout 40 cm high. Using the equation, the pressure and density gets canceled, so we have:
v12 - v22 = 2g(y2 - y1)
since v2 = 0, then
v1 = square root of 2g(y2 - y1)
v1 = square root of 2(9.8)(45 - 40)
v1 = 9.9
But my answer is wrong according to my teacher...... :( Hope you can help me
2. Sep 29, 2008
### LowlyPion
Your height difference is measured in cm not m.
3. Sep 29, 2008
### azure2s
LOL!!! I can't believe that was the only mistake I did!!! Thank you very much!! =)) Now my answer is the same with my teacher's.
4. Sep 29, 2008
### LowlyPion
Good luck.
Have something to add?
Similar Discussions: Need help with Fluid Dynamics | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8053818345069885, "perplexity": 1799.1083766638105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542932.99/warc/CC-MAIN-20161202170902-00275-ip-10-31-129-80.ec2.internal.warc.gz"} |
http://sjbrown.co.uk/2011/01/03/two-way-path-tracing/ | Simon’s Graphics Blog
# Two-Way Path Tracing
path tracing maths
This post is about a path tracing technique that sits between unidirectional path tracing and bidirectional path tracing.
For want of a better name, let’s call this two-way path tracing. It’s defined as follows:
• Trace eye rays, handle light source intersections and sample light sources explicitly
• Trace light rays, handle sensor intersections and sample sensors explicitly
• When computing weights for multiple importance sampling, take both tracing methods into account
So you can think of this technique as either:
1. Unidirectional path tracing in both directions at once
2. Bidirectional path tracing, but we only connect sub-paths if one of the sub-paths has one vertex
So why is this interesting? Because:
• Like unidirectional path tracing, you only need to track a fixed amount of state, regardless of maximum path length. This is potentially nice for GPU implementations where you usually want to avoid hitting memory and have a large number of paths in flight.
• You can efficiently multiple importance sample between forward and reverse paths, so you can get reduced variance compared to unidirectional path tracing for some types of scenes (e.g. caustics).
In this post I’d like to cover how to multiple importance sample between forward and reverse paths, and show some test images.
## Multiple Importance Sampling
To recap Eric Veach’s thesis on bidirectional path tracing, if you have a path of (k+1) vertices defined by:
$$\newcommand{\x}{\mathbf{x}} \newcommand{\Pa}[1]{P_A({#1})} \newcommand{\sigp}{\sigma^\perp} \newcommand{\Psigp}[2]{P_{\sigp}({#1} \to {#2})} \newcommand{\G}[2]{G({#1} \leftrightarrow {#2})} \newcommand{\PsigpG}[2]{\Psigp{#1}{#2}\,\G{#1}{#2}} \overline{x} = \x_0 \ldots \x_k$$
and you wish to compute the ratio of probability densities
$$p_i = p_{i,(s+t)-i}(\overline{x}_{s,t}), \qquad \text{for } i=0,\ldots,s+t$$
for each of the $(s + t + 1)$ possible sampling techniques, then these are defined by Veach as follows:
\begin{align*} \frac{p_1}{p_0} &= \frac{\Pa{\x_0}}{\PsigpG{\x_1}{\x_0}} \\[3ex] \frac{p_{i+1}}{p_i} &= \frac{\PsigpG{\x_{i-1}}{\x_i}}{\PsigpG{\x_{i+1}}{\x_i}} \qquad \text{for } 0 < i < k \\[3ex] \frac{p_{k+1}}{p_k} &= \frac{\PsigpG{\x_{k-1}}{\x_k}}{\Pa{\x_k}} \end{align*}
These ratios allow us to weight different sampling techniques to reduce variance. For example, here is an image of the cornell box, but with one of the cubes replaced with a glass sphere.
Here is a debug image (click for full size) showing all the techniques used to compute the image when using bidirectional path tracing, weighted using these probability ratios (in this image, t increases from left to right, s+t increases from top to bottom):
Note that the left-hand image in each row is always zero, since we are rendering using a pinhole camera that cannot be hit by light sub-paths.
In two-way path tracing, we restrict ourselves to techniques where s < 2 or t < 2. If we adjust the bidirectional path tracing weights accordingly (using the same ratios) we get the following debug image:
To compute these weights, we only need to compute the following reduced set of ratios, regardless of path length:
$$\frac{p_1}{p_0}, \frac{p_k}{p_1}, \frac{p_{k+1}}{p_k}$$
The first and last are defined already, so let’s try to simplify the middle term from the original definition:
\begin{align*} \frac{p_k}{p_1} &= \prod_{i=1}^{k-1} \frac{p_{i+1}}{p_i} \\ &= \prod_{i=1}^{k-1} \frac{\PsigpG{\x_{i-1}}{\x_i}}{\PsigpG{\x_{i+1}}{\x_i}} \\ &= \frac{\G{\x_0}{\x_1}}{\G{\x_k}{\x_{k-1}}} \prod_{i=1}^{k-1} \frac{\Psigp{\x_{i-1}}{\x_i}}{\Psigp{\x_{i+1}}{\x_i}} \end{align*}
As we can see, all the interior geometry terms cancelled! This product can be computed iteratively as the path is grown. This leads to an efficient algorithm for building up the ratio of $p_k/p_1$ by tracking a single scalar value as the path is constructed.
## Results
Let’s start with a toy scene of diffuse, mirror or glass spheres, lit by a small sphere light. This is the scene rendered using unidirectional path tracing with 128 paths per pixel:
It’s pretty noisy due to caustic paths being poorly sampled by path tracing (since the light source is small and unlikely to be hit). Now let’s look at the same scene rendered using unidirectional light tracing for 128 paths per pixel:
Since we are using light tracing, caustic paths are well sampled, but directly viewed specular surfaces cannot be sampled at all!
What two-way path tracing allows us to do is combine these two techniques with multiple importance sampling, with much less overhead than full bidirectional path tracing. Here’s the same scene rendered using 64 two-way paths per pixel (so the same total number of paths as the previous images):
As we can see, most of the image is now cleaned up. However, you will notice that surfaces viewed indirectly through specular reflection/refraction do not benefit. This is because these surfaces are still only sampled with unidirectional path tracing. In fact, for many types of path (e.g. LSDSE paths), even moving to full bidirectional path tracing will not help since there will still only be a single technique that generates the path (hitting the light source), and you need to move to Metropolis techniques to reduce the variance.
Now let’s see two-way path tracing on a more complex scene. Here’s an image from my work-in-progress GPU two-way path tracer (this scene was inspired by Dietger’s awesome MLT scene on ompf):
The floors are diffuse, and the buddha, dragon and bunny are phong, mirror and glass respectively (although glass doesn’t yet use Beer’s law properly). The scene is lit by a small quad light.
Here’s an alternate angle to zoom in on some caustic paths:
I’ve not been very scientific, but here’s the same angle with just eye paths (i.e. equivalent to unidirectional path tracing). Note the noisy caustics and fireflies around the dragon from specular paths.
Here’s the same angle with just light paths, to show what light paths give in terms of additional sampling.
## Conclusions and Future Work
I’ve shown how to efficiently multiple importance sample between unidirectional path tracing and light tracing by building a single scalar value for the ratio $p_k/p_1$ as the path is constructed. This can help to improve convergence when light tracing is a good sampling technique for the scene, such as directly viewed caustics.
For future work, I’d like to combine this technique with Metropolis sampling, to better explore (forward and reverse) path space for more complex scenes.
I would also expect this technique to work well when sensor can be intersected, since this is one of the light tracing techniques that we can now multiple importance sample with path tracing. If we use the cornell box with sphere scene above, but make the floor a lightmap, we get the following output:
If we render this with two-way path tracing, we get the following debug image, showing that all 4 techniques for each path length now produce non-zero samples: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9602965712547302, "perplexity": 1946.7286640613083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807660.32/warc/CC-MAIN-20180217185905-20180217205905-00222.warc.gz"} |
https://courses.lumenlearning.com/precalctwo/chapter/graphs-of-the-other-trigonometric-functions/ | ## Analyzing the Graph of y = tan x and Its Variations
We will begin with the graph of the tangent function, plotting points as we did for the sine and cosine functions. Recall that
$\tan x=\frac{\sin x}{\cos x}\\$
The period of the tangent function is π because the graph repeats itself on intervals of kπ where k is a constant. If we graph the tangent function on $−\frac{\pi}{2}\\$ to $\frac{\pi}{2}\\$, we can see the behavior of the graph on one complete cycle. If we look at any larger interval, we will see that the characteristics of the graph repeat.
We can determine whether tangent is an odd or even function by using the definition of tangent.
$\begin{array}  \tan(−x)=\frac{\sin(−x)}{\cos(−x)} \hfill& \text{Definition of tangent.} \\ =\frac{−\sin x}{\cos x} \hfill& \text{Sine is an odd function, cosine is even.} \\ =−\frac{\sin x}{\cos x} &\hfill \text{The quotient of an odd and an even function is odd.} \hfill \\ =−\tan x \hfill& \text{Definition of tangent.} \end{array}\\$
Therefore, tangent is an odd function. We can further analyze the graphical behavior of the tangent function by looking at values for some of the special angles, as listed in the table below.
x $−\frac{\pi}{2}\\$ $−\frac{\pi}{3}\\$ $−\frac{\pi}{4}\\$ $−\frac{\pi}{6}\\$ 0 $\frac{\pi}{6}\\$ $\frac{\pi}{4}\\$ $\frac{\pi}{3}\\$ $\frac{\pi}{2}\\$ tan (x) undefined $−\sqrt{3}\\$ –1 $−\frac{\sqrt{3}}{3}\\$ 0 $\frac{\sqrt{3}}{3}\\$ 1 $\sqrt{3}\\$ undefined
These points will help us draw our graph, but we need to determine how the graph behaves where it is undefined. If we look more closely at values when $\frac{\pi}{3}<x<\frac{\pi}{2}\\$, we can use a table to look for a trend. Because $\frac{\pi}{3}\approx 1.05\\$ and $\frac{\pi}{2}\approx 1.57\\$, we will evaluate x at radian measures 1.05 < x < 1.57 as shown in the table below.
x 1.3 1.5 1.55 1.56 tan x 3.6 14.1 48.1 92.6
As x approaches $\frac{\pi}{2}\\$, the outputs of the function get larger and larger. Because $y=\tan x\\$ is an odd function, we see the corresponding table of negative values in the table below.
x −1.3 −1.5 −1.55 −1.56 tan x −3.6 −14.1 −48.1 −92.6
We can see that, as x approaches $−\frac{\pi}{2}\\$, the outputs get smaller and smaller. Remember that there are some values of x for which cos x = 0. For example, $\cos\left(\frac{\pi}{2}\right)=0\\$ and $\cos\left(\frac{3\pi}{2}\right)=0\\$. At these values, the tangent function is undefined, so the graph of $y=\tan x$ has discontinuities at $x=\frac{\pi}{2}\\$ and $\frac{3\pi}{2}\\$. At these values, the graph of the tangent has vertical asymptotes. Figure 1 represents the graph of $y=\tan x\\$. The tangent is positive from 0 to $\frac{\pi}{2}\\$ and from π to $\frac{3\pi}{2}\\$, corresponding to quadrants I and III of the unit circle.
## Graphing Variations of y = tan x
As with the sine and cosine functions, the tangent function can be described by a general equation.
$y=A\tan(Bx)\\$
We can identify horizontal and vertical stretches and compressions using values of A and B. The horizontal stretch can typically be determined from the period of the graph. With tangent graphs, it is often necessary to determine a vertical stretch using a point on the graph.
Because there are no maximum or minimum values of a tangent function, the term amplitude cannot be interpreted as it is for the sine and cosine functions. Instead, we will use the phrase stretching/compressing factor when referring to the constant A.
### A General Note: Features of the Graph of y = Atan(Bx)
• The stretching factor is |A| .
• The period is $P=\frac{\pi}{|B|}\\$.
• The domain is all real numbers x, where $x\ne \frac{\pi}{2|B|} + \frac{\pi}{|B|} k\\$ such that k is an integer.
• The range is (−∞, ∞).
• The asymptotes occur at $x=\frac{\pi}{2|B|} + \frac{\pi}{|B|}k\\$, where k is an integer.
• $y = A \tan (Bx)\\$ is an odd function.
## Graphing One Period of a Stretched or Compressed Tangent Function
We can use what we know about the properties of the tangent function to quickly sketch a graph of any stretched and/or compressed tangent function of the form $f(x)=A\tan(Bx)\\$. We focus on a single period of the function including the origin, because the periodic property enables us to extend the graph to the rest of the function’s domain if we wish. Our limited domain is then the interval $(−\frac{P}{2}, \frac{P}{2})\\$ and the graph has vertical asymptotes at $\pm \frac{P}{2}\\$ where $P=\frac{\pi}{B}\\$. On $(−\frac{\pi}{2}, \frac{\pi}{2})\\$, the graph will come up from the left asymptote at $x=−\frac{\pi}{2}\\$, cross through the origin, and continue to increase as it approaches the right asymptote at $x=\frac{\pi}{2}\\$. To make the function approach the asymptotes at the correct rate, we also need to set the vertical scale by actually evaluating the function for at least one point that the graph will pass through. For example, we can use
$f\left(\frac{P}{4}\right)=A \tan\left(B\frac{P}{4}\right)=A\tan\left(B\frac{\pi}{4B}\right)=A\\$
because  $\tan\left(\frac{\pi}{4}\right)=1\\$.
### How To: Given the function $f(x)=A\tan(Bx)\\$, graph one period.
1. Identify the stretching factor, |A|.
2. Identify B and determine the period, $P=\frac{\pi}{|B|}\\$.
3. Draw vertical asymptotes at  $x=−\frac{P}{2}\\$ and $x=\frac{P}{2}\\$.
4. For A > 0 , the graph approaches the left asymptote at negative output values and the right asymptote at positive output values (reverse for A < 0 ).
5. Plot reference points at $\left(\frac{P}{4}\text{, }A\right)\\$ (0, 0), and ($−\frac{P}{4}\\$,− A), and draw the graph through these points.
### Example 1: Sketching a Compressed Tangent
Sketch a graph of one period of the function $y=0.5\tan(\frac{\pi}{2}x)\\$.
### Solution
First, we identify A and B.
Figure 2
Because $A=0.5$ and $B=\frac{\pi}{2}\\$, we can find the stretching/compressing factor and period. The period is $\frac{\pi}{\frac{\pi}{2}}=2\\$, so the asymptotes are at $x=\pm 1$. At a quarter period from the origin, we have
$\begin{array}f(0.5)=0.5\tan(\frac{0.5\pi}{2}) \hfill& \\ =0.5\tan(\frac{\pi}{4}) \hfill& \\ =0.5 \end{array}\\$
This means the curve must pass through the points(0.5,0.5),(0,0),and(−0.5,−0.5).The only inflection point is at the origin. Figure shows the graph of one period of the function.
Figure 3
### Try It 1
Sketch a graph of $f(x)=3\tan\left(\frac{\pi}{6}x\right)\\$.
Solution
## Graphing One Period of a Shifted Tangent Function
Now that we can graph a tangent function that is stretched or compressed, we will add a vertical and/or horizontal (or phase) shift. In this case, we add C and D to the general form of the tangent function.
$f(x)=A\tan(Bx−C)+D\\$
The graph of a transformed tangent function is different from the basic tangent function tan x in several ways:
### A General Note: Features of the Graph of y = Atan(Bx−C)+D
• The stretching factor is |A|.
• The period is $\frac{\pi}{|B|}\\$.
• The domain is $x\ne\frac{C}{B}+\frac{\pi}{|B|}k\\$, where k is an integer.
• The range is (−∞,−|A|] ∪ [|A|, ∞).
• The vertical asymptotes occur at $x=\frac{C}{B}+\frac{\pi}{2|B|}k\\$, where k is an odd integer.
• There is no amplitude.
• $y=A\tan(Bx)$ is an odd function because it is the quotient of odd and even functions (sin and cosine perspectively).
### How To: Given the function $y=A\tan(Bx−C)+D\\$, sketch the graph of one period.
1. Express the function given in the form $y=A\tan(Bx−C)+D\\$.
2. Identify the stretching/compressing factor, |A|.
3. Identify B and determine the period, $P=\frac{\pi}{|B|}\\$.
4. Identify C and determine the phase shift, $\frac{C}{B}\\$.
5. Draw the graph of $y=A\tan(Bx)\\$ shifted to the right by $\frac{C}{B}\\$ and up by D.
6. Sketch the vertical asymptotes, which occur at $x=\frac{C}{B}+\frac{\pi}{2|B|}k\\$, where k is an odd integer.
7. Plot any three reference points and draw the graph through these points.
### Example 2: Graphing One Period of a Shifted Tangent Function
Graph one period of the function $y=−2\tan(\pi x+\pi)−1\\$.
### Solution
Step 1. The function is already written in the form $y=A\tan(Bx−C)+D\\$.
Step 2. $A=−2$, so the stretching factor is $|A|=2$.
Step 3. $B=\pi$, so the period is $P=\frac{\pi}{|B|}=\frac{\pi}{\pi}=1\\$.
Step 4. $C=−\pi$, so the phase shift is $\frac{C}{B}=\frac{−\pi}{\pi}=−1\\$.
Step 5–7. The asymptotes are at $x=−\frac{3}{2}\\$ and $x=−\frac{1}{2}\\$ and the three recommended reference points are (−1.25, 1), (−1,−1), and (−0.75, −3). The graph is shown in Figure 4.
Figure 4
### Analysis of the Solution
Note that this is a decreasing function because A < 0.
### Try It 2
How would the graph in Example 2 look different if we made A = 2 instead of −2 ?
Solution
### How To: Given the graph of a tangent function, identify horizontal and vertical stretches.
1. Find the period P from the spacing between successive vertical asymptotes or x-intercepts.
2. Write $f(x)=A\tan(\frac{\pi}{P}x)$.
3. Determine a convenient point (x, f(x)) on the given graph and use it to determine A.
### Example 3: Identifying the Graph of a Stretched Tangent
Find a formula for the function graphed in Figure 5.
Figure 5
### Solution
The graph has the shape of a tangent function.
Step 1. One cycle extends from –4 to 4, so the period is $P=8$. Since $P=\frac{\pi}{|B|}\\$, we have $B=\frac{\pi}{P}=\frac{\pi}{8}\\$.
Step 2. The equation must have the $\text{form}f(x)=A\tan(\frac{\pi}{8}x)\\$.
Step 3. To find the vertical stretch A, we can use the point (2,2).
$2=A\tan(\frac{\pi}{8}\times2)=A\tan(\frac{\pi}{4})\\$
Because $\tan(\frac{\pi}{4})=1\\$, A = 2.
This function would have a formula $f(x)=2\tan(\frac{\pi}{8}x)\\$.
### Try It 3
Find a formula for the function in Figure 6.
Figure 6
Solution
## Using the Graphs of Trigonometric Functions to Solve Real-World Problems
Many real-world scenarios represent periodic functions and may be modeled by trigonometric functions. As an example, let’s return to the scenario from the section opener. Have you ever observed the beam formed by the rotating light on a police car and wondered about the movement of the light beam itself across the wall? The periodic behavior of the distance the light shines as a function of time is obvious, but how do we determine the distance? We can use the tangent function .
### Example 4: Using Trigonometric Functions to Solve Real-World Scenarios
Suppose the function $y=5\tan\left(\frac{\pi}{4}t\right)\\$ marks the distance in the movement of a light beam from the top of a police car across a wall where t is the time in seconds and y is the distance in feet from a point on the wall directly across from the police car.
1. Find and interpret the stretching factor and period.
2. Graph on the interval [0, 5].
3. Evaluate f(1) and discuss the function’s value at that input.
### Â Solution
1. We know from the general form of  $y=A\tan(Bt)\\$  that |A| is the stretching factor and π B is the period.
We see that the stretching factor is 5. This means that the beam of light will have moved 5 ft after half the period.
The period is $\frac{\pi}{\frac{\pi}{4}}=\frac{\pi}{1}\times \frac{4}{\pi}=4\\$. This means that every 4 seconds, the beam of light sweeps the wall. The distance from the spot across from the police car grows larger as the police car approaches.
2. To graph the function, we draw an asymptote at $t=2$ and use the stretching factor and period. See Figure 8.
3. period: $f(1)=5\tan \left(\frac{\pi}{4}\left(1\right)\right)=5\left(1\right)=5\\$; after 1 second, the beam of has moved 5 ft from the spot across from the police car.
## Analyzing the Graphs of y = sec x and y = cscx and Their Variations
The secant was defined by the reciprocal identity $\sec x=\frac{1}{\cos x}$. Notice that the function is undefined when the cosine is 0, leading to vertical asymptotes at $\frac{\pi}{2}\text{, }\frac{3\pi}{2}\text{, etc}$. Because the cosine is never more than 1 in absolute value, the secant, being the reciprocal, will never be less than 1 in absolute value.
We can graph $y=\sec x$ by observing the graph of the cosine function because these two functions are reciprocals of one another. See Figure 9. The graph of the cosine is shown as a dashed orange wave so we can see the relationship. Where the graph of the cosine function decreases, the graph of the secant function increases. Where the graph of the cosine function increases, the graph of the secant function decreases. When the cosine function is zero, the secant is undefined.
The secant graph has vertical asymptotes at each value of x where the cosine graph crosses the x-axis; we show these in the graph below with dashed vertical lines, but will not show all the asymptotes explicitly on all later graphs involving the secant and cosecant.
Note that, because cosine is an even function, secant is also an even function. That is, $\sec(−x)=\sec x$.
As we did for the tangent function, we will again refer to the constant |A| as the stretching factor, not the amplitude.
### A General Note: Features of the Graph of y = Asec(Bx)
• The stretching factor is |A|.
• The period is $\frac{2\pi}{|B|}$.
• The domain is $x\ne \frac{\pi}{2|B|}k$, where k is an odd integer.
• The range is (−∞, −|A|] ∪ [|A|, ∞).
• The vertical asymptotes occur at $x=\frac{\pi}{2|B|}k$, where k is an odd integer.
• There is no amplitude.
• $y=A\sec(Bx)$ is an even function because cosine is an even function.
Similar to the secant, the cosecant is defined by the reciprocal identity $\csc x=1\sin x$. Notice that the function is undefined when the sine is 0, leading to a vertical asymptote in the graph at 0, π, etc. Since the sine is never more than 1 in absolute value, the cosecant, being the reciprocal, will never be less than 1 in absolute value.
We can graph $y=\csc x$ by observing the graph of the sine function because these two functions are reciprocals of one another. See Figure 10. The graph of sine is shown as a dashed orange wave so we can see the relationship. Where the graph of the sine function decreases, the graph of the cosecant function increases. Where the graph of the sine function increases, the graph of the cosecant function decreases.
The cosecant graph has vertical asymptotes at each value of x where the sine graph crosses the x-axis; we show these in the graph below with dashed vertical lines.
Note that, since sine is an odd function, the cosecant function is also an odd function. That is, $\csc(−x)=−\csc x$.
The graph of cosecant, which is shown in Figure 10, is similar to the graph of secant.
### A General Note: Features of the Graph of $y=A\csc(Bx) • The stretching factor is |A|. • The period is [latex]\frac{2\pi}{|B|}$.
• The domain is $x\ne\frac{\pi}{|B|}k$, where k is an integer.
• The range is ( −∞, −|A|] ∪ [|A|, ∞).
• The asymptotes occur at $x=\frac{\pi}{|B|}k$, where k is an integer.
• $y=A\csc(Bx)$ is an odd function because sine is an odd function.
## Graphing Variations of y = sec x and y = csc x
For shifted, compressed, and/or stretched versions of the secant and cosecant functions, we can follow similar methods to those we used for tangent and cotangent. That is, we locate the vertical asymptotes and also evaluate the functions for a few points (specifically the local extrema). If we want to graph only a single period, we can choose the interval for the period in more than one way. The procedure for secant is very similar, because the cofunction identity means that the secant graph is the same as the cosecant graph shifted half a period to the left. Vertical and phase shifts may be applied to the cosecant function in the same way as for the secant and other functions. The equations become the following.
$y=A\sec(Bx−C)+D$
$y=A\csc(Bx−C)+D$
### A General Note: Features of the Graph of $y=A\sec(Bx−C)+D$
• The stretching factor is |A|.
• The period is $\frac{2\pi}{|B|}$.
• The domain is $x\ne \frac{C}{B}+\frac{\pi}{2|B|}k$, where k is an odd integer.
• The range is (−∞, −|A|] ∪ [|A|, ∞).
• The vertical asymptotes occur at $x=\frac{C}{B}+\frac{\pi}{2|B|}k$, where k is an odd integer.
• There is no amplitude.
• $y=A\sec(Bx)$ is an even function because cosine is an even function.
### A General Note: Features of the Graph of $y=A\csc(Bx−C)+D$
• The stretching factor is |A|.
• The period is $\frac{2\pi}{|B|}$.
• The domain is $x\ne\frac{C}{B}+\frac{\pi}{2|B|}k$, where k is an integer.
• The range is (−∞, −|A|] ∪ [|A|, ∞).
• The vertical asymptotes occur at $x=\frac{C}{B}+\frac{\pi}{|B|}k$, where k is an integer.
• There is no amplitude.
• $y=A\csc(Bx)$ is an odd function because sine is an odd function.
### How To: Given a function of the form $y=A\sec(Bx)$, graph one period.
1. Express the function given in the form $y=A\sec(Bx)$.
2. Identify the stretching/compressing factor, |A|.
3. Identify B and determine the period, $P=\frac{2\pi}{|B|}$.
4. Sketch the graph of $y=A\cos(Bx)$.
5. Use the reciprocal relationship between $y=\cos x$ and $y=\sec x$ to draw the graph of $y=A\sec(Bx)$.
6. Sketch the asymptotes.
7. Plot any two reference points and draw the graph through these points.
### Example 6: Graphing a Variation of the Secant Function
Graph one period of $f(x)=2.5\sec(0.4x)$.
### Solution
Step 1. The given function is already written in the general form, $y=A\sec(Bx)$.
Step 2. $A=2.5$ so the stretching factor is 2.5.
Step 3. $B=0.4$, so $P=\frac{2\pi}{0.4}=5\pi$. The period is 5π units.
Step 4. Sketch the graph of the function $g(x)=2.5\cos(0.4x)$.
Step 5. Use the reciprocal relationship of the cosine and secant functions to draw the cosecant function.
Steps 6–7. Sketch two asymptotes at $x=1.25\pi$ and $x=3.75\pi$. We can use two reference points, the local minimum at (0, 2.5) and the local maximum at (2.5π, −2.5). Figure 11 shows the graph.
Figure 11
### Try It 4
Graph one period of $f(x)=−2.5\sec(0.4x)$.
Solution
### Do the vertical shift and stretch/compression affect the secant’s range?
Yes. The range of f(x) = A sec(Bx − C) + D is ( −∞, −|A| + D] ∪ [|A| + D, ∞).
### How To: Given a function of the form $f(x)=A\sec (Bx−C)+D$, graph one period.
1. Express the function given in the form $y=A\sec(Bx−C)+D$.
2. Identify the stretching/compressing factor, |A|.
3. Identify B and determine the period, $\frac{2\pi}{|B|}$.
4. Identify C and determine the phase shift, $\frac{C}{B}$.
5. Draw the graph of $y=A\sec(Bx)$. but shift it to the right by $\frac{C}{B}$ and up by D.
6. Sketch the vertical asymptotes, which occur at $x=\frac{C}{B}+\frac{\pi}{2|B|}k$, where k is an odd integer.
### Example 7: Graphing a Variation of the Secant Function
Graph one period of $y=4\sec \left(\frac{\pi}{3}x−\frac{\pi}{2}\right)+1$.
### Solution
Step 1. Express the function given in the form $y=4\sec \left(\frac{\pi}{3}x−\frac{\pi}{2}\right)+1$.
Step 2. The stretching/compressing factor is |A| = 4.
Step 3. The period is
$\begin{array} \frac{2\pi}{|B|}=\frac{2\pi}{\frac{\pi}{3}} \hfill& \\ =\frac{2\pi}{1}\times\frac{3}{\pi} \hfill& \\ =6 \end{array}$
Step 4. The phase shift is
$\begin{array}\frac{C}{B}=\frac{\frac{\pi}{2}}{\frac{\pi}{3}} \hfill& \\ =\frac{\pi}{2} \times \frac{3}{\pi} \hfill& \\ =1.5 \end{array}$
Step 5. Draw the graph of $y=A\sec(Bx)$,but shift it to the right by $\frac{C}{B}=1.5$ and up by DÂ = 6.
Step 6. Sketch the vertical asymptotes, which occur at x = 0, x = 3, and x = 6. There is a local minimum at (1.5, 5) and a local maximum at (4.5, −3). Figure 12 shows the graph.
Figure 12
### Try It 5
Graph one period of $f(x)=−6\sec(4x+2)−8$.
Solution
### The domain of csc x was given to be all x such that $x\ne k\pi$ for any integer k. Would the domain of $y=A\csc(Bx−C)+D$ be $x\ne\frac{C+k\pi}{B}$?
Yes. The excluded points of the domain follow the vertical asymptotes. Their locations show the horizontal shift and compression or expansion implied by the transformation to the original function’s input.
### How To: Given a function of the form $y=A\csc(Bx)$, graph one period.
1. Express the function given in the form $y=A\csc(Bx)$.
2. |A|.
3. Identify B and determine the period, $P=\frac{2\pi}{|B|}$.
4. Draw the graph of $y=A\sin(Bx)$.
5. Use the reciprocal relationship between $y=\sin x$ and $y=\csc x$ to draw the graph of $y=A\csc(Bx)$.
6. Sketch the asymptotes.
7. Plot any two reference points and draw the graph through these points.
### Example 8: Graphing a Variation of the Cosecant Function
Graph one period of $f(x)=−3\csc(4x)$.
### Solution
Step 1. The given function is already written in the general form, $y=A\csc(Bx)$.
Step 2. $|A|=|−3|=3$, so the stretching factor is 3.
Step 3. $B=4\text{, so}P=\frac{2\pi}{4}=\frac{\pi}{2}$.The period is $\frac{\pi}{2}$ units.
Step 4. Sketch the graph of the function $g(x)=−3\sin(4x)$.
Step 5. Use the reciprocal relationship of the sine and cosecant functions to draw the cosecant function.
Steps 6–7. Sketch three asymptotes at $x=0\text{, }x=\frac{\pi}{4}\text{, and }x=\frac{\pi}{2}$.We can use two reference points, the local maximum at $\left(\frac{\pi}{8}\text{, }−3\right)$ and the local minimum at $\left(\frac{3\pi}{8}\text{, }3\right)$. Figure 13 shows the graph.
Figure 13
### Try It 6
Graph one period of $f(x)=0.5\csc(2x)$.
Solution
### How To: Given a function of the form $f(x)=A\csc(Bx−C)+D$, graph one period.
1. Express the function given in the form $y=A\csc(Bx−C)+D$.
2. Identify the stretching/compressing factor, |A|.
3. Identify B and determine the period, $\frac{2\pi}{|B|}$.
4. Identify C and determine the phase shift, $\frac{C}{B}$.
5. Draw the graph of $y=A\csc(Bx)$ but shift it to the right by and up by D.
6. Sketch the vertical asymptotes, which occur at $x=\frac{C}{B}+\frac{\pi}{|B|}k$, where k is an integer.
### Example 9: Graphing a Vertically Stretched, Horizontally Compressed, and Vertically Shifted Cosecant
Sketch a graph of $y=2\csc\left(\frac{\pi}{2}x\right)+1$. What are the domain and range of this function?
### Solution
Step 1. Express the function given in the form $y=2\csc\left(\frac{\pi}{2}x\right)+1$.
Step 2. Identify the stretching/compressing factor, $|A|=2$.
Step 3. The period is $\frac{2\pi}{|B|}=\frac{2\pi}{\frac{\pi}{2}}=\frac{2\pi}{1}\times \frac{2}{\pi}=4$.
Step 4. The phase shift is $\frac{0}{\frac{\pi}{2}}=0$.
Step 5. Draw the graph of $y=A\csc(Bx)$ but shift it up $D=1$.
Step 6. Sketch the vertical asymptotes, which occur at x = 0, x = 2, x = 4.
The graph for this function is shown in Figure 14.
Figure 14
### Analysis of the Solution
The vertical asymptotes shown on the graph mark off one period of the function, and the local extrema in this interval are shown by dots. Notice how the graph of the transformed cosecant relates to the graph of $f(x)=2\sin\left(\frac{\pi}{2}x\right)+1$, shown as the orange dashed wave.
### Try It 7
Given the graph of $f(x)=2\cos\left(\frac{\pi}{2}x\right)+1$ shown in Figure 15, sketch the graph of $g(x)=2\sec\left(\frac{\pi}{2}x\right)+1$ on the same axes.
Figure 15
Solution
## Analyzing the Graph of y = cot x and Its Variations
The last trigonometric function we need to explore is cotangent. The cotangent is defined by the reciprocal identity $\cot x=\frac{1}{\tan x}$. Notice that the function is undefined when the tangent function is 0, leading to a vertical asymptote in the graph at 0, π, etc. Since the output of the tangent function is all real numbers, the output of the cotangent function is also all real numbers.
We can graph $y=\cot x$ by observing the graph of the tangent function because these two functions are reciprocals of one another. See Figure 16. Where the graph of the tangent function decreases, the graph of the cotangent function increases. Where the graph of the tangent function increases, the graph of the cotangent function decreases.
The cotangent graph has vertical asymptotes at each value of x where $\tan x=0$; we show these in the graph below with dashed lines. Since the cotangent is the reciprocal of the tangent, $\cot x$ has vertical asymptotes at all values of x where $\tan x=0$ , and $\cot x=0$ at all values of x where tan x has its vertical asymptotes.
### A General Note: Features of the Graph of y = Acot(Bx)
• The stretching factor is |A|.
• The period is $P=\frac{\pi}{|B|}$.
• The domain is $x\ne\frac{\pi}{|B|}k$, where k is an integer.
• The range is (−∞, ∞).
• The asymptotes occur at $x=\frac{\pi}{|B|}k$, where k is an integer.
• $y=A\cot(Bx)$ is an odd function.
## Graphing Variations of y = cot x
We can transform the graph of the cotangent in much the same way as we did for the tangent. The equation becomes the following.
$y=A\cot(Bx−C)+D$
### A General Note: Properties of the Graph of y = Acot(Bx−C)+D
• The stretching factor is |A|.
• The period is $\frac{\pi}{|B|}$.
• The domain is $x\ne\frac{C}{B}+\frac{\pi}{|B|}k$, where k is an integer.
• The range is (−∞, −|A|] ∪ [|A|, ∞).
• The vertical asymptotes occur at $x=\frac{C}{B}+\frac{\pi}{|B|}k$, where k is an integer.
• There is no amplitude.
• $y=A\cot(Bx)$ is an odd function because it is the quotient of even and odd functions (cosine and sine, respectively)
### How To: Given a modified cotangent function of the form $f(x)=A\cot(Bx)$, graph one period.
1. Express the function in the form $f(x)=A\cot(Bx)$.
2. Identify the stretching factor, |A|.
3. Identify the period, $P=\frac{\pi}{|B|}$.
4. Draw the graph of $y=A\tan(Bx)$.
5. Plot any two reference points.
6. Use the reciprocal relationship between tangent and cotangent to draw the graph of $y=A\cot(Bx)$.
7. Sketch the asymptotes.
### Example 10: Graphing Variations of the Cotangent Function
Determine the stretching factor, period, and phase shift of $y=3\cot(4x)$, and then sketch a graph.
### Solution
Step 1. Expressing the function in the form $f(x)=A\cot(Bx)$ gives $f(x)=3\cot(4x)$.
Step 2. The stretching factor is $|A|=3$.
Step 3. The period is $P=\frac{\pi}{4}$.
Step 4. Sketch the graph of $y=3\tan(4x)$.
Step 5. Plot two reference points. Two such points are $\left(\frac{\pi}{16}\text{, }3\right)$ and $\left(\frac{3\pi}{16}\text{, }−3\right)$.
Step 6. Use the reciprocal relationship to draw $y=3\cot(4x)$.
Step 7. Sketch the asymptotes, $x=0$, $x=\frac{\pi}{4}$.
The orange graph in Figure 17 shows $y=3\tan(4x)$ and the blue graph shows $y=3\cot(4x)$.
Figure 17
### How To: Given a modified cotangent function of the form $f(x)=A\cot(Bx−C)+D$, graph one period.
1. Express the function in the form $f(x)=A\cot(Bx−C)+D$.
2. Identify the stretching factor, |A|.
3. Identify the period, $P=\frac{\pi}{|B|}$.
4. Identify the phase shift, $\frac{C}{B}$.
5. Draw the graph of $y=A\tan(Bx)$ shifted to the right by $\frac{C}{B}$ and up by D.
6. Sketch the asymptotes $x =\frac{C}{B}+\frac{\pi}{|B|}k$, where k is an integer.
7. Plot any three reference points and draw the graph through these points.
### Example 11: Graphing a Modified Cotangent
Sketch a graph of one period of the function $f(x)=4\cot(\frac{\pi}{8}x−\frac{\pi}{2})−2$.
### Solution
Step 1. The function is already written in the general form $f(x)=A\cot(Bx−C)+D$.
Step 2. $A=4$, so the stretching factor is 4.
Step 3. $B=\frac{\pi}{8}$, so the period is $P=\frac{\pi}{|B|}=\frac{\pi}{\frac{\pi}{8}}=8$.
Step 4. $C=\frac{\pi}{2}$, so the phase shift is $\frac{C}{B}=\frac{\frac{\pi}{2}}{\frac{\pi}{8}}=4$.
Step 5. We draw $f(x)=4\tan\left(\frac{\pi}{8}x−\frac{\pi}{2}\right)−2$.
Step 6-7. Three points we can use to guide the graph are (6,2), (8,−2), and (10,−6). We use the reciprocal relationship of tangent and cotangent to draw $f(x)=4\cot(\frac{\pi}{8}x−\frac{\pi}{2})−2$.
Step 8. The vertical asymptotes are $x=4$ and $x=12$.
The graph is shown in Figure 18.
Figure 18. One period of a modified cotangent function.
## Key Equations
Shifted, compressed, and/or stretched tangent function $y=A\tan(Bx−C)+D$ Shifted, compressed, and/or stretched secant function $y=A\sec(Bx−C)+D$ Shifted, compressed, and/or stretched cosecant $y=A\csc(Bx−C)+D$ Shifted, compressed, and/or stretched cotangent function $y=A\cot(Bx−C)+D$
## Key Concepts
• The tangent function has period Ï€.
• $f(x)=A\tan(Bx−C)+D$ is a tangent with vertical and/or horizontal stretch/compression and shift.
• The secant and cosecant are both periodic functions with a period of2Ï€. $f(x)=A\sec(Bx−C)+D$ gives a shifted, compressed, and/or stretched secant function graph.
• $f(x)=A\csc(Bx−C)+D$ gives a shifted, compressed, and/or stretched cosecant function graph.
• The cotangent function has period Ï€ and vertical asymptotes at 0, ±π,±2Ï€,….
• The range of cotangent is (−∞,∞),and the function is decreasing at each point in its range.
• The cotangent is zero at $\ne\frac{\pi}{2}\text{, }\ne\frac{3\pi}{2}$,….
• $f(x)=A\cot(Bx−C)+D$ is a cotangent with vertical and/or horizontal stretch/compression and shift.
• Real-world scenarios can be solved using graphs of trigonometric functions.
## Section Exercises
1. Explain how the graph of the sine function can be used to graph $y=\csc x$.
2. How can the graph of $y=\cos x$ be used to construct the graph of $y=\sec x$?
3. Explain why the period of $\tan x$ is equal to π.
4. Why are there no intercepts on the graph of $y=\csc x$?
5. How does the period of $y=\csc x$ compare with the period of $y=\sin x$?
For the following exercises, match each trigonometric function with one of the following graphs.
6. $f(x)=\tan x$
7. $f(x)=\sec x$
8. $f(x)=\csc x$
9. $f(x)=\cot x$
For the following exercises, find the period and horizontal shift of each of the functions.
10. $f(x)=2\tan(4x−32)$
11. $h(x)=2\sec\left(\frac{\pi}{4}(x+1)\right)$
12. $m(x)=6\csc\left(\frac{\pi}{3}x+\pi\right)$
13. If tan x = −1.5, find tan(−x).
14. If sec x = 2, find sec(−x).
15. If csc x = −5, find csc(−x).
16. If $x\sin x=2$, find $(−x)\sin(−x)$.
For the following exercises, rewrite each expression such that the argument x is positive.
17. $\cot(−x)\cos(−x)+\sin(−x)$
18. $\cos(−x)+\tan(−x)\sin(−x)$
For the following exercises, sketch two periods of the graph for each of the following functions. Identify the stretching factor, period, and asymptotes.
19. $f(x)=2\tan(4x−32)$
20. $h(x)=2\sec\left(\frac{\pi}{4}\left(x+1\right)\right)$
21. $m(x)=6\csc\left(\frac{\pi}{3}x+\pi\right)$
22. $j(x)=\tan\left(\frac{\pi}{2}x\right)$
23. $p(x)=\tan\left(x−\frac{\pi}{2}\right)$
24. $f(x)=4\tan(x)$
25. $f(x)=\tan\left(x+\frac{\pi}{4}\right)$
26. $f(x)=\pi\tan\left(\pi x−\pi\right)−\pi$
27. $f(x)=2\csc(x)$
28. $f(x)=−\frac{1}{4}\csc(x)$
29. $f(x)=4\sec(3x)$
30. $f(x)=−3\cot(2x)$
31. $f(x)=7\sec(5x)$
32. $f(x)=\frac{9}{10}\csc(\pi x)$
33. $f(x)=2\csc \left(x+\frac{\pi}{4}\right)−1$
34. $f(x)=−\sec \left(x−\frac{\pi}{3}\right)−2$
35. $f(x)=\frac{7}{5}\csc \left(x−\frac{\pi}{4}\right)$
36. $f(x)=5\left(\cot\left(x+\frac{\pi}{2}\right)−3\right)$
For the following exercises, find and graph two periods of the periodic function with the given stretching factor, |A|, period, and phase shift.
37. A tangent curve, $A=1$, period of $\frac{\pi}{3}$; and phase shift $(h\text{,}k)=\left(\frac{\pi}{4}\text{,}2\right)$
38. A tangent curve, $A=−2$, period of $\frac{\pi}{4}$, and phase shift $(h\text{,}k)=\left(−\frac{\pi}{4}\text{,}−2\right)$
For the following exercises, find an equation for the graph of each function.
39.
40.
41.
42.
43.
44.
45.
47.
49.
50. Graph $f(x)=1+\sec^{2}(x)−\tan^{2}(x)$. What is the function shown in the graph?
51. $f(x)=\sec(0.001x)$
52. $f(x)=\cot(100\pi x)$
53. $f(x)=\sin^{2}x+\cos^{2}x$
54. The function $f(x)=20\tan\left(\frac{\pi}{10}x\right)$ marks the distance in the movement of a light beam from a police car across a wall for time x, in seconds, and distance $f(x)$, in feet.
a. Graph on the interval[0,5].
b. Find and interpret the stretching factor, period, and asymptote.
c. Evaluate f(1) and f(2.5) and discuss the function’s values at those inputs.
55. Standing on the shore of a lake, a fisherman sights a boat far in the distance to his left. Let x, measured in radians, be the angle formed by the line of sight to the ship and a line due north from his position. Assume due north is 0 and x is measured negative to the left and positive to the right. (See Figure 19.) The boat travels from due west to due east and, ignoring the curvature of the Earth, the distance $d(x)$, in kilometers, from the fisherman to the boat is given by the function $d(x)=1.5\sec(x)$.
a. What is a reasonable domain for $d(x)$?
b. Graph d(x) on this domain.
c. Find and discuss the meaning of any vertical asymptotes on the graph of $d(x)$.
d. Calculate and interpret $d(−\frac{\pi}{3})$. Round to the second decimal place.
e. Calculate and interpret $d(\frac{\pi}{6})$. Round to the second decimal place.
f. What is the minimum distance between the fisherman and the boat? When does this occur?
Figure 19
56. A laser rangefinder is locked on a comet approaching Earth. The distance $g(x)$, in kilometers, of the comet after x days, for x in the interval 0 to 30 days, is given by $g(x)=250,000\csc(\frac{\pi}{30}x)$.
a. Graph $g(x)$ on the interval [0,35].
b. Evaluate $g(5)$ and interpret the information.
c. What is the minimum distance between the comet and Earth? When does this occur? To which constant in the equation does this correspond?
d. Find and discuss the meaning of any vertical asymptotes.
57. A video camera is focused on a rocket on a launching pad 2 miles from the camera. The angle of elevation from the ground to the rocket after x seconds is $\frac{\pi}{120}x$.
a. Write a function expressing the altitude $h(x)$, in miles, of the rocket above the ground after x seconds. Ignore the curvature of the Earth.
b. Graph $h(x)$ on the interval (0,60).
c. Evaluate and interpret the values $h(0)$ and $h(30)$.
d. What happens to the values of $h(x)$ as x approaches 60 seconds? Interpret the meaning of this in terms of the problem. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369351267814636, "perplexity": 564.6922716598922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202418.22/warc/CC-MAIN-20200929154729-20200929184729-00234.warc.gz"} |
https://cs.stackexchange.com/questions/53573/select-a-subset-of-the-columns-in-3-times-n-matrix-is-it-np-hard | # Select a subset of the columns in $3\times n$ matrix, is it NP-hard?
I want to know if this problem is NP-hard?
The problem: Given a non-negative integer-valued matrix of size $3\times n$ of the form
$$\begin{bmatrix} a_1 & \ldots & a_n\\ b_1 & \ldots & b_n\\ a_1b_1 & \ldots & a_nb_n\\ \end{bmatrix},$$
where $a_i,b_i$ are non-negative integers. Given four non-negative integer numbers $b<n$, $c_1$, $c_2$ and $c_3$.
The goal: Find a subset of the columns of cardinality at most $b$ such that (i) the sum of the first row over the subset of the columns is greater than $c_1$, (ii) the sum of the second row over the subset of the columns is greater than $c_2$ and (iii) the sum of the third row over the subset of the columns is less than $c_3$. Return no if no such subset exists. That is to say, is there exists a set $S$ of cardinality at most $b<n$ such that:
$$\sum_{i\in S}a_i\geq c_1,\\ \sum_{i\in S}b_i\geq c_2,\\ \sum_{i\in S}a_ib_i\leq c_3,$$
How to show that this problem is NP-hard?
Attempt:
Here is a "similar" NP-hard problem that I asked. This problem has as instance $(n,b,c,\mathbf{A})$ where $\mathbf{A}$ is given by $$\begin{bmatrix} x_1 & \ldots & x_n\\ y_1 & \ldots & y_n\\ \end{bmatrix},$$ I tried to reduce the previous problem to the new one as follows: choose $c_1=c_2=c$ and choose $c_3$ very big and create the matrix as follows:
$$\begin{bmatrix} x_1 & \ldots & x_n\\ y_1 & \ldots & y_n\\ x_1y_1 & \ldots & x_ny_n\\ \end{bmatrix},$$
Your problem is NP-complete, indeed. You can, e.g., reduce PARTITION to your problem. If you want to know if there is a subset of the set $\{d_1,\ldots,d_k\}$ that sums up to exactly $D := \frac{1}{2} \sum_{i=1}^k d_i$, you can use your problem by setting $b_i := 1$ and $a_i := d_i$ for all $i$ and by setting $c_1 := D$, $c_2 = 0$, and $c_3 := D$ as well as $b := n-1$.
• And $k$ in PARTITION is $n$ ? – Brika Feb 24 '16 at 20:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.90302574634552, "perplexity": 95.35023298430164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518337.65/warc/CC-MAIN-20191209065626-20191209093626-00412.warc.gz"} |
http://mathhelpforum.com/geometry/129869-finding-area-circular-sector.html | # Thread: finding area of a circular sector
1. ## finding area of a circular sector
Hello all! I just found this site and was hoping I could get some Trig help.
Could you please check my work?
given r=9cm and theta=120deg
$\displaystyle A=1/2(r^2)(/theta)$
$\displaystyle 1/2(9cm)^2(120/360)$
$\displaystyle 1/2(81cm^2)(120/360)$
$\displaystyle (40.5cm^2)(120/360)$
$\displaystyle A=13.5cm^2$
Thanks for taking a look!
Joe
2. Not quite. Change 120 degrees into radians, and don't divide by 360. I think you are still thinking in terms of the sector being a ratio [fraction] of the entire circle, but that was done in developing the formula in the first place. Once you have the formula, and you do, just plug and chug, but DO use radians, not degrees.
3. $\displaystyle A=1/2(r^2)(/theta)$
$\displaystyle 1/2(9cm)^2(120pi/180)$
$\displaystyle 1/2(81cm^2)(2pi/3)$
$\displaystyle (40.5cm^2)(2pi/3)$ <------ this is where I get stuck. Am I supposed to convert this to decimal form?
$\displaystyle (40.5cm^2)(.1960)$
$\displaystyle A=7.9398cm^2$
4. Originally Posted by rasczak
$\displaystyle A=1/2(r^2)(/theta)$
$\displaystyle 1/2(9cm)^2(120pi/180)$
$\displaystyle 1/2(81cm^2)(2pi/3)$
$\displaystyle (40.5cm^2)(2pi/3)$ <------ this is where I get stuck. Am I supposed to convert this to decimal form? Mr F says: It depends on whether the question requires an exact answer or an approximate answer. Note that this answer can be further simplified.
$\displaystyle (40.5cm^2)(.1960)$
$\displaystyle A=7.9398cm^2$
..
5. Originally Posted by mr fantastic
..
The original question was
a.) Find the length of the arc of the colored sector
b.) Find the area of the sector. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717889189720154, "perplexity": 459.5652883503729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861980.33/warc/CC-MAIN-20180619060647-20180619080647-00479.warc.gz"} |
https://www.physicsforums.com/threads/electric-field-and-a-charged-cork-ball-on-a-massless-string.431695/ | # Homework Help: Electric Field and a Charged Cork Ball on a Massless String
1. Sep 23, 2010
### gsquare567
1. The problem statement, all variables and given/known data
A charged cork ball of mass 1g is suspended on a light string in the presence of a uniform electric field. When E = (3i + 5j) * 10^5 N/C, the ball is in equilibrium at $$\theta$$ = 37 degrees. Find the charge on the ball and the tension on the string.
====================
|\
|$$\theta$$\
| \
| \ / <- uniform electric field, E = (3i + 5j) * 10^5 N/C
| \/
| O <- charged cork ball, mass = 1g, q = ?
i
^
|
|
-------> j
2. Relevant equations
F(electric) = qE
F(grav) = mg(-j)
3. The attempt at a solution
I tried using the cosine law with the forces to isolate for $$\theta$$
a^2 = b^2 + c^2 -2bc * cos($$\alpha$$)
but it canceled out.
Anyways,
$$\alpha$$ = $$\theta$$
a = F(elec)
b = F(grav)
c = F(res) = F(elec) + F(grav)
Any ideas? Thanks!
2. Sep 24, 2010
### rl.bhat
In the equilibrium position, resolve mg into two components.
mg*cosθ will provide the tension and mg*sinθ will try to bring the string into vertical position.
The magnitude of the electric field is sqrt(34) and makes an angle α = arctan(5/3)
The electric force on the cork ball is q*E.
In the equilibrium position angle between the string and the electric force is (90 +α - θ )
Its component along the string is Fe*cos(90 +α - θ) and perpendicular to the string is Fe*sin(90 +α - θ ).
Equate them with the components of mg and solve for T and q. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8204427361488342, "perplexity": 2242.811243518607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864795.68/warc/CC-MAIN-20180622201448-20180622221448-00464.warc.gz"} |
http://math.stackexchange.com/users/26728/sidht?tab=activity | # sidht
less info
reputation
522
bio website location age member for 2 years seen yesterday profile views 728
# 2,095 Actions
13h awarded Yearling Mar8 asked Why is this wrong? Strong Cauchy criterion. Mar1 accepted An simple algebra problem Feb27 revised An simple algebra problem added 2 characters in body Feb27 asked An simple algebra problem Feb27 comment How to prove the sequence $(-1)^n$ has no limit using first principles? I meant to say what $n$ to choose, is what meant when I say "N". Or I meant to say "pick out" the $N$ Feb27 comment How to prove the sequence $(-1)^n$ has no limit using first principles? How do you justify the first equality? Feb27 asked How to prove the sequence $(-1)^n$ has no limit using first principles? Feb25 accepted Convexity of support function Feb25 comment Convexity of support function Oh god, how could I not see that!? Feb25 comment Convexity of support function I thought that's only true (as with equality) if only the set is bounded above. Feb25 asked Convexity of support function Feb23 comment What's the difference between the Well-ordering principle and Least Upper Bound property? I am guessing his worry is that $\sup$ may not belong to the set? But in my example (first comment), there is no fear, because my set is going to finite (and hence closed). Feb23 comment What's the difference between the Well-ordering principle and Least Upper Bound property? But I can actually get the result of the Well-Ordering from LUB right? Because every subset of the positive integers (they must be bounded) has a least upper bound or glb. Feb23 comment Given the differential equation: $\frac{dy}{dt} = (y-2)(y-5)^2$ sketch phase line and identify equilibrium Draw the directions on the lines Feb23 asked What's the difference between the Well-ordering principle and Least Upper Bound property? Feb22 accepted Tangent Cone is a cone? Feb22 revised Tangent Cone is a cone? going to write my own proof in answer, so deleting all unnecessary work. Feb22 answered Tangent Cone is a cone? Feb22 revised Tangent Cone is a cone? going to write my own proof in answer, so deleting all unnecessary work. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8249464631080627, "perplexity": 1049.0813672437848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394023864543/warc/CC-MAIN-20140305125104-00027-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://web2.0calc.com/questions/hi-again-exam-time-newton-raphson-method | +0
# Hi, again, exam time. Newton Raphson method
0
475
17
+1314
Q: x cos(x) - 1/x.
Thanks, where did I go wrong?
Additionally, this site both in structure (scheme) and naviagtion where messaging and old posts are involved has discouraged me from using the site. I'd have rathered that the anonymous posting went from the site and that it remained the same as it was. I cannot easily see my posts from the profile or menu (top right), and when searching you cannot go to your last comment you must go to the original post which could be some time ago in last posts. And the white scheme is a bit drab, and makes things hard on the eyes visually. Might be the font and size.
Stu May 30, 2014
#12
+85726
+5
Let me see if I can help you with this...here's what Alan is saying...
Let's "guess" that the "zero" of the function is "4.005"
The "method," in simple terms, is just this:
(Our guess) - (Our Guess put into the function)/ (Our guess put into the derivative) = (The next "guess")
So we have
(4.005)- [4.005cos(4.005) - (1/4.005)]/[ cos(4.005)-(4.005)sin(4.005) + (1/(4.005)2] = (The next "guess")
Now.......if the "next guess" happens to be 4.005, we have found the "zero." If not, we put this "next guess" into the "formula" and evaluate that. Notice that, when Alan put 4.005 in, he got back 5.16612. That's not equal to 4.055, so we put 5.16612 in the mill and it cranks out 4.76159. Still no "match" from the previous "guess".....
So he puts 4.76159 into the mill and gets back 4.75661...still no match.....then, he puts 4.75661 into the "formula" and gets back 4.7566....AHA!!......we're on to something!! Notice that this "new" answer is really close to the last one!!
Just to be sure, he puts this "new" answer ("guess") into the formula once more and gets out (wait for it)........4.7566.....note this matches the last "answer," so we've found the "zero.'
See how that works??
(Sometimes - particularly if I have a graph of the function - I find it easier just to evaluate the function at various values just to see if I can find something "close" to zero. This is sometimes less cumbersome than Newton's Method - if you can make some good "guesses.")
Hope i've been of some help!!
CPhill Jun 1, 2014
Sort:
#1
+1314
0
I put in the missing bracket but it still is wrong.
Stu May 30, 2014
#2
+1314
0
Answer given is 5.166r solved to the 1st degree of accuracy.
Stu May 30, 2014
#3
+889
+5
I don't recognise your calculation as Newton-Raphson. Where does that ln come from ?
If f(x)=x.cos(x)-1/x, then f '(x)=cos(x)-x.sin(x)+1/(x^2), (angles in radians).
Then x(n+1)=x(n)-f(x(n))/f '(x(n)).
Bertie May 30, 2014
#4
+1314
0
$${f}{\left({\mathtt{x}}\right)} = {\mathtt{x}}{\mathtt{\,\times\,}}\underset{\,\,\,\,^{{360^\circ}}}{{cos}}{\left({\mathtt{x}}\right)}{\mathtt{\,-\,}}{\frac{{\mathtt{1}}}{{\mathtt{x}}}}$$
Stu May 30, 2014
#5
+26626
+5
If you are trying to find a zero of xcos(x)-1/x you should note that there are an infinite number of them - see the graph below.
There is no solution at 5.166 radians!
$${\mathtt{5.166}}{\mathtt{\,\times\,}}\underset{\,\,\,\,^{{360^\circ}}}{{cos}}{\left({\frac{{\mathtt{5.166}}{\mathtt{\,\times\,}}{\mathtt{180}}}{{\mathtt{\pi}}}}\right)}{\mathtt{\,-\,}}{\frac{{\mathtt{1}}}{{\mathtt{5.166}}}} = {\mathtt{2.070\: \!241\: \!313\: \!464\: \!202\: \!4}}$$
Alan May 30, 2014
#6
+1314
0
I want to solve it, with 4.005rad to get the answer 5.166 rad. But my answer was wrong. I dont undestand your meanings, of putting the 5.266 into the question. How did you get it. Is that the second step of the formula or just proof?
Stu May 31, 2014
#7
+1314
0
I found my error in the 1/x when i made it lnx. I have reworked it to make -(1/x) = derivative: 1/x^2 which should be right. I hope. Is that the only error I made?
Stu May 31, 2014
#8
+1314
0
I tried again from the start. I stil lcan't get it and don't know what i need to do different/ what im doing wrong. I took the derivative/product rule or all value. What's missing?
Stu May 31, 2014
#9
+26626
+5
1. The point of putting 5.166 radians into the function was to show that the result was not zero. In other words, 5.166 is not a root of the function (a "root" is a value that makes the function equal to zero).
2. The Newton-Raphson method requires iteration. That is, you take the result that appears from your initial guess for x and you put it back into the formula to get another guess. You then take the result of that and keep repeating the process until the output x is the same as the input x. You have then converged on a solution. Perhaps the image below might help.
You can see that 5.166 is just the result of the first iteration when you use 4.005 as an initial guess.
You have used 4.005° inside the sin and cos functions, but you must have 4.005 radians. Also you should have a 4.005 multiplying the sin term in the denominator (for the first iteration only, of course. It should be the latest estimate of x for the other iterations).
Alan May 31, 2014
#10
+1314
0
I believe m last answer was right in radians, though i forgot it initially and had that in degrees. If it is right what does the comment mean, "Also you should have a 4.005 multiplying the sin term in the denominator (for the first iteration only, of course." So was my answer not the correct form?
Stu Jun 1, 2014
#11
+1314
0
I tried again this morning - I got the answer 4.6877..... But it is wrong. Where is my mistake this time. That was in radians on wolfram alpha.
Stu Jun 1, 2014
#12
+85726
+5
Let me see if I can help you with this...here's what Alan is saying...
Let's "guess" that the "zero" of the function is "4.005"
The "method," in simple terms, is just this:
(Our guess) - (Our Guess put into the function)/ (Our guess put into the derivative) = (The next "guess")
So we have
(4.005)- [4.005cos(4.005) - (1/4.005)]/[ cos(4.005)-(4.005)sin(4.005) + (1/(4.005)2] = (The next "guess")
Now.......if the "next guess" happens to be 4.005, we have found the "zero." If not, we put this "next guess" into the "formula" and evaluate that. Notice that, when Alan put 4.005 in, he got back 5.16612. That's not equal to 4.055, so we put 5.16612 in the mill and it cranks out 4.76159. Still no "match" from the previous "guess".....
So he puts 4.76159 into the mill and gets back 4.75661...still no match.....then, he puts 4.75661 into the "formula" and gets back 4.7566....AHA!!......we're on to something!! Notice that this "new" answer is really close to the last one!!
Just to be sure, he puts this "new" answer ("guess") into the formula once more and gets out (wait for it)........4.7566.....note this matches the last "answer," so we've found the "zero.'
See how that works??
(Sometimes - particularly if I have a graph of the function - I find it easier just to evaluate the function at various values just to see if I can find something "close" to zero. This is sometimes less cumbersome than Newton's Method - if you can make some good "guesses.")
Hope i've been of some help!!
CPhill Jun 1, 2014
#13
+85726
0
Stu...your answer is "wrong" because 4.005 ISN'T a zero....look at my previous post...it's a pretty detailed explanation of the proceedure - in simple terms....
CPhill Jun 1, 2014
#14
+1314
0
we are asked to find the 1st solution only with the guess 4.005rad and the answer to get is 5.166rad. Although my maths isn't getting that answer i believe it is going wrong somewhere in solving the formula. The anser is 5.166 because i got this once. But im not getting it every time. I do not need to solve more degrees of accuracy to get the 5.166 radian answer. It is the answer for the x0 + 1= 1
As seen in alans answer his first solution is 5.166 although mine is not this answer. I am trying only to find where is my mistake, or have the way to input it, and way the equation final step should look, / identify my mistake to get the right answer.)
Stu Jun 1, 2014
#15
+1314
0
in radian my answer is 4.54 regularly. There is something i have missed.
Stu Jun 1, 2014
#16
+26626
0
Look very carefully at your denominator Stu. here is a copy of your last posted attempt:
The sin term should be multiplied by 4.005. You have added 4.005 to it.
Alan Jun 1, 2014
#17
+1314
0
Aha, spot on. That would be what Im looking for. Thanks.
It was mentioned earlier but i didn't understand why or see why. I get it now, a part of the product rule, and totally overlooked. Cheers.
Stu Jun 1, 2014
### 11 Online Users
We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners. See details | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8439614176750183, "perplexity": 1745.4837016844865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945724.44/warc/CC-MAIN-20180423031429-20180423051429-00459.warc.gz"} |
https://online.stat.psu.edu/stat414/book/export/html/640 | # 1.5 - Summarizing Quantitative Data Graphically
1.5 - Summarizing Quantitative Data Graphically
## Example 1-2
As discussed previously, how we summarize a set of data depends on the type of data. Let's take a look at an example. A sample of 40 female statistics students were asked how many times they cried in the previous month. Their replies were as follows:
9 5 3 2 6 3 2 2 3 4 2 8 4 4 5 0 3 0 2 4 2 1 1 2 2 1 3 0 2 1 3 0 0 2 2 3 4 1 1 5
That is, one student reported having cried nine times in the one month, while five students reported having cried not at all. It's pretty hard to draw too many conclusions about the frequency of crying for females statistics students without summarizing the data in some way.
Of course, a common way of summarizing such discrete data is by way of a histogram.
Here's what a frequency histogram of these data look like:
As you can see, a histogram gives a nice picture of the "distribution" of the data. And, in many ways, it's pretty self-explanatory. What are the notable features of the data? Well, the picture tells us:
• The most common number of times that the women cried in the month was two (called the "mode").
• The numbers ranged from 0 to 9 (that is, the "range" of the data is 9).
• A majority of women (22 out of 40) cried two or fewer times, but a few cried as much as six or more times.
Can you think of anything else that the frequency histogram tells us? If we took another sample of 40 female students, would a frequency histogram of the new data look the same as the one above? No, of course not — that's what variability is all about.
Can you create a series of steps that a person would have to take in order to make a frequency histogram such as the one above? Does the following set of steps seem reasonable?
## To create a frequency histogram of (finite) discrete data
1. Determine the number, $$n$$, in the sample.
2. Determine the frequency, $$f_i$$, of each outcome $$i$$.
3. Center a rectangle with base of length 1 at each observed outcome $$i$$ and make the height of the rectangle equal to the frequency.
For our crying (out loud) data, we would first tally the frequency of each outcome:
# OF TIMES Tally Frequency $$f_i$$
0 ||||| 5
1 ||||| | 6
2 ||||| ||||| | 11
3 ||||| || 7
4 ||||| 5
5 ||| 3
6 | 1
7 0
8 | 1
9 | 1
$$n$$=40
and then we'd use the first column for the horizontal-axis and the third column for the vertical-axis to draw our frequency histogram:
Well, of course, in practice, we'll not need to create histograms by hand. Instead, we'll just let statistical software (such as Minitab) create histograms for us.
Okay, so let's use the above frequency histogram to answer a few more questions:
• What percentage of the surveyed women reported not crying at all in the month?
• What percentage of the surveyed women reported crying two times in the month? and three times?
Clearly, the frequency histogram is not a 100%-user friendly. To answer these types of questions, it would be better to use a relative frequency histogram:
Now, the answers to the questions are a little more obvious — about 12% reported not crying at all; about 28% reported crying two times; and about 18% reported crying three times.
## To create a relative frequency histogram of (finite) discrete data
1. Determine the number, $$n$$, in the sample.
2. Determine the frequency, $$f_i$$, of each outcome $$i$$.
3. Calculate the relative frequency (proportion) of each outcome $$i$$ by dividing the frequency of outcome $$i$$ by the total number in the sample $$n$$ — that is, calculate $$\frac{f_i}{n}$$ for each outcome $$i$$.
4. Center a rectangle with base of length 1 at each observed outcome i and make the height of the rectangle equal to the relative frequency.
While using a relative frequency histogram to summarize discrete data is a worthwhile pursuit in and of itself, my primary motive here in addressing such histograms is to motivate the material of the course. In our example, if we
• let X = the number of times (days) a randomly selected student cried in the last month, and
• let x = 0, 1, 2, ..., 31 be the possible values
Then $$h_0=\frac{f_0}{n}$$ is the relative frequency (or proportion) of students, in a sample of size $$n$$, crying $$x_0$$ times. You can imagine that for really small samples $$\frac{f_0}{n}$$ is quite unstable (think $$n = 5$$, for example). However, as the sample size $$n$$ increases, $$\frac{f_0}{n}$$ tends to stabilize and approach some limiting probability $$p_0=f(x_0)$$ (think $$n = 1000$$, for example). You can think of the relative frequency histogram serving as a sample estimate of the true probabilities of the population.
It is this $$f(x_0)$$, called a (discrete) probability mass function, that will be the focus of our attention in Section 2 of this course.
## Example 1-3
Let's take a look at another example. The following numbers are the measured nose lengths (in millimeters) of 60 students:
38 50 38 40 35 52 45 50 40 32 40 47 70 55 51 43 40 45 45 55 37 50 45 45 55 50 45 35 52 32 45 50 40 40 50 41 41 40 40 46 45 40 43 45 42 45 45 48 45 45 35 45 45 40 45 40 40 45 35 52
How would we create a histogram for these data? The numbers look discrete, but they are technically continuous. The measuring tools, which consisted of a piece of string and a ruler, were the limiting factors in getting more refined measurements. Do you also notice that, in most cases, nose lengths come in five-millimeter increments... 35, 40, 45, 55...? Of course not, silly me... that's, again, just measurement error. In any case, if we attempted to use the guidelines for creating a histogram for discrete data, we'd soon find that the large number of disparate outcomes would prevent us from creating a meaningful summary of the data. Let's instead follow these guidelines:
## To create a histogram of continuous data (or discrete data with many possible outcomes)
The major difference is that you first have to group the data into a set of classes, typically of equal length. There are many, many sets of rules for defining the classes. For our purposes, we'll just rely on our common sense — having too few classes is as bad as having too many.
1. Determine the number, $$n$$, in the sample.
2. Define $$k$$ class intervals $$(c_0, c_1], (c_1, c_2], ..., (c_{k-1}, c_k]$$.
3. Determine the frequency, $$f_i$$, of each class $$i$$.
4. Calculate the relative frequency (proportion) of each class by dividing the class frequency by the total number in the sample — that is, $$\frac{f_i}{n}$$.
5. For a frequency histogram: draw a rectangle for each class with the class interval as the base and the height equal to the frequency of the class.
6. For a relative frequency histogram: draw a rectangle for each class with the class interval as the base and the height equal to the relative frequency of the class.
7. For a density histogram: draw a rectangle for each class with the class interval as the base and the height equal to $$h(x)=\dfrac{f_i}{n(c_i-c_{i-1})}$$ for $$c_{i-1}<x \leq c_i$$, $$i = 1, 2,..., k$$.
Here's what the work would like for our nose length example if we used 5 mm classes centered at 30, 35, ... 70:
Class Interval Tally Frequency Relative Frequency Density Height
27.5-32.5 || 2 0.033 0.0066
32.5-37.5 ||||| 5 0.083 0.0166
37.5-42.5 ||||| ||||| ||||| || 17 0.283 0.0566
42.5-47.5 ||||| ||||| ||||| ||||| | 21 0.350 0.0700
47.5-52.5 ||||| ||||| | 11 0.183 0.0366
52.5-57.5 ||| 3 0.183 0.0366
57.5-62.5 0 0 0
62.5-67.5 0 0 0
67.5-72.5 | 1 0.017 0.0034
60 0.999 (rounding)
For example, the relative frequency for the first class (27.5 to 32.5) is 2/60 or 0.033, whereas the height of the rectangle for the first class in a density histogram is 0.033/5 or 0.0066. Here is what the density histogram would like in its entirety:
Note that a density histogram is just a modified relative frequency histogram. That is, a density histogram is defined so that:
• the area of each rectangle equals the relative frequency of the corresponding class, and
• the area of the entire histogram equals 1.
Again, while using a density histogram to summarize continuous data is a worthwhile pursuit in and of itself, my primary motive here in addressing such histograms is to motivate the material of the course. As the sample size $$n$$ increases, we can imagine our density histogram approaching some limiting continuous function $$f(x)$$, say. It is this continuous curve $$f(x)$$ that we will come to know in Section 3 as a (continuous) probability density function.
So, in Section 2, we'll learn about discrete probability mass functions (p.m.f.s). In Section 3, we'll learn about continuous probability density functions (p.d.f.s). In Section 4, we'll learn about p.m.f.s and p.d.f.s for two (random) variables (instead of one). In Section 5, we'll learn how to find the probability distribution for functions of two or more (random) variables. Wow! That's a lot of work. Before we can take it on, however, we will first spend some time in this Section 1 filling up our probability toolbox with some basic probability rules and tools.
[1] Link ↥ Has Tooltip/Popover Toggleable Visibility | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878851056098938, "perplexity": 556.3724429329199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00179.warc.gz"} |
http://www.jmlr.org/papers/v16/helmbold15a.html | ## On the Inductive Bias of Dropout
David P. Helmbold, Philip M. Long; 16(Dec):3403−3454, 2015.
### Abstract
Dropout is a simple but effective technique for learning in neural networks and other settings. A sound theoretical understanding of dropout is needed to determine when dropout should be applied and how to use it most effectively. In this paper we continue the exploration of dropout as a regularizer pioneered by Wager et al. We focus on linear classification where a convex proxy to the misclassification loss (i.e. the logistic loss used in logistic regression) is minimized. We show:
• when the dropout-regularized criterion has a unique minimizer,
• when the dropout- regularization penalty goes to infinity with the weights, and when it remains bounded,
• that the dropout regularization can be non- monotonic as individual weights increase from 0, and
• that the dropout regularization penalty may not be convex.
This last point is particularly surprising because the combination of dropout regularization with any convex loss proxy is always a convex function. In order to contrast dropout regularization with $L_2$ regularization, we formalize the notion of when different random sources of data are more compatible with different regularizers. We then exhibit distributions that are provably more compatible with dropout regularization than $L_2$ regularization, and vice versa. These sources provide additional insight into how the inductive biases of dropout and $L_2$ regularization differ. We provide some similar results for $L_1$ regularization.
[abs][pdf][bib] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9254406094551086, "perplexity": 809.1752226759146}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947705.94/warc/CC-MAIN-20180425061347-20180425081347-00499.warc.gz"} |
https://www.lessonplanet.com/teachers/multiplication-and-division-3-4th | # Multiplication and Division 3
In this division worksheet, 4th graders problem solve 10 division math problems involving 3, 4, 5 and 6 digit whole numbers divided by single digit numbers. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8430879712104797, "perplexity": 1049.0979539777354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427749.61/warc/CC-MAIN-20170727062229-20170727082229-00418.warc.gz"} |
https://raisingthebar.nl/2017/03/20/binary-choice-models/ | 20
Mar 17
## Binary choice models
### Problem statement
Consider the classical problem of why people choose to drive or use public transportation to go to their jobs. For a given individual $i$, we observe the decision variable $y_i$, which takes values either 0 (drive) or 1 (use public transportation), and various variables that impact the decision, like costs, convenience, availability of parking, etc. We denote the independent variables $x_1,...,x_k$ and their values for a given individual $x_i=(x_{1i},...,x_{ki})$. For convenience, the usual expression $\beta_0+\beta_1x_{1i}+...+\beta_kx_{ki}$ that arises on the right-hand side of a multiple regression is called an index and denoted $Index_i$.
We want to study how people's decisions $y_i$ depend on the index $Index_i$.
### Linear probability model
If you are familiar with the multiple regression, the first idea that comes to mind is
(1) $y_i=Index_i+u_i$.
This turns out to be a bad idea because the range of the variable on the left is bounded and the range of the index is not. Whenever there is a discrepancy between the bounded decision variable and the unbounded index, it has to be made up for by the error term. Thus the error term will be certainly bad. (A detailed analysis shows that it will be heteroscedastic but this fact is less important than the problem with range boundedness.)
The next statement helps to understand the right approach. We need the unbiasedness condition from the first approach to stochastic regressors:
(2) $E(u_i|x_i)=0$.
Statement. A combination of equations (1)+(2) is equivalent to just one equation
(3) $P(y_i=1|x_i)=Index_i$.
Proof. Step 1. Since $y_i$ is a Bernoulli variable, by the definition of conditional expectation we have the identity
(4) $E(y_i|x_i)=P(y_i=1|x_i)\times 1+P(y_i=0|x_i)\times 0=P(y_i=1|x_i)$.
Step 2. If (1)+(2) is true, then by (4)
$P(y_i=1|x_i)=E(y_i|x_i)=E(Index_i+u_i|x_i)=Index_i$,
so (3) holds (see Property 7). Conversely, suppose that (3) is true. Let us write
(5) $y_i=P(y_i=1|x_i)+[y_i-P(y_i=1|x_i)]$
and denote $u_i=y_i-P(y_i=1|x_i)$. Then using (4) we see that (2) is satisfied:
$E(u_i|x_i)=E(y_i|x_i)-E[P(y_i=1|x_i)|x_i]=E(y_i|x_i)-P(y_i=1|x_i)=0$
(we use Property 7 again). (5) and (3) give (1). The proof is over.
This little exercise shows that the linear model (1) is the same as (3), which is called a linear probability model. (3) has the same problem as (1): the variable on the left is bounded and the index on the right is not. Note also a conceptual problem unseen before: while the decisions $y_i$ are observed, the probabilities $P(y_i=1|x_i)$ are not. This is why one has to use the maximum likelihood method.
### Binary choice models
Since we know what is a distribution function, we can guess how to correct (3). A distribution function has the same range as the probability at the left of (3), so the right model should look like this:
(6) $P(y_i=1|x_i)=F(Index_i)$
where $F$ is some distribution function. Two choices of $F$ are common
(a) $F$ is a distribution function of the standard normal; in this case (6) is called a probit model.
(b) $F$ is a logistic function; then (6) is called a logit model.
### Measuring marginal effects
For the linear model (1) the marginal effect of variable $x_{ji}$ on $y_i$ is measured by the derivative $\partial y_i/\partial x_{ji}=\beta_j$ and is constant. If we apply the same idea to (6), we see that the marginal effect is not constant:
(7) $\frac{\partial P(y_i=1|x_i)}{\partial x_{ji}}=\frac{\partial F(Index_i)}{\partial Index_i}\frac{\partial Index_i}{\partial x_{ji}}=f(Index_i)\beta_j$
where $f$ is the density of $F$ (we use the distribution function differentiation equation and the chain rule). In statistical software, the value of $f(Index_i)$ is usually reported at the mean value of the index.
For the probit model equation (7) gives $\frac{\partial P(y_i=1|x_i)}{\partial x_{ji}}=\frac{1}{\sqrt{2\pi}}\exp(-\frac{Index_i^2}{2})\beta_j$ and for the logit $\frac{\partial P(y_i=1|x_i)}{\partial x_{ji}}=\frac{e^{-Index_i}}{(1+e^{-Index_i})^2}\beta_j$. There is no need to remember these equations if you know the algebra. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 66, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9717939496040344, "perplexity": 395.9740047675516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518767.60/warc/CC-MAIN-20200403220847-20200404010847-00035.warc.gz"} |
https://physics.stackexchange.com/users/3924/wil3?tab=topactivity | wil3
### Questions (7)
3 Helmholtz decomposition allows incompressible flow with an irrotational component? 2 Random walk with self-transitions taking continuum limit 2 Absorption Spectral Broadening 1 Minimal dynamical system with quasiperiodic oscillations 1 What to do with an extra index in the definition of a tensor?
### Reputation (235)
This user has no recent positive reputation changes
This user has not answered any questions
### Tags (17)
0 fluid-dynamics × 2 0 potential-flow 0 non-linear-systems × 2 0 quasi-periodic 0 notation 0 spectroscopy 0 numerical-method 0 tensor-calculus 0 oscillators 0 vector-fields
### Bookmarks (10)
129 Reading the Feynman lectures in 2012 32 Don't understand the integral over the square of the Dirac delta function 23 Linear response theory for Gross Pitaevskii equation 17 How to estimate the Kolmogorov length scale 9 Uniqueness of Helmholtz decomposition? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.849283754825592, "perplexity": 3616.768695428854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00462.warc.gz"} |
https://www.physicsforums.com/threads/quantum-matrices.55690/ | # Quantum Matrices
1. Dec 6, 2004
### ^_^
I just started working with mathematical physics coming from a straight physics background. The actual work doesn't seem that hard but some of the notation is unfamilar.
The work is "A Short Course on Quantum Matrices" by Mitsuhiro Takeuchi that can be found at:
http://www.msri.org/publications/books/Book43/files/takeuchi.pdf
Definition 1.5 on p 386 presents the first display of my ignorance...what is M2, M4, what is the circle with the cross and what is the Yang Baxter equation trying to tell me?
Basically what the hell is happening on that whole page from Definition 1.5 onwards? Proposition 1.6 is just as forign to me. Don't warry about things from section 2.
Where can I find a list of all the notation that I should have picked up in an undergrad degree in maths but instead I was off doing physics?
2. Dec 7, 2004
### matt grime
$$\otimes$$ is the tensor product.
M_n(k) is the nxn matrices with entries in the field k
Yang Baxter is telling you how to commute some elements in the algebra, though I forget the analogies people make.
q is an indeterminate, it measures how far way from being the ordinary case the quantum case is. Usually, the limit as q tends to 0 is the classical case.
The notation M_n is undergrad. The tensor product probably isn't in most places.
www.dpmms.cam.ac.uk/~wtg10
on the page of mathematical discussions, there is one called "lose your fear of tensor products".
Similar Discussions: Quantum Matrices | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8947951793670654, "perplexity": 1045.6886592240683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104681.22/warc/CC-MAIN-20170818140908-20170818160908-00116.warc.gz"} |
https://papers.nips.cc/paper/2021/hash/c21f4ce780c5c9d774f79841b81fdc6d-Abstract.html | #### Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model
The curse of dimensionality is a widely known issue in reinforcement learning (RL). In the tabular setting where the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are both finite, to obtain a near optimal policy with sampling access to a generative model, the minimax optimal sample complexity scales linearly with $|\mathcal{S}|\times|\mathcal{A}|$, which can be prohibitively large when $\mathcal{S}$ or $\mathcal{A}$ is large. This paper considers a Markov decision process (MDP) that admits a set of state-action features, which can linearly express (or approximate) its probability transition kernel. We show that a model-based approach (resp.$~$Q-learning) provably learns an $\varepsilon$-optimal policy (resp.$~$Q-function) with high probability as soon as the sample size exceeds the order of $\frac{K}{(1-\gamma)^{3}\varepsilon^{2}}$ (resp.$~$$\frac{K}{(1-\gamma)^{4}\varepsilon^{2}}$), up to some logarithmic factor. Here $K$ is the feature dimension and $\gamma\in(0,1)$ is the discount factor of the MDP. Both sample complexity bounds are provably tight, and our result for the model-based approach matches the minimax lower bound. Our results show that for arbitrarily large-scale MDP, both the model-based approach and Q-learning are sample-efficient when $K$ is relatively small, and hence the title of this paper. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9255209565162659, "perplexity": 253.11811618503665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00172.warc.gz"} |
https://www.physicsforums.com/threads/force-and-potential-energy.234054/ | # Force and Potential Energy
1. May 9, 2008
### ubermuchlove
1. The problem statement, all variables and given/known data
The potential energy of a pair of hydrogen atoms separated by a large distance x is given by U(x)= -C6/x^6, where C6 is a positive constant. What is the force that one atom exerts on the other? Is this force attractive or repulsive?
2. Relevant equations
3. The attempt at a solution
I'm thinking we have to take the derivative of the equation??
Last edited: May 9, 2008
2. May 9, 2008
### ubermuchlove
this is what i have...
after taking the derivative of U(x), i got F(x)= -C6/6x^7. And this force is an attractive force.
Im not sure about the last part.
3. May 9, 2008
### jimmyting
Just as comment on your derivative, I think it should actually be F(x)= -6C/x^7 because if you used the multiplication rule, the -6 is in the numerator
4. May 22, 2010
### fearthebob
So you're absolutely right. It is the derivative of the given eqn. thus getting -6C_6/x^7
And considering atoms are neutral and give off no outward magnetism, they apply a really small attractive force on each other [will have to check but i think it's due to gravity, not sure. but it's definitely an attractive force]
5. Mar 5, 2012
### alpacapages
THe derivative is the correct answer but can anyone explain why? This is potential energy which is supposed to be in figures equivalent to that of work vise-vis Force times distance or Newton meters (Joules). I would think that the answer would be to divide -C6/x^6 by the displacement x but that is the wrong answer. WHYY??
6. Oct 11, 2012
### slickersly
Force is equal to the negative slope of the U(x) (potential energy) function. Also, you get attraction when F > 0 and repulsion when F < 0.
7. Jan 17, 2014
### Christina Anna
I am sorry to tell you that the derivative is +6C/x^(7). I just looked it up. You can re-write the equation as -1C*x^(-6),so -6*-1= +6, and x^(-6) becomes x^(-7), so the derivative is +6C*x^(-7) or +6C/x^(7). Here's the website I found this at. It's really great if you have math questions:
http://www.wolframalpha.com/input/?i=derivative+of+-c/x^(6)
Hope this helps :)
8. Jan 17, 2014
### Polaris
They are approaching the problem using
$F(x) = -\frac{dU(x)}{dx}$
that is why they are giving the answer as negative. So, yes, the derivative is positive but the force does carry a negative.
When taking the derivative you are actually talking about infinitesimal displacements, so you can think of this as the force being exerted by the atoms at this distance. Exact.
Lets think of it in more familiar grounds. Standing on Earth always provides the same amount of acceleration, correct? Being this is the case, you always 'feel' the same force begin applied to you. This is because you are always the same distance away, radially. This is the same scenario except in electrostatics situation.
9. Jan 17, 2014
### lightgrav
F∙dx = dWork , and PE = -Work , do F = - dW/dx ... yes, there's a 6 on top, attractive since negative.
Physically: one atom's electron cloud will fluctuate (off-center) temporarily, becoming an electric dipole;
that will induce a dipole to form in the other atom; those two dipoles attract one another.
Gravity is not involved.
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Have something to add?
Draft saved Draft deleted
Similar Discussions: Force and Potential Energy | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9418926239013672, "perplexity": 1229.9196898127368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809392.94/warc/CC-MAIN-20171125032456-20171125052456-00644.warc.gz"} |
https://www.groundai.com/project/measurement-of-the-semileptonic-charge-asymmetry-using-bs0-d_s-mu-x-decays/ | Measurement of the semileptonic charge asymmetry using \bm{B^{0}_{s}\rightarrow D_{s}\mu X} decays
Measurement of the semileptonic charge asymmetry using B0s→DsμX decays
July 6, 2012
Abstract
We present a measurement of the time-integrated flavor-specific semileptonic charge asymmetry in the decays of mesons that have undergone flavor mixing, , using decays, with and , using 10.4 fb of proton-antiproton collisions collected by the D0 detector during Run II at the Fermilab Tevatron Collider. A fit to the difference between the time-integrated and mass distributions of the and candidates yields the flavor-specific asymmetry , which is the most precise measurement and in agreement with the standard model prediction.
pacs:
11.30.Er, 13.20.He, 14.40.Nd
FERMILAB-PUB-12-338-E
The D0 Collaboration111with visitors from Augustana College, Sioux Falls, SD, USA, The University of Liverpool, Liverpool, UK, UPIITA-IPN, Mexico City, Mexico, DESY, Hamburg, Germany, SLAC, Menlo Park, CA, USA, University College London, London, UK, Centro de Investigacion en Computacion - IPN, Mexico City, Mexico, ECFM, Universidad Autonoma de Sinaloa, Culiacán, Mexico and Universidade Estadual Paulista, São Paulo, Brazil.
CP violation has been observed in the decay and mixing of neutral mesons containing strange, charm and bottom quarks. Currently all measurements of CP violation, either in decay, mixing or in the interference between the two, have been consistent with the presence of a single phase in the CKM matrix. An observation of anomalously large CP violation in oscillations can indicate the existence of physics beyond the standard model (SM) smprediction (). Measurements of the like-sign dimuon asymmetry by the D0 Collaboration dimuon1 (); dimuon2 () show evidence of anomalously large CP-violating effects using data corresponding to 9 fb of integrated luminosity. Assuming that this asymmetry originates from mixed neutral mesons, the measured value is , where is the time-integrated flavor-specific semileptonic charge asymmetry in () decays that have undergone flavor mixing and is the fraction of () events. The value of is extracted from this measurement and found to be dimuon2 (). This Letter presents an independent measurement of using the decay , where and (charge conjugate states are assumed in this Letter).
The asymmetry is defined as
assl=Γ(¯B0s→B0s→ℓ+νX)−Γ(B0s→¯B0s→ℓ−¯ν¯X)Γ(¯B0s→B0s→ℓ+νX)+Γ(B0s→¯B0s→ℓ−¯ν¯X), (1)
where in this analysis and . This includes all decay processes of mesons that result in a meson and an oppositely charged muon in the final state. To study CP violation, we identify events with the decay . The flavor of the meson at the time of decay is identified using the charge of the associated muon, and this analysis does not make use of initial-state tagging. The fraction of mixed events integrated over time is extracted using Monte Carlo (MC) simulations. We assume there is no production asymmetry between and mesons, that there is no direct CP violation in the decay of mesons to the indicated states or in the semileptonic decay of mesons, and that any CP violation in mesons only occurs in mixing. We also assume that any direct CP violation in the decay of baryons and charged mesons is negligible. This analysis does not make use of the decay as used in Ref. d0asls () as the expected statistical uncertainty in this channel is 2.5 times worse than the decay .
The value of the SM prediction for smprediction () is negligible compared with current experimental precision. The best direct measurement of was performed by the D0 Collaboration using data corresponding to 5 fb of integrated luminosity, giving d0asls (). This Letter presents a new and improved measurement of using the full Tevatron data sample with an integrated luminosity of 10.4 fb.
The measurement is performed using the raw asymmetry
A=Nμ+D−s−Nμ−D+sNμ+D−s+Nμ−D+s, (2)
where () is the number of reconstructed () decays. The time-integrated flavor-specific semileptonic charge asymmetry in decays which have undergone flavor mixing, , is then given by
assl⋅FoscB0s=A−Aμ−Atrack−AKK, (3)
where is the reconstruction asymmetry between positive and negatively charged muons in the detector d0det (), is the asymmetry between positive and negative tracks, is the residual kaon asymmetry from the decay of the meson, and is the fraction of decays that originate from the decay of a meson after a oscillation. The factor corrects the measured asymmetry for the fraction of events in which the meson is mixed under the assumptions outlined earlier that no other physics asymmetries are present in the other -hadron backgrounds. While the data selection, fitting models, , , and were studied, the value of the raw asymmetry was offset by an unknown arbitrary value and any distribution that gave an indication of the value of the asymmetry was not examined.
The D0 detector has a central tracking system, consisting of a silicon microstrip tracker (SMT) and a central fiber tracker (CFT), both located within a 2 T superconducting solenoidal magnet d0det (); layer0 (). An outer muon system, at eta (), consists of a layer of tracking detectors and scintillation trigger counters in front of 1.8 T toroidal magnets, followed by two similar layers after the toroids run2muon ().
The data are collected with a suite of single and dimuon triggers. The selection and reconstruction of decays requires tracks with at least two hits in both the CFT and SMT. Muons are required to have hits in at least two layers of the muon system, with segments reconstructed both inside and outside the toroid. The muon track segment has to be matched to a particle found in the central tracking system which has momentum GeV/ and transverse momentum GeV/.
The ; decay is reconstructed as follows. The two particles from the decay are assumed to be kaons and are required to have GeV/, opposite charge and a mass GeV/. The charge of the third particle, assumed to be the charged pion, has to be opposite to that of the muon with GeV/. The three tracks are combined to create a common decay vertex using the algorithm described in Ref. vertex (). To reduce combinatorial background, the vertex is required to have a displacement from the interaction vertex (PV) in the transverse plane with a significance of at least four standard deviations. The cosine of the angle between the momentum and the vector from the PV to the decay vertex is required to be greater than 0.9. The trajectories of the muon and candidates are required to be consistent with originating from a common vertex (assumed to be the decay vertex) and to have an effective mass of GeV, consistent with coming from a semileptonic decay. The cosine of the angle between the combined direction, an approximation of the direction in the direction from the PV to the decay vertex has to be greater than 0.95. The decay vertex has to be displaced from the PV in the transverse plane with a significance of at least four standard deviations. These angular criteria ensure that the and momenta are correlated with that of their parent and that the is not mistakenly associated with a random muon. If more than one candidate passes the selection criteria in an event, then all candidates are included in the final sample.
To improve the significance of the selection we use a likelihood ratio taken from Refs. d0bsmix (); like_ratio (). It combines several discriminating variables: the helicity angle between the and momenta in the center-of-mass frame of the meson; the isolation of the system, defined as , where is the sum of the momenta of the three tracks that make up the meson and is the sum of momenta for all tracks not associated with the in a cone of around the direction eta (); the of the vertex fit; the invariant masses , ; and .
The final requirement on the likelihood ratio variable, , is chosen to maximize the predicted ratio in a data subsample corresponding to 20% of the full data sample, where is the number of signal events and is the number of background events determined from signal and sideband regions of the distributions.
The distribution is analysed in bins of 6 MeV, over a mass range of GeV. The number of events is extracted by fitting the data to a model using a fit. The meson mass distribution is well modelled by two Gaussian functions constrained to have the same mean, but with different widths and relative normalizations. A second peak in the distribution corresponding to the Cabibbo-suppressed decay of the meson is also similarly modelled by two Gaussian functions, and the combinatoric background by a third-order polynomial function. The number of signal decays determined from the fit is , where the uncertainty is statistical.
The polarities of the toroidal and solenoidal magnetic fields are reversed on average every two weeks so that the four solenoid-toroid polarity combinations are exposed to approximately the same integrated luminosity. This allows for a cancellation of first-order effects related to instrumental asymmetries. To ensure full cancellation, the events are weighted according to the number of decays for each data sample corresponding to a different configuration of the magnets’ polarities. The data are then fitted to obtain the number of weighted events, . This is shown in Fig. 1, where the weighted invariant mass distributions in data is compared to the signal and background fit.
The raw asymmetry (Eq. 2) is extracted by fitting the distribution of the candidates using a minimization. The fit is performed simultaneously, using the same models, on the sum (Fig. 1) and the difference (Fig. 2) of the distribution associated with a positively charged muon and distribution associated with a negatively charged muon. The functions used to model the two distributions are
Wsum= Wsig(Ds)+Wsig(D)+Wbgsum, (4) Wdiff= AWsig(Ds)+ADWsig(D)+AbgWbgsum, (5)
where and describe the , mass peaks, and the combinatorial background, respectively. The asymmetry of the mass peak is , and is the asymmetry of the combinatorial background. The result of the fit is shown in Fig. 2 with fitted asymmetry parameters , , and .
The of the fit model with respect to the difference histogram is degrees of freedom over the whole mass range and for 25 bins in the mass range GeV, which corresponds to a -value of . The value of the extracted raw asymmetry, , is checked by calculating the difference between the number of and events in the mass range GeV without using a fit. In this region we observe an asymmetry of which is consistent with the value of extracted by the fitting procedure.
To test the sensitivity of the fitting procedure, the charge of the muon is randomised to introduce an asymmetry signal. We use a range of raw signals from to in steps with 1000 trials performed for each step, and the result of these pseudo-experiments, each with the same statistics as the measurement, is found. In each case, the central value of the asymmetry distribution is consistent with the input value with a fitted width of and no observable bias. The uncertainty found in data agrees with this expected statistical sensitivity.
Systematic uncertainties in the fitting method are evaluated by making reasonable variations to the fitting procedure. The mass range of the fit is shifted from GeV to GeV. The functions modelling the signal, , are modified so that the and peaks are fitted by single Gaussian functions. The background function, , is varied from a second-order polynomial function to a fifth-order polynomial function, and the asymmetry is extracted. Instead of setting the background of to , the background is either set to zero, a constant, or a polynomial function of up to degree three. The width of the mass bins is varied between 2 and 12 MeV. Instead of using the fitted number of decays per magnet polarity to weight the events, the total number of candidates in the mass range GeV/ is used. The systematic uncertainty is assigned to be half of the maximal variation in the asymmetry for each of these sources, added in quadrature. The total effect of all of these systematic sources of uncertainty is a systematic uncertainty of on the raw asymmetry , giving
A=[−0.40±0.33(stat.)±0.05(syst.)]%. (6)
To extract from the raw asymmetry, corrections to the charge asymmetries in the reconstruction have to be made. These corrections are described in detail in Ref. d0adsl (). The residual detector tracking asymmetry, , has been studied in Ref. dimuon1 () and by using and decays. No significant residual track reconstruction asymmetries are found and no correction for tracking asymmetries need to be applied. The tracking asymmetry of charged pions has been studied using MC simulations of the detector. The asymmetry is found to be less than , which is assigned as a systematic uncertainty. The muon and the pion have opposite charge, so any remaining track asymmetries will cancel to first order.
Any asymmetry between the reconstruction of and mesons cancels as we require that the two kaons form a meson. However, there is a small residual asymmetry in the momentum of the kaons produced by the decay of the meson due to - interference bellePhi (). The kaon asymmetry is measured using the decay d0adsl () and is used to determine the residual asymmetry due to this interference, .
The residual reconstruction asymmetry of the muon system, , has been measured using decays as described in dimuon1 (); dimuon2 (); d0adsl (). This asymmetry is determined as a function of and of the muons, and the correction is obtained by a weighted average over the normalized yields, as determined from fits to the distribution. The resulting correction is and the combined corrections are , including the statistical uncertainties combined in quadrature.
The remaining variable required is (Eq. 3), which is the only correction extracted from a MC simulation. The signal decays can also be produced via the decay of mesons, mesons, and from prompt production. The () mesons can oscillate to () states before decaying. We split these MC samples into mixed and unmixed decays. This classification is inclusive and includes most intermediate excited states of both and meson decays.
The MC sample is created using the pythia event generator pythia () modified to use evtgen evtgen () for the decay of hadrons containing and quarks. Events recorded in random beam crossings are overlaid over the simulated events to quantify the effect of additional collisions in the same or nearby bunch crossings. The pythia inclusive jet production model is used and events are selected that contain at least one muon and a ; decay. The generated events are processed by the full simulation chain, and then by the same reconstruction and selection algorithms as used to select events from data. Each event is classified based on the decay chain that is matched to the reconstructed particles.
The mean proper decay lengths of the -hadrons are fixed in the simulation to values close to the current world-average values hfag (). To correct for these differences, a correction is applied to all non-prompt events in simulation, based on the generated lifetime of the candidate, to give the appropriate world-average meson lifetimes and measured value of the width difference lhcbDeltaGamma ().
To estimate the effects of trigger selection and track reconstruction, we weight each event as a function of of the reconstructed muon so that it matches the distribution in the data, and as a function of the lifetime to ensure that the -meson lifetimes and match the world-average hfag ().
In the case of the meson, the time-integrated oscillation probability is essentially 50% and is insensitive to the exact value of . Combining the fraction of decays in the sample and the time-integrated oscillation probability, we find .
To determine the systematic uncertainty on , the branching ratios and production fractions of mesons are varied by their uncertainties. We also vary the -meson lifetimes and and use a coarser binning in the event weighting. The total resulting systematic uncertainty on is determined to be that includes the statistical uncertainty from the MC simulation. An asymmetry of decays of 1% would contribute to the total asymmetry, which is negligible compared to the statistical uncertainties and therefore neglected.
The uncertainty due to the fitting procedure () and the asymmetry corrections () are added in quadrature and scaled by the dilution factor, . The effect of the uncertainty on the dilution factor is then added in quadrature, giving a total systematic uncertainty of .
The resulting time-integrated flavor-specific semileptonic charge asymmetry is found to be
assl=[−1.12±0.74(% stat)±0.17(syst)]%, (7)
superseding the previous measurement of by the D0 Collaboration d0asls (); comment () and in agreement with the SM prediction. This result can be combined with the two measurements that depend on the impact parameter of the muons (IP) dimuon2 () and the average of measurements from the factories, hfag (), (Fig. 3). As a result of this combination we obtain and with a correlation of , which is a significant improvement on the precision of the measurement of and obtained in Ref. dimuon2 (). These results have a probability of agreement with the SM of , which corresponds to a 3.0 standard deviations from the SM prediction.
In summary, we have presented the most precise measurement to date of the time-integrated flavor-specific semileptonic charge asymmetry, , which is in agreement with the standard model prediction and the D0 like-sign dimuon result dimuon2 ().
We thank the staffs at Fermilab and collaborating institutions, and acknowledge support from the DOE and NSF (USA); CEA and CNRS/IN2P3 (France); MON, NRC KI and RFBR (Russia); CNPq, FAPERJ, FAPESP and FUNDUNESP (Brazil); DAE and DST (India); Colciencias (Colombia); CONACyT (Mexico); NRF (Korea); FOM (The Netherlands); STFC and the Royal Society (United Kingdom); MSMT and GACR (Czech Republic); BMBF and DFG (Germany); SFI (Ireland); The Swedish Research Council (Sweden); and CAS and CNSF (China).
References
• (1) A. Lenz and U. Nierste, arXiv:1102.4274; A. Lenz and U. Nierste, J. High Energy Phys. 06, 072 (2007).
• (2) V. M. Abazov et al. (D0 Collaboration), Phys. Rev. D 82, 032001 (2010); V. M. Abazov et al. (D0 Collaboration), Phys. Rev. Lett. 105, 081801 (2010).
• (3) V. M. Abazov et al. (D0 Collaboration), Phys. Rev. D 84, 052007 (2011).
• (4) V. M. Abazov et al. (D0 Collaboration), Phys. Rev. D 82, 012003 (2010).
• (5) V.M. Abazov et al. (D0 Collaboration), Nucl. Instrum. Methods Phys. Res. A 565, 463 (2006).
• (6) R. Angstadt et al. (D0 Collaboration), Nucl. Instrum. Meth. A 622, 278 (2010).
• (7) is the pseudorapidity and is the polar angle between the track momentum and the proton beam direction. is the azimuthal angle of the track.
• (8) V.M. Abazov et al. (D0 Collaboration), Nucl. Instrum. Meth. A 552, 372 (2005).
• (9) J. Abdallah et al. (DELPHI Collaboration), Eur. Phys. J. C 32, 185 (2004).
• (10) V. M. Abazov et al. (D0 Collaboration), Phys. Rev. Lett. 97, 021802 (2006).
• (11) G. Borisov, Nucl. Instrum. Methods Phys. Res. A 417, 384 (1998).
• (12) V. M. Abazov et al. (D0 Collaboration), arXiv:1208.5813, submitted to Phys. Rev. D.
• (13) M. Starič et al. (Belle Collaboration), Phys. Rev. Lett. 108, 071801 (2012).
• (14) T. Sjöstrand, S. Mrenna and P. Z. Skands, J. High Energy Phys. 05, 026 (2006).
• (15) D.G. Lange, Nucl. Instrum. Methods in Phys. Res. A 462, 152 (2001); for details see http://www.slac.stanford.edu/~lange/EvtGen.
• (16) D. Asner et al., Heavy Flavor Averaging Group (HFAG), arXiv:1010.1589; making use of the 2012 update: http://www.slac.stanford.edu/xorg/hfag/osc/PDG_2012/
• (17) R. Aaij et al., (LHCb Collaboration), arXiv:1202.4717; R. Aaij et al., (LHCb Collaboration), Phys. Rev. Lett. 108, 101803 (2012).
• (18) The analysis presented in this Letter has the same statistical uncertainty as the analysis presented in Ref. d0asls () when performed on the same data sample.
You are adding the first comment!
How to quickly get a good reply:
• Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.
• Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.
• Your comment should inspire ideas to flow and help the author improves the paper.
The better we are at sharing our knowledge with each other, the faster we move forward.
The feedback must be of minimum 40 characters and the title a minimum of 5 characters | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9529535174369812, "perplexity": 1361.5097130096103}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00557.warc.gz"} |
https://tex.stackexchange.com/questions/436971/how-to-draw-an-auslander-reiten-quiver | # How to draw an Auslander-Reiten quiver?
Please, help me in typesetting the following diagram, I'm not an expert in drawing with LaTeX.
I have used the advice of the excellent user @egreg. You can change the tip of the vectors by looking at this link: Is it possible to change the size of an arrowhead in TikZ/PGF?. I believe that the diagram that uses the arrows in that way is from the package xymatrix.
\documentclass{article}
\usepackage{newtxtext,newtxmath}
\usepackage{tikz-cd,amssymb}
\begin{document}
\begin{tikzcd}[column sep=huge,row sep=small]
1 \arrow[rd, "\alpha"] \\ & 3 & 4 \arrow[l, "\gamma"'] \\
2 \arrow[ru, "\beta"'] \end{tikzcd}
\end{document}
\documentclass{article}
\usepackage{newtxtext,newtxmath}
\usepackage{tikz-cd,amssymb}
\newcommand{\pmat}[4]{\begin{pmatrix} #1 & #2 \\ #3 & #4\end{pmatrix}}
\begin{document}
\begin{tikzcd}[column sep=huge,row sep=small]
1 \arrow[rd, "\pmat{-1}{5}{5}{1}"] \\ & 3 & 4 \arrow[l, "\gamma"'] \\
2 \arrow[ru, "\beta"'] \end{tikzcd}
\end{document}
SECOND ADDENDUM From a suggestion of @Bernard's comment I have added psmallmatrix environment (from mathtools).
\documentclass{article}
\usepackage{newtxtext,newtxmath,mathtools}
\usepackage{tikz-cd,amssymb}
\newcommand{\smat}[4]{\begin{psmallmatrix} #1 & #2 \\ #3 & #4\end{psmallmatrix}}
\begin{document}
\begin{tikzcd}[column sep=huge,row sep=small]
1 \arrow[rd, near start, "\smat{-1}{5}{5}{1}"] \\ & 3 & 4 \arrow[l, "\gamma"'] \\
2 \arrow[ru, "\beta"'] \end{tikzcd}
\end{document}
• it was wonderful. Thank you for your help @egreg – tuce Jun 18 '18 at 21:28
• how can I write matrices instead of alpha, gamma, beta on arrows @egreg , – tuce Jun 18 '18 at 21:36
• how can I write matrices instead of alfa beta gamma on arrows @sebastiano – tuce Jun 18 '18 at 21:38
• on the same diagram you could write on the arrows as a matrix. – tuce Jun 18 '18 at 21:49
• no problem @sebastion you are perfect :) – tuce Jun 18 '18 at 22:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999752938747406, "perplexity": 2527.6896566696646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00155.warc.gz"} |
https://www.prepscholar.com/gmat/blog/gmat-fractions-decimals/ | # GMAT Fractions and Decimals: Everything You Need to Know
After integers, fractions and decimals are usually the next most frequently tested concepts in the GMAT Quant section. The good news is, the math itself is fairly simple: you’ve likely learned all the rules you need to know about working with fractions and decimals in middle and early high school math. The bad news is that these rules and properties have probably been gathering dust in some unvisited corner of your brain—and even if they haven’t, you’re going to have to apply them in new ways on the GMAT.
Never fear! In this post, we’ll tell you everything you need to know about fractions and decimals for the GMAT. We’ll give you a refresher on all the relevant rules and formulas, tips and tricks for every question you’ll see on them on the GMAT, and some example questions with thorough explanations so you can see these strategies in action.
## GMAT Fractions: Rules to Know
Below are all the rules that you need to know about fractions for the GMAT.
### Definition of a Fraction
A fraction is a visual representation of a number divided by another number. The top number of a fraction is called the numerator, and it’s the number being divided. The bottom number is called the denominator, and it’s the number that the top number is divided by.
In the fraction \$n/d\$, \$n\$ is the numerator (the top number) and \$d\$ is the denominator (the bottom number). The fraction 1/2, for example, is 1 divided by 2 or one-half.
0 can’t be the denominator in a fraction, because dividing by 0 is undefined.
Two fractions are equivalent when they represent the same number. For example: 2/8 and 4/16 are equivalent, because they both equal 0.25.
When both the numerator and the denominator can be be divided evenly by the same number, the fraction can be simplified into its lowest terms (the smallest equivalent fraction). The largest number that both the numerator and the denominator can be divided by is called the greatest common factor (GCF) or greatest common divisor (GCD). Dividing both by the GCD simplifies the fraction into its lowest form.
For example, 2 is the greatest common factor of both 2 and 8. For the fraction 2/8, when you divide the numerator and the denominator by 2, you get 1/4—the lowest or most simplified form of the fraction. When dealing with fractions in equations, you almost always want them to be in their simplest forms, so that they’re easier to do calculations with.
We've created the world's leading online GMAT prep program that adapts to your strengths and weaknesses. Not sure what to study? Confused by how to improve your score? PrepScholar GMAT guides you step by step guidance and support.
To be clear - you don't NEED a prep program to get a great GMAT score. But we believe PrepScholar is the best GMAT prep program available right now, especially if you find it hard to organize your study schedule and don't know what to study.
### Multiplying and Dividing With Fractions
Multiplying with fractions is easy: you just multiply the numerators and multiply the denominators.
For example:
\$7/10 × 4/9 = 28/90\$ or \$14/45\$
To divide with fractions, “flip” the fraction after the division sign (called the divisor) so that the denominator becomes the numerator and vice versa, and then multiply with that number.
Example:
\$\${7/10} ÷ {4/9} = 7/10 × 9/4 = 63/40\$\$
This “flipped” version of a fraction is called its reciprocal or inversion. The reciprocal or inversion of any fraction \$n\$/\$d\$ is \$d\$/\$n\$ (where \$n\$ and \$d\$ ≠ 0).
### Adding and Subtracting With Fractions
Two fractions with the same denominator can be added or subtracted easily. You simply add or subtract the numerators, and leave the denominators the same.
\$\$3/8 – 2/8 = 1/8\$\$
\$\$5/9 – 1/9 = 4/9\$\$
If you need to add or subtract with fractions that don’t have the same denominator, then you can do the opposite of simplifying and express them as equivalent fractions with the same denominator. As long as you multiply or divide the numerator and the denominator of a fraction by the same number, it will remain equivalent:
\$\$3/8 × 9/9 = 27/72\$\$
\$\$2/8 = 27/72\$\$
This gives us an always true rule, which is helpful in algebraic expressions:
\$\${x + y}/z = x/z + y/z\$\$
When adding or subtracting fractions with different denominators, multiplying the fractions so that the denominators represent the least common multiple (the lowest number that both denominators factor into) is usually the simplest way to go and makes doing calculations easier than working with larger numbers.
Example:
\$\$1/3 + 3/4\$\$
\$\$LCM = 12\$\$
\$\$1/3 × 4/4 = 4/12\$\$
\$\$3/4 × 3/3 = 9/12\$\$
\$\$1/3 + 3/4 = 4/12 + 9/12 = 13/12=1 1/12\$\$
By multiplying 3 and 4, we see that the LCM is 12. We then convert both fractions so that they both have a denominator of 12. Then it’s easy to add them together!
### Mixed Numbers
A number made up of a whole number and a fraction (like 1 and 1/12 above) is called a mixed number. To change a mixed number into a fraction, multiply the whole number by the denominator and then add the result to the numerator. This then becomes the new numerator.
\$\$6 4/9 = {(6 × 9)+ 4}/9 = {54 + 4}/9 = 58/9\$\$
## GMAT Decimals: Rules to Know
Below are all the rules you need to know about decimals for the GMAT.
### Definition of a Decimal
Decimals and fractions are both ways of representing number values in between integers or whole numbers.
Want to improve your GMAT score by 60 points? We have the industry's leading GMAT prep program. Built by Harvard and Wharton graduates and GMAT 99th percentile scorers, the program learns your strengths and weaknesses through advanced statistics, then customizes your prep program to you so you get the most effective prep possible.
Check out our 5-day free trial today:
In the decimal system, the distance from the decimal point represents the place value of each number. For example, the number 412.735, has 4 in the “hundreds” place, 1 in the “tens” place, and 2 in the “ones” or “units” place; and then after the decimal, 7 in the “tenths” place, 2 in the “hundredths” place, and then five in the “thousandths” place. Here’s a table illustrating this information:
4 1 2 . 7 3 5 Hundreds place Tens place Units place [decimal] Tenths place Hundredths place Thousandths place
### The Zero Rule
After you pass the decimal point, you can add an infinite number of zeros to the end of a number:
\$\$1.435 = 1.4350 = 1.4350000000000000000 = 1.43500000000000000000000000000000000000000\$\$
This rule only applies to after the end of the number after the decimal point:
\$\$1435 ≠ 14350\$\$
\$\$1435 = 1435.0 = 1435.0000000000000000000000000\$\$
### Adding and Subtracting With Decimals
To add or subtract two decimals, the decimal places of each need to line up. You can use the zeros rule above if one number has fewer digits to the after decimal place than the other:
7.872 + 6.30285 =
7.87200
+ 6.30285
= 14.17485
### Multiplying and Dividing With Decimals
When multiplying decimals, do not line up the decimal point: the decimal gets inserted afterward. Instead, multiply the two numbers as if they were whole numbers. Once you have the product, it’s time to put the decimal back in.
But how do you figure out where the decimal place goes? The rule is that you add up the amount of numbers after the decimal of each number you multiplied, and that sum is the number of decimal places that should be in the product:
1.56 (two numbers after the decimal)
× 2.3 (one number after the decimal)
= 3.588 (three decimals—sum of one and two above)
To divide any number (a dividend) by a decimal (the divisor) using long division, move the decimal point of the divisor to the right however many places it takes to get to a whole number, and then move the decimal point in the dividend over by that many places as well. If there’s still a decimal left in the dividend after this, make sure you place it directly above the dividend in the answer.
Finally, do the division as you normally would. For example,
90.625 ÷ 12.5 becomes 906.25 ÷ 125
Then you do the long division with 906.25 as the dividend and 125 as the divisor, making sure to place the decimal in the answer directly above its place in the dividend.
### Converting Decimals to Fractions
Every decimal can be expressed as a fraction with these steps:
1. Move the decimal point over however many places to the right until it becomes a whole number
2. Use that as the numerator
3. Place in the denominator the power of 10 that corresponds to however many places you moved the decimal over:
\$\$0.5 = 5/10\$\$
\$\$0.05 = 5/100\$\$
0.005 = 5/1000 or 1/200
Another way to think of it is that the number of places you move the decimal to the right to make the numerator a whole number is the number of 0’s you’ll add after 1 in the denominator.
Numbers less than -1 or greater than +1 with decimals can be expressed as fractions using the above rule in combination with the mixed number rule:
\$\$7.5 = 7 5/10\$\$
\$\$= {(7 × 10) + 5}/10\$\$
\$\$= [70 + 5]/10 = 75/10\$\$
And this can be simplified:
\$\$75/10 = 15/2\$\$
\$\$7.5 = 15/2\$\$
### Converting Fractions to Decimals
When you plug in a fraction as a division problem into a calculator, it automatically gives you the decimal equivalent. Unfortunately, we don’t have access to a calculator on the GMAT Quant section, but the manual conversion isn’t too hard.
You can always find the decimal equivalent of a fraction with long division, by using the numerator as the dividend and the denominator as the divisor. But there’s an alternative method that can be handy as well.
First, find a number you can multiply the denominator of the fraction by to make it 10, or 100, or 1000, or any 1 followed by 0s. Next, multiply both numerator and denominator by that number to get its equivalent expression. Finally, write down just the top number, putting the decimal point in the corresponding place: one space from the right hand side for every zero in the bottom number.
Here’s an example using the fraction 3/4:
\$\$3/4 = ?/100\$\$
\$\$4 × 25 = 100\$\$
\$\${3 × 25}/{4 × 25} = 75/100\$\$
\$\$= 0.75\$\$
### Scientific Notation of Decimals
“Moving over” decimal places with powers of 10 is a useful concept. Sometimes, numbers are expressed as the product of a number multiplied by 10 to a certain power. The power represents how many places you need to “move” the decimal point to get to its decimal expression. The sign of the exponent indicates which direction: a positive exponent moves the decimal over to the right, and a negative exponent moves it to the left.
Examples:
\$\$0.0489 = 4.89 × 10^{-2}\$\$
\$\$60235 = 6.0235 × 10^4\$\$
\$\$540 = 5.4 × 10^2\$\$
\$\$29 = 2.9 × 10^1 = 2.9 × 10\$\$
### Terminating and Recurring Decimals
Terminating decimal GMAT questions sound scary if you don’t know what a terminating decimal is, but it’s actually deceptively simple.
All of the decimals in the examples above have an end. They are called terminating decimals because there aren’t an infinite amount of numbers after the decimal point. Any terminating decimal is can be represented as a fraction with a power of ten in the denominator. For example, 0.0462 = 462/10000 = 231/5000.
It’s possible to have an infinite amount of numbers after the decimal point. 1/3 is an example of a recurring decimal, as we can see when we convert it with long division:
\$\$1/3 = 0.333333333… = 0.\ov 3\$\$
The above are equivalent expressions: Both the ellipses and the line above the three indicate that the threes after the decimal point go on forever.
Recurring decimals are tough to work with. Knowing which fractions have infinite decimal expressions, like 1/3 and 1/9, helps significantly in deciding whether to convert it into a decimal or leave it as a fraction in solving problems.
### The Key Rule for Fractions That Are Terminating Decimals
If the prime factorization of the denominator of a fraction has only 2 and/or 5, then it can be written as something over a power of ten, which means its decimal expression terminates.
If the denominator doesn’t have only 2 and/or 5 as factors, then the decimal expression is recurring. Here are some examples:
1/24 is recurring (24 = 23 × 3, so 24 has a prime factor of 3 in addition to 2)
1/25 is terminating (25 = 52)
1/28 is recurring (28 = 22 x 7, so there’s a prime factor of 7 in addition to 2)
1/40 is terminating (40 = 23 x 5)
1/64 is terminating (64 = 26)
Importantly, this rule only applies for fractions in their simplest forms. For example, 9/12 terminates, even though 12 has 3 as a prime factor, because 9/12 is really just 3/4, which is 3/22
One key way to express this rule is that that the denominator must be some value equivalent to 2m5n, where \$m\$ and \$n\$ are integers. So any fraction that can be expressed as \$x\$/2m5n will terminate, and any other fraction won’t.
Note that the number 1 as a denominator satisfies those requirements, as any number to a power of 0 equals 1, and 0 is an integer, so it can be the value of \$m\$ and \$n\$:
1 = 2050
If you need a refresher on what prime factorization is, head to our guide to integer properties for the GMAT, which includes an entire section devoted to explaining prime factorization.
## GMAT Fractions Questions
Below are the key kinds of GMAT fraction questions. Note that fractions as a concept overlap with some of the other types of questions, such as rate questions and average questions. The line is often blurry between fractions and decimals as well, and sometimes actually converting the given value to decimals from fractions or vice-versa can make the problem clearer. We’ll see an example of that below.
### Example GMAT Fractions Question 1: Problem Solving and Averages
Here’s a GMAT averages problem involving fractions:
If the average of the 4 numbers (\$n\$+2), (2\$n\$-3), (4\$n\$+1) and (7\$n\$+4) is 15, what is the value of \$n\$?
(A) 11/14
(B) 4
(C) 32/7
(D) 11
(E) 13
This is a fraction question baked into an averages question with algebra. As you may know, the formula for averages is simply to add all the numbers together and then divide by the total number of numbers, which is 4 in this case. This gives us the below fraction:
\$\${(n+2) + (2n-3) + (4n+1) + (7n+4)}/4\$\$
We also know from the question that the average is 15, so that equation is equal to 15:
\$\${(n+2) + (2n-3) + (4n+1) + (7n+4)}/4 = 15\$\$
To simplify this equation, let’s get rid of the fraction by multiplying both sides by 4 (the denominator):
\$\${{(n+2) + (2n-3) + (4n+1) + (7n+4)}/4} × 4 = 15 × 4\$\$
\$\$(n+2) + (2n-3) + (4n+1) + (7n+4) = 60\$\$
Since the right side of this equation is all addition and subtraction now, we don’t need the parentheses. Let’s simplify and solve:
\$\$14n + 4 = 60\$\$
\$\$14n = 56\$\$
\$\$n = 56/14\$\$
\$\$n = 4\$\$
### Example GMAT Fractions Question 2: Problem Solving and Rates
Here is a fraction problem in the context of a GMAT rate problem:
A small water pump would take 2 hours to fill an empty tank. A larger pump would take 1/2 hour to fill the same tank. How many hours would it take both pumps, working at their respective constant rates, to fill the empty tank if they began pumping at the same time?
(A) 1/4
(B) 1/3
(C) 2/5
(D) 5/4
(E) 3/2
First, let’s make sure we understand what the numerator and the denominator represent in these fractions. The rate is per hour, so we’re talking about tanks (the numerator) per hour (the denominator).
So the rate of the small pump is 1/2 tank/hour, and the rate of the larger pump is 2 tank/hour, or 2/1 (in fraction expression). Together, the combined rate of the two pumps is:
\$\$1/2 + 2/1\$\$
You probably know this off the top of your head, but just to illustrate the addition of fractions, I’ll show you how to do it out. We need the lowest common multiple of the denominators so that we can render them both as expressions with the same denominator.
The LCM is 2, so:
\$\$2/1 + 2/1 = 1/2 + 4/2 = 5/2 \tanks \per \hour\$\$
To get to the time it takes to fill the tank, we need to divide the job (filling 1 tank) over their collective rate (5/2 tanks per hour).
Want to improve your GMAT score by 7+ points?
Check out our best-in-class online GMAT prep program. We guarantee your money back if you don't improve your GMAT score by 7 points or more.
PrepScholar GMAT is entirely online, and it customizes your prep program to your strengths and weaknesses. We also feature 800+ practice questions and 50+ skill and strategy lessons.
Check out our 5-day free trial now:
Hence together they will fill the tank in \$1/(5/2)\$. Let’s use the rule about fraction division—that it’s simply multiplication with the numerator and denominator flipped—to simplify this:
\$\$1/(5/2) = 1 × 2/5\$\$
\$\$= 2/5 \hours\$\$
### Example GMAT Fractions Question 3: Problem Solving and Probability
A basic knowledge of fractions is required for GMAT probability problems as well. Here’s an example:
In a certain board game, a stack of 48 cards, 8 of which represent shares of stock, are shuffled and then placed face down. If the first 2 cards selected do not represent shares of stock, what is the probability that the third card selected will represent a share of stock?
(A) 1/8
(B) 1/6
(C) 1/5
(D) 3/23
(E) 4/23
As with many questions on the GMAT, this problem is simpler than the lengthy wording makes it sound.
We can think of the first 2 cards as cards that have already been turned “face up” and are therefore out of the pile. So the probability of picking a stock card goes from 8/48 to 8/46. Let’s simplify:
\$\$8/48 = 4/23\$\$
### Example GMAT Fractions Question 4: Problem Solving With Algebra
Sometimes it will be useful to come up with your own algebraic equation to solve a GMAT fractions question. Here’s an example:
The total price of a basic computer and printer is 2,500 dollars. If the same printer had been purchased with an enhanced computer whose price was 500 dollars more than the price of the basic computer, then the price of the printer would have been 1/5 of that total. What was the price of the basic computer?
(A) 1500
(B) 1600
(C) 1750
(D) 1900
(E) 2000
Let the price of basic computer be \$c\$, and the price of the printer be \$p\$.
What do we know? We know that \$c\$+\$p\$=2500. We also know that the price of the enhanced computer will be \$c\$+500, since the question stem tells us that it’s 500 dollars more than the basic computer. So the total price of the enhanced computer and the printer is 500 dollars more than 2500, or 3000 dollars.
Now, we are told that the price of the printer is 1/5 of that new total \$3000 price. Let’s figure that out:
\$\$p = 1/5 × \$3000\$\$
\$\$= \$3000/5\$\$
\$\$= \$600\$\$
Now that we know how much \$p\$ (the printer) is, we can plug this value in the first equation to solve for \$c\$ (the basic computer):
\$\$c + \$600 = \$2500\$\$
\$\$c = \$2500 – \$600\$\$
\$\$c = \$1900\$\$
### Example GMAT Fractions Question 5: Data Sufficiency
Here is a relatively simple data sufficiency fraction problem:
Malik’s recipe for 4 servings of a certain dish requires 1 1/2 cups of pasta. According to this recipe, what is the number of cups of pasta that Malik will use the next time he prepares this dish?
1. The next time he prepares this dish, Malik will make half as many servings as he did the last time he prepared the dish.
2. Malik used 6 cups of pasta the last time he prepared this dish.
(A) Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient to answer the question asked.
(B) Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient to answer the question asked.
(C) BOTH statements (1) and (2) TOGETHER are sufficient to answer the question asked, but NEITHER statement ALONE is sufficient to answer the question asked.
(E) Statements (1) and (2) TOGETHER are NOT sufficient to answer the question asked, and additional data specific to the problem are needed.
So all we know from the prompt is that 4 servings of Malik’s dish require 1 and 1/2 or in decimal expression 1.5 cups of pasta.
Statement 1 is insufficient because it just says: “Malik will make half as many servings as he did the last time he prepared the dish.” However, we have no idea how many servings Malik prepared last time. Since we don’t know the servings, we can’t find how much pasta is required. Hence, insufficient. Eliminate (A) and (D).
Statement 2 says Malik used 6 cups of pasta the last time he prepared this dish. Just looking at this statement by itself (without statement 1), it doesn’t really indicate anything: if 6 cups of pasta was the last time, we clearly can’t say how many cups of pasta Malik will use the next time. Hence, insufficient. Eliminate (B).
Now, let’s combine statements 1 and 2. We know that Malik used 6 cups of pasta the last time and that he will make half as many servings as he did the last time. That being the case, Malik will clearly require 3 cups of pasta next time (1/2 of 6 = 3). Sufficient.
### Example GMAT Fractions Question 6: Data Sufficiency With Algebra
Here’s a slightly more advanced data sufficiency question with fractions, involving algebra:
Is \$x\$ between 0 and 1?
1. \$x\$ is between -1/2 and 3/2
2. 3/4 is 1/4 more than \$x\$
(A) Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient to answer the question asked.
(B) Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient to answer the question asked.
(C) BOTH statements (1) and (2) TOGETHER are sufficient to answer the question asked, but NEITHER statement ALONE is sufficient to answer the question asked.
(E) Statements (1) and (2) TOGETHER are NOT sufficient to answer the question asked, and additional data specific to the problem are needed.
This question is basically asking us if \$x\$ is a fraction/decimal between 0 and 1. Let’s work methodically through the statements.
Statement 1 is insufficient because there are many values not between 0 and 1 that satisfy the condition of being between -1/2 and 3/2. If that’s not obvious, you might want to convert the statement into decimals. In decimal form, all statement 1 is telling us is that \$x\$ is between -0.5 and 1.5. So if \$x\$ was 1.1, 1.2, 1.3, -0.4, etc., it would be between -0.5 and 1.5 but not between 0 and 1. Hence, statement 1 is insufficient. Eliminate (A) and (D).
Now let’s test statement 2. Statement 2 is just an overcomplicated way of saying that:
\$\$x = 3/4 – 1/4\$\$
So we solve for \$x\$ very easily:
\$\$x = 3/4 – 1/4 = 1/2\$\$
1/2 is between 0 and 1, so statement 2 is sufficient and the answer is (B).
## GMAT Decimal Questions
Below are the key kinds of GMAT decimal questions. Like fractions, decimals questions overlap with other kinds of questions, and often come with a fraction aspect as well. The GMAT particularly loves to test you on the concept of terminating versus recurring decimals, so we’ve included several examples of that below.
### Example GMAT Decimal Question 1: Problem Solving With Terminating and Recurring Decimals
Every day a certain bank calculates its average daily deposit for that calendar month up to and including that day. If on a randomly chosen day in June the sum of all deposits up to and including that day is a prime integer greater than 100, what is the probability that the average daily deposit up to and including that day contains fewer than 5 decimal places?
(A) 1/10
(B) 2/15
(C) 4/15
(D) 3/10
(E) 11/30
This question tests you on both fractions and decimals. One key rule to remember for this question is that a fraction in its simplest form with a denominator that has only 2 and/or 5 its prime factors will convert to a terminating decimal:
x/2m5n = terminating decimal
Head back to the section on terminating and recurring decimals above if you need more of a refresher.
Now, onto the question.
First, let’s rephrase it algebraically. Let \$p\$ = the prime integer that’s greater than 100, which = the sum of all the deposits up to and including the day. Let \$d\$ be the number of days, up to and including the chosen one (\$d\$ = 1 would be June 1, \$d\$ = 30 would be June 30).
The average daily deposit up to and including the chosen day will be the sum of the deposit divided by the number of days, or \$p\$/\$d\$.
So the question becomes: What is the probability that \$p\$/\$d\$ will have less than 5 decimal places?
Now that we know what we’re being asked, the next step is to hone in on only the days that would have a terminating decimal, since those that yield a recurring decimal will by definition have more than 5 decimal places.
As stated above, to be a terminating decimal, \$p\$/\$d\$ must = x/2m5n, so \$d\$ must = 2m5n. And luckily, because the numerator \$p\$ is a prime number, all the possible values of \$p\$/\$d\$ will be in their simplest forms, so we can test the denominator without worrying that the terminating decimal fraction rule might not apply. .
The days in June (values for \$d\$) that can be expressed as 2m5n and are thus not recurring are day 1, 2, 4, 5, 8, 10, 16, 20, and 25. You can figure this out by doing a prime factorization of each of the 30 days in June, but as long as you’re still know your multiplication tables, you should be able to look at a number between 1 and 30 and realize almost right away if it has a prime factor other than 2 and/or 5.
So now, of days 1, 2, 4, 5, 8, 10, 16, 20, and 25, we have to check if any of them have more than 5 decimal places, which is possible even though they do terminate. We can do this using the rule for converting fractions to decimals. 5 decimals is the ten thousandths place, so to have 5 decimal places or less, \$p\$/\$d\$ × 10,000 must yield an integer:
p/d × 10000 = an integer (nothing after the decimal point)
For this to work, \$d\$ will have to be a factor of 10,000. As it happens, all of these numbers go into 10,000 (10,000 is divisible by 1, 2, 4, 5, 8, 10, 16, 20, and 25), so for all 9 of these \$d\$’s, \$p\$/\$d\$ = a number with less than 5 decimals.
Thus, out of all of the days of June, there are 9 values for \$d\$ for which \$p\$/\$d\$ has less than 5 decimal places, so the probability is 9/30 = 3/10.
### Example GMAT Decimal Question 2: Problem Solving and Scientific Notation Format
Here’s an example of a GMAT terminating decimals question in which knowledge of scientific notation comes in handy.
If \$d\$ = (1)/[(23)(57)] is expressed as a terminating decimal, how many nonzero digits will \$d\$ have?
(A) One
(B) Two
(C) Three
(D) Seven
(E) Ten
First, let’s multiply both the numerator and the denominator by 24, so that we can get the exponents of both the base numbers in the denominator to be the same:
\$\$1/{2^3×5^7} × 2^4/2^4 = 2^4/{2^7 × 5^7}\$\$
Now we can multiply the bases together, and put the exponent 7 with the result:
\$\$2^4/{2^7 × 5^7} = 2^4/10^7 = 16/10^7\$\$
16/107 is the same as 16 × 10-7
16 × 10-7 is just the scientific notation for the decimal 0.0000016 (you move the decimal to the left 7 times). Thus, \$d\$ will have two non-zero digits, 16, when expressed as a decimal. The answer is (B).
### Example GMAT Decimal Question 3: Problem Solving and Estimation
This is a great example of a GMAT decimal question in which you should use your powers of estimation instead of solving it:
1+0.0001/0.04+10
The value of the expression above is closest to which of the following?
(A) 0.0001
(B) 0.001
(C) 0.1
(D) 1
(E) 10
We don’t want those plus signs in this fraction—let’s do the additions and see what the resulting fraction looks like:
\$\$1+0.0001/{0.04+10} = 1.0001/10.04\$\$
Now we can see that these tiny little decimals are negligible: basically, the numerator is 1 and the denominator is 10. So this fraction is virtually 1/10, which equals 0.1. (C) is the answer.
### Example GMAT Decimal Question 4: Data Sufficiency and Terminating Decimals
Lots of GMAT data sufficiency decimal questions will ask you if a certain equation or variable is a terminating decimal. Here’s an example:
Is \$x\$/\$y\$ a terminating decimal?
1. \$x\$ is a multiple of 2
2. \$y\$ is a multiple of 3
(A) Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient to answer the question asked.
(B) Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient to answer the question asked.
(C) BOTH statements (1) and (2) TOGETHER are sufficient to answer the question asked, but NEITHER statement ALONE is sufficient to answer the question asked.
(E) Statements (1) and (2) TOGETHER are NOT sufficient to answer the question asked, and additional data specific to the problem are needed.
Statement 1 indicates that \$x\$, the numerator, is a multiple of 2, which has nothing to do with the terminating or recurring property of decimals—that’s based on the denominator.
We can test this by plugging in multiple-of-2 values for \$x\$: 2/4 is a terminating decimal, but 4/6 is a recurring decimal. So, statement 1 is not sufficient. Eliminate (A) and (D).
Statement 2 says that y is a multiple of 3. You might be tempted to say that this violates the denominator = 2m5n rule, but be careful! Statement 2 gives no information about whether or not \$x\$ and \$y\$ have common factors. For instance, 12 is a multiple of 3, but 9/12 is terminating, since it simplifies into 3/4. But 8/12 is recurring, as it simplifies 2/3. into So statement 2 is also not sufficient. Eliminate (B).
Now, let’s plug in numbers to test statement 1 and 2 together. 4/9 satisfies both the statements and it’s recurring, but 18/24 also satisfies both requirements and it’s terminating. So even together the statements are not sufficient and the answer is (E).
### Tips for GMAT Fractions and Decimals Questions
Below are the key tips for mastering fraction and decimal questions on the GMAT.
### #1: Memorize the Decimal Conversion for All Single-Digit Fractions
When it comes to fractions, being able to convert them to and from decimals with ease will help you get the correct answer faster on many different kinds of GMAT questions. Just because a question is ostensibly asking about medians or areas or probability doesn’t mean that you won’t need to work with fractions at some point to solve the question.
Here’s the conversion for 1/2 through 1/9:
1/2: 0.500
1/3: 0.333
1/4: 0.250
1/5: 0.200
1/6: 0.167 (half of 1/3)
1/7: 0.143 (just need to know this one)
1/8: 0.125 (half of 1/4)
1/9: 0.111
### #2: Memorize the Terminating Decimal Rule for Fractions
In addition to the basic conversion, take some time to memorize the rules and properties sections above—especially the x/2m5n rule for terminating decimals.
### #3: Convert Freely Between Fractions and Decimals as Needed
By the time you take the GMAT, you should be able to fluently convert fractions to decimals or vice versa, depending on what will make a given problem easier. As you do more and more practice questions, you’ll become better at detecting which expression will be the easiest to use to solve the question. The GMAT will often give you the format that is harder to work with to start, as they are testing both your fluency with fractions and decimals and your ability to come up with the best route for solving the problem on your own.
For example, if you’re trying to determine where a variable x falls on the number line, it’s probably easier to work in decimals than in fractions. On the other hand, if you’re given a number like 0.111111111111111 and you have to do algebra with it, it’s probably easier to use 1/9—especially since the answer options will likely be spaced far enough apart that .000000001 of a difference isn’t going to leave you stuck between options.
Remember, you don’t have a calculator for the Quant section, so if you encounter a question on the GMAT that seems impossible to solve without one, there’s almost always a rule, property, or different form of expression that you can use to make it easier. Keep an eye out for strange wording that’s obscuring a very basic principle, and try converting the given fractions to decimals or vice versa if you’re stuck.
### #4: Look at the Answer Options Before Solving
In the terminating decimal GMAT question about what the fraction with those tiny decimal points was closest in value to, you may have been tempted to solve for the exact value. But a quick glance at the answer options, which are all spaced out by a power of ten, tells you that all you have to do is get the location of the decimal point correct—not the exact value. That’s a pretty wide margin.
You should always look at the answer choices before even beginning to solve a problem—they’ll clue you in to the right approach.
Whenever you see tricky-looking decimals with widely varying answer choices, as in the terminating decimal GMAT question above, that best approach might be simply to estimate.
### #5: Don’t Solve Further Than You Need To
Speaking of approaches, as with the first terminating decimal GMAT question, in the example of Malik’s serving dish you may have been immediately tempted to find how many cups of pasta are needed per serving. 4 servings of the dish required 1.5 cups, so the amount of cups per serving is 1.5/4, or 15/40 when we multiply both numerator and denominator by ten to get rid of the decimal in the fraction. This simplifies to 3/8 a cup of pasta per serving.
But this information is actually useless for finding sufficiency, as the two statements gave us everything we needed to solve the problem of how many cups of pasta Malik will use the next time he makes the dish.
Bottom line: Glance at the answer choices first, and then focus on solving only what you need to solve to get to one answer choice.
## What’s Next?
Another key topic to become familiar with for the GMAT Quant section is integers.
For more general advice, check out our 10 tips to master the Quant section.
If you’re looking for more tips, here’s our list of the best tips and shortcuts for doing well on the Quant section.
Happy studying! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8405119776725769, "perplexity": 755.7205078783398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806720.32/warc/CC-MAIN-20171123031247-20171123051247-00318.warc.gz"} |
https://www.albert.io/ie/macroeconomics/full-employment-with-separation-and-finding-rates | Free Version
Difficult
# Full Employment with Separation and Finding Rates
MACRO-QULLTJ
Consider a simple model of frictional unemployment where the job finding rate ($f$) and job separation rate ($s$) are constant over time. Assume that all individuals in the economy are workers and that workers are either employed or unemployed. Further, suppose that the total available workforce is also fixed at 1,000,000 workers.
If there is no structural unemployment in the economy, then what will be the unemployment rate when the economy reaches full employment? Write your answer as number, i.e. write 15% as 0.15. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887869119644165, "perplexity": 1796.7879047836266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00176-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://library.kiwix.org/wikipedia_en_computer_novid_2018-10/A/Variance.html | # Variance
Example of samples from two populations with the same mean but different variances. The red population has mean 100 and variance 100 (SD=10) while the blue population has mean 100 and variance 2500 (SD=50).
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean. Informally, it measures how far a set of (random) numbers are spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by , , or .
## Definition
The variance of a random variable is the expected value of the squared deviation from the mean of , :
This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself:
The variance is also equivalent to the second cumulant of a probability distribution that generates . The variance is typically designated as , , or simply (pronounced "sigma squared"). The expression for the variance can be expanded:
A mnemonic for the above expression is "mean of square minus square of mean". This equation should not be used for computations using floating point arithmetic because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. There exist numerically stable alternatives.
### Discrete random variable
If the generator of random variable is discrete with probability mass function then
or equivalently
where is the average value, i.e.
(When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.)
The variance of a set of equally likely values can be written as
where is the expected value, i.e.,
The variance of a set of equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all points from each other:[1]
### Continuous random variable
If the random variable represents samples generated by a continuous distribution with probability density function , and is the corresponding cumulative distribution function, then the population variance is given by
or equivalently and conventionally,
where is the expected value of given by
with the integrals being definite integrals taken for ranging over the range of
If a continuous distribution does not have a finite expected value, as is the case for the Cauchy distribution, it does not have a variance either. Many other distributions for which the expected value does exist also do not have a finite variance because the integral in the variance definition diverges. An example is a Pareto distribution whose index satisfies
## Examples
### Normal distribution
The normal distribution with parameters and is a continuous distribution whose probability density function is given by
In this distribution, and the variance is related with via
The role of the normal distribution in the central limit theorem is in part responsible for the prevalence of the variance in probability and statistics.
### Exponential distribution
The exponential distribution with parameter is a continuous distribution whose support is the semi-infinite interval . Its probability density function is given by
and it has expected value . The variance is equal to
So for an exponentially distributed random variable,
### Poisson distribution
The Poisson distribution with parameter is a discrete distribution for . Its probability mass function is given by
and it has expected value . The variance is equal to
So for a Poisson-distributed random variable, .
### Binomial distribution
The binomial distribution with parameters and is a discrete distribution for . Its probability mass function is given by
and it has expected value . The variance is equal to
As a simple example, the binomial distribution with describes the probability of getting heads in tosses of a fair coin. Thus the expected value of the number of heads is and the variance is
### Fair die
A fair six-sided die can be modeled as a discrete random variable, X, with outcomes 1 through 6, each with equal probability 1/6. The expected value of X is Therefore, the variance of X is
The general formula for the variance of the outcome, X, of an n-sided die is
## Properties
### Basic properties
Variance is non-negative because the squares are positive or zero:
The variance of a constant random variable is zero, and if the variance of a variable in a data set is 0, then all the entries have the same value:
Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:
If all values are scaled by a constant, the variance is scaled by the square of that constant:
The variance of a sum of two random variables is given by
where Cov(⋅, ⋅) is the covariance. In general we have for the sum of random variables :
These results lead to the variance of a linear combination as:
If the random variables are such that
they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
Since independent random variables are always uncorrelated, the equation above holds in particular when the random variables are independent. Thus independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.
### Sum of uncorrelated variables (Bienaymé formula)
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances:
This statement is called the Bienaymé formula[2] and was discovered in 1853.[3][4] It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is
That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.
To prove the initial statement, it suffices to show that
The general result then follows by induction. Starting with the definition,
Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows:
### Sum of correlated variables
In general, if the variables are correlated, then the variance of their sum is the sum of their covariances:
(Note: The second equality comes from the fact that Cov(Xi,Xi) = Var(Xi).)
Here Cov(⋅, ⋅) is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory.
So if the variables have equal variance σ2 and the average correlation of distinct variables is ρ, then the variance of their mean is
This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to ρ if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.
### Matrix notation for the variance of a linear combination
Define as a column vector of random variables , and as a column vector of scalars . Therefore, is a linear combination of these random variables, where denotes the transpose of . Also let be the covariance matrix of . The variance of is then given by:[5]
### Weighted sum of variables
The scaling property and the Bienaymé formula, along with the property of the covariance Cov(aX, bY) = ab Cov(X, Y) jointly imply that
This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionally large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y.
The expression above can be extended to a weighted sum of multiple variables:
### Product of independent variables
If two variables X and Y are independent, the variance of their product is given by[6]
Equivalently, using the basic properties of expectation, it is given by
### Product of statistically dependent variables
In general, if two variables are statistically dependent, the variance of their product is given by:
### Decomposition
The general formula for variance decomposition or the law of total variance is: If and are two random variables, and the variance of exists, then
where is the conditional expectation of given , and is the conditional variance of given . (A more intuitive explanation is that given a particular value of , then follows a distribution with mean and variance ). As is a function of the variable , the outer expectation or variance is taken with respect to Y. The above formula tells how to find based on the distributions of these two quantities when is allowed to vary.
In particular, if is a discrete random variable assuming with corresponding probability masses , then in the formula for total variance, the first term on the right-hand side becomes
where . Similarly, the second term on the right-hand side becomes
where and . Thus the total variance is given by
A similar formula is applied in analysis of variance, where the corresponding formula is
here refers to the Mean of the Squares. In linear regression analysis the corresponding formula is
This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.
Similar decompositions are possible for the sum of squared deviations (sum of squares, ):
### Formulae for the variance
A formula often used for deriving the variance of a theoretical distribution is as follows:
This will be useful when it is possible to derive formulae for the expected value and for the expected value of the square.
This formula is also sometimes used in connection with the sample variance. While useful for hand calculations, it is not advised for computer calculations as it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude and floating point arithmetic is used. This is discussed in the article Algorithms for calculating variance.
### Calculation from the CDF
The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using
This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed.
### Characteristic property
The second moment of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e. . Conversely, if a continuous function satisfies for all random variables X, then it is necessarily of the form , where a > 0. This also holds in the multidimensional case.[7]
### Units of measurement
Unlike expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is √2.9 1.7, slightly larger than the expected absolute deviation of 1.5.
The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution.
## Approximating the variance of a function
The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by
provided that f is twice differentiable and that the mean and variance of X are finite.
## Population variance and sample variance
Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance that would have been calculated from an omniscient set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example that sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.
The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. However, using values other than n improves the estimator in various ways. Four common values for the denominator are n, n 1, n + 1, and n 1.5: n is the simplest (population variance of the sample), n 1 eliminates bias, n + 1 minimizes mean squared error for the normal distribution, and n 1.5 mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution.
Firstly, if the omniscient mean is unknown (and is computed as the sample mean), then the sample variance is a biased estimator: it underestimates the variance by a factor of (n 1) / n; correcting by this factor (dividing by n 1 instead of n) is called Bessel's correction. The resulting estimator is unbiased, and is called the (corrected) sample variance or unbiased sample variance. For example, when n = 1 the variance of a single observation about the sample mean (itself) is obviously zero regardless of the population variance. If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean.
Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance), and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n 1), and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variation.
### Population variance
In general, the population variance of a finite population of size N with values xi is given by
{\displaystyle {\begin{aligned}\sigma ^{2}&={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}-\mu \right)^{2}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}^{2}-2\mu x_{i}+\mu ^{2}\right)\\&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-2\mu \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)+\mu ^{2}\\&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\mu ^{2}\end{aligned}}}
where the population mean is
.
The population variance can also be computed using
This is true because
{\displaystyle {\begin{aligned}{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}&={\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}^{2}-2x_{i}x_{j}+x_{j}^{2}\right)\\&={\frac {1}{2N}}\sum _{j=1}^{N}\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}\right)\\&\quad +{\frac {1}{2N}}\sum _{i=1}^{N}\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}^{2}\right)\\&={\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)-\mu ^{2}+{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)\\&=\sigma ^{2}\end{aligned}}}
The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
### Sample variance
In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population.[8] Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.
We take a sample with replacement of n values y1, ..., yn from the population, where n < N, and estimate the variance on the basis of this sample.[9] Directly taking the variance of the sample data gives the average of the squared deviations:
Here, denotes the sample mean:
Since the yi are selected randomly, both and are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples {yi} of size n from the population. For this gives:
Hence gives an estimate of the population variance that is biased by a factor of . For this reason, is referred to as the biased sample variance. Correcting for this bias yields the unbiased sample variance:
Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.
The use of the term n 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n 1.5 yields an almost unbiased estimator.
The unbiased sample variance is a U-statistic for the function ƒ(y1, y2) = (y1 y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.
### Distribution of the sample variance
Distribution and cumulative distribution of s2/σ2, for various values of ν = n − 1, when the yi are independent normally distributed.
Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that yi are independent observations from a normal distribution, Cochran's theorem shows that s2 follows a scaled chi-squared distribution:[10]
As a direct consequence, it follows that
and[11]
${\displaystyle \operatorname {Var} [s^{2}]=\operatorname {Var} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)={\frac {\sigma ^{4}}{(n-1)^{2}}}\operatorname {Var} \left(\chi _{n-1}^{2}\right)={\frac {2\sigma ^{4}}{n-1}}.}$
If the yi are independent and identically distributed, but not necessarily normally distributed, then[12][13]
${\displaystyle \operatorname {E} [s^{2}]=\sigma ^{2},\quad \operatorname {Var} [s^{2}]={\frac {\sigma ^{4}}{n}}\left((\kappa -1)+{\frac {2}{n-1}}\right)={\frac {1}{n}}\left(\mu _{4}-{\frac {n-3}{n-1}}\sigma ^{4}\right),}$
where κ is the kurtosis of the distribution and μ4 is the fourth central moment.
If the conditions of the law of large numbers hold for the squared observations, s2 is a consistent estimator of σ2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).[14][15][16]
### Samuelson's inequality
Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[17] Values must lie within the limits
### Relations with the harmonic and arithmetic means
It has been shown[18] that for a sample {yi} of real numbers,
where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and is the (biased) variance of the sample.
This bound has been improved, and it is known that variance is bounded by
where ymin is the minimum of the sample.[19]
## Tests of equality of variances
Testing for the equality of two or more variances is difficult. The F test and chi square tests are both adversely affected by non-normality and are not recommended for this purpose.
Several non parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal.
The Lehmann test is a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test.
Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances.
## History
The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance:[20]
The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations and , it is found that the distribution, when both causes act together, has a standard deviation . It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...
Geometric visualisation of the variance of an arbitrary distribution (2, 4, 4, 4, 5, 5, 7, 9):
1. A frequency distribution is constructed.
2. The centroid of the distribution gives its mean.
3. A square with sides equal to the difference of each value from the mean is formed for each value.
4. Arranging the squares into a rectangle with one side equal to the number of values, n, results in the other side being the distribution's variance, σ².
## Moment of inertia
The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of is given by
This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like
That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis so the moment-of-inertia tensor is
## Semivariance
The semivariance is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation. It is sometimes described as a measure of downside risk in an investments context. For skewed distributions, the semivariance can provide additional information that a variance does not.
For inequalities associated with the semivariance, see Chebyshev's inequality § Semivariances.
## Generalizations
### For complex variables
If is a scalar complex-valued random variable, with values in then its variance is where is the complex conjugate of This variance is a real scalar.
### For vector-valued random variables
#### As a matrix
If is a vector-valued random variable, with values in and thought of as a column vector, then a natural generalization of variance is where and is the transpose of and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix).
If is a vector- and complex-valued random variable, with values in then the covariance matrix is where is the conjugate transpose of This matrix is also positive semi-definite and square.
#### As a scalar
Another natural generalization of variance for such vector-valued random variables which results in a scalar value rather than in a matrix, is obtained by interpreting the deviation between the random variable and its mean as the Euclidean distance. This results in which is the trace of the covariance matrix.
## Notes
1. Yuli Zhang, Huaiyu Wu, Lei Cheng (June 2012). Some new deformation formulas about variance and covariance. Proceedings of 4th International Conference on Modelling, Identification and Control(ICMIC2012). pp. 987–992.
2. Loève, M. (1977) "Probability Theory", Graduate Texts in Mathematics, Volume 45, 4th edition, Springer-Verlag, p. 12.
3. Bienaymé, I.-J. (1853) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", Comptes rendus de l'Académie des sciences Paris, 37, p. 309–317; digital copy available
4. Bienaymé, I.-J. (1867) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", Journal de Mathématiques Pures et Appliquées, Série 2, Tome 12, p. 158–167; digital copy available
5. Johnson, Richard; Wichern, Dean (2001). Applied Multivariate Statistical Analysis. Prentice Hall. p. 76. ISBN 0-13-187715-1
6. Goodman, Leo A., "On the exact variance of products," Journal of the American Statistical Association, December 1960, 708–713. DOI: 10.2307/2281592
7. Kagan, A.; Shepp, L. A. (1998). "Why the variance?". Statistics & Probability Letters. 38 (4): 329–333. doi:10.1016/S0167-7152(98)00041-8.
8. Navidi, William (2006) Statistics for Engineers and Scientists, McGraw-Hill, pg 14.
9. Montgomery, D. C. and Runger, G. C. (1994) Applied statistics and probability for engineers, page 201. John Wiley & Sons New York
10. Knight K. (2000), Mathematical Statistics, Chapman and Hall, New York. (proposition 2.11)
11. Casella and Berger (2002) Statistical Inference, Example 7.3.3, p. 331
12. Cho, Eungchun; Cho, Moon Jung; Eltinge, John (2005) The Variance of Sample Variance From a Finite Population. International Journal of Pure and Applied Mathematics 21 (3): 387-394. http://www.ijpam.eu/contents/2005-21-3/10/10.pdf
13. Cho, Eungchun; Cho, Moon Jung (2009) Variance of Sample Variance With Replacement. International Journal of Pure and Applied Mathematics 52 (1): 43-47. http://www.ijpam.eu/contents/2009-52-1/5/5.pdf
14. Kenney, John F.; Keeping, E.S. (1951) Mathematics of Statistics. Part Two. 2nd ed. D. Van Nostrand Company, Inc. Princeton: New Jersey. http://krishikosh.egranth.ac.in/bitstream/1/2025521/1/G2257.pdf
15. Rose, Colin; Smith, Murray D. (2002) Mathematical Statistics with Mathematica. Springer-Verlag, New York. http://www.mathstatica.com/book/Mathematical_Statistics_with_Mathematica.pdf
16. Weisstein, Eric W. (n.d.) Sample Variance Distribution. MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/SampleVarianceDistribution.html
17. Samuelson, Paul (1968). "How Deviant Can You Be?". Journal of the American Statistical Association. 63 (324): 1522–1525. doi:10.1080/01621459.1968.10480944. JSTOR 2285901.
18. Mercer, A. McD. (2000). "Bounds for A–G, A–H, G–H, and a family of inequalities of Ky Fan's type, using a general method". J. Math. Anal. Appl. 243 (1): 163–173. doi:10.1006/jmaa.1999.6688.
19. Sharma, R. (2008). "Some more inequalities for arithmetic mean, harmonic mean and variance". J. Math. Inequalities. 2 (1): 109–114. CiteSeerX 10.1.1.551.9397. doi:10.7153/jmi-02-11. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9836407899856567, "perplexity": 418.14034198650756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578770163.94/warc/CC-MAIN-20190426113513-20190426135513-00392.warc.gz"} |
http://www.thespectrumofriemannium.com/2013/09/16/log132-spacetime-foamii/ | # LOG#132. Spacetime foam(II).
The second post in this thread is completely speculative.
1) Suppose that Dark Energy is some kind of “fluid” made of some long wavelength “particles” or “exotic stuff”.
2) How different are those “Dark Energy particles” from the particles we know from the Standard Model?
A simple argument gives a stunning response:
$N\sim \left(\dfrac{R_H}{L_p}\right)^2$
plus the hypothesis of Boltzmann statistics in a cosmic volume $V\sim R_H^3$ and temperature $T\sim R_H^{-1}$ provides the partition function
$Z_N=\left(\dfrac{V}{\lambda^3}\right)^N\dfrac{1}{N!}$
where $1/N!$ is due to the fact we expect the dark particles are indistinguible. Thus, it gives the entropy
$S=N\left(\ln\left[\dfrac{V}{N\lambda^3}\right]+\dfrac{5}{2}\right)$
But, if $V\sim \lambda^3$, then S (the entropy) becomes negative! This is absurd and the only alternative is that somehow $N\sim 1$ above, or equivalently, that the solution to this issue is not only that N must be absent but that the N dark energy particles are distinguishable particles! In that case, we obtain
$S=N\left(\ln\left(\dfrac{V}{\lambda^3}\right)+\dfrac{3}{2}\right)+const$
i.e.
$S\sim N$
This is a bit weird for a quantum statistics. The only consistent statistics in greater than 2 spacetime dimensions without Gibbs factor ($1/N!$) is the so-called Quantum Boltzmann statistics, and it is also named as infinite statistics.
The big idea of Jack Ng: particles of dark energy obey infinite statistics rahter than Bose or Fermi statistics!
In fact, not unsurprisingly, the M-theory approach to unification suggest something similar. In the paper http://arxiv.org/abs/0705.4581 , the authors have made a similar conjecture.
## Infinite statistics, quons and Cuntz algebra
A q-deformed Heisenberg algebra can be built from the relationships
$a_ka^+_l-qa_l^+a_k=\delta_{kl}$
or
$\boxed{a_ia_j^+-qa_j^+a_i=\delta_{ij}}$
This is the so-called quon statistics, or deformed Heisenberg algebra. Remarkably, you can recover the boson or fermion algebra if you plut $q=+1$ (bosons) or $q=-1$ (fermions).
The Cuntz algebra is the case with $q=0$, and it is also called infinite statistics or quantum Boltzmann statistics! It satisfies, hence
$\boxed{a_ka^+_l=\delta_{kl}}$
Any 2 states in infinite statistics act on vacuum $\vert 0\rangle$ such as
$\boxed{\langle 0\vert a_{i1}a_{i2}\cdots a_{iN}a^+_{jN}\cdots a^+_{j2}a^+_{j1}\vert 0\rangle =\delta_{i1,j1}\delta_{i2,j2}\cdots \delta_{iN,jN}}$
That is, particles with infinite statistics are virtually distinguishable! The partition function is
$Z_N=\sum e^{-\beta H}$
and there is no Gibbs factor. In fact, all the representations of the particle permutation group can occur in infinite statistics!
Extra features of infinite statistics/Quantum Boltzmann statistics:
(1) The number operator $\hat{N}$, the hamiltonian $\hat{H}$ operator, and any other observable are both “nonlocal” and “non polynomial” in the field operators.
(2) The TCP theorem and the cluster decomposition still hold, despite the nonlocal and non polynomial character of infinite statistics!
(3) QFT with infinite statistics are unitary, but it is highly non-trivial.
(4) Non-locality is a virtue from some Quantum Mechanics viewpoints, specially those regarding the issue of the Black Hole information paradox.
MONDian Dark Matter/Dark Energy
Suppose some test mass (m) due to some source (M). The Milgrom’s law of inertia reads
$a=\begin{cases}a_N, \mbox{if}\;\; a>>a_c\\ \sqrt{a_Na_c},\mbox{if}\;\; a<
If we write
$a_N=G_N\dfrac{M}{r^2}$
then
$a_c\sim \dfrac{cH_0}{(2\pi)}\sim 1.2\cdot 10^{-10}m/s^2$
MOdified Newtonian Dynamics (MOND) has some “merits”:
1) It explains galactic rotation curves $v\sim const$ when $r>r_c$.
2) It explains the Tully-Fisher relation (speed of stars are correlated with the galactic brightness as $v^4prop M$).
However, the MOND idea presents some “issues”:
1st. At cluster scales and cosmological scales, CDM (Cold Dark Matter hypothesis) seems to work better than it.
2nd. No MOND or more generally MOG (MOdified Gravity) field based theory predicts such a dynamics!
Idea: Could we reconcile CDM with MOND?
Hints: Verlinde (2010) proposed the entropic gravity scenario. Newton’s law of inertia $F=ma$ follows from
i) $F=T\dfrac{\Delta S}{\Delta x}$
ii) $\Delta S=2\pi k_B\dfrac{mc}{\lambda}\Delta x$
iii) $T_U=\dfrac{\hbar}{2\pi k_Bc}$
Moreover, Verlinde also derived Newton’s universal gravitation from
i) $F=T\dfrac{\Delta S}{\Delta x}$
ii) The holographic screen with area $A=4\pi r^2$ and temperature $T$.
iii) The equipartition theorem $E=\dfrac{N}{2}k_BT$
iv) The holographic principle applied to the degrees of freedom on the holographic screen
$N=\dfrac{Ac^3}{G_N\hbar}=\dfrac{A}{L_p^2}$
and thus he could also derive $F=G_N\dfrac{Mm}{r^2}$ in the same setting.
Therefore, we could modify the entropic argument to get Milgrom’s law in a de Sitter like Universe (dS) or dS spacetime. How could we do that? It is pretty simple, in a dS Universe, any inertial observer sees a temperature
$T_U(dS)=\dfrac{a_0}{2\pi k_Bc}$
For a non-inertial observer, we must substract this temperature to get the true acceleration for the non-inertial frame, i.e.,
$\boxed{T_U(dS)'=T_U(ac)-T_U(dS)=\dfrac{1}{2\pi k_Bc}\left(\sqrt{a^2+a_0^2}-a_0\right)}$
Now, we are ready to apply the Verlinde’s approach/argument:
$F=T\nabla S=m\left(\sqrt{a^2+a_0^2}-a_0\right)$
In the limits where $a>>a_0$ and $a< we obtain, respectively:
$F_e\approx ma$ for $a>>a_0$
$F_e\approx m\dfrac{a^2}{2a_0}$ for $a<, and this is precisely Milgrom’s MOND law! Indeed, the fit is straightforward
$m\sqrt{a_Na_c}=m\dfrac{a^2}{2a_0}$
from which we get
$a^4=4a_0^2a_Na_c$
or
$\boxed{a=\sqrt[4]{4a_0^2a_Na_c}=\left(2a_Na_0^3/\pi\right)^{1/4}}$
Indeed, we can choose units with $a_0\approx 2\pi a_c$, and then
$a_c=\dfrac{a_0}{(2\pi)}$
Remember: $a=G_N\dfrac{M}{r^2}=a_N$ for the purely newtonian gravity!
Indeed, we check that $a< implies the right behaviour for the galactic rotation curves, since
$F_c=\dfrac{mv^2}{r}=m\dfrac{a^2}{a_0}$
provides
$v\sim constant$
because
$\dfrac{v^2}{r}=\dfrac{a^2}{a_0}\longrightarrow \dfrac{v^2}{r}=\dfrac{(2a_Na_0^3/\pi)^{1/2}}{a_0}\sim constant$
We can also check this result “a la Verlinde”:
$2\pi k_B\tilde{T}=2\pi k_B\left(\dfrac{2\tilde{E}}{Nk_B}\right)=4\pi\left(\dfrac{\tilde{M}}{A/G_N}\right)=\dfrac{G_N\tilde{M}}{r^2}$
Here, $\tilde{M}$ is the mass enclosed within a volume $V=\dfrac{4}{3}\pi r^3$, and $\tilde{M}=M+M'$, where
$M'=\dfrac{1}{\pi}\left(\dfrac{a_0}{a}\right)^2M$
From the entropic force argument, we get
$F_e=m\left(\sqrt{a^2+a_0^2}-a_0\right)=ma_N\left[1+\dfrac{1}{\pi}\left(\dfrac{a_0}{a}\right)^2\right]$
And now, we have two limit cases:
1) Large accelerations $a>>a_0$, so
$F_e\approx ma=a_N$
and the equivalence principle holds since $a=a_N$.
2) Small accelerations $a<, so
$F_e\approx m\dfrac{a^2}{a_0}\approx ma_N\left(\dfrac{1}{\pi}\right)\left(\dfrac{a_0}{a}\right)^2$
and thus, we get that
$a=\left(2a_Na_0^3/\pi\right)^{1/4}$
and there is some kind of equivalence principle inequivalence!
In the framework of this entropic MOND, there is no Dark Matter at all, but a MOND/MOG theory at small accelerations! It is interesting that there is some kind of correspondence between MOND and DM since we can write
$F_e=G_N\dfrac{M+M'}{r^2}$
and thus, some type of MOND/Dark Matter unification happens! Dark Matter of this type behaves as if there is NO dark matter but MOND, so it is called MOND dark matter theory. And as we have seen, $F_e$ can be rewritten as a MOND law!
By the other hand, this theory can be also applied to Cosmology and Dark Energy! Let me show you how.
1st Friedmann equation:
$\dfrac{\ddot{R}}{R}=-\dfrac{4\pi G_N}{3}(\rho+3P)+\dfrac{\Lambda}{3}$
and
2nd Friedmann equation:
$H^2=\dfrac{8\pi G_N}{3}+\dfrac{\Lambda}{3}$
Suppose that $\tilde{M}$ is the active gravitational mass and we apply Verlinde’s approach to the Tolman-Komar mass
$\displaystyle{\mathcal{M}=\dfrac{1}{4\pi G_N}\int dV R_{\mu\nu}u^\mu u^\nu}$
We get
$\displaystyle{\mathcal{M}=2\int dV(T_{\mu\nu}-\dfrac{1}{2}Tg_{\mu\nu}+\dfrac{\Lambda g_{\mu\nu}}{8\pi G_N})u^\mu u^\nu=\left(\dfrac{4}{3}\pi r^3\right)\left[(\rho +3P)-\dfrac{\Lambda}{4\pi G_N}\right]}$
Einstein gravity plus the Verlinde approach and the MOND dark matter ide departure from usual MOND when $\tilde{M}\longrightarrow \mathcal{M}$. Why? It is simple. From the non relativistic source versus relativistic source correspondence we observe that
$\sqrt{a^2+a_0^2}-a_0=\dfrac{G_N\mathcal{M}}{\tilde{r}^2}$
for $\tilde{r}=rR(t)$
and
$\sqrt{a^2+a_0^2}-a_0=\dfrac{G_N(M+M')}{\tilde{r}^2}+4\pi G_N\rho\tilde{r}-\dfrac{\Lambda}{3}\tilde{r}$
1) Using a naive MOND at clusters misses two additional terms, $4\pi G_N\rho \tilde{r}$ and $-\dfrac{\Lambda}{3}\tilde{r}$. It could explain why MOND fails at cluster (or larger) scales.
2) At galactic scales, MOND dark matter quanta are “massless” and they reproduce Milgrom’s MOND.
3) At cluster/cosmological scales, MOND dark matter becomes massive!
## A gravitational Born-Infeld theory idea
We can feel that the dS Unruh temperature is not explained in the previous arguments. I mean, where does the Unruh temperature for a dS Universe
$T_U'(dS)=\sqrt{a^2+a_0^2}-a_0$
comes from? Well, it can be explained with a relatively simple field theory. There are some nonlinear electromagnetic-like lagrangians called Dirac-Born-Infeld theory, or simply Born-Infeld theory (BIT) for short. The lagrangian is defined as follows
$\boxed{L_{BIT}=b^2\left(1-\sqrt{1-\dfrac{(E^2-B^2)}{b^2}-\dfrac{(E\cdot B)^2}{b^4}}\right)}$
There is a simplified version of this lagrangian, if we write impose that $B=0$, that we call $L_{BIT}^{II}$:
$L_{BIT}^{II}=b^2\left(1-\sqrt{1-\dfrac{E^2}{b^2}}\right)$
and if we impose $E=0$, we get
$L_{BIT}^{III}=b^2\left(1-\sqrt{1+\dfrac{B^2}{b^2}}\right)$
These lagrangians are nonlinear and describes a nonlinear electrodynamics similar to the relativistic particle lagrangian (nonlinear too):
$L_{SR}=mc^2\left(1-\sqrt{1-\left(\dfrac{v}{c}\right)^2}\right)$
From this analogy, we observe that the maximal velocity in the relativistic particle is translated into a maximal force $b$ in the BIT. Now, let us assume that the non-linear electromagnetism is adopted by modified gravity. Take $L_{BIT}(gravity)\times (1/4\pi)$, due to the spin of gravity (remember than in the case of electrodynamics, $D=\epsilon E$). Therefore, the BIT for gravity has a lagrangian (we take the BIT III version without loss of generality)
$\boxed{H_g=-\dfrac{L_{BIT}^{III}(grav)}{4\pi}=\dfrac{b^2}{4\pi}\left(\sqrt{1+\dfrac{D_g^2}{b^2}}-1\right)}$
Now, write $A_0=b^2$ and $A=bD_g$, and it yields after substitution
$\boxed{H_g=\dfrac{1}{4\pi}\left(\sqrt{A^2+A_0^2}-A_0\right)}$
If we apply the equipartition theorem, such as $H_g=\dfrac{k_BT_{eff}}{2}$ per degree of freedom (DOF), the entropic force argument provides that
$\boxed{a_{eff}=\sqrt{A^2+A_0^2}-A_0}$
and from the equivalence principle between acceleration and gravity, we get
$a_{eff}=\sqrt{a^2+a_0^2}-a_0$
and thus
$\boxed{F_{BIT}(grav)=ma_{eff}=m\left(\sqrt{a^2+a_0^2}-a_0\right)}$
But this equation is precisely the MOND dark matter ansatz we stated above! It seems that MOND has a nonlinear electrodynamical-like origin!
See you in my last spacetime foam post!
View ratings | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 101, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9756236672401428, "perplexity": 2639.5300433812013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489343.24/warc/CC-MAIN-20190219041222-20190219063222-00006.warc.gz"} |
http://www.verypdf.com/wordpress/201111/how-to-set-color-depth-when-convert-emf-to-png-14048.html | How to set color depth when convert EMF to PNG?
VeryPDF HTML Converter Command Line has been designed to convert HTML files to other files in formats like PDF, and image files. However, it can also convert other files like EMF to PNG, PS (Postscript), and other image files in a swift and correct way. VeryPDF HTML Converter Command Line also supports color depth setting, which is the number of bits used to represent the color of a single pixel in a digital image. You can free down VeryPDF HTML Converter Command Line at the following website
If you want to set color depth and convert EMF to PNG via a command line, you should do it in three steps: first run the command prompt window, second type a command line and third press “Enter”. The command -bitcount <int> can help you set color depth. This article focuses on the way to set color depth when convert EMF to PNG via a command line.
1. Run the command prompt window
The following is the most commonly used method to run the run the command prompt window, and it will take four steps: click “Start” > click “Run” > enter “cmd” in the “Open” edit box in the “Run” dialog box that pops out; >click on “OK”. When the black and white command window appears on the computer screen, you can proceed to the next step.
2. Type a command line
To set the color depth and convert EMF to PNG, your should type a command line , consisting of four parts as illustrated in the following command line pattern:
htmltools -bitcount <int> <EMF file> <PNG file>
• htmltools represents the executable file, which, to be more specific, is htmltools.exe in the folder titled htmltools in your computer.
• -bitcount <int> is the command that can be used to set color depth for image conversion. int stands for integer, which only allows 1, 8 and 24 as the parameter values in this command. The Angle brackets do not appear in the command prompt window, but are used to mark the essential contents.
• <EMF file> stands for the input file in EMF format.
• <PNG file> represents the output file in PNG format.
When you enter a command line in the command prompt window, the directories of the executable file, the original file and the output file are all required to appear instead of only the file names.Taking the following command line as an example,
D:\htmltools\htmltools.exe -bitcount 24 D:\in\fl.emf D:\out\fl.PNG
In the command line above, the directories of the executable file, the input file and the output file are all listed.
• D:\htmltools\htmltools.exe is the directory of the executable file htmltools.exe, which has been placed in the folder called htmltools by default. This example shows that this folder is placed on disc D.
• -bitcount 24 represents the command that specifies 24-bit as the color depth for the output file converted from EMF to PNG. The number can be replaced by either 1 or 8.The higher the color depth, the more colors appear in the image. You can view the effect of the output files later.
• D:\in\fl.emf is the directory of the input EMF file. It leads the computer to find the input file named fl in the folder in on disk D.
• D:\out\fl.png represents the directory of the output file. It specifies PNG as the format of the output file, names the output file as fl, and indicates to export the output file in the folder named Out on disk D.
You can replace the directories of the executable file, the input file and output file in the example with the directories of the files in your computer respectively.
3. Press “Enter”
Press “Enter” on the keyboard, and you can check the effect of conversion from EMF to PNG in no more than a second. The following can show you the original EMF file, and the files converted from EMF to PNG via command line.
The original EMF file
1-bit PNG file
8-bit PNG file
24-bit PNG file
VN:F [1.9.20_1166] | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8537555932998657, "perplexity": 2582.8331343991586}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531335.42/warc/CC-MAIN-20210122175527-20210122205527-00131.warc.gz"} |
http://www.majiajun.org/publication/ | Publications/preprints
Publications and preprints
1. Transfers of K-types on local theta lifts of characters and unitary lowest weight modules,
(with Hung Yean Loke and U-Liang Tang), ArXiv e-prints:1207.6454,
Israel Journal of Mathematics, Vol 201, Issue 1, pp 1-24, 20 June 2014
2. Derived functor modules, dual pairs and U(𝔤)K-actions, ArXiv e-prints:1310.6378, Journal of Algebra, Volume 450, pp 629–645, Mar 2016
3. Invariants and K-spectrums of local theta lifts,
(with Hung Yean Loke), ArXiv e-prints:1302.1031, Compositio Mathematica, Jan 2015
4. Local theta correspondences between epipelagic supercuspidal representations,
(with Hung Yean Loke and Gordan Savin), ArXiv e-prints:1501.07069, to appear Mathematische Zeitschrift, Jun 2016
5. Local theta correspondence between supercuspidal representations,
(with Hung Yean Loke), ArXiv e-prints: 1512.01797, 2017, accepted by Annales Scientifiques de l’ENS
6. On two questions concerning representations distinguished by the Galois involution,
(with Maxim Gurevich and Arnab Mitra), ArXiv e-prints: 1609.03155, 2017, Forum Mathematicum
Notes
1. Associated cycles of local theta lifts of unitary characters and unitary lowest weight modules,
(with Hung Yean Loke and U-Liang Tang), ArXiv e-prints:1207.6451, 2011
2. Local Theta lifts of one-dimensional representations, Notes on Symposium on Representation Theory 2012, Kagoshima, Japan
Descriptions
Item [1] and [2] explore the relationship between derived functor modules and local theta correspondence over real numbers. In [1], we investigate the transfer of the theta lift of a unitary lowest weight module of a symplectic group and show the transfer is in fact also a theta lift of a certain unitary lowest weight module. In [2], I extend the theorem to all possible dual pairs and simplify the proof by using an identity on Hecke algebras. In addition to these, we have shown that the lifts in [1] are quotients of certain $A_\mathfrak{q}(\lambda)$.
Item [i] and [3] study the associated cycle of the theta lift. In [i], we exam the lift of unitary lowest weight module without restricting on the stable range. When beyond stable range, our results exhibit the subtlety of the problem. Along the discussion some technical lemmas are established which serve as preparations for [3]. In [3], the associated cycle of a full (big) theta lift in the stable range is completely described in terms of the associated cycle of the original representation. Moreover, we show that the big theta lift of a unitary representation is in fact irreducible in the stable range. As a corollary, we obtain a class of unipotent representations via iterated theta lifting which satisfy a conjecture of Vogan on $K$-types.
Item [ii] is a summary of results in [1] and [i] of the case of theta lifts of one dimensional representations.
Item [4] and [5] study theta correspondence between supercuspidal representations. In [4], we consider epipelagic supercuspidal representations and show that our construction exhaust all possible cases under a mild condition of the residual characteristic. We also explored some facts about the Bruhat-Tits building of classical groups, lattice functions and their compatibility with moment maps [4]. In [5], we define a notion of theta lift of (tamely ramified) supercuspidal data and show that local theta correspondence between supercuspidal representations are completely described by the lift of data under the assumption that the residual characteristic is large enough. In appendix B of [5], we gives a short proof of “depth preservation”.
Item [6], we investigate the attempts of using certain symmetry condition/tautological functorial lift to characterize the smooth irreducible representations of GL(n,E) that are distinguished by its subgroup GL(n,F), where E/F is a quadratic extension of p-adic fields. We show that the attempt works for ladder representations. On the other hand, we show that such kind of approach does not work for general admissible representations, by constructing a counter example using irreducibly induced representation from ladders. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8886158466339111, "perplexity": 1077.4138752611477}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423774.37/warc/CC-MAIN-20170721122327-20170721142327-00351.warc.gz"} |
http://www.antionline.com/showthread.php?275379-port-firewall-problem&p=924626&mode=threaded | ## port/firewall problem
alright i just got WoW, installed it, and tried to run it when i got an error message that occured:
ERROR #134 (0x85100086) Fatal Condition
Program: C:\Program Files\World of Warcraft\WoW.exe
Unabled to establish a connection to socket: The attempt to connect was forcefully rejected.
Then a long description of the error comes up, asks u to describe what was happening when the error occured but when u try to send it it says a connection couldnt be made to the server. here is the error message:
World of WarCraft (build 5428)
Exe: C:\Program Files\World of Warcraft\WoW.exe
Time: May 26, 2007 6:31:33.630 PM
User: Owner
Computer: YOUR-F95424BB04
------------------------------------------------------------------------------
This application has encountered a critical error:
ERROR #134 (0x85100086) Fatal Condition
Program: C:\Program Files\World of Warcraft\WoW.exe
Unable to establish a connection to socket: The attempt to connect was forcefully rejected.
WoWBuild: 5428
------------------------------------------------------------------------------
----------------------------------------
Stack Trace (Manual)
----------------------------------------
00644969 063DCE34 0001:00243969 C:\Program Files\World of Warcraft\WoW.exe
006448A3 063DCE60 0001:002438A3 C:\Program Files\World of Warcraft\WoW.exe
00644994 063DCE6C 0001:00243994 C:\Program Files\World of Warcraft\WoW.exe
005B70DA 063DCF04 0001:001B60DA C:\Program Files\World of Warcraft\WoW.exe
005B6E24 063DCF40 0001:001B5E24 C:\Program Files\World of Warcraft\WoW.exe
005B684D 063DFF84 0001:001B584D C:\Program Files\World of Warcraft\WoW.exe
005B7D18 063DFF94 0001:001B6D18 C:\Program Files\World of Warcraft\WoW.exe
006434FC 063DFFB4 0001:002424FC C:\Program Files\World of Warcraft\WoW.exe
7C80B683 063DFFEC 0001:0000A683 C:\WINDOWS\system32\kernel32.dll
----------------------------------------
Stack Trace (Using DBGHELP.DLL)
----------------------------------------
----------------------------------------
----------------------------------------
0x00320000 - 0x003B0000 C:\Program Files\World of Warcraft\fmod.dll
0x003B0000 - 0x003B9000 C:\WINDOWS\system32\Normaliz.dll
0x00400000 - 0x00CF9000 C:\Program Files\World of Warcraft\WoW.exe
0x063E0000 - 0x064F8000 C:\Program Files\World of Warcraft\dbghelp.dll
0x10000000 - 0x10069000 C:\Program Files\World of Warcraft\DivxDecoder.dll
0x4FDD0000 - 0x4FF76000 C:\WINDOWS\system32\d3d9.dll
0x5A900000 - 0x5A923000 C:\Program Files\iolo\Common\Lib\ioloHL.dll
0x5D090000 - 0x5D12A000 C:\WINDOWS\system32\COMCTL32.dll
0x5DCA0000 - 0x5DCE5000 C:\WINDOWS\system32\iertutil.dll
0x5ED00000 - 0x5EDCC000 C:\WINDOWS\system32\OPENGL32.dll
0x63000000 - 0x63014000 C:\WINDOWS\system32\SynTPFcs.dll
0x662B0000 - 0x66308000 C:\WINDOWS\system32\hnetcfg.dll
0x68B20000 - 0x68B40000 C:\WINDOWS\system32\GLU32.dll
0x6D990000 - 0x6D996000 C:\WINDOWS\system32\d3d8thk.dll
0x71A50000 - 0x71A8F000 C:\WINDOWS\system32\mswsock.dll
0x71A90000 - 0x71A98000 C:\WINDOWS\System32\wshtcpip.dll
0x71AA0000 - 0x71AA8000 C:\WINDOWS\system32\WS2HELP.dll
0x71AB0000 - 0x71AC7000 C:\WINDOWS\system32\WS2_32.dll
0x71BF0000 - 0x71C03000 C:\WINDOWS\system32\SAMLIB.dll
0x72D10000 - 0x72D18000 C:\WINDOWS\system32\msacm32.drv
0x72D20000 - 0x72D29000 C:\WINDOWS\system32\wdmaud.drv
0x73760000 - 0x737A9000 C:\WINDOWS\system32\DDRAW.dll
0x73BC0000 - 0x73BC6000 C:\WINDOWS\system32\DCIMAN32.dll
0x73EE0000 - 0x73EE4000 C:\WINDOWS\system32\KsUser.dll
0x73F10000 - 0x73F6C000 C:\WINDOWS\system32\dsound.dll
0x74720000 - 0x7476B000 C:\WINDOWS\system32\MSCTF.dll
0x755C0000 - 0x755EE000 C:\WINDOWS\system32\msctfime.ime
0x76B40000 - 0x76B6D000 C:\WINDOWS\system32\WINMM.dll
0x76C30000 - 0x76C5E000 C:\WINDOWS\system32\WINTRUST.dll
0x76C90000 - 0x76CB8000 C:\WINDOWS\system32\IMAGEHLP.dll
0x76F60000 - 0x76F8C000 C:\WINDOWS\system32\WLDAP32.dll
0x77120000 - 0x771AC000 C:\WINDOWS\system32\oleaut32.dll
0x771B0000 - 0x7727E000 C:\WINDOWS\system32\WININET.dll
0x773D0000 - 0x774D3000 C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.2982_x-ww_ac3f9c03\comctl32.dll
0x774E0000 - 0x7761D000 C:\WINDOWS\system32\ole32.dll
0x77690000 - 0x776B1000 C:\WINDOWS\system32\NTMARTA.DLL
0x77A80000 - 0x77B14000 C:\WINDOWS\system32\CRYPT32.dll
0x77B20000 - 0x77B32000 C:\WINDOWS\system32\MSASN1.dll
0x77BD0000 - 0x77BD7000 C:\WINDOWS\system32\midimap.dll
0x77BE0000 - 0x77BF5000 C:\WINDOWS\system32\MSACM32.dll
0x77C00000 - 0x77C08000 C:\WINDOWS\system32\VERSION.dll
0x77C10000 - 0x77C68000 C:\WINDOWS\system32\msvcrt.dll
0x77E70000 - 0x77F01000 C:\WINDOWS\system32\RPCRT4.dll
0x77F10000 - 0x77F57000 C:\WINDOWS\system32\GDI32.dll
0x77F60000 - 0x77FD6000 C:\WINDOWS\system32\SHLWAPI.dll
0x77FE0000 - 0x77FF1000 C:\WINDOWS\system32\Secur32.dll
0x7C800000 - 0x7C8F4000 C:\WINDOWS\system32\kernel32.dll
0x7C900000 - 0x7C9B0000 C:\WINDOWS\system32\ntdll.dll
0x7C9C0000 - 0x7D1D5000 C:\WINDOWS\system32\SHELL32.dll
0x7E410000 - 0x7E4A0000 C:\WINDOWS\system32\USER32.dll
----------------------------------------
Memory Dump
----------------------------------------
Stack: 1024 bytes starting at (ESP = 063DB864)
063DB860: 64 B8 3D 06 70 24 00 00 96 BA 3D 06 00 00 00 00 d.=.p\$....=.....
063DB870: 64 B8 3D 06 7C B8 3D 06 6C 3B 66 00 90 B8 3D 06 d.=.|.=.l;f...=.
063DB880: 58 55 64 00 70 24 00 00 03 00 00 00 00 00 00 00 XUd.p\$..........
063DB890: 0C C6 3D 06 E2 4D 64 00 00 00 00 00 86 00 10 85 ..=..Md.........
063DB8A0: 00 00 00 00 14 42 85 00 00 00 00 00 00 00 00 00 .....B..........
063DB8B0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB8C0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB8D0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB8E0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB8F0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB910: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB920: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB930: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB940: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB950: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB960: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB970: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB980: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB990: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DB9A0: 00 00 00 00 00 00 00 00 54 68 69 73 20 61 70 70 ........This app
063DB9B0: 6C 69 63 61 74 69 6F 6E 20 68 61 73 20 65 6E 63 lication has enc
063DB9C0: 6F 75 6E 74 65 72 65 64 20 61 20 63 72 69 74 69 ountered a criti
063DB9D0: 63 61 6C 20 65 72 72 6F 72 3A 0A 0A 45 52 52 4F cal error:..ERRO
063DB9E0: 52 20 23 31 33 34 20 28 30 78 38 35 31 30 30 30 R #134 (0x851000
063DB9F0: 38 36 29 20 46 61 74 61 6C 20 43 6F 6E 64 69 74 86) Fatal Condit
063DBA00: 69 6F 6E 0A 50 72 6F 67 72 61 6D 3A 09 43 3A 5C ion.Program:.C:\
063DBA10: 50 72 6F 67 72 61 6D 20 46 69 6C 65 73 5C 57 6F Program Files\Wo
063DBA20: 72 6C 64 20 6F 66 20 57 61 72 63 72 61 66 74 5C rld of Warcraft\
063DBA30: 57 6F 57 2E 65 78 65 0A 0A 55 6E 61 62 6C 65 20 WoW.exe..Unable
063DBA40: 74 6F 20 65 73 74 61 62 6C 69 73 68 20 61 20 63 to establish a c
063DBA50: 6F 6E 6E 65 63 74 69 6F 6E 20 74 6F 20 73 6F 63 onnection to soc
063DBA60: 6B 65 74 3A 20 54 68 65 20 61 74 74 65 6D 70 74 ket: The attempt
063DBA70: 20 74 6F 20 63 6F 6E 6E 65 63 74 20 77 61 73 20 to connect was
063DBA80: 66 6F 72 63 65 66 75 6C 6C 79 20 72 65 6A 65 63 forcefully rejec
063DBA90: 74 65 64 2E 0A 0A 00 00 00 00 00 00 00 00 00 00 ted.............
063DBAA0: 00 00 00 00 00 00 00 00 DC BA 3D 06 00 00 14 00 ..........=.....
063DBAB0: 32 07 91 7C 45 00 00 00 78 13 14 00 00 00 14 00 2..|E...x.......
063DBAC0: 68 EE 15 00 B4 BA 3D 06 00 00 00 00 F8 BC 3D 06 h.....=.......=.
063DBAD0: 18 EE 90 7C 38 07 91 7C FF FF FF FF 32 07 91 7C ...|8..|....2..|
063DBAE0: AB 06 91 7C EB 06 91 7C 00 00 00 00 10 C0 3D 06 ...|...|......=.
063DBAF0: 04 C0 3D 06 32 07 91 7C 45 00 00 00 78 13 14 00 ..=.2..|E...x...
063DBB00: 00 00 14 00 68 EE 15 00 F8 BA 3D 06 00 00 00 00 ....h.....=.....
063DBB10: 3C BD 3D 06 18 EE 90 7C 38 07 91 7C FF FF FF FF <.=....|8..|....
063DBB20: 32 07 91 7C AB 06 91 7C EB 06 91 7C 00 00 00 00 2..|...|...|....
063DBB30: 54 C0 3D 06 48 C0 3D 06 00 00 00 00 00 00 00 00 T.=.H.=.........
063DBB40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DBB50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DBB60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DBB70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DBB80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DBB90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
063DBBA0: BC BB 3D 06 41 50 91 7C E4 BB 3D 06 E4 00 13 00 ..=.AP.|..=.....
063DBBB0: 04 00 00 00 D4 00 13 00 00 00 13 00 FC BB 3D 06 ..............=.
063DBBC0: 33 52 91 7C E4 BB 3D 06 D4 00 13 00 00 00 00 00 3R.|..=.........
063DBBD0: 10 00 00 00 00 00 00 00 80 BC 3D 06 A0 10 13 00 ..........=.....
063DBBE0: D4 57 68 F4 34 BC 3D 06 C9 55 91 7C 18 BD 3D 06 .Wh.4.=..U.|..=.
063DBBF0: 00 00 00 00 01 00 00 00 88 BC 3D 06 A0 10 13 00 ..........=.....
063DBC00: 08 00 15 C0 F8 BB F4 7F 18 1F 24 00 54 1F 24 00 ..........\$.T.\$.
063DBC10: CC BE 3D 06 7D 5D 91 7C B6 26 92 7C 12 BC F4 7F ..=.}].|.&.|....
063DBC20: B4 5D 91 7C 88 02 AA 71 F8 BB F4 7F 00 00 00 00 .].|...q........
063DBC30: A0 BC 3D 06 8C BC 3D 06 40 BC 3D 06 A0 10 13 00 ..=...=.@.=.....
063DBC40: 00 00 00 00 90 BC 3D 06 F5 53 91 7C 68 BC 3D 06 ......=..S.|h.=.
063DBC50: 28 C2 97 7C 8C BC 3D 06 74 BC 3D 06 41 50 91 7C (..|..=.t.=.AP.|
063DBC60: 9C BC 3D 06 E4 00 13 00 04 00 00 00 D4 00 13 00 ..=.............
------------------------------------------------------------------------------
======================================================================
Hardware/Driver Information:
Processor: 0x0
Page Size: 4096 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9162815809249878, "perplexity": 582.8017373088564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645293619.80/warc/CC-MAIN-20150827031453-00037-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/how-do-i-find-these-things.69874/ | # How do I find these things?
1. Apr 3, 2005
### PrudensOptimus
An SRS of 16 Orange County Schools' juniors had a mean SAT MATH of x = 500; and a Standard Deviation of s=100. We know that the population of SAT Math scores for juniors in the district is approximately normaly distributed. We wish to determine a 90% confidence interval for the mean SAT math score mu for the population of all juniors in the district.
Question: 1. For the SAT MATH Data above, find the power of the test against the alternative mu=500 at the 5percent significance level. Assume that Sigma=100.
How do I find the alternative? I know power... mu = 500... what is 5% significance level mean?
Can you offer guidance or do you also need help?
Draft saved Draft deleted
Similar Discussions: How do I find these things? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8991765379905701, "perplexity": 1732.3944660626616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00477-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://googology.wikia.com/wiki/Recursion | ## FANDOM
10,044 Pages
In mathematics, recursion describes a situation where a function's output is plugged back into it. If we have a function \(f(x)\), we can recurse it to form functions \(f(f(x))\), \(f(f(f(x)))\), etc. \(f(f(\ldots f(f(x))\ldots ))\) with \(t\) copies of \(f\) is often abbreviated \(f^t(x)\). Recursion is very important to googology since it easily produces quickly-growing functions. If \(f(x)\) displays rapid growth, \(f^t(x)\) will grow much faster for large \(t\).
Every primitive recursive function breaks into two types of rules: base case, where for some argument value is given immediately, and recursive rule, where function applies for itself.
A recursive hierarchy of functions must have three rules: "brake" rule (starting function), "plugging" rule (it tells how many times the previous function must be plugged) and "decomposing" rule (for diagonalizing purposes, it finds the fundamental sequences of so-called limit ordinals).
Examples of recursive hierarchies are FGH, HAN, Hyper-E Notation and Bird's array notation. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980475664138794, "perplexity": 4744.638962125529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.84/warc/CC-MAIN-20180320224824-20180321004824-00489.warc.gz"} |
http://mathhelpforum.com/differential-equations/123519-second-order-ode-print.html | # The second order ODE
• January 12th 2010, 08:01 PM
Ment
The second order ODE
Help me with this
$y'' = 18\sin ^3y\cos y$, $y(1) =\frac{\pi}{2}$, $y'(1) = 3$.
I tried to solve it, but I got non-elementary integral (Crying)
• January 13th 2010, 03:26 AM
mr fantastic
Quote:
Originally Posted by Ment
Help me with this
$y'' = 18\sin ^3y\cos y$, $y(1) =\frac{\pi}{2}$, $y'(1) = 3$.
I tried to solve it, but I got non-elementary integral (Crying)
Note that $\frac{d^2y}{dt^2} = f(y) \Rightarrow v \frac{dv}{dy} = f(y)$ where $v = \frac{dy}{dt}$.
So solve for v (note that the DE is seperable) and then use your solution for v to solve for y.
• January 13th 2010, 04:15 AM
Ment
Quote:
Originally Posted by mr fantastic
Note that $\frac{d^2y}{dt^2} = f(y) \Rightarrow v \frac{dv}{dy} = f(y)$ where $v = \frac{dy}{dt}$.
So solve for v (note that the DE is seperable) and then use your solution for v to solve for y.
Here's my solution
$y'' = 18{\sin ^3}y\cos y, y(1) = \frac{\pi}{2},y'(1) = 3.$
$y' = p(y) \Rightarrow y'' = p'(y)p(y)$
$p'p = 18\sin^3y\cos y$
$\int pdp = 18\int \sin^3y\cos ydy$
$\frac{p^2}{2} = \frac{9}{2}\sin^4y + C_1 \Rightarrow (y')^2 = 9\sin^4y + C_1$
$y' = \pm \sqrt {9\sin^4y + C_1}$
$x = \pm \int \frac{dy}{\sqrt{9\sin^4y + C_1}}$
I got non-elementary integral.
• January 13th 2010, 04:23 AM
HallsofIvy
I guess, then, the question is why you would expect to get a solution in terms of an elementary integral! That's a very non-linear equation and non-linear differential equations tend to have non-elementary solutions. Something as simple (comparatively) as y"= sin(y) has solutions that can only be written in terms of elliptic integrals.
• January 13th 2010, 10:05 AM
DeMath
Quote:
Originally Posted by Ment
Here's my solution
$y'' = 18{\sin ^3}y\cos y, y(1) = \frac{\pi}{2},y'(1) = 3.$
$y' = p(y) \Rightarrow y'' = p'(y)p(y)$
$p'p = 18\sin^3y\cos y$
$\int pdp = 18\int \sin^3y\cos ydy$
$\frac{p^2}{2} = \frac{9}{2}\sin^4y + C_1 \Rightarrow (y')^2 = 9\sin^4y + C_1$
$y' = \pm \sqrt {9\sin^4y + C_1}$
$x = \pm \int \frac{dy}{\sqrt{9\sin^4y + C_1}}$
I got non-elementary integral.
You do not necessarily find a general solution to solve your problem of Cauchy.
You have the initial conditions in the symmetrical form: $x=1\ \leftrightarrow\ y={\pi\over2}\ \leftrightarrow\ y'=3$
Also you got $\left(y'\right)^2 = 9\sin^4y + C_1$. Now find constant $C_1$:
$\left\{\begin{gathered}\left(y'\right)^2 = 9\sin^4y + C_1, \hfill \\ y(1) = \frac{\pi}{2} ~\wedge~ y'\left( 1 \right) = 3; \hfill \\ \end{gathered} \right. ~ \Rightarrow ~ 3^2 = 9 \sin^4\frac{\pi}{2} + C_1 ~ \Leftrightarrow ~ C_1 = 0.$
So you have
$\left(y'\right)^2 = 9\sin^4y ~ \Leftrightarrow ~ y' = 3\sin^2y ~ \Rightarrow ~ x = \frac{1}{3}\int \frac{dx}{\sin^2y} = C_2 - \frac{1}{3}\cot y.$
Now find the constant $C_2$:
$\left\{ \begin{gathered}x = C_2 - \frac{1}{3}\cot y, \hfill \\ y\left( 1 \right) = \frac{\pi}{2}; \hfill \\ \end{gathered} \right. ~ \Rightarrow ~ 1 = C_2 - \frac{1}{3}\cot\frac{\pi}{2} ~ \Leftrightarrow ~ C_2 = 1.$
Finally you have:
$x = 1 - \frac{1}{3}\cot y.$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600703716278076, "perplexity": 1556.1270347202774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655589.82/warc/CC-MAIN-20150417045735-00038-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://learn.saylor.org/mod/book/view.php?id=31052&chapterid=7119 | ## The Logic of Maximizing Behavior and Maximizing in the Marketplace
Read these sections revisit the concept of marginal costs and benefits within the context of the consumer's (and the firm's) maximizing behavior. The later pages in this section define two new concepts: consumer surplus and producer surplus. Take a moment to read through the stated learning outcomes, which should be your goals as you read through the chapter. Attempt the "Try It" problem for each section.
### The Logic of Maximizing Behavior
#### ANSWER TO TRY IT! PROBLEM
Here are the completed data table and the table showing total and marginal benefit and cost.
Ms. Phan maximizes her net benefit by reducing her time studying economics to 2 hours. The change in her expectations reduced the benefit and increased the cost of studying economics. The completed graph of marginal benefit and marginal cost is at the far left. Notice that answering the question using the marginal decision rule gives the same answer. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441307544708252, "perplexity": 1279.5435873520755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585837.82/warc/CC-MAIN-20211024015104-20211024045104-00142.warc.gz"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.