https://proofwiki.org/wiki/Ring_of_Integers_Modulo_Prime_is_Field/Proof_2
# Ring of Integers Modulo Prime is Field/Proof 2
## Theorem
Let $m \in \Z: m \ge 2$. Let $\struct {\Z_m, +, \times}$ be the ring of integers modulo $m$. Then: $m$ is prime if and only if $\struct {\Z_m, +, \times}$ is a field.
## Proof
Let $p$ be prime. From Irreducible Elements of Ring of Integers, we have that $p$ is irreducible in the ring of integers $\struct {\Z, +, \times}$. From Ring of Integers is Principal Ideal Domain, $\struct {\Z, +, \times}$ is a principal ideal domain. Thus by Principal Ideal of Principal Ideal Domain is of Irreducible Element iff Maximal, $\ideal p$ is a maximal ideal of $\struct {\Z, +, \times}$. Hence by Maximal Ideal iff Quotient Ring is Field, $\Z / \ideal p$ is a field. But $\Z / \ideal p$ is exactly $\struct {\Z_p, +, \times}$. $\Box$ Now let $m$ be composite. Then $m$ is not irreducible in $\struct {\Z, +, \times}$. Thus by Principal Ideal of Principal Ideal Domain is of Irreducible Element iff Maximal, $\ideal m$ is not a maximal ideal of $\struct {\Z, +, \times}$. Hence by Maximal Ideal iff Quotient Ring is Field, $\Z / \ideal m$ is not a field. $\blacksquare$
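The dichotomy invites a quick brute-force sanity check. The following Python sketch (a numerical illustration, not part of the proof) verifies for small $m$ that every nonzero residue of $\Z_m$ has a multiplicative inverse exactly when $m$ is prime:

```python
def is_field(m: int) -> bool:
    """Z_m is a field iff every nonzero residue has an inverse mod m."""
    return all(
        any((a * b) % m == 1 for b in range(1, m))
        for a in range(1, m)
    )

def is_prime(m: int) -> bool:
    """Trial-division primality test."""
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

# The theorem predicts the two tests agree for every m >= 2:
assert all(is_field(m) == is_prime(m) for m in range(2, 200))
```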
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.914840042591095, "perplexity": 162.80689614858633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737883.59/warc/CC-MAIN-20200808135620-20200808165620-00575.warc.gz"}
https://www.wsa.rwth-aachen.de/cms/WSA/Forschung/Publikationen/~gkev/Details/?file=230332&lidx=1
# A high resolution collision algorithm for anisotropic particle populations Pischke, Philipp (Corresponding author); Kneer, Reinhold Aachen : Publikationsserver der RWTH Aachen University (2014) Conference Presentation Abstract In turbulent particle-laden flows such as liquid sprays, droplet collisions make a significant contribution to momentum transfer and energy dissipation. With Lagrangian particle tracking by the stochastic parcel method, only a computational subset of the particle population is simulated, known as computational parcels; the prediction of particle collisions is based on a statistical assessment of collision probabilities. Prior to the preparation of this work, various collision algorithms were investigated with respect to exactness, robustness, and convergence. Within these preliminary studies, two significant errors were identified, namely voidage errors and gradient errors, which are unresolved in any collision algorithm found today. To address these issues, a hybrid deterministic-stochastic formulation for the collision probability has been derived, which treats parcel collisions in a deterministic manner, i.e. based on their trajectories and distances, while the collisions of the represented particles are predicted in a stochastic manner. The methods derived do not introduce additional numerical parameters, i.e. spatial and temporal resolution are defined by the number of parcels and the numerical time-step only. The formulations have undergone a rigorous analytical and numerical validation, and show advanced accuracy and convergence behavior when compared to other formulations for stochastic collision algorithms. The methodology, convergence tests, and exemplary applications are the subject of this presentation.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8842476010322571, "perplexity": 2180.537686542341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00105.warc.gz"}
http://www.nag.com/numeric/fl/nagdoc_fl24/html/D01/d01gcf.html
# NAG Library Routine Document: D01GCF
Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.
## 1 Purpose
D01GCF calculates an approximation to a definite integral in up to $20$ dimensions, using the Korobov–Conroy number theoretic method.
## 2 Specification
SUBROUTINE D01GCF ( NDIM, F, REGION, NPTS, VK, NRAND, ITRANS, RES, ERR, IFAIL) INTEGER NDIM, NPTS, NRAND, ITRANS, IFAIL REAL (KIND=nag_wp) F, VK(NDIM), RES, ERR EXTERNAL F, REGION
## 3 Description
D01GCF calculates an approximation to the integral $I = \int_{c_1}^{d_1} dx_1 \cdots \int_{c_n}^{d_n} dx_n \, f(x_1, x_2, \ldots, x_n)$ (1) using the Korobov–Conroy number theoretic method (see Korobov (1957), Korobov (1963) and Conroy (1967)). The region of integration defined in (1) is such that generally $c_i$ and $d_i$ may be functions of $x_1, x_2, \ldots, x_{i-1}$, for $i = 2, 3, \ldots, n$, with $c_1$ and $d_1$ constants. The integral is first of all transformed to an integral over the $n$-cube $[0,1]^n$ by the change of variables $x_i = c_i + (d_i - c_i) y_i, \quad i = 1, 2, \ldots, n.$ The method then uses as its basis the number theoretic formula for the $n$-cube $[0,1]^n$: $\int_0^1 dx_1 \cdots \int_0^1 dx_n \, g(x_1, x_2, \ldots, x_n) = \frac{1}{p} \sum_{k=1}^{p} g\left( \left\{ k \frac{a_1}{p} \right\}, \ldots, \left\{ k \frac{a_n}{p} \right\} \right) - E$ (2) where $\{x\}$ denotes the fractional part of $x$, $a_1, a_2, \ldots, a_n$ are the so-called optimal coefficients, $E$ is the error, and $p$ is a prime integer. (It is strictly only necessary that $p$ be relatively prime to all $a_1, a_2, \ldots, a_n$ and is in fact chosen to be even for some cases in Conroy (1967).) The method makes use of properties of the Fourier expansion of $g(x_1, x_2, \ldots, x_n)$, which is assumed to have some degree of periodicity. Depending on the choice of $a_1, a_2, \ldots, a_n$ the contributions from certain groups of Fourier coefficients are eliminated from the error, $E$. Korobov shows that $a_1, a_2, \ldots, a_n$ can be chosen so that the error satisfies $E \le C K p^{-\alpha} \ln^{\alpha\beta} p$ (3) where $\alpha$ and $C$ are real numbers depending on the convergence rate of the Fourier series, $\beta$ is a constant depending on $n$, and $K$ is a constant depending on $\alpha$ and $n$. There are a number of procedures for calculating these optimal coefficients. Korobov imposes the constraint that $a_1 = 1 \text{ and } a_i = a^{i-1} \pmod{p}$ (4) and gives a procedure for calculating the parameter $a$ to satisfy the optimal conditions. In this routine the periodisation is achieved by the simple transformation $x_i = y_i^2 (3 - 2 y_i), \quad i = 1, 2, \ldots, n.$ More sophisticated periodisation procedures are available, but in practice the degree of periodisation does not appear to be a critical requirement of the method. An easily calculable error estimate is not available apart from repetition with an increasing sequence of values of $p$, which can yield erratic results. The difficulties have been studied by Cranley and Patterson (1976), who have proposed a Monte Carlo error estimate arising from converting (2) into a stochastic integration rule by the inclusion of a random origin shift which leaves the form of the error (3) unchanged; i.e., in the formula (2), $\left\{ k \frac{a_i}{p} \right\}$ is replaced by $\left\{ \alpha_i + k \frac{a_i}{p} \right\}$, for $i = 1, 2, \ldots, n$, where each $\alpha_i$ is uniformly distributed over $[0,1]$.
Computing the integral for each of a sequence of random vectors $\alpha$ allows a ‘standard error’ to be estimated. This routine provides built-in sets of optimal coefficients, corresponding to six different values of $p$. Alternatively, the optimal coefficients may be supplied by you. Routines D01GYF and D01GZF compute the optimal coefficients for the cases where $p$ is a prime number or $p$ is a product of two primes, respectively.
## 4 References
Conroy H (1967) Molecular Schrödinger equation VIII. A new method for evaluating multi-dimensional integrals. J. Chem. Phys. 47 5307–5318
Cranley R and Patterson T N L (1976) Randomisation of number theoretic methods for multiple integration. SIAM J. Numer. Anal. 13 904–914
Korobov N M (1957) The approximate calculation of multiple integrals using number theoretic methods. Dokl. Acad. Nauk SSSR 115 1062–1065
Korobov N M (1963) Number Theoretic Methods in Approximate Analysis. Fizmatgiz, Moscow
## 5 Parameters
1: NDIM – INTEGER Input On entry: $n$, the number of dimensions of the integral. Constraint: $1\le {\mathbf{NDIM}}\le 20$. 2: F – REAL (KIND=nag_wp) FUNCTION, supplied by the user. External Procedure. F must return the value of the integrand $f$ at a given point. The specification of F is: FUNCTION F ( NDIM, X) REAL (KIND=nag_wp) F INTEGER NDIM REAL (KIND=nag_wp) X(NDIM) 1: NDIM – INTEGER Input On entry: $n$, the number of dimensions of the integral. 2: X(NDIM) – REAL (KIND=nag_wp) array Input On entry: the coordinates of the point at which the integrand $f$ must be evaluated. F must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which D01GCF is called. Parameters denoted as Input must not be changed by this procedure. 3: REGION – SUBROUTINE, supplied by the user. External Procedure. REGION must evaluate the limits of integration in any dimension. The specification of REGION is: SUBROUTINE REGION ( NDIM, X, J, C, D) INTEGER NDIM, J REAL (KIND=nag_wp) X(NDIM), C, D 1: NDIM – INTEGER Input On entry: $n$, the number of dimensions of the integral. 2: X(NDIM) – REAL (KIND=nag_wp) array Input On entry: ${\mathbf{X}}\left(1\right),\dots ,{\mathbf{X}}\left(j-1\right)$ contain the current values of the first $\left(j-1\right)$ variables, which may be used if necessary in calculating $c_j$ and $d_j$. 3: J – INTEGER Input On entry: the index $j$ for which the limits of the range of integration are required. 4: C – REAL (KIND=nag_wp) Output On exit: the lower limit $c_j$ of the range of $x_j$. 5: D – REAL (KIND=nag_wp) Output On exit: the upper limit $d_j$ of the range of $x_j$. REGION must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which D01GCF is called. Parameters denoted as Input must not be changed by this procedure. 4: NPTS – INTEGER Input On entry: the Korobov rule to be used. There are two alternatives depending on the value of NPTS. (i) $1\le {\mathbf{NPTS}}\le 6$. In this case one of six preset rules is chosen using $2129$, $5003$, $10007$, $20011$, $40009$ or $80021$ points depending on the respective value of NPTS being $1$, $2$, $3$, $4$, $5$ or $6$. (ii) ${\mathbf{NPTS}}>6$. NPTS is the number of actual points to be used with corresponding optimal coefficients supplied in the array VK. Constraint: ${\mathbf{NPTS}}\ge 1$.
5: VK(NDIM) – REAL (KIND=nag_wp) array Input/Output On entry: if ${\mathbf{NPTS}}>6$, VK must contain the $n$ optimal coefficients (which may be calculated using D01GYF or D01GZF). If ${\mathbf{NPTS}}\le 6$, VK need not be set. On exit: if ${\mathbf{NPTS}}>6$, VK is unchanged. If ${\mathbf{NPTS}}\le 6$, VK contains the $n$ optimal coefficients used by the preset rule. 6: NRAND – INTEGER Input On entry: the number of random samples to be generated in the error estimation (generally a small value, say $3$ to $5$, is sufficient). The total number of integrand evaluations will be ${\mathbf{NRAND}}\times{\mathbf{NPTS}}$. Constraint: ${\mathbf{NRAND}}\ge 1$. 7: ITRANS – INTEGER Input On entry: indicates whether the periodising transformation is to be used. ${\mathbf{ITRANS}}=0$: the transformation is to be used. ${\mathbf{ITRANS}}\ne 0$: the transformation is to be suppressed (to cover cases where the integrand may already be periodic or where you want to specify a particular transformation in the definition of F). Suggested value: ${\mathbf{ITRANS}}=0$. 8: RES – REAL (KIND=nag_wp) Output On exit: the approximation to the integral $I$. 9: ERR – REAL (KIND=nag_wp) Output On exit: the standard error as computed from NRAND sample values. If ${\mathbf{NRAND}}=1$, then ERR contains zero. 10: IFAIL – INTEGER Input/Output On entry: IFAIL must be set to $0$, $-1$ or $1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-1$ or $1$ is used it is essential to test the value of IFAIL on exit. On exit: ${\mathbf{IFAIL}}=0$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF). Errors or warnings detected by the routine: ${\mathbf{IFAIL}}=1$ On entry, ${\mathbf{NDIM}}<1$, or ${\mathbf{NDIM}}>20$. ${\mathbf{IFAIL}}=2$ On entry, ${\mathbf{NPTS}}<1$. ${\mathbf{IFAIL}}=3$ On entry, ${\mathbf{NRAND}}<1$.
## 7 Accuracy
An estimate of the absolute standard error is given by the value, on exit, of ERR.
## 8 Further Comments
The time taken by D01GCF will be approximately proportional to ${\mathbf{NRAND}}\times p$, where $p$ is the number of points used. The exact values of RES and ERR on return will depend (within statistical limits) on the sequence of random numbers generated within D01GCF by calls to G05SAF. Separate runs will produce identical answers.
## 9 Example
This example calculates the integral $\int_0^1 \int_0^1 \int_0^1 \int_0^1 \cos\left(0.5 + 2(x_1 + x_2 + x_3 + x_4) - 4\right) \, dx_1 \, dx_2 \, dx_3 \, dx_4 .$
### 9.1 Program Text
Program Text (d01gcfe.f90)
### 9.2 Program Data
None.
### 9.3 Program Results
Program Results (d01gcfe.r)
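The randomised rule in (2) is easy to prototype outside the NAG Library. The following Python sketch is an illustration of the method, not the D01GCF implementation; the coefficient vector `a` is a toy choice, not an optimal Korobov set. It applies a rank-1 lattice rule with Cranley–Patterson random shifts and reports the sample standard error:

```python
import numpy as np

def korobov_rule(f, a, p, nrand=4, rng=None):
    """Estimate the integral of f over [0,1]^n with a random-shifted
    rank-1 lattice rule: nodes {alpha + k*a/p}, k = 1..p."""
    rng = np.random.default_rng(rng)
    a = np.asarray(a, dtype=float)
    k = np.arange(1, p + 1)[:, None]           # shape (p, 1)
    estimates = []
    for _ in range(nrand):
        alpha = rng.random(a.size)             # random origin shift
        x = np.mod(alpha + k * a / p, 1.0)     # (p, n) lattice points
        estimates.append(f(x).mean())
    estimates = np.asarray(estimates)
    # standard error of the mean over the nrand shifted rules
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(nrand)

# the 4-dimensional integrand from the documentation's example
f = lambda x: np.cos(0.5 + 2.0 * x.sum(axis=1) - 4.0)
res, err = korobov_rule(f, a=[1, 387, 1542, 4629], p=5003)
```

Unlike D01GCF, this sketch omits the periodising transformation $x_i = y_i^2(3 - 2y_i)$, so the optimal convergence rate is attained only for integrands that are already periodic.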
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 110, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970242977142334, "perplexity": 2021.0308407877394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663135.34/warc/CC-MAIN-20140930004103-00067-ip-10-234-18-248.ec2.internal.warc.gz"}
https://jp.maplesoft.com/support/help/maple/view.aspx?path=ifelse
ifelse - Maple Help ifelse the conditional function Calling Sequence Conditional Function ifelse(conditional expression, true expression, false expression) if(conditional expression, true expression, false expression) Description • The ifelse(c,a,b) function requires three arguments and returns the evaluation of the second or third, depending on the truth value of the first. • The first argument c must be a conditional expression; if it evaluates to true, a is evaluated and returned. If c evaluates to false or FAIL, then b is evaluated and returned. The name if is an alias for ifelse. When using this name, if must be enclosed in back quotes (left single quotes) because if is a Maple reserved word. Conditional Expressions • A conditional expression is any Boolean expression formed using the relational operators ( <, <=, >, >=, =, <> ), the logical operators (and, or, not), and the logical names (true, false, FAIL). • When a conditional expression is evaluated in this context, it must evaluate to true, false, or FAIL; otherwise, an error occurs. • The ifelse function is thread-safe as of Maple 15. Examples Simple Case > a := 3; b := 5 a := 3 b := 5 (1) The conditional operator can be used inside an equation. Since b (5) is not less than a (3), the result of the inner ifelse command will be the false expression, b, which is used as part of the calculation. > 5*(Pi + ifelse(b < a, a, b)) 5 π + 25 (2) Using ifelse with NULL Since a is less than b, x is assigned NULL. Unlike every other function call, which removes NULLs from the calling sequence, NULL can be used with the ifelse operator. > x := ifelse(a < b, NULL, b)
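For readers outside Maple, the described semantics are easy to restate in another language. This Python sketch is an analogy, not Maple code; `FAIL` and the thunk-based deferral are devices of the sketch, chosen to mirror the three-valued condition handling the help page describes:

```python
FAIL = object()  # stand-in for Maple's third truth value

def ifelse(condition, true_thunk, false_thunk):
    """Return true_thunk() if condition is True; false_thunk() if it is
    False or FAIL; otherwise raise, as the help page describes."""
    if condition is True:
        return true_thunk()
    if condition is False or condition is FAIL:
        return false_thunk()
    raise ValueError("condition must evaluate to true, false, or FAIL")

a, b = 3, 5
result = ifelse(b < a, lambda: a, lambda: b)  # -> 5, the false expression
```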
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8764679431915283, "perplexity": 4477.414027887914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00607.warc.gz"}
http://www.mathcaptain.com/algebra/solving-equations-with-variables-on-both-sides.html
Equations can be classified into linear, quadratic, cubic, etc. Similarly, linear equations can be classified into those containing one variable, two variables, or three or more variables. While solving these equations we follow the rules of equality discussed in earlier sections. To solve a linear equation in only one variable, one equation is enough. To solve linear equations containing two variables, we need two equations. In this section let us see how to solve equations with variables on both sides, and discuss a few examples.
## How to Solve Equations with Variables on Both Sides
The solution to a given equation is the value of the variable which satisfies the equation. Rules to solve equations with variables on both sides: Linear Equations: The general form of a linear equation is $ax + b = 0$, where a $\ne$ 0. In the case of linear equations in one variable with variables on both sides, we bring the variable terms to one side of the equals sign and the constants to the other side, and then solve for x.
### Example: 2x - 5 = 13 - 4x
Solution: We have 2x - 5 = 13 - 4x. Eliminating the constant on the left hand side, we get 2x - 5 + 5 = 13 - 4x + 5 => 2x = 18 - 4x => 2x + 4x = 18 - 4x + 4x => 6x = 18 => x = $\frac{18}{6}$ = 3. Therefore, the solution is x = 3. Quadratic Equations: Quadratic equations are of the form $ax^2 + bx + c = 0$, where a $\ne$ 0. A quadratic equation is a polynomial of degree two. There will be two solutions for a quadratic equation; the solutions are called the roots of the equation. The equation can be solved by a. Factorization b. Formula Method c. Completing the square method. When the equation has variables on both sides, we follow the rules of equality so that all the terms containing the variables and the constants are brought to the left side of the equation, giving a final equation of the form $ax^2 + bx + c = 0$; then we solve the equation by one of the three methods shown above.
### Example: Solve (x + 2) = $\frac{(6x+3)}{(x-2)}$
Solution: We have (x + 2) = $\frac{(6x+3)}{(x-2)}$ (x + 2)(x - 2) = 6x + 3 [by multiplying by (x - 2) on both sides] $x^2$ - 4 = 6x + 3 => $x^2$ - 6x - 4 - 3 = 0 => $x^2$ - 6x - 7 = 0 => $x^2$ - 7x + x - 7 = 0 => x(x - 7) + 1(x - 7) = 0 => (x - 7)(x + 1) = 0 => x - 7 = 0 or x + 1 = 0 => x = 7 or x = -1 Therefore the solution to the above equation is x = {-1, 7}.
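Both worked answers can be checked symbolically. A minimal sketch using Python's sympy library (assuming sympy is available):

```python
from sympy import symbols, Eq, solve

x = symbols('x')

# linear example: 2x - 5 = 13 - 4x
print(solve(Eq(2*x - 5, 13 - 4*x), x))         # [3]

# quadratic example: (x + 2) = (6x + 3)/(x - 2)
print(solve(Eq(x + 2, (6*x + 3)/(x - 2)), x))  # [-1, 7]
```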
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8637669682502747, "perplexity": 186.9591583316851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067100.54/warc/CC-MAIN-20141017150107-00339-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.ias.edu/video/galois/liu
The theory of $(\varphi, \Gamma)$-modules, which was introduced by Fontaine in the early 90's, classifies local Galois representations into modules over certain power series rings carrying certain extra structures ($\varphi$ and $\Gamma$). In a recent joint work with Kedlaya, we generalize Fontaine's theory to geometric families of local Galois representations. Namely, we exhibit an equivalence between étale local systems over nonarchimedean analytic spaces and certain modules over commutative period rings carrying similar extra structures. In this talk, I will explain some of the basic ideas and constructions of this work.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566448092460632, "perplexity": 382.21355374771736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00407.warc.gz"}
http://mathhelpforum.com/algebra/67332-x-3-kx-7-divided-x-2-a.html
# Math Help - x^3+kx+7 is divided by (x-2) .... 1. ## x^3+kx+7 is divided by (x-2) .... the remainder is 3. find the value of k. thanks in advance 2. Hello, Originally Posted by coyoteflare the remainder is 3. find the value of k. thanks in advance We have $P(x)=x^3+kx+7$ So there exists a polynomial $Q(x)$ such that: $P(x)=Q(x)(x-2)+3$ One simple way to solve it is to let $x=2$ So we have: $P(2)=Q(2)(2-2)+3=3$ $(2)^3+2k+7=3$ $8+2k+7=3$ $k=\dots$
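The remainder theorem makes this a one-line check in a computer algebra system. A small sketch in Python with sympy (assuming sympy; note the thread itself leaves the last step to the reader, while this check returns the value):

```python
from sympy import symbols, solve

x, k = symbols('x k')
P = x**3 + k*x + 7

# By the remainder theorem, the remainder on division by (x - 2) is P(2).
print(solve(P.subs(x, 2) - 3, k))  # [-6]
```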
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9076089859008789, "perplexity": 1796.625270735751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447547818.113/warc/CC-MAIN-20141224185907-00049-ip-10-231-17-201.ec2.internal.warc.gz"}
http://www.wikihow.com/Calculate-Inflation
How to Calculate Inflation Inflation is a key concept in economics that represents the increase in the price of consumer products over a period of time.[1] It can also be used to calculate deflation, or a drop in prices. Calculating inflation requires a consumer price index or historical price records and a formula. You can use the formula to calculate inflation over almost any period of time. 1. Look up the average prices of several products across a few years. Inflation is calculated by comparing prices of standard goods across time -- things like a loaf of white bread or a gallon of milk. This information is either given to you in your problem ("In 1962, the cost of milk was $1.00 a gallon...") or you can look up the real-life prices on the Consumer Price Index (CPI).[2] • The more data you have, the better. If you're offered sample prices on your practice problem, you must average all of the prices and use this -- don't just choose one price to compare. • The CPI is calculated annually based on the average price of many goods, making it far more effective than a single good.[3] 2. Load the Consumer Price Index. This is a breakdown, by month and year, of the changes in inflation based on the averages mentioned above. Any time the CPI for a later month is higher than for an earlier month, there has been inflation; if it is lower, there has been deflation. • You can also go directly to the US Bureau of Labor Statistics website (bls.gov/cpi) to download current inflation reports. • Inflation is calculated with the same formula in each country. Make sure you are using the same currency for all your numbers in the calculation. 3. Choose the period of time for which you will be calculating inflation. You can use months, years, or decades, as long as you clarify the period of time in your answer. Make sure you note the time frame you're measuring.[4] • Inflation must be over a timeframe -- there is no such thing as "general inflation." Remember, inflation tracks the worth of money: how much you need at one time to buy a good vs how much you need during another time. You must compare the price now to the price during some other time period to get inflation. 4. Find the price of the product you're studying or the figure on the Consumer Price Index for your earlier date. Look up your first date on the CPI, or get the average price of the good you're studying. 5. Find the price of the product or consumer price index figure for your later date. Now, look up or calculate the data for the current price of the object. If you're doing historical research (for example, examining inflation before and after the Vietnam War), then it can help to get data from 2-3 different years, allowing you to account for any single-year spikes in inflation that could change the overall economic trends you're studying.[5]
Part 2 Calculating Inflation 1. Learn the Inflation Rate Formula. This formula is simple. The "top" finds the change in price, and the bottom expresses that change as a fraction of the historical (starting) price. Then you can multiply this answer by 100 to turn it into an easily-read percentage: • ${\displaystyle {\frac {CurrentCPI-HistoricalCPI}{HistoricalCPI}}*100}$[6] 2. Plug the data into the formula. For example, imagine that we are calculating inflation based on the price of bread between 2010 and 2012, where the price of bread in 2012 is $3.67 and the price of bread in 2010 is $3.25.
• ${\displaystyle {\frac {\$3.67-\$3.25}{\$3.25}}*100}$ 3. Simplify the problem through order of operations. Solve for the difference in price, then divide. Multiply the outcome by 100 to get a percentage. • ${\displaystyle {\frac {\$3.67-\$3.25}{\$3.25}}*100}$ • ${\displaystyle {\frac {\$0.42}{\$3.25}}*100}$ • Finally, ${\displaystyle Inflation=0.1292*100}$. • The inflation rate is 12.9% 4. Check your answer against the US government-run Inflation Calculator, which can check inflation between any two years in US history. If you're looking for perfect, real-world numbers, then go straight to the source. The calculator simply requires you to enter your amount and the years to compare, and then it returns an inflation rate. 5. Know how to read inflation. This percentage means that your money is worth about 12.9% less in today's dollars than it was in 2010. In other words, most products cost on average 12.9% more than they did in 2010 (note -- this is an example, not actual data). If you had a negative number for your answer, you've been dealing with deflation, where a scarcity of cash makes your money more valuable, not less, over time. Use the formula just as you would with a positive figure.[7][8] 6. Label the inflation rate according to the period of time you have calculated. The inflation figure is useful only when assigned to a time period. Make sure any studies, news reports, or problems correctly account for the exact amount of time.[9]
Community Q&A • How do I calculate inflation given the nominal GDP and the real GDP plus the deflator? wikiHow Contributor: Get the difference of the RGDP and NGDP. Divide the difference by the NGDP. The quotient of the two is the inflation rate. • If the inflation rate is 1500%, does that mean that a $5 item will cost 1500 times the base price? 1500% is 15 times the original, so 1500% of $5 is $75. But since inflation is a rate of growth, it means the $5 item would inflate to $5 + $75 or $80.
Tips • You can use the inflation calculator on the Bureau of Labor Statistics website to calculate inflation between years in the United States: go to bls.gov/data/inflation_calculator.htm and type in an amount of money and the period of years you want to use.
Things You'll Need • Paper • Pencil • Calculator • Consumer price index
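The whole procedure reduces to a few lines of code. A minimal Python sketch (the function name and sample prices are illustrative):

```python
def inflation_rate(historical_price: float, current_price: float) -> float:
    """Percentage change relative to the historical (earlier) price."""
    return (current_price - historical_price) / historical_price * 100

# the bread example from above: $3.25 in 2010, $3.67 in 2012
print(f"{inflation_rate(3.25, 3.67):.1f}%")  # 12.9%
```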
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313071131706238, "perplexity": 1675.1517105745384}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542250.48/warc/CC-MAIN-20161202170902-00383-ip-10-31-129-80.ec2.internal.warc.gz"}
https://notes.aquiles.me/sources_of_error_in_measuring_diffusion_coefficient_through_nanoparticle_tracking_analysis/
# Sources of error in measuring diffusion coefficient through nanoparticle tracking analysis
When performing nanoparticle tracking analysis measurements, there can be many different sources of error. First, it is important to distinguish between the two methods that can be used to calculate the diffusion coefficient; the second is the one that yields the best results and is the one most widely employed. When measuring the average diffusion coefficient, we must first consider sampling error, which describes how accurately the distribution of values obtained from a given sample represents the underlying population. At the single-particle level, it is important to note the effect of temperature on the determination of error in diffusion measurements. However, this seems more like a systematic error than a statistical fluctuation at the single-particle level. The same applies to the effect of vibration on diffusion coefficient determination. It is important to consider that there may be an optimal track length for MSD measurements as well as an optimal number of fitting points for MSD measurements, which are discussed further elsewhere.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9862254858016968, "perplexity": 446.78940226172864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00714.warc.gz"}
https://link.springer.com/chapter/10.1007/978-3-319-01315-2_10
# A Quasi Ramsey Theorem
Hans Jürgen Prömel (Chapter)
## Abstract
The basic problem of (combinatorial) discrepancy theory is how to color a set with two colors as uniformly as possible with respect to a given family of subsets. The aim is to achieve that each of the two colors meets each subset under consideration in approximately the same number of elements. From the finite Ramsey theorem (cf. Corollary 7.2) we know already that if the set of all 2-subsets of $$n$$ is 2-colored, and the family of all $$\ell$$-subsets for some $$\ell < \frac{1}{2}\log n$$ is considered, the situation is as bad as possible: for any 2-coloring we will find a monochromatic $$\ell$$-set. As $$\ell$$ gets larger one can color more uniformly, though one still has the preponderance phenomenon.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8923103213310242, "perplexity": 2488.5950325099266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00316.warc.gz"}
https://rdrr.io/cran/bioRad/f/vignettes/range_correction.Rmd
# Correction for range effects In bioRad: Biological Analysis and Visualization of Weather Radar Data
## 1 Preparations
library(bioRad)
## 2 Beam propagation
We start with the changing properties of the radar beam as it propagates through the atmosphere. We define: • range: the length of the skywave path between target and the radar antenna, also known as slant range • distance: the distance of the target from the radar as measured along the earth's surface, also known as down range. The beam height changes as a result of the earth's curvature and beam refraction (a Python sketch of the underlying beam-height model is given at the end of this vignette): # define a range grid from 0 to 200 km: my_range <- seq(0, 200000, 1000) # plot the beam height for each range, for a 0.5 degree elevation beam: plot(my_range, beam_height(my_range, elev = .5), xlab = "range [m]", ylab = "beam height [m]") The beam width also broadens with range: # plot the beam width, for a beam opening angle of 1 degree (typical for most weather radars): plot(my_range, beam_width(my_range, beam_angle = 1), xlab = "range [m]", ylab = "beam width [m]") We can now combine the relations for beam width and beam height to calculate the beam profile as a function of height, i.e. the altitudinal normalized distribution of radiated energy by the beam. Let's plot the radiation profile for a 0.5 and 2 degree beam elevation at 50 km from the radar (the height axis runs from ground level straight up): # plot the beam profile, for a 0.5 degree elevation beam at 50 km distance from the radar: plot(beam_profile(height = 0:4000, 50000, 0.5), 0:4000, xlab = "normalized radiated energy", ylab = "height [m]", main = "beam elevation: 0.5 deg, distance=50km") # plot the beam profile, for a 2 degree elevation beam at 50 km distance from the radar: plot(beam_profile(height = 0:4000, 50000, 2), 0:4000, xlab = "normalized radiated energy", ylab = "height [m]", main = "beam elevation: 2 deg, distance=50km") We can also calculate the normalized radiation profile for the two beams combined: # plot the combined beam profile for a 0.5 and 2.0 degree elevation beam at 50 km distance from the radar: plot(beam_profile(height = 0:4000, 50000, c(0.5, 2)), 0:4000, xlab = "normalized radiated energy", ylab = "height [m]", main = "beam elevations: 0.5,2 deg, distance=50km")
## 3 Vertical profiles and polar volumes
Let us now assume we have calculated a vertical profile (of birds) for a certain polar volume file. Let's load an example: # let's load an example polar volume: pvolfile <- system.file("extdata", "volume.h5", package = "bioRad") # a vertical profile can also be calculated from the polar volume directly, using # calculate_vp(pvolfile) # but for now we will use bioRad's example vertical profile already calculated: example_vp Let's plot the vertical profile, for the quantity eta (the linear reflectivity): plot(example_vp, quantity = "eta") Note that eta is directly related to the reflectivity factor (DBZH), i.e. a reflectivity factor of 5 dBZ amounts to the following eta at a radar wavelength of 5.3 cm: dbz_to_eta(5, wavelength = 5.3)
## 4 Spatial estimates of vertically integrated density (VID)
Our goal is to estimate a spatial image of vertically integrated density (VID) based on all elevation scans of the radar, while accounting for the changing overlap between the radar beams as a function of range. For this we need: • a polar volume, containing multiple scans • a vertical profile (of birds) calculated for that polar volume • a grid that we define relative to the earth's surface, on which we will calculate the range corrected image.
The pixel locations on the ground are easily translated into a corresponding azimuth and range of the various scans (see function beam_range()). For each scan within the polar volume we: • calculate for each ground surface pixel the vertical radiation profile at that pixel for that particular scan, following paragraph 2 above. • calculate the reflectivity we expect at that pixel ($\eta_{\text{expected}}$), given the vertical profile (of birds) and the part of the profile radiated by the beam. This $\eta_{\text{expected}}$ is simply the average of (linear) eta in the profile (as plotted in paragraph 3 above), weighted by the vertical radiation profile (as plotted in paragraph 2). • calculate the observed eta at this pixel, which is simply given by dbz_to_eta(DBZH) (see paragraph 3), with DBZH the reflectivity factor measured at the pixel's distance from the radar. For each pixel on the ground, we thus end up with a set of $\eta_{\text{expected}}$ and a set of $\eta_{\text{observed}}$. From those we can calculate a correction factor at that pixel, as $R = \sum \eta_{\text{observed}} / \sum \eta_{\text{expected}}$, with the sum running over scans. To arrive at the final PPI image: • we calculate the vertically integrated density (vid) and vertically integrated reflectivity (vir) for the profile, using the function integrate_profile(). • the spatial range-corrected PPI for VID is simply the adjustment factor image (R), multiplied by the vid calculated for the profile • likewise, the spatial range-corrected PPI for VIR is simply the R image, multiplied by the vir calculated for the profile
## 5 Example: estimating a VID image
Let's first make a PPI plot of the lowest uncorrected scan: # extract the first scan from the polar volume: my_scan <- example_pvol$scans[[1]] # project it as a PPI on the ground: my_ppi <- project_as_ppi(my_scan, range_max = 100000) # plot it plot(my_ppi) Now let's calculate the range-corrected PPI: # let's use a 500 metre spatial grid (res), and restrict to 100x100 km area my_corrected_ppi <- integrate_to_ppi(example_pvol, example_vp, res = 500, xlim = c(-100000, 100000), ylim = c(-100000, 100000)) my_corrected_ppi The range-corrected PPI has four parameters: VIR, VID, R, overlap. Let's plot the adjustment factor R: # plot the adjustment factor R: plot(my_corrected_ppi, param = "R") Let's also plot the vertically integrated reflectivity: plot(my_corrected_ppi, param = "VIR") Or plot the vertically integrated density on a map: bm <- download_basemap(my_corrected_ppi) map(my_corrected_ppi, bm, param = "VIR", alpha = .5)
## 6 Overlap between radiation profile and bird profile
At large distances from the radar we expect correction for range effects to become more difficult: 1. radar beams start to overshoot the aerial layers with biology, i.e. the overlap between the vertical profile and the emitted radiation profile decreases 2. at large distances weak biological echoes may get below the radar's detection level We can calculate the overlap between emitted radiation and the biological profile as follows: # calculate overlap between vertical profile of birds bpo <- beam_profile_overlap(example_vp, get_elevation_angles(example_pvol), seq(0, 100000, 1000), quantity = "eta") # plot the calculated overlap: plot(bpo) The function first normalizes the vertical profile altitudinal distribution.
This can be done using either the profile quantity eta or dens, whichever is preferred (note that dens, the density, is sometimes thresholded to zero based on the radial velocity standard deviation, see sd_vvp_threshold(), while eta is not). It then calculates the overlap between the normalized vertical profile and the normalized radiation profile as calculated with beam_profile(), using a metric called the Bhattacharyya distance. This metric is zero when there is no overlap, and 1 when the distributions are identical. The range-corrected PPI also contains a field overlap, which contains the same metric but calculated on the grid of the PPI: plot(my_corrected_ppi, param = "overlap") The overlap metric might be used in the future to show only certain sections of the PPI, based on their overlap value.
## 7 To do
• implementation of a noise_floor parameter, by which one can determine at which distance birds are expected to be below noise level, given their vertical profile. Affects function beam_profile_overlap().
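As promised in section 2, here is a minimal Python sketch of the standard 4/3-effective-earth-radius beam height model from radar meteorology. This is the textbook formula, assumed to correspond to what beam_height() computes; it is not copied from bioRad's source:

```python
import numpy as np

def beam_height(range_m, elev_deg, k_e=4/3, earth_radius_m=6_371_000):
    """Height of the beam centre above radar level for a given slant range,
    using the 4/3-effective-earth-radius refraction model."""
    r_eff = k_e * earth_radius_m
    theta = np.deg2rad(elev_deg)
    return np.sqrt(range_m**2 + r_eff**2
                   + 2 * range_m * r_eff * np.sin(theta)) - r_eff

ranges = np.arange(0, 200_001, 1000)
heights = beam_height(ranges, elev_deg=0.5)  # 0.5 degree elevation beam
```

For small elevations and short ranges this reduces to the familiar approximation $h \approx r\sin\theta + r^2/(2 k_e R)$.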
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495040774345398, "perplexity": 4276.021607388809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00230.warc.gz"}
http://docmadhattan.fieldofscience.com/2013/07/first-evidence-of-photon-polarisation.html
### First evidence of photon polarisation in a quark transition
There are many models of physics beyond the Standard Model, and experimental work concentrates on searching for signals that can select among the new models. LHCb has recently released a press release about the transition of a b-quark to an s-quark with the emission of a photon: This transition is considered a very important process to investigate possible manifestations of new physics. This decay process is forbidden in the first approximation in the Standard Model (SM) of particle physics, and moreover in the second-order processes that govern the process in the SM the emitted photon is expected to be strongly polarised. Therefore it is very sensitive to new physics effects arising from the exchange of new heavy particles in electroweak penguin diagrams (see 14 June 2013 news). Indeed, several models of new physics predict that the emitted photon should be less polarised than in the SM. Up to now different experiments have measured the decay rate of this process, ruling out significant deviations of the rate from the SM prediction and strongly reducing the allowed parameter space of new physics models. The photon polarisation was, however, never previously observed. I think this is really intriguing news from a particle physics point of view.
#### 1 comment:
1. Mr Sulu, prepare the Photon Torpedos.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9578571915626526, "perplexity": 1052.0858201441472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540519149.79/warc/CC-MAIN-20191209145254-20191209173254-00203.warc.gz"}
https://notes.kenbw.com/docs/smooth-manifold/motivations/
# motivations 2020-10-16 Formulations that motivate manifold optimization.
## Logistic regression
Taking a well known formulation from linear algebra and teasing out a curvy structure. Consider $x_1,\ldots,x_m \in \xi$, each with a corresponding binary label $y_1,\ldots,y_m \in \{0,1\}$. We can define some vector $\theta \in \xi$ that we can “grab onto” with each $x_i$ using an inner product defined over the space. Applying a logistic transform $\sigma$, we obtain well-behaved probabilities: $$P[y=1|x,\theta] = \sigma(\langle\theta, x_i\rangle)$$ $$P[y=0|x,\theta] = 1 - \sigma(\langle\theta, x_i\rangle)$$ (Recall $\sigma : \mathbb{R}\rightarrow(0,1)$.) If we want to learn a good $\theta$, we optimize w.r.t. the likelihood function: $$L(\theta) = \prod_{i=1}^{m}\sigma(\langle\theta,x_i\rangle)^{y_i}\sigma(-\langle\theta,x_i\rangle)^{1-y_i}$$ The compressed notation is just Bayes rule. First marginalize over $y$, then sum over all $x$ to get the full posterior. We can reframe this as the convex optimization objective: $$\min_{\theta \in \xi} -\log L(\theta) + \lambda \lVert\theta\rVert^2$$ where the constraints now define a smooth manifold.
## Sensor network localization
$$\ldots$$
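As a concrete instance, here is a minimal NumPy sketch of the regularized negative log-likelihood and its gradient in the Euclidean case $\xi = \mathbb{R}^d$ (an illustration of the objective above, not taken from the source notes):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def objective(theta, X, y, lam):
    """Regularized negative log-likelihood for logistic regression.
    X: (m, d) data matrix, y: (m,) labels in {0, 1}."""
    z = X @ theta
    # -log sigma(z) = log(1 + e^{-z}); logaddexp keeps this numerically stable
    nll = np.sum(np.logaddexp(0.0, -z) * y + np.logaddexp(0.0, z) * (1 - y))
    return nll + lam * theta @ theta

def gradient(theta, X, y, lam):
    z = X @ theta
    return X.T @ (sigmoid(z) - y) + 2 * lam * theta
```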
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995432436466217, "perplexity": 1434.476118214913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487648373.45/warc/CC-MAIN-20210619142022-20210619172022-00502.warc.gz"}
https://math.stackexchange.com/questions/1388414/show-lim-m-to-infty-n-to-infty-f-frac-left-lfloor-mx-right-rfloo
# Show $\lim_{m \to \infty ,n \to \infty } f(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}) = f(x,y)$ Suppose $$f(x,y)$$ is defined on $$[0,1]\times[0,1]$$ and continuous on each dimension, i.e. $$f(x,y_0)$$ is continuous with respect to $$x$$ when fixing $$y=y_0\in [0,1]$$ and $$f(x_0,y)$$ is continuous with respect to $$y$$ when fixing $$x=x_0\in [0,1]$$. Show $$\lim_{m \to \infty ,n \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = f(x,y)$$ My attempt: First, I know $$\lim\limits_{m \to \infty ,n \to \infty } \left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = (x,y)$$ Secondly it looks $$\lim\limits_{m \to \infty }\lim\limits_{n \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = \lim \limits_{m \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},y\right) = f(x,y)$$ and $$\lim\limits_{n \to \infty } \lim\limits_{m \to \infty } f\left(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = \lim\limits_{n \to \infty } f\left(x,\frac{{\left\lfloor {ny} \right\rfloor }}{n}\right) = f(x,y)$$ since $$f(x,y)$$ is continuous on each dimension. However, I am not sure if this can infer $$\lim\limits_{m \to \infty ,n \to \infty } f(\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}) = f(x,y)$$. Can anyone provide some help? Thank you! I am now sure $$\lim\limits_{m \to \infty } \lim\limits_{n \to \infty } {a_{mn}} = \lim\limits_{n \to \infty } \lim\limits_{m \to \infty } {a_{mn}} = L$$ does not imply $$\lim\limits_{m \to \infty ,n \to \infty } {a_{mn}} =L$$ in general. Hope someone can help solve the problem. • how did you define $\mathop {\lim }\limits_{m \to \infty ,n \to \infty }$, maybe something like $\lim_{(m,n) \to (\infty,\infty) }$? How did you prove $\mathop {\lim }\limits_{m \to \infty ,n \to \infty } (\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}) = (x,y)$ and why do you think you cannot use this directly to conclude your claim? By the way, if all three limits exists, then they are equal. – user190080 Aug 8 '15 at 11:01 • I don't think if the three limits exist then they are equal. Check this example. $f(x,y) = \begin{cases}0, & x = y = 0\\[2ex] \dfrac{xy}{x^2+y^2}, & x^2+y^2 > 0\end{cases}$. $\mathop {\lim }\limits_{m \to \infty ,n \to \infty } a_{mn} = L$ means for any $\epsilon>0$ there exists $N\in \Bbb{N}$ st. $m,n>N$ implies $|a_{mn}-L|<\epsilon$. – Tony Aug 8 '15 at 11:05 • I guess you mean the limiting point $(0,0)$? The problem here is that $\lim_{(x,y) \to (0,0) }$ does not exist - you can see this by approaching from the positive and negative side which leads to different results ($\pm$) – user190080 Aug 8 '15 at 11:20 • Yes. Sorry about my imprecise comment. If $\mathop {\lim }\limits_{m \to \infty ,n \to \infty } {a_{mn}}$ exists it can imply $\mathop {\lim }\limits_{m \to \infty } {a_{mn}}$ and the other when some conditions are met. But I don't think the converse is true. In my question, unfortunately, I only have $\mathop {\lim }\limits_{m \to \infty } \mathop {\lim }\limits_{n \to \infty } {a_{mn}} = \mathop {\lim }\limits_{n \to \infty } \mathop {\lim }\limits_{m \to \infty } {a_{mn}} = L$, so I am unable to infer $\mathop {\lim }\limits_{m \to \infty ,n \to \infty } {a_{mn}} = L$. 
– Tony Aug 8 '15 at 11:27
• I am not quite sure I get your problem. You have proved that $\mathop {\lim }\limits_{m \to \infty ,n \to \infty } (\frac{{\left\lfloor {mx} \right\rfloor }}{m},\frac{{\left\lfloor {ny} \right\rfloor }}{n}) = (x,y)$ holds, have you not? But this is already sufficient to conclude your claim (due to continuity)... please correct me if you think I took you completely wrong – user190080 Aug 8 '15 at 12:31

It can't be done. For brevity, let $$u=x-1/2,v=y-1/2$$ and let $$f(x,y)=uv/(u^2+v^2)$$ when $$u^2+v^2\not=0$$, and $$f(1/2,1/2)=0$$. Then $$f$$ is continuous in each variable, but the formula whose limit you seek has the value $$mn/(m^2+n^2)$$ when $$x=y=1/2$$ and $$m,n$$ are odd positive integers. This has no limit: $$m=n$$ gives $$1/2$$, while $$m=3n$$ gives $$3/10$$. The trouble is that $$f$$ is not continuous as a function from $$I^2$$ to $$\mathbb R$$.
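A quick numerical look at this counterexample (a minimal Python sketch, not from the thread; the helper names are just illustrative):

```python
import math

def f(x, y):
    """The counterexample: f(x, y) = uv / (u^2 + v^2) with u = x - 1/2, v = y - 1/2."""
    u, v = x - 0.5, y - 0.5
    if u == 0 and v == 0:
        return 0.0
    return u * v / (u * u + v * v)

def g(m, n, x=0.5, y=0.5):
    """Evaluate f at (floor(mx)/m, floor(ny)/n)."""
    return f(math.floor(m * x) / m, math.floor(n * y) / n)

# Along m = n the values are 1/2; along m = 3n they are 3/10,
# so the double limit at (1/2, 1/2) cannot exist.
for n in (11, 101, 1001):
    print(g(n, n), g(3 * n, n))
```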
https://www.cuemath.com/numbers/graphing-complex-numbers/
# Graphing Complex Numbers

A complex number is the sum of a real number and an imaginary number; that is, a complex number is of the form $$x+iy$$ and is usually represented by $$z$$. Here, both $$x$$ and $$y$$ are real numbers.

• $$x$$ is called the real part.
• $$y$$ is called the imaginary part.
• $$iy$$ is an imaginary number.

In this mini-lesson, we will explore the world of graphing complex numbers on a complex plane: plotting complex numbers, graphing imaginary numbers, and other interesting aspects of the topic.

## What Is a Complex Plane?

Every complex number can be represented by a point in the XY-plane. The complex number $$x+iy$$ corresponds to the point $$(x,y)$$ in the XY-plane. The plane where a complex number is assigned to each of its points is called a complex plane. A complex plane is also called an Argand plane.

## How to Plot Complex Numbers as Points on a Complex Plane?

### Steps to Plot Complex Numbers

Follow the steps mentioned below to plot complex numbers on the complex plane.

1. Determine the real part and imaginary part of the given complex number. For example, for $$z=x+iy$$, the real part is $$x$$ and the imaginary part is $$y$$.
2. Form an ordered pair where the first element is the real part and the second element is the imaginary part. For example, for $$z=x+iy$$, the ordered pair is $$(x,y)$$.
3. Plot the point $$(x,y)$$ on the plane.

### Examples

Let us consider the complex number $$z=3+4i$$. The real part is 3 and the imaginary part is 4. So, the ordered pair is (3, 4), and that point represents $$z=3+4i$$ on the graph (figure not reproduced here).

Important Notes

1. We can plot real numbers on a complex plane.
2. On the complex plane, the horizontal axis is the real axis and the vertical axis is the imaginary axis.
3. The conjugate of a complex number $$z=x+iy$$ is the reflection of the point $$(x,y)$$ about the $$x$$-axis on the complex plane.
4. Multiplying a complex number by its conjugate gives a real number.

## Solved Examples

Example 1

Ms Dolma asked her students to classify the following complex numbers on the basis of the quadrant in which they lie. Can you classify them?

Solution

Let's find the points corresponding to each complex number.

\begin{aligned} A &=3+7i\;\;\;\;\;\;\; \rightarrow (3,7)\\[0.2cm] B&=6-i\;\;\;\;\;\;\;\; \rightarrow (6,-1)\\[0.2cm] C&=-2-4i\;\;\; \rightarrow (-2,-4)\\[0.2cm] D&=-5+2i\;\;\; \rightarrow (-5, 2)\end{aligned}

Plotting these on a complex plane, $$A$$ lies in the first quadrant, $$B$$ in the fourth, $$C$$ in the third, and $$D$$ in the second.

Example 2

Jenny says to Jolly that the points $$2-i, i$$, and $$2+3i$$ form the vertices of a right-angled triangle.
Solution

Let us assume the given points to be:

\begin{aligned} A &=2-i\;\;\;\; \rightarrow (2,-1)\\[0.2cm] B&=i\;\;\;\;\;\;\;\;\;\; \rightarrow (0,1)\\[0.2cm] C&=2+3i\;\;\; \rightarrow (2,3)\\[0.2cm] \end{aligned}

We will find the distance between every two points using the distance formula.

\begin{aligned} AB &= \sqrt{(0-2)^2+(1-(-1))^2} = \sqrt{(-2)^2+2^2} = \sqrt{4+4} = \sqrt{8}\\ BC &= \sqrt{(2-0)^2+(3-1)^2} = \sqrt{2^2+2^2} = \sqrt{4+4} = \sqrt{8}\\ CA &= \sqrt{(2-2)^2+(3-(-1))^2} = \sqrt{0^2+4^2} = \sqrt{16} = 4\end{aligned}

Now that we know the lengths of all three sides,

\begin{aligned} AB^2+BC^2 &= CA^2 \\[0.3cm] (\sqrt{8})^2 +(\sqrt{8})^2 &= 4^2 \\[0.3cm] 8+8&=16\\[0.3cm] 16&=16 \end{aligned}

Thus, $$A, B,$$ and $$C$$ satisfy the Pythagoras theorem. So, $$\Delta ABC$$ is a right-angled triangle. We can verify the same by marking all the coordinates on a graph.

$$\therefore$$ The given points form a right-angled triangle.

Example 3

The town of Lotto is mapped on a complex plane. The chocolate house is located at the point $$3+7i$$ and the cake factory is located at the point $$-1-3i$$. The main entrance of the town is located halfway between the chocolate house and the cake factory. Can you calculate the point of the main entrance?

Solution

The complex numbers $$3+7i$$ and $$-1-3i$$ correspond to the points $$(3, 7)$$ and $$(-1, -3)$$ on the complex plane. To calculate the point of the main entrance, we have to calculate the midpoint of (3, 7) and (-1, -3). Let $$x_{1}=3$$, $$y_{1}=7$$, $$x_{2}=-1$$, and $$y_{2}=-3$$. The coordinates of the main entrance are calculated as:

\begin{align}\left(\dfrac{x_{1}+x_{2}}{2}, \dfrac{y_{1}+y_{2}}{2}\right)&=\left(\dfrac{3+(-1)}{2}, \dfrac{7+(-3)}{2}\right)\\&=\left(\dfrac{2}{2}, \dfrac{4}{2}\right)\\&=\left(1, 2\right)\end{align}

$$\therefore$$ The main entrance is located at the point $$1+2i$$.

Think Tank

1. The diameter of a circle has endpoints 2-3i and -6+5i. Can you find the coordinates of the center of this circle?

## FAQs on Graphing Complex Numbers

### 1. What is a complex plane used for?

A complex plane is used to plot complex numbers on a graph.

### 2. How do you plot a complex number on a complex plane?

Follow the steps mentioned below to plot complex numbers on a complex plane.

1. Determine the real part and imaginary part of the given complex number.
For example, for $$z=x+iy$$, the real part is $$x$$ and the imaginary part is $$y$$.

2. Form an ordered pair where the first element is the real part and the second element is the imaginary part. For example, for $$z=x+iy$$, the ordered pair is $$(x,y)$$.
3. Plot the point $$(x,y)$$ on the plane.

### 3. What is a complex graph?

A complex graph is a graph where complex numbers are represented.

### 4. How do you graph complex numbers?

Follow the same three steps given under question 2 above: read off the real and imaginary parts, form the ordered pair, and plot the point.

### 5. Where are complex numbers used in real life?

Complex numbers are used to solve quadratic equations. For example, the solutions of $$x^2+1=0$$ are $$z=\pm i$$.

### 6. What is Z* in complex numbers?

$$z*$$ in complex numbers is the conjugate of the complex number $$z=x+iy$$, given by $$z*=x-iy$$.

### 7. How do you graph i on a complex plane?

The number $$i$$ corresponds to the point $$(0,1)$$ on the graph.

### 8. Where is the real part of the complex number plotted on the graph?

The real part of the complex number is plotted on the horizontal axis in the graph.

### 9. Where is the imaginary part of the complex number plotted on the graph?

The imaginary part of the complex number is plotted on the vertical axis in the graph.
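To make the plotting steps concrete, here is a minimal matplotlib sketch (an illustration added here, not from the page; the sample points are arbitrary):

```python
import matplotlib.pyplot as plt

# Plot a few complex numbers as points (real part -> x, imaginary part -> y).
points = [3 + 4j, 2 - 1j, 0 + 1j, 2 + 3j]

for z in points:
    plt.plot(z.real, z.imag, "o")
    plt.annotate(str(z), (z.real, z.imag))

plt.axhline(0, color="gray", linewidth=0.5)  # real axis
plt.axvline(0, color="gray", linewidth=0.5)  # imaginary axis
plt.xlabel("Re")
plt.ylabel("Im")
plt.title("Complex numbers on the complex (Argand) plane")
plt.show()
```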
https://admin.clutchprep.com/organic-chemistry/practice-problems/15973/classify-the-elementary-steps-in-the-mechanism-according-to-their-molecularity-a
# Problem: Classify the elementary steps in the mechanism according to their molecularity.

(The mechanism itself was given as an image that is not reproduced here.) Recall that a step's molecularity is the number of reactant species taking part in that elementary step: one species makes the step unimolecular, two make it bimolecular.

A. Step 1 is unimolecular; step 2 is bimolecular.
B. Step 1 is bimolecular; step 2 is unimolecular.
C. Both steps are unimolecular.
D. Both steps are bimolecular.
https://socratic.org/questions/59154e8d11ef6b4e761d8dad
# A mixture of propane and butane is confined in a bottle with a small hole. What is the rate of effusion of butane with respect to propane?

May 15, 2017

Well, for $\text{LPG}$ more $\text{propane}$ would have leaked than $\text{butane}$. $\text{Liquefied petroleum gas}$, $\text{LPG}$, is typically a mixture of $\text{butane}$, ${C}_{4} {H}_{10}$, and $\text{propane}$, ${C}_{3} {H}_{8}$. This is a clean-burning fuel with high heat output (combustion is fairly complete due to the short hydrocarbon chains). This problem could be quantified if you gave us the makeup of the gas, and the initial and end masses.
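The qualitative claim follows from Graham's law of effusion, rate1/rate2 = sqrt(M2/M1): the lighter propane effuses faster. A minimal Python sketch (the molar masses are standard approximate values):

```python
import math

M_propane = 44.1  # g/mol, C3H8
M_butane = 58.1   # g/mol, C4H10

# Graham's law: rate(propane) / rate(butane) = sqrt(M_butane / M_propane)
ratio = math.sqrt(M_butane / M_propane)
print(f"Propane effuses about {ratio:.2f} times faster than butane")  # ~1.15
```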
http://mathhelpforum.com/differential-geometry/175823-f-bounded-above-implies-f-bounded-below.html
# Thread: f bounded above implies -f bounded below

1. ## f bounded above implies -f bounded below

Let $f:A\to\mathbb R.$ Show that if $f$ is bounded above, then $\underset{x\in A}{\mathop{\inf }}\,(-f(x))=-\underset{x\in A}{\mathop{\sup }}\,(f(x)).$ Okay, so I first showed that $-f$ is bounded below, that's easy, but as for the second part, I want to know whether my work is correct or not: Since $f$ is bounded above, it has a supremum, so $f\le \sup f$ (here and below, sup and inf are taken over $x\in A$), and on the other hand $\inf(-f)\le -f.$ Since $f$ and $-f$ are bounded above and below, respectively, there exists a $k\in\mathbb R$ so that $f\le k$ and $-k\le -f,$ namely $k=\sup f$ and $\inf(-f)=-k,$ hence $\underset{x\in A}{\mathop{\inf }}\,(-f(x))=-\underset{x\in A}{\mathop{\sup }}\,(f(x)).$ If it's not correct, what's the way to do this?

2. Originally Posted by Connected [the question above, quoted in full] Looks fine to me.
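For completeness, the epsilon-argument that pins down the infimum (a sketch added here, not from the thread):

```latex
Let $s=\sup_{x\in A} f(x)$, which exists since $f$ is bounded above.
For every $x\in A$ we have $f(x)\le s$, hence $-f(x)\ge -s$, so $-s$ is a
lower bound for $-f$. Moreover, for any $\varepsilon>0$ there is an
$x_0\in A$ with $f(x_0)>s-\varepsilon$, hence $-f(x_0)<-s+\varepsilon$,
so no number greater than $-s$ is a lower bound for $-f$.
Therefore $\inf_{x\in A}\bigl(-f(x)\bigr)=-s=-\sup_{x\in A}f(x)$.
```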
http://mathhelpforum.com/pre-calculus/154635-factorial-n.html
# Thread: Factorial N

1. ## Factorial N

If Factorial( N ) * Factorial( N+2 ) = Factorial( 2N ), how can one find the solution set of N in the simplest way? Note: (nPr and nCr may be included in the solution.) Please, I want a solution as simple as possible, as it's for a high school student. Thanks in advance.

2. Originally Posted by islamdiaa [the question above, quoted in full] N = 3 is an obvious solution.
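Indeed, $3!\cdot 5! = 6\cdot 120 = 720 = 6!$. A brute-force search (a small Python sketch added here, not from the thread; the search range is arbitrary) finds no other non-negative solution in that range:

```python
from math import factorial

# Look for n with n! * (n+2)! == (2n)!
for n in range(50):
    if factorial(n) * factorial(n + 2) == factorial(2 * n):
        print(n)  # prints only 3 in this range
```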
http://science.sciencemag.org/content/295/5555/638
Perspective | Astronomy

# Radio Noise from Dust Grains

Science, 25 Jan 2002: Vol. 295, Issue 5555, pp. 638-639. DOI: 10.1126/science.1067917

## Summary

Cosmic dust plays an important role in the evolution of galaxies. Usually, the grains can be detected in the infrared, but as Ferrara explains in his Perspective, experiment and theory now show that spinning cosmic dust grains can also emit radio waves. These emissions provide an important new tool for studies of the interstellar medium in galaxies.
http://mathhelpforum.com/calculus/27052-integrals-using-u-substitution.html
1. ## Integrals using U-Substitution

Can anyone check if I'm doing these correctly?

1. $\int{\frac{(lnx)^2}{x}dx}$ $U = lnx , DU = \frac{dx}{x}$ $\int{u}{du}$ $= \frac{1}{2}(u^2)$ $= \frac {1}{2}(lnx)^2$

2. $\int\frac{x}{(x^2+2)^2}dx$ $U = x^2 + 1 , \frac{1}{2}DU = xdx$ $= \frac{1}{1}\int\frac{1}{u^2}du$ $= \frac{1}{2}ln(u^2)$ $= \frac{1}{2}ln|(x^2)+1|$

I know I made mistakes, I just don't know where.

2. You made a mistake in #1: after you substitute u = ln(x) you should get: $\int u^2 du$ And in #2 you should make $u = x^2 + 2$.

3. Originally Posted by TrevorP [the previous reply, quoted in full] Yes, and in the second one, you would then be integrating $u^{-2}$ which is $-u^{-1}$

4. Originally Posted by FalconPUNCH! $= \frac{1}{1}\int\frac{1}{u^2}du$ $= \frac{1}{2}ln(u^2)$ This is a common mistake. It is $\int {\frac{1} {u}\,du} = \ln \left| u \right| + k.$ For $\int {\frac{1} {{u^2 }}\,du}$ write the integrand as $u^{-2}$ and apply the power rule.

5. Originally Posted by FalconPUNCH! [the question above, quoted in full] Most of what needs to be said has been said and said well. My 2 cents is that if the original question asked for the integral, the arbitrary constant of integration must be included in the answer.

6. Thanks guys. Sorry I didn't respond back until today. Well I have more problems which are harder and I don't really know where to start with some of them.

1. $\int\frac{x^2}{\sqrt{1-x}}dx$ For this one I don't know where to begin.

2. $\int\frac{sinx}{1+cos^{2} x} dx$

3. $\int\frac{sin2x}{1+cos^{2}x}dx$ For these two, I was never that good in trig, so it's hard for me to understand where to start.

4. $\int_{0}^{1} (x)\sqrt{x-1}dx$

I'm not really looking for answers, but I am looking for how to start these problems. Thank you guys

7. Use u-substitution for the first one: $u=1-x$ $du=-dx$ $x=1-u$ $x^2=(1-u)^2 \Rightarrow u^2 - 2u + 1$ Your integrand then becomes: $\frac{-u^2+2u-1}{\sqrt{u}}$ Separate each term into its own fraction and apply the power rules for integration to each.

8. Originally Posted by FalconPUNCH! 1. $\int\frac{x^2}{\sqrt{1-x}}dx$ Substitute $u^2=1-x.$ Originally Posted by FalconPUNCH! 2. $\int\frac{sinx}{1+cos^{2} x} dx$ This is an arctangent. First substitute $u=\cos x.$ Originally Posted by FalconPUNCH! 3. $\int\frac{sin2x}{1+cos^{2}x}dx$ This was posted before, you just need to note that $\left(1+ {\cos ^2 x} \right)' = - 2\sin x\cos x = - \sin 2x.$ Originally Posted by FalconPUNCH! 4. $\int_{0}^{1} (x)\sqrt{x-1}dx$ Substitute $u^2=x-1.$ (Do you know Barrow's Rule? When makin' a change of variables you need to get the new integration limits for $u.$)

9. Thank you all for your help. I will try to solve them and come back tomorrow or after tomorrow with my results. Edit: I do not know Barrow's Rule.

10. Originally Posted by FalconPUNCH! 3. $\int\frac{sin2x}{1+cos^{2}x}dx$ For this one is u = sin2x?

11. Originally Posted by FalconPUNCH! For this one is u = sin2x? No, I would say not. There is a trig identity that says sin2x = 2sinxcosx. If you fill that in then set u = cos x, you should get somewhere.

12. Ok, I sort of understood.
I'm not too good with trig, but here's what I did: $\int\frac{2sinxcosx}{1+cos^{2}x}$ $u = cos x ; du = -sinxdx$ $-2\int\frac{u}{1+u^2}du$ $-2tan^{-1}(cos^{2}x) + c$

13. Yes, the only problem I see with that is that you should have -2Arctan(Arccos(x))

14. Originally Posted by TrevorP Yes, the only problem I see with that is that you should have -2Arctan(Arccos(x)) Sorry, I think that's wrong. You had $\int \frac{2sinxcosx}{1+cos^2x}dx$ Integrating this gives $-ln(1+cos^2x)+C$
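The tutor's antiderivative can be double-checked with SymPy (a small sketch added here, not from the thread; differentiating the candidate recovers the integrand):

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(2 * x) / (1 + sp.cos(x) ** 2)

# Proposed antiderivative -ln(1 + cos^2 x); its derivative minus the
# integrand should simplify to zero.
candidate = -sp.log(1 + sp.cos(x) ** 2)
print(sp.simplify(sp.diff(candidate, x) - integrand))  # 0
```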
https://www.studypug.com/algebra-help/slant-asymptote
# Slant asymptote - Rational Functions

When the polynomial in the numerator is exactly one degree higher than the polynomial in the denominator, the rational function has a slant asymptote. To determine the slant asymptote, we perform polynomial long division; the quotient (with the remainder discarded) gives the equation of the slant asymptote.
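As a concrete illustration (an example added here, not from the page):

```latex
\frac{x^2 + 3x + 2}{x - 1} \;=\; x + 4 + \frac{6}{x - 1},
\qquad\text{so the slant asymptote is } y = x + 4,
\text{ since } \frac{6}{x-1}\to 0 \text{ as } x\to\pm\infty.
```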
https://proofwiki.org/wiki/Talk:Russell%27s_Paradox
The issue isn't really allowing sets to contain themselves, it's allowing the instantiation of sets defined by any property. Assuming just unrestricted comprehension (UC), existential instantiation (EI), and universal instantiation (UI), we can do this: $\displaystyle (1)$ $\displaystyle \exists y \forall x (x\in y \leftrightarrow x \notin x)$ (UC using the property given by $x\notin x$) $\displaystyle (2)$ $\displaystyle \forall x (x\in R \leftrightarrow x \notin x)$ (EI (1) y as R) $\displaystyle (3)$ $\displaystyle R\in R \leftrightarrow R \notin R$ (UI (2) x as R) This is already a contradiction, modulo some propositional logic stuff, using the $P\vee \neg P$ tautology and deriving a statement in the form $Q \wedge \neg Q$ if you want. Using the axiom of foundation (AF), we could mess around with (3) a bit, but this won't prevent a contradiction; it's already inherent at step (1). It gets us there a different way. For example: $\displaystyle (4)$ $\displaystyle \forall x (x\notin x)$ (as a consequence of AF) $\displaystyle (5)$ $\displaystyle R\notin R$ (UI (4) x as R) $\displaystyle (6)$ $\displaystyle R\in R$ (3 and 5) Russell's paradox shows us that any theory containing unrestricted comprehension is already inconsistent. Adding axioms doesn't resolve inconsistency; unrestricted comprehension has to be repealed to avoid this. ZFC does this by using a restricted version of comprehension instead of this one, so that we can't justify (1) from an axiom. -- Qedetc 19:17, 22 June 2011 (CDT) Amended comment. How's that? Feel free to add the material on this page as another consequence of UC, on another page in the "Paradoxes" and "Naive Set Theory" categories, perhaps. --prime mover 00:24, 23 June 2011 (CDT) I've edited it, and I'll try to justify not mentioning the axiom of foundation in it (looking back up there, it probably wasn't clear what I was complaining about): While adding axioms can constrain the models of a theory (more things to satisfy), it actually removes constraints on proofs and lets us do more (more ways to justify statements). Because of this, since we're essentially discussing how to prevent the proof of a contradiction, it isn't right to say that the axiom of foundation is a constraint or suggest that it prevents steps in the argument. ZFC first tries to avoid the paradox by not having a certain axiom. Then, after the problem is cut out, it carefully reinserts enough axioms to build the sets we want in order to have a worthwhile theory, hopefully not reintroducing any paradoxes. If my claim about adding axioms removing constraints on proofs seems fishy, remember that one way of defining valid proofs is as finite sequences $(S_1, \dots, S_k)$ of statements where each $S_i$ is either an axiom or follows from earlier statements in the sequence by a rule of inference. So, two points can be made: 1) Any valid proof using axioms from a set $A$ is also a valid proof in any larger set of axioms $A'\supseteq A$. This is because if $S_i$ is an axiom in $A$, then it is also an axiom in $A'$. 2) If $A'$ is a proper superset of $A$, then there are valid proofs using $A'$ which are not valid proofs using $A$. For example, just take any proof over $A$ and stick in an extra statement at the front which is an axiom from $A' - A$. Together those tell us that adding axioms means we get to keep all the proofs we had, but also get some more proofs (and consequently maybe some more theorems).
--Qedetc 03:32, 24 June 2011 (CDT) Also, I should point this out since it might be a more convincing point (I had hinted at this in the discussion page for the set of all sets): These are equivalent: A) The axioms of ZFC (including the axiom of foundation) are consistent. B) The axioms of ZFC, without the axiom of foundation, but with a (certain) axiom that implies the negation of the axiom of foundation, are consistent. Hopefully that made sense. I'll confess that I haven't actually gone through a proof of this. I'm trusting some comments in Jech and Hrbacek's set theory book. It looks like this is the content of chapter 3 of a paper by Aczel. A very skimpy Wikipedia article on the axiom he uses is here. The paper is listed as a reference at the bottom. --Qedetc 13:21, 24 June 2011 (CDT) I wonder whether this page is trying to do too much? Perhaps all we need to do is say: "This is an antinomy arising from the comprehension principle" and leave it at that. If we then decide to put another page together which analyses all this stuff in detail (a good move IMO) then we can link to it. --prime mover 13:55, 24 June 2011 (CDT) The comment section could probably be broken off into a broader discussion of attempts to formalize mathematical reasoning while avoiding arguments like this. I don't know enough about other systems to say much more than what I've said here unfortunately. There are definitely some nuances hanging around all of this that would be good to ramble on about. I think the source of some confusion is that the argument in the proof on this page is extremely similar to the proof that a set of all sets can't exist in ZFC. Russell's paradox kind of has multiple uses, depending on what framework you're in. I think getting a sense of all of this requires a decent amount of discussion and links back to definitions in logic. I don't know if it's currently overreaching; I came here having talked about a bunch of this before, and now that I've edited it, I've also conformed it to what makes sense in my head. So it seems okay to me, but that's clearly a biased view. It would be good to get comments from some other members. --Qedetc 15:23, 24 June 2011 (CDT) No problem with any of this, I just think it belongs on a different page. Perhaps we ought to wait for someone who knows enough about other systems to attempt to put it into whatever different contexts there are. --prime mover 16:11, 24 June 2011 (CDT)
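As a playful illustration of why unrestricted comprehension self-destructs (a Python sketch added here, not part of the talk page; membership is modeled as predicate application):

```python
# Model the naive comprehension property "x is not a member of x" as a
# predicate on predicates, and watch the self-application blow up.

def R(x):
    """R 'contains' x exactly when x does not 'contain' itself."""
    return not x(x)

try:
    R(R)  # Is R in R? Then R(R) must equal (not R(R)) -- impossible.
except RecursionError:
    print("No consistent answer: the definition is self-undermining.")
```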
http://math.stackexchange.com/questions/39865/borel-sets-and-clopen-decompositions
# Borel sets and clopen decompositions

Let $X$ be a locally compact (Hausdorff) space, and suppose I can write $X$ as the disjoint union of a family of open subsets $(X_i)_{i\in I}$ (so each $X_i$ is actually clopen). A typical example I have in mind is an uncountable disjoint union of copies of $\mathbb R$ (so I am certainly not wishing to assume $X$ is $\sigma$-finite, etc.) Let's be careful: let $B$ be the Borel $\sigma$-algebra on $X$, so this is the intersection of all $\sigma$-algebras containing the open sets. For each $i$, treat $X_i$ as a locally compact space in its own right, and let $B_i$ be the Borel $\sigma$-algebra. Let $E\subseteq X$ be such that $E\cap X_i\in B_i$ for each $i$. Is $E\in B$? The converse is true: if $\Omega=\{ E\subseteq X : \forall i, E\cap X_i\in B_i\}$ then $\Omega$ is a $\sigma$-algebra, and $\Omega$ contains all the open subsets of $X$, so $B\subseteq\Omega$. I want to show that $\Omega=B$, which seems much harder. - Is there any other assumption on $X$? Something like second-countable (or that the decomposition into $X_i$ gives each $X_i$ second-countable)? – Asaf Karagila May 18 '11 at 17:55 - I'm certainly happy to just assume that $X$ is indeed an uncountable disjoint union of copies of $\mathbb R$; so each $X_i$ can be "nice", but there are a lot of them! – Matthew Daws May 18 '11 at 19:05 There are plenty of spaces for which (using your notation) $\Omega$ is strictly larger than $B$, and one such space is $\omega_1 \times \mathbb{R}$ (with the topology you described). To build a set in $\Omega \backslash B$, just paste a set of Borel rank $\alpha$ in the $\alpha^{\mathrm{th}}$ copy of $\mathbb{R}$. Thanks-- I guess the key point is that $A$ is of Borel rank $\alpha$ in $X$ if and only if $A\cap X_i$ is Borel rank $\alpha$ in $X_i$; so if $A\cap X_i$ can have arbitrarily high rank, it's not possible for $A$ to be Borel (else $A$ would have countable rank...) Do you happen to know a reference for this, or is it just folklore? – Matthew Daws May 18 '11 at 19:22
http://proceedings.mlr.press/v108/ahmadian20a.html
# Fair Correlation Clustering

Sara Ahmadian, Alessandro Epasto, Ravi Kumar, Mohammad Mahdian. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4195-4205, 2020.

#### Abstract

In this paper, we study correlation clustering under fairness constraints. Fair variants of $k$-median and $k$-center clustering have been studied recently, and approximation algorithms using a notion called fairlet decomposition have been proposed. We obtain approximation algorithms for fair correlation clustering under several important types of fairness constraints. Our results hinge on obtaining a fairlet decomposition for correlation clustering by introducing a novel combinatorial optimization problem. We define a fairlet decomposition with cost similar to the $k$-median cost, and this allows us to obtain approximation algorithms for a wide range of fairness constraints. We complement our theoretical results with an in-depth analysis of our algorithms on real graphs, where we show that fair solutions to correlation clustering can be obtained with limited increase in cost compared to the state-of-the-art (unfair) algorithms.
http://math.stackexchange.com/questions/257060/checking-a-solution-for-a-sde
# Checking a solution for an SDE

I want to show that the process $Y(t) = e^t \int_0^t e^{-s}dW(s)$ satisfies the following SDE: $dX(t) = X(t)dt + dW(t), \ \ t\geq 0 , \quad X(0) = 0$ I think the right approach is to use Itô's formula, but I don't see how. I suspect that simply using $dW(s)dt = 0$ in a direct computation would be wrong. If one tries Itô's formula, $f$ would be $Y(t) = f(t,X(t)) = e^t \int_0^t e^{-s}dX(s)$, where $X(t) = W(t)$, the Brownian motion. Now if we try to calculate $dY(t)$ with Itô, we get: $\frac{\partial f}{\partial t} = tY(t)$ (?) $\frac{\partial f}{\partial X} = X(t)$ $\frac{\partial^2 f}{\partial X^2} = dX(t)$ Could you give me a hint on how to do this? Thanks! - If we set $$Z(t) := \int_0^t e^{-s} \, dW_s,$$ then $Z_t$ is an Itô process. Moreover, $dZ_t = e^{-t} \, dW_t$. If we apply Itô's formula to $$f(t,z) := e^t \cdot z,$$ we obtain $$\underbrace{f(t,Z_t)}_{Y_t}-\underbrace{f(0,0)}_{0} = \int_0^t \underbrace{e^s}_{\partial_z f(s,Z_s)} \, dZ_s + \int_0^t (\underbrace{e^s \cdot Z_s}_{\partial_t f(s,Z_s)} + \underbrace{0}_{\partial_z^2 f(s,Z_s)}) \, ds \\ = \int_0^t \, dW_s + \int_0^t Y_s \, ds \\ \Leftrightarrow dY_t = dW_t + Y_t \, dt$$ using that $Y_t = f(t,Z_t)$ by definition.
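As a numerical sanity check on the answer, one can simulate the SDE and the closed-form $Y$ along the same noise path and confirm they agree (a minimal NumPy sketch added here, not from the thread; the discretization details are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 10_000
dt = T / N
t = np.linspace(0.0, T, N + 1)
dW = rng.normal(0.0, np.sqrt(dt), N)  # increments of one Brownian path

# Euler-Maruyama for dX = X dt + dW, X(0) = 0
X = np.zeros(N + 1)
for k in range(N):
    X[k + 1] = X[k] + X[k] * dt + dW[k]

# Closed form Y(t) = e^t * int_0^t e^{-s} dW(s) on the same path,
# with the Ito integral approximated by a left-point sum
Y = np.concatenate(([0.0], np.exp(t[1:]) * np.cumsum(np.exp(-t[:-1]) * dW)))

print(np.max(np.abs(X - Y)))  # small: the two paths agree up to discretization error
```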
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-3-section-3-5-equations-of-lines-exercise-set-page-174/89
Intermediate Algebra (6th Edition) A vertical line is always perpendicular to a horizontal line. This cannot be justified, though, by algebra and the use of slopes. This is because a horizontal line has a slope of $0$ while a vertical line has an undefined slope. These two slopes are not negative reciprocals of each other. The justification, instead, uses geometry and angles.
http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.stats.chi2_contingency.html
# scipy.stats.chi2_contingency

scipy.stats.chi2_contingency(observed, correction=True, lambda_=None) [source]

Chi-square test of independence of variables in a contingency table.

This function computes the chi-square statistic and p-value for the hypothesis test of independence of the observed frequencies in the contingency table [R304] observed. The expected frequencies are computed based on the marginal sums under the assumption of independence; see scipy.stats.contingency.expected_freq. The number of degrees of freedom is (expressed using numpy functions and attributes):

dof = observed.size - sum(observed.shape) + observed.ndim - 1

Parameters:

observed : array_like
    The contingency table. The table contains the observed frequencies (i.e. number of occurrences) in each category. In the two-dimensional case, the table is often described as an "R x C table".
correction : bool, optional
    If True, and the degrees of freedom is 1, apply Yates' correction for continuity. The effect of the correction is to adjust each observed value by 0.5 towards the corresponding expected value.
lambda_ : float or str, optional
    By default, the statistic computed in this test is Pearson's chi-squared statistic [R305]. lambda_ allows a statistic from the Cressie-Read power divergence family [R306] to be used instead. See power_divergence for details.

Returns:

chi2 : float
    The test statistic.
p : float
    The p-value of the test.
dof : int
    Degrees of freedom.
expected : ndarray, same shape as observed
    The expected frequencies, based on the marginal sums of the table.

Notes

An often quoted guideline for the validity of this calculation is that the test should be used only if the observed and expected frequency in each cell is at least 5. This is a test for the independence of different categories of a population. The test is only meaningful when the dimension of observed is two or more. Applying the test to a one-dimensional table will always result in expected equal to observed and a chi-square statistic equal to 0. This function does not handle masked arrays, because the calculation does not make sense with missing values.

Like stats.chisquare, this function computes a chi-square statistic; the convenience this function provides is to figure out the expected frequencies and degrees of freedom from the given contingency table. If these were already known, and if the Yates' correction was not required, one could use stats.chisquare. That is, if one calls:

chi2, p, dof, ex = chi2_contingency(obs, correction=False)

then the following is true:

(chi2, p) == stats.chisquare(obs.ravel(), f_exp=ex.ravel(), ddof=obs.size - 1 - dof)

The lambda_ argument was added in version 0.13.0 of scipy.

References

[R304] "Contingency table", http://en.wikipedia.org/wiki/Contingency_table
[R305] "Pearson's chi-squared test", http://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
[R306] Cressie, N. and Read, T. R. C., "Multinomial Goodness-of-Fit Tests", J. Royal Stat. Soc. Series B, Vol. 46, No. 3 (1984), pp. 440-464.

Examples

A two-way example (2 x 3):

>>> from scipy.stats import chi2_contingency
>>> obs = np.array([[10, 10, 20], [20, 20, 20]])
>>> chi2_contingency(obs)
(2.7777777777777777, 0.24935220877729619, 2, array([[ 12.,  12.,  16.],
       [ 18.,  18.,  24.]]))

Perform the test using the log-likelihood ratio (i.e. the "G-test") instead of Pearson's chi-squared statistic.
>>> g, p, dof, expctd = chi2_contingency(obs, lambda_="log-likelihood")
>>> g, p
(2.7688587616781319, 0.25046668010954165)

A four-way example (2 x 2 x 2 x 2):

>>> obs = np.array(
...     [[[[12, 17],
...        [11, 16]],
...       [[11, 12],
...        [15, 16]]],
...      [[[23, 15],
...        [30, 22]],
...       [[14, 17],
...        [15, 16]]]])
>>> chi2_contingency(obs)
(8.7584514426741897, 0.64417725029295503, 11, array([[[[ 14.15462386,  14.15462386],
         [ 16.49423111,  16.49423111]],
        [[ 11.2461395 ,  11.2461395 ],
         [ 13.10500554,  13.10500554]]],
       [[[ 19.5591166 ,  19.5591166 ],
         [ 22.79202844,  22.79202844]],
        [[ 15.54012004,  15.54012004],
         [ 18.10873492,  18.10873492]]]]))
https://tex.stackexchange.com/questions/208614/table-formatting-problem-with-multiple-multi-rows
# Table formatting problem with multiple multi-rows

I am trying to complete the formatting of this table. I cannot get the text in the final column to wrap and remain in the cell. The "stuff stuff stuff" text just keeps going off the edge of the page. Also, I cannot get the heading separators to work. Any thoughts greatly appreciated.

\usepackage{tabularx}
\usepackage{multirow}

\begin{table}
\centering
\small
\resizebox{\textwidth}{!}{
\begin{tabularx}{\textwidth}{|X|X|X|X|X|}
\hline
\rowcolor[HTML]{C0C0C0} Section & Sub-Section & Description & \multicolumn{2}{l|}{Key} \\ \hline
& Stage & The & Stuff & \\ \cline{2-4}
& Stuff & The & Stuff & \\ \cline{2-4}
\multirow{-3}{*}{Stuff} & Stuff & The & Cole & \multirow{-3}{*}{Stuff Stuff Stuff Stuff Stuff Stuff Stuff Stuff Stuff } \\ \hline
\end{tabularx}}
\end{table}

• \resizebox{\textwidth}{!}{ scaling tables is (almost) always wrong, but scaling a table to \textwidth when its size is already forced to \textwidth in the tabularx argument is.. odd (also if you do use \scalebox you need a % and no blank line after the { ) – David Carlisle Oct 23 '14 at 12:05
• Welcome to TeX.SX! You will have to give the multirow a width. \multirow{-3}{2.3cm}{ for example... (but the last cell is too small to fit all this "stuff") – LaRiFaRi Oct 23 '14 at 12:05
• Also please next time make your example a complete document so that people can run it and see the problem. – David Carlisle Oct 23 '14 at 12:06

I changed from \small to \scriptsize in order to fit everything in your cells. You may reformat your table with differing widths in each row. Or maybe the original text fits for your chosen font size. The main trick was to give the multirow a certain width. As the X-column is stretchable, the multirow does not know the actual width. If you do not want to search for this discrete value, you may use \hsize. \hsize works because it happens internally inside the X-column set up for the table, and X (and p) sets \hsize so line breaking works correctly within the cell. So you can use \hsize here to tell multirow to use the size that tabularx decided to use.

% arara: pdflatex
\documentclass{article}
\usepackage{tabularx}
\usepackage{multirow}
\usepackage[table]{xcolor}
\usepackage{microtype} % handy in narrow cells with much text
\begin{document}
\begin{table}
\centering
\scriptsize
\begin{tabularx}{\textwidth}{|X|X|X|X|X|}
\hline
\rowcolor[HTML]{C0C0C0} Section & Sub-Section & Description & \multicolumn{2}{l|}{Key} \\ \hline
& Stage & The & Stuff & \\ \cline{2-4}
& Stuff & The & Stuff & \\ \cline{2-4}
\multirow{-3}{*}{Stuff} & Stuff & The & Cole & \multirow{-3}{\hsize}{Stuff Stuff Stuff Stuff Stuff Stuff Stuff Stuff Stuff } \\ \hline
\end{tabularx}
\end{table}
\end{document}
https://www.physicsforums.com/threads/can-a-black-hole-form-using-only-photons-in-theory.100926/
# Can a black hole form using only photons - in theory?

1. Nov 21, 2005

(Still a newbie here so pardon my naivety.) If it is possible, what is the minimum mass-energy required, and, if you like, what is required for a black hole that exists for at least 1 second? If not possible, why? Also a side question: can 2 or more photons occupy the same space?

2. Nov 21, 2005

### pervect Staff Emeritus

You can form a black hole using only radiation (photons) if you have a sufficient amount of energy in the center-of-momentum frame. A beam of radiation propagating in a single direction (or a single photon, no matter how high its energy) will not form a black hole. However, a collection of radiation from all directions, focused onto a spot in space, could theoretically form a black hole. If you are using visible light, the smallest black hole you could make would have a Schwarzschild radius of something near the wavelength of the light used to form it. (This is just a rough approximation.) Since R = 2M (in geometric units), the mass would be R/2 * (c^2/G) in standard units. Taking R to be .5 micron, we get 3*10^20 kg; multiplying by c^2, this is 3*10^37 joules. This is an enormous amount of energy - for reference, the mass of the Earth is about 6*10^24 kg. I'd have to look up the evaporation formulas to figure out the lifetime.

3. Nov 21, 2005

Could you theoretically use lasers to create standing waves capable of creating many consecutive black holes?

4. Nov 21, 2005

Why is that? What if the photons were high energy gamma rays, would it shrink the Schwarzschild radius? Would it require less mass-energy?

5. Nov 21, 2005

### Physics Monkey

cybernomad, if you have a beam of light (or a single photon) propagating in the +z direction, then in a coordinate system moving in the +z direction at high speed, the energy (and momentum) of the beam tend to zero. You can check this for yourself by simply Lorentz transforming the photon's four momentum to the moving frame. Since in this frame of reference the beam has no energy, the beam of light can't produce a gravitational field. On the other hand, two counter-propagating beams would produce a gravitational effect. Can you explain why the argument I gave above is no longer valid? Of course, a universe with just an infinite beam of light in it isn't very physical, so these kinds of considerations are somewhat artificial. Edit: clarified a few things. Last edited: Nov 21, 2005

6. Nov 21, 2005

Yes, it's just mainly a theoretical musing.

7. Nov 21, 2005

### NateTG

Theoretically, it could be done with light bulbs.

8. Nov 21, 2005

### pervect Staff Emeritus

Take a look at, for instance, the FAQ http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/black_fast.html (This is about what happens with high speed particles, but you can think of light as the limiting case of a particle with less and less rest mass going faster and faster.) With high energy gamma rays, you could use less mass (lower energy). The Schwarzschild radius scales linearly with mass, so if you halve the radius, you halve the required mass. Last edited by a moderator: May 2, 2017

9. Nov 21, 2005

### pervect Staff Emeritus

I very much doubt it. For one thing, there is a critical field intensity at which your laser beams would start to create particles, the Schwinger critical field. So if you try to do the job with one laser, it would have to be very large physically to keep the beam intensity below this limit.
In addition, I strongly suspect that the resulting system would be unstable, in the sense that as soon as one black hole started to form, it would absorb all the energy. Of course this is rather speculative - I haven't worked out any of the details (but the whole scenario is pretty unlikely).

I think you'd have much better luck with about 10^31 or so megajoule lasers (if they were designed to produce very short, femtosecond pulses; otherwise you'll need more lasers), all aimed at the same spot with half-micron precision :-). Even with this approach (a very large number of focused pulsed lasers) you still might have problems with the Schwinger critical field being exceeded and the resulting particle creation "bleeding" too much energy away from the beam before the black-hole density at the center could be reached. I have no clue how to go about calculating this effect, unfortunately. I don't seem to have any good references for the Schwinger field, which is of course a quantum phenomenon rather than a GR phenomenon. The abstract below at least talks about proposals to exceed it.

http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PRLTAO000091000008085001000001&idtype=cvips&gifs=yes

Last edited by a moderator: May 2, 2017

10. Nov 21, 2005

### pervect

Staff Emeritus

A more reliable way to form a black hole out of light might be this, on second thought: convert about 300,000 solar masses to radiation, and concentrate the resulting radiation within a radius of about 880,000 kilometers. Unless I've missed a decimal point somewhere (quite likely!), this will keep the electric field intensity below the Schwinger limit. This is of course a much larger black hole than the first one, which assumed we could concentrate radiation more densely without any problems.

The formula I pasted into Google Calculator to get this was:

1/sqrt(64*(permittivity of free space) * (10^18 volts/meter)^2/c^2*(G/c^2)^3)

The factor of 64 came from taking (rho + 3P) * 4/3 pi (2M)^3 = M and solving for M, noting that pi is roughly 3. The pressure P of the EM field is 0.5 * (permittivity of free space) * E^2, where E is in volts per meter, and rho = P/c^2.
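To make pervect's back-of-the-envelope numbers in post #2 easy to check, here is a minimal Python sketch (mine, not from the thread). It computes the mass and energy of a photon black hole with a 0.5 micron Schwarzschild radius, and also the standard Hawking evaporation lifetime t = 5120 π G² M³ / (ħ c⁴), which answers the lifetime question pervect left open; the formulas are textbook GR, while the specific numbers are only as rough as the thread's.

```python
import math

# Physical constants (SI units)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s

# Schwarzschild radius taken as roughly the wavelength of visible light
R = 0.5e-6          # meters

# Mass needed for that radius: R = 2GM/c^2  =>  M = R c^2 / (2G)
M = R * c**2 / (2 * G)

# Equivalent energy, E = M c^2
E = M * c**2

# Hawking evaporation lifetime of a Schwarzschild black hole
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"mass     M ~ {M:.2e} kg")     # ~3.4e20 kg, matching the thread
print(f"energy   E ~ {E:.2e} J")      # ~3e37 J
print(f"lifetime t ~ {t_evap:.2e} s") # ~1e45 s, vastly longer than 1 second
```

So a photon-made black hole of this size would comfortably outlast the 1-second requirement in the original question.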
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484238982200623, "perplexity": 792.9872328336526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809695.97/warc/CC-MAIN-20171125071427-20171125091427-00023.warc.gz"}
https://physics.stackexchange.com/questions/101493/divergent-path-integral
# Divergent path integral

What does it mean to have a divergent path integral in a QFT? More specifically, if $$\int e^{i S[\phi]/\hbar}\, \mathcal{D}\phi=\infty$$ what does this mean for the QFT of the field $\phi$? The field $\phi$ has action $$S[\phi]=\int\left(\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-V(\phi)\right)\mathrm{d}vol$$ where we use Minkowski signature $(+,-,-,-)$.

• What's the action? – Alex Nelson Mar 2 '14 at 0:16
• The reason I ask about the action is: (a) you could be dealing with a "trivial infinity" from the volume of a gauge orbit; (b) it could be a legitimate infinity [e.g., the action could be unbounded from below after Wick rotating], where you would need to use lattice approximations or effective field theory; (c) sometimes it could be "ghosts of departed quantities" when taking the continuum limit of finite differences. Can't tell which unless we see the action... – Alex Nelson Mar 3 '14 at 1:50
• Comment to the question (v3): What is the potential $V$? – Qmechanic Mar 3 '14 at 12:42
• Presumably, it's any interaction $V(\phi)\sim\mathcal{O}(\phi^3)$. I'm at work, and cannot answer (perhaps @Qmechanic or someone else will give a coherent/non-tweet-lengthed response). Basically, you consider $\int\exp(S_{0}[\phi]+\int V(\phi)\,\mathrm{d}^{n}x)\, D\phi = \int\sum_{k=0}\frac{1}{k!}\left(\int V(\phi)\,\mathrm{d}^{n}x\right)^{k}\exp(S_{0}[\phi])\,D\phi$ and then truncate the series after a couple of terms; otherwise it diverges and you get infinities. – Alex Nelson Mar 3 '14 at 15:35
• @AlexNelson I understand that. However, what does it mean physically for the path integral to diverge? – user28355 Mar 3 '14 at 23:40

If the path integral itself diverges, it means that the vacuum amplitude $Z[0]$ (the v.e.v. normalization) diverges. That by itself is bad, because every $n$-point function is normalized by it and so naively vanishes. Recall that to compute correlation functions, we append a source term $\frac{i}{\hbar}\int J(x)\phi(x)\,\mathrm{d}^4x$ to the action and calculate $$\langle\phi(x_1)\ldots \phi(x_n)\rangle = \frac{1}{Z[0]}\left.\frac{\delta^n}{\delta J(x_1)\ldots \delta J(x_n)} \int e^{i S[\phi]/\hbar + \frac{i}{\hbar}\int J(x)\phi(x)\,\mathrm{d}^4x}\,\mathcal{D}\phi\,\right|_{J=0},$$ which is normalized by the v.e.v. Thus, if that normalization diverges, you wouldn't be able to calculate anything sensible.
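Alex Nelson's case (b), an action unbounded from below after Wick rotation, can be illustrated numerically with a zero-dimensional toy "path integral", i.e. an ordinary integral Z(λ) = ∫ dx exp(−x²/2 − λx⁴). The sketch below is my own illustration, not part of the original answer; the cutoff L plays the role of a regulator. Z stabilizes as L grows for λ > 0, but grows without bound for λ < 0:

```python
import numpy as np

def Z(lam, cutoff, n=200000):
    # Riemann-sum estimate of the cutoff-regulated "partition function"
    # Z(lam, L) = integral_{-L}^{L} exp(-x^2/2 - lam * x^4) dx
    x = np.linspace(-cutoff, cutoff, n)
    dx = x[1] - x[0]
    return np.exp(-x**2 / 2 - lam * x**4).sum() * dx

for L in [2.0, 4.0, 6.0, 8.0]:
    print(f"L={L:4.1f}  Z(+0.1)={Z(+0.1, L):12.4e}   Z(-0.1)={Z(-0.1, L):12.4e}")

# Z(+0.1) settles to a finite value, while Z(-0.1) blows up as the cutoff
# increases: the zero-dimensional analogue of a divergent path integral.
```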
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9564443230628967, "perplexity": 519.4363729614346}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833089.90/warc/CC-MAIN-20191023094558-20191023122058-00448.warc.gz"}
https://www.maplesoft.com/support/help/errors/view.aspx?path=updates/Maple2020/MapleCalculatorApp&L=E
# Maple Calculator App

The Maple Calculator is a free mobile app that acts both as a complement to desktop Maple and as a standalone math tool. As a Maple user, you can use the Maple Calculator to bring the math that is right in front of you into Maple using your phone's camera, where you can then access the full power of Maple for solving, visualizing, and exploring your math. This is particularly useful for bringing in large expressions, and for avoiding transcription errors.

As a standalone math tool, the Maple Calculator helps students learn math and provides a way for them to check their homework even when the answers are not in the back of the book. The app can be used to solve problems from algebra, precalculus, calculus, linear algebra, differential equations, and more. You enter your expression using your phone's camera or by entering the problem in the math editor. You can plot expressions, find integrals, factor polynomials, invert matrices, solve systems of equations, solve ODEs, and much more, all on your phone. Your problem and results can also be brought into Maple for further exploration.

The free Maple Calculator app is available for iOS and Android devices. Visit Maple Calculator to learn about the most recent release and to download the app.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9802587628364563, "perplexity": 1475.075035576225}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00189.warc.gz"}
http://mathhelpforum.com/geometry/78576-dot-products-vectors-please-help-me-understand.html
Hi there. I am working through a worksheet, but my knowledge of mathematical symbols is not very sound. Here is the problem. (P1 and P2 are Point 1 and Point 2 respectively; T1 is a tangent.)

P1 := 0, 0, 0 (xyz)
P2 := 73.14, 55.84, 38.44
T1 := 0.6409, 0.6409, 0.42

I have to find a number from the following equations:

Ψ² = distance between the points (squared)
η = perpendicular distance from P1 in the direction of T1
ξ = distance normal to T1

Ψ² = |P2 - P1|²
η = (P2 - P1) · T1
ξ = (Ψ² - η²)^0.5

Please tell me if my findings are true... I have no idea how to calculate these vectors! I understand they involve dot products, but there don't seem to be any 'layman' explanations on the web... Here's what I solved:

Ψ² = 3592.88
η = 7.73
ξ = 59.44

Can anyone tell me if I'm correct? I'm 99% sure it's incorrect, as a following equation produces an error based on the square root of a negative number. I would be so grateful if someone could walk me through it! Thanks!

2. Right, I've done a little more digging, and Mathcad's giving me a COMPLETELY different solution, most likely the correct one. It's something to do with the DETERMINANT, i.e. |x|. What I need to know now is how to translate that |x| operation on a vector into an EXCEL function?
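For what it's worth, the |·| in these formulas is the Euclidean vector magnitude (which is what Mathcad's |x| computes for a vector), not a matrix determinant. Here is a small Python sketch of the arithmetic (my own check, not from the thread); note that T1 is already very nearly a unit vector, which the η formula assumes:

```python
import numpy as np

P1 = np.array([0.0, 0.0, 0.0])
P2 = np.array([73.14, 55.84, 38.44])
T1 = np.array([0.6409, 0.6409, 0.42])   # |T1| ~ 0.999, effectively a unit tangent

d = P2 - P1
psi_sq = np.dot(d, d)           # squared distance between the points, |P2 - P1|^2
eta = np.dot(d, T1)             # component of d along the tangent direction
xi = np.sqrt(psi_sq - eta**2)   # distance normal to T1, by Pythagoras

print(psi_sq)   # ~9945.2
print(eta)      # ~98.81
print(xi)       # ~13.5
```

These values differ from those in post #1, consistent with the suspicion that those were incorrect (and they produce no negative number under the square root). In Excel, the magnitude |P2 - P1| can be built as SQRT(SUMSQ(dx, dy, dz)) of the component differences, and the dot product as SUMPRODUCT of the two component ranges.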
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175520896911621, "perplexity": 1205.0722134896303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718278.43/warc/CC-MAIN-20161020183838-00382-ip-10-171-6-4.ec2.internal.warc.gz"}
https://lemon.cs.elte.hu/trac/lemon/changeset/1f4f01870c1e20fd9ea3c999805f0de2c161c772/lemon
# Changeset 1420:1f4f01870c1e in lemon Ignore: Timestamp: 02/17/18 23:44:15 (5 years ago) Branch: default Phase: public Message: API doc improvements for Path structures (#250) File: 1 edited Unmodified Added Removed • ## lemon/path.h r1336 /// \tparam GR The digraph type in which the path is. /// /// In a sense, the path can be treated as a list of arcs. The /// LEMON path type stores just this list. As a consequence, it /// cannot enumerate the nodes of the path and the source node of /// a zero length path is undefined. /// In a sense, a path can be treated as a list of arcs. The /// LEMON path type simply stores this list. As a consequence, it /// cannot enumerate the nodes in the path, and the source node of /// a zero-length path is undefined. /// /// This implementation is a back and front insertable and erasable /// \brief The n-th arc. /// /// \pre \c n is in the [0..length() - 1] range. /// Gives back the n-th arc. This function runs in O(1) time. /// \pre \c n is in the range [0..length() - 1]. const Arc& nth(int n) const { return n < int(head.size()) ? *(head.rbegin() + n) : /// \tparam GR The digraph type in which the path is. /// /// In a sense, the path can be treated as a list of arcs. The /// LEMON path type stores just this list. As a consequence it /// cannot enumerate the nodes in the path and the zero length paths /// cannot store the source. /// In a sense, a path can be treated as a list of arcs. The /// LEMON path type simply stores this list. As a consequence, it /// cannot enumerate the nodes in the path, and the source node of /// a zero-length path is undefined. /// /// This implementation is a just back insertable and erasable path /// type. It can be indexed in O(1) time. The back insertion and /// erasure is amortized O(1) time. This implementation is faster /// then the \c Path type because it use just one vector for the /// than the \c Path type because it use just one vector for the /// arcs. template /// \brief The n-th arc. /// /// \pre \c n is in the [0..length() - 1] range. /// Gives back the n-th arc. This function runs in O(1) time. /// \pre \c n is in the range [0..length() - 1]. const Arc& nth(int n) const { return data[n]; /// \tparam GR The digraph type in which the path is. /// /// In a sense, the path can be treated as a list of arcs. The /// LEMON path type stores just this list. As a consequence it /// cannot enumerate the nodes in the path and the zero length paths /// cannot store the source. /// In a sense, a path can be treated as a list of arcs. The /// LEMON path type simply stores this list. As a consequence, it /// cannot enumerate the nodes in the path, and the source node of /// a zero-length path is undefined. /// /// This implementation is a back and front insertable and erasable /// /// This function looks for the n-th arc in O(n) time. /// \pre \c n is in the [0..length() - 1] range. /// \pre \c n is in the range [0..length() - 1]. const Arc& nth(int n) const { Node *node = first; /// \c it will put into \c tpath. If \c tpath have arcs /// before the operation they are removed first.  The time /// complexity of this function is O(1) plus the the time of emtying /// complexity of this function is O(1) plus the time of emtying /// \c tpath. If \c it is \c INVALID then it just clears \c tpath void split(ArcIt it, ListPath& tpath) { /// \tparam GR The digraph type in which the path is. /// /// In a sense, the path can be treated as a list of arcs. The /// LEMON path type stores just this list. 
As a consequence it /// cannot enumerate the nodes in the path and the source node of /// a zero length path is undefined. /// In a sense, a path can be treated as a list of arcs. The /// LEMON path type simply stores this list. As a consequence, it /// cannot enumerate the nodes in the path, and the source node of /// a zero-length path is undefined. /// /// This implementation is completly static, i.e. it can be copy constucted /// modified. /// /// Being the the most memory efficient path type in LEMON, /// it is intented to be /// used when you want to store a large number of paths. /// Being the most memory-efficient path type in LEMON, it is /// intented to be used when you want to store a large number of paths. template class StaticPath { /// \brief The n-th arc. /// /// \pre \c n is in the [0..length() - 1] range. /// Gives back the n-th arc. This function runs in O(1) time. /// \pre \c n is in the range [0..length() - 1]. const Arc& nth(int n) const { return _arcs[n]; int empty() const { return len == 0; } /// \brief Erase all arcs in the digraph. /// \brief Reset the path to an empty one. void clear() { len = 0; } /// \brief Class which helps to iterate through the nodes of a path /// /// In a sense, the path can be treated as a list of arcs. The /// LEMON path type stores only this list. As a consequence, it /// cannot enumerate the nodes in the path and the zero length paths /// cannot have a source node. /// /// This class implements the node iterator of a path structure. To /// provide this feature, the underlying digraph should be passed to /// \brief Class for iterating through the nodes of a path /// /// Class for iterating through the nodes of a path. /// /// In a sense, a path can be treated as a list of arcs. The /// LEMON path type simply stores this list. As a consequence, it /// cannot enumerate the nodes in the path, and the source node of /// a zero-length path is undefined. /// /// However, this class implements a node iterator for path structures. /// To provide this feature, the underlying digraph should be passed to /// the constructor of the iterator. template Note: See TracChangeset for help on using the changeset viewer.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617039561271667, "perplexity": 3481.2523354466894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00605.warc.gz"}
https://www.physicsforums.com/threads/ball-dropped-from-building-find-speed-and-kinetic-energy.198202/
# Ball dropped from building - find speed and kinetic energy

1. Nov 14, 2007

### NewJersey

A 0.50 kg mass is dropped from the top of a 20 m tall building.
a) Immediately before hitting the sidewalk, the speed of the ball is ?
b) The KE of the ball halfway down will be ?

Help??

2. Nov 14, 2007

### hotcommodity

What's being conserved here, and what equation should you use? Any thoughts?

3. Nov 14, 2007

### NewJersey

Basically right now I'm at this point: vf = the square root of (2 * 9.8 * 20). Would that tell me the final speed?

4. Nov 14, 2007

### hotcommodity

Yes, that is correct.

5. Nov 14, 2007

### NewJersey

So for part b: vf = the square root of (2 * 9.8 * 10) = 14. Is this part right?

6. Nov 14, 2007

### hotcommodity

Well, that is the speed that you would plug into the equation for kinetic energy, but yes, you are correct.
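A quick sketch of the full calculation (mine, not from the thread), using energy conservation mgh = ½mv²:

```python
m = 0.50    # mass, kg
g = 9.8     # gravitational acceleration, m/s^2
h = 20.0    # building height, m

v_bottom = (2 * g * h) ** 0.5        # speed just before hitting the sidewalk
v_half = (2 * g * (h / 2)) ** 0.5    # speed halfway down (10 m fallen)
ke_half = 0.5 * m * v_half**2        # kinetic energy halfway down

print(f"a) v at bottom ~ {v_bottom:.1f} m/s")  # ~19.8 m/s
print(f"b) KE halfway  ~ {ke_half:.0f} J")     # 0.5 * 0.5 * 14^2 = 49 J
```

Note that the KE halfway down equals the potential energy lost over the first 10 m, m g (h/2) = 49 J, which is a faster route to part (b).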
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8901072144508362, "perplexity": 4754.085963038613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00118-ip-10-171-10-70.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/579307/linearly-independence-of-functions-over-mathbbr
# Linear independence of functions over $\mathbb{R}$

I have been asked to prove that $\{\tan(ax) \mid a\in \mathbb{R}^+\}$ is linearly independent. I was wondering if there is a generic method/idea for proving linear independence of functions over $\mathbb{R}$.

Hint: Note that $\tan$ is $C^{\infty}$ on its domain. Pick any finite number of distinct $a_i$'s from $\mathbb{R}^+$ and show that the Wronskian of $\tan{a_1x},\dots,\tan{a_nx}$ is nonzero at some point where all the functions are defined.
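As a concrete sanity check of the hint (my illustration, not part of the answer), SymPy can compute the Wronskian for a few sample values of $a$; the values a = 1, 2, 3 and the evaluation point x = 2/5 below are arbitrary choices of mine, picked so that every tan(a·x) is defined there:

```python
import sympy as sp
from sympy import wronskian

x = sp.symbols('x')
a_vals = [1, 2, 3]                 # any distinct positive reals
funcs = [sp.tan(a * x) for a in a_vals]

# Wronskian = determinant of the matrix of the functions and their derivatives
W = wronskian(funcs, x)
print(W.subs(x, sp.Rational(2, 5)).evalf())  # should print a nonzero value
```

Since a nonzero Wronskian at a single point already rules out any nontrivial linear relation among these particular functions, and the $a_i$ were arbitrary, the whole family is linearly independent.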
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9114556908607483, "perplexity": 51.96258672993321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00320.warc.gz"}
http://www.researchgate.net/researcher/8491492_Ryan_Chornock
# R. Chornock

Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts, United States

## Publications (411) · 1008.17 Total Impact

##### Article: Zooming In on the Progenitors of Superluminous Supernovae With HST

ABSTRACT: We present Hubble Space Telescope rest-frame ultraviolet imaging of the host galaxies of 16 hydrogen-poor superluminous supernovae (SLSNe), including 11 events from the Pan-STARRS Medium Deep Survey. Taking advantage of the superb angular resolution of HST, we characterize the galaxies' morphological properties, sizes and star formation rate densities. We determine the SN locations within the host galaxies through precise astrometric matching, and measure physical and host-normalized offsets, as well as the SN positions within the cumulative distribution of UV light pixel brightness. We find that the host galaxies of H-poor SLSNe are irregular, compact dwarf galaxies, with a median half-light radius of just 0.9 kpc. The UV-derived star formation rate densities are high (~0.1 M_sun/yr/kpc^2), suggesting that SLSNe form in overdense environments. Their locations trace the UV light of their host galaxies, with a distribution intermediate between that of LGRBs (which are strongly clustered on the brightest regions of their hosts) and a uniform distribution (characteristic of normal core-collapse SNe). Taken together, this strengthens the picture that SLSN progenitors require different conditions than those of ordinary core-collapse SNe to form, and that they explode in broadly similar galaxies as do LGRBs. However, the locations of SLSNe are less clustered on the brightest regions than are LGRBs, suggesting a different and potentially lower-mass progenitor population for SLSNe than LGRBs. 11/2014

##### Article: A Sample of Type II-L Supernovae

ABSTRACT: What are Type II-Linear supernovae (SNe II-L)? This class, which has been ill defined for decades, now receives significant attention -- both theoretically, in order to understand what happens to stars in the ~15-25 M_sun range, and observationally, with two independent studies suggesting that they cannot be cleanly separated photometrically from the regular hydrogen-rich SNe II-P characterised by a marked plateau in their light curve. Here, we analyze the multi-band light curves and extensive spectroscopic coverage of a sample of 35 SNe II and find that 11 of them could be SNe II-L. The spectra of these SNe are hydrogen deficient, typically have shallow Halpha absorption, may show indirect signs of helium via strong OI 7774 absorption, and have faster line velocities consistent with a thin hydrogen shell. The light curves can be mostly differentiated from those of the regular, hydrogen-rich SNe II-P by their steeper decline rates and higher luminosity, and we propose as a defining photometric characteristic the decline in the V band: SNe II-L seem to decline by more than 0.5 mag from peak brightness by day 50 after explosion. Using our sample we provide template light curves for SNe II-L and II-P in 4 photometric bands. Monthly Notices of the Royal Astronomical Society 08/2014; 445(1).
· 5.23 Impact Factor

##### Article: ALMA Observations of the Host Galaxy of GRB090423 at z=8.23: Deep Limits on Obscured Star Formation 630 Million Years After the Big Bang

ABSTRACT: We present rest-frame far-infrared (FIR) and optical observations of the host galaxy of GRB090423 at z=8.23 from the Atacama Large Millimeter Array (ALMA) and the Spitzer Space Telescope, respectively. The host remains undetected to 3-sigma limits of Fnu(222 GHz)<33 microJy and Fnu(3.6 micron)<81 nJy. The FIR limit is about 20 times fainter than the luminosity of the local ULIRG Arp220, and comparable to the local starburst M82. Comparing to model spectral energy distributions we place a limit on the IR luminosity of L_IR(8-1000 micron)<3e10 Lsun, corresponding to a limit on the obscured star formation rate of SFR_IR<5 Msun/yr; for comparison, the limit on the unobscured star formation rate from Hubble Space Telescope rest-frame UV observations is SFR_UV<1 Msun/yr. We also place a limit on the host galaxy stellar mass of <5e7 Msun (for a stellar population age of 100 Myr and constant star formation rate). Finally, we compare our millimeter observations to those of field galaxies at z>4 (Lyman break galaxies, Ly-alpha emitters, and submillimeter galaxies), and find that our limit on the FIR luminosity is the most constraining to date, although the field galaxies have much larger rest-frame UV/optical luminosities than the host of GRB090423 by virtue of their selection techniques. We conclude that GRB host galaxies at z>4, especially those with measured interstellar medium metallicities from afterglow spectroscopy, are an attractive sample for future ALMA studies of high redshift obscured star formation. The Astrophysical Journal 08/2014; 796(2). · 6.28 Impact Factor

##### Article: OPTICAL OBSERVATIONS OF THE TYPE Ic SUPERNOVA 2007gr IN NGC 1058

ABSTRACT: We present extensive optical observations of the normal Type Ic supernova (SN) 2007gr, spanning from about one week before maximum light to more than one year thereafter. The optical light and color curves of SN 2007gr are very similar to those of the broad-lined Type Ic SN 2002ap, but the spectra show remarkable differences. The optical spectra of SN 2007gr are characterized by unusually narrow lines, prominent carbon lines, and slow evolution of the line velocity after maximum light. The earliest spectrum (taken at t = -8 days) shows a possible signature of helium (He I λ5876 at a velocity of ~19,000 km s^-1). Moreover, the larger intensity ratio of the [O I] λ6300 and λ6364 lines inferred from the early nebular spectra implies a lower opacity of the ejecta shortly after the explosion. These results indicate that SN 2007gr perhaps underwent a less energetic explosion of a smaller-mass Wolf-Rayet star (~8-9 M_sun) in a binary system, as favored by an analysis of the progenitor environment through pre-explosion and post-explosion Hubble Space Telescope images. In the nebular spectra, asymmetric double-peaked profiles can be seen in the [O I] λ6300 and Mg I] λ4571 lines. We suggest that the two peaks are contributed by the blueshifted and rest-frame components. The similarity in velocity structure and the different evolution of the strength of the two components favor an aspherical explosion with the ejecta distributed in a torus or disk-like geometry, but inside the ejecta the O and Mg have different distributions. The Astrophysical Journal 07/2014; 790(2):120. · 6.28 Impact Factor

##### Article: Optical Observations of the Type Ic Supernova 2007gr in NGC 1058 and Implications for the Properties of its Progenitor

06/2014 · preprint version of the preceding article

##### Article: GRB 140515A at z=6.33: Constraints on the End of Reionization From a Gamma-ray Burst in a Low Hydrogen Column Density Environment

ABSTRACT: We present the discovery and subsequent spectroscopy with Gemini-North of the optical afterglow of the Swift gamma-ray burst (GRB) 140515A. The spectrum exhibits a well-detected continuum at wavelengths longer than 8915 Angs with a steep decrement to zero flux blueward of 8910 Angs due to Ly-alpha absorption at redshift z~6.33. Some transmission through the Lyman-alpha forest is present at 5.2<z<5.733, but none is detected at higher redshift, consistent with previous measurements from quasars and GRB 130606A. We model the red damping wing of Lyman-alpha in three ways that provide equally good fits to the data: (a) a single host galaxy absorber at z=6.327 with log(N_HI)=18.62+/-0.08; (b) pure intergalactic medium (IGM) absorption from z=6.0 to z=6.328 with a constant neutral hydrogen fraction of x_HI=0.056^{+0.011}_{-0.027}; and (c) a hybrid model with a host absorber located within an ionized bubble of radius 10 comoving Mpc in an IGM with x_HI=0.12+/-0.05 (x_HI<0.21 at the 2-sigma level). Regardless of the model, the sharpness of the dropoff in transmission is inconsistent with a substantial neutral fraction in the IGM at this redshift. No narrow absorption lines from the host galaxy are detected, indicating a host metallicity of [Z/H]<~ -0.8. Even if we assume that all of the hydrogen absorption is due to the host galaxy, the column is unusually low for a GRB sightline, similar to two out of the other three highest-redshift bursts with measured log(N_HI). This is possible evidence that the escape fraction of ionizing photons from normal star-forming galaxies increases at z>~6.
05/2014

##### Article: Rapidly-Evolving and Luminous Transients from Pan-STARRS1

ABSTRACT: In the past decade, several rapidly-evolving transients have been discovered whose timescales and luminosities are not easily explained by traditional supernovae (SN) models. The sample size of these objects has remained small due, at least in part, to the challenge of detecting short timescale transients with traditional survey cadences. Here we present the results from a search within the Pan-STARRS1 Medium Deep Survey (PS1-MDS) for rapidly-evolving and luminous transients. We identify 10 new transients with a time above half-maximum of less than 12 days and -16.5 > M > -20 mag. This increases the number of known events in this region of SN phase space by roughly a factor of three. The median redshift of the PS1-MDS sample is z=0.275 and they all exploded in star forming galaxies. In general, the transients possess faster rise than decline timescales and blue colors at maximum light (g - r < -0.2). Best fit blackbodies reveal photospheric temperatures/radii that expand/cool with time, and explosion spectra taken near maximum light are dominated by a blue continuum, consistent with a hot, optically thick ejecta. We find it difficult to reconcile the short timescale, high peak luminosity (L > 10^43 erg/s), and lack of UV line blanketing observed in many of these transients with an explosion powered mainly by the radioactive decay of Ni-56. Rather, we find that many are consistent with either (1) cooling envelope emission from the explosion of a star with a low-mass extended envelope which ejected very little (<0.03 M_sun) radioactive material, or (2) a shock breakout within a dense, optically thick wind surrounding the progenitor star. After calculating the detection efficiency for objects with rapid timescales in the PS1-MDS we find a volumetric rate of 3000 - 5500 events/yr/Gpc^3 (1-6% of the core-collapse SN rate at z=0.2). 05/2014

##### Article: Towards Characterization of the Type IIP Supernova Progenitor Population: a Statistical Sample of Light Curves from Pan-STARRS1

ABSTRACT: In recent years, the advent of wide-field sky surveys which obtain deep multi-band imaging has presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SN): systematic light curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1 (PS1), obtained over a constant survey program of 4 years and classified using both spectroscopy and machine learning-based photometric techniques. We develop and apply a new Bayesian model for the full multi-band evolution of each light curve in the sample. We find no evidence for the existence of a discontinuous sub-population of fast-declining explosions (historically referred to as "Type IIL" SNe). However, we identify a highly significant continuous relation between the plateau phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single predominant explosion parameter, likely determined by initial stellar mass, controlling the most significant observational outcomes of red supergiant explosions. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the PS1 Type IIP SN sample. We show that correction of certain systematic discrepancies between modeled and observed SN IIP light curve properties, and an expanded grid of progenitor mass and explosion energy ranges, are needed to enable robust progenitor inferences from multi-band light curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide field transient searches. 04/2014

##### Article: Photometric and Spectroscopic Properties of Type II-P Supernovae

ABSTRACT: We study a sample of 23 Type II Plateau supernovae (SNe II-P), all observed with the same set of instruments. Analysis of their photometric evolution confirms that their typical plateau duration is 100 days with little scatter, showing a tendency to get shorter for more energetic SNe. The rise time from explosion to plateau does not seem to correlate with luminosity. We analyze their spectra, measuring typical ejecta velocities, and confirm that they follow a well behaved power-law decline. We find indications of high-velocity material in the spectra of six of our SNe. We test different dust extinction correction methods by asking the following - does the uniformity of the sample increase after the application of a given method? A reasonably behaved underlying distribution should become tighter after correction. No method we tested made a significant improvement. Monthly Notices of the Royal Astronomical Society 04/2014; 442(1). · 5.23 Impact Factor

##### Article: Light Echoes from $\eta$ Carinae's Great Eruption: Spectrophotometric Evolution and the Rapid Formation of Nitrogen-rich Molecules

ABSTRACT: We present follow-up optical imaging and spectroscopy of one of the light echoes of $\eta$ Carinae's 19th-century Great Eruption discovered by Rest et al. (2012). By obtaining images and spectra at the same light echo position between 2011 and 2014, we follow the evolution of the Great Eruption on a three-year timescale. We find remarkable changes in the photometric and spectroscopic evolution of the echo light. The $i$-band light curve shows a decline of $\sim 0.9$ mag in $\sim 1$ year after the peak observed in early 2011 and a flattening at later times. The spectra show a pure-absorption early G-type stellar spectrum at peak, but a few months after peak the lines of the [Ca II] triplet develop strong P-Cygni profiles and we see the appearance of the [Ca II] 7291,7324 doublet in emission. These emission features and their evolution in time resemble the spectra of some Type IIn supernovae and supernova impostors. Most surprisingly, starting $\sim 300$ days after peak brightness, the spectra show strong molecular transitions of CN at $\gtrsim 6800$ \AA. The appearance of these CN features can be explained if the ejecta are strongly Nitrogen enhanced, as is observed in modern spectroscopic studies of the bipolar Homunculus nebula. Given the spectroscopic evolution of the light echo, velocities of the main features, and detection of strong CN, we are likely seeing ejecta that contributes directly to the Homunculus nebula. The Astrophysical Journal Letters 03/2014; 787(1). · 5.60 Impact Factor

##### Article: Spectroscopic Classification of PSN J07562525+2700488

ABSTRACT: We obtained a low-resolution optical spectrum of PSN J07562525+2700488 on 2014 March 21.2 UT using the FAST spectrograph mounted on the F.L. Whipple Observatory 1.5-m telescope.
The spectrum was obtained during an observing trip to FLWO as part of the Harvard Astronomy 100 ("Observational Methods") undergraduate course. 02/2014

##### Article: Selecting superluminous supernovae in faint galaxies from the first year of the Pan-STARRS1 Medium Deep Survey

ABSTRACT: The Pan-STARRS1 (PS1) survey has obtained imaging in 5 bands (grizy_P1) over 10 Medium Deep Survey (MDS) fields covering a total of 70 square degrees. This paper describes the search for apparently hostless supernovae (SNe) within the first year of PS1 MDS data with an aim of discovering new superluminous supernovae (SLSNe). A total of 249 hostless transients were discovered down to a limiting magnitude of M_AB ~ 23.5, of which 75 were classified as Type Ia SNe. There were 58 SNe with complete light curves that are likely core-collapse SNe (CCSNe) or SLSNe and 13 of these have had spectra taken. Of these 13 hostless, non-Type Ia SNe, 9 were SLSNe of Type I at redshifts between 0.5-1.4. Thus one can maximise the discovery rate of Type I SLSNe by concentrating on hostless transients and removing normal SNe Ia. We present data for three new possible SLSNe; PS1-10pm (z = 1.206), PS1-10ahf (z = 1.16) and PS1-11acn (z ~ 0.61), and estimate the rate of SLSNe-I to be between 0.6\pm0.3 * 10^-4 and 1.0\pm0.3 * 10^-4 of the CCSNe rate within 0.3 <= z <= 1.4 by applying a Monte-Carlo technique. The rate of slowly evolving, SN2007bi-like explosions is estimated as a factor of 10 lower than this range. 02/2014

##### Article: High Density Circumstellar Interaction in the Luminous Type IIn SN 2010jl: The first 1100 days

ABSTRACT: We analyze HST and ground based observations of the luminous Type IIn SN 2010jl from 26 to 1128 days. At maximum the bolometric luminosity was 3x10^{43} erg/s and even at ~850 days exceeds 10^{42} erg/s. An emission excess in the NIR, dominating after 400 days, probably originates in dust in the CSM. The observed total radiated energy is at least 6.5x10^{50} ergs. The spectral lines display two distinct components, one broad, due to electron scattering, and one narrow. The broad component is initially symmetric around zero velocity, but becomes blueshifted after ~50 days. We find that dust absorption in the ejecta is unlikely to explain the line shifts, and attribute this instead to radiative acceleration by the SN radiation. From the lines, and the X-ray and dust properties, there is strong evidence for large scale asymmetries in the circumstellar medium. The narrow line component suggests an expansion velocity of ~100 km/s for the CSM. The UV spectrum shows strong low and high ionization lines, while the optical shows a number of narrow coronal lines excited by the X-rays. From the narrow UV lines we find large N/C and N/O ratios, indicative of CNO processing in the progenitor. The luminosity evolution is consistent with a radiative shock in an r^{-2} CSM and indicates a mass loss rate of ~0.1 M_O/yr for a 100 km/s wind. The total mass lost is at least ~3 Msun. The mass loss rate, wind velocity, density and CNO enrichment are consistent with the SN expanding into a dense CSM characteristic of that of an LBV progenitor. Even in the last full spectrum at 850 days we do not see any indication of debris processed in a core collapse SN. We attribute this to the extremely dense CSM, which is still opaque to electron scattering. Finally, we discuss the relevance of these UV spectra for detecting Type IIn supernovae in high redshift surveys. The Astrophysical Journal 12/2013; 797(2). · 6.28 Impact Factor

##### Article: GRB 091024A and the nature of ultra-long gamma-ray bursts

ABSTRACT: We present a broadband study of gamma-ray burst (GRB) 091024A within the context of other ultra-long-duration GRBs. An unusually long burst detected by Konus-Wind, Swift, and Fermi, GRB 091024A has prompt emission episodes covering ~1300 s, accompanied by bright and highly structured optical emission captured by various rapid-response facilities, including the 2-m autonomous robotic Faulkes North and Liverpool Telescopes, KAIT, S-LOTIS, and SRO. We also observed the burst with 8- and 10-m class telescopes and determine the redshift to be z = 1.0924 \pm 0.0004. We find no correlation between the optical and gamma-ray peaks and interpret the optical light curve as being of external origin, caused by the reverse and forward shock of a highly magnetized jet (R_B ~ 100-200). Low-level emission is detected throughout the near-background quiescent period between the first two emission episodes of the Konus-Wind data, suggesting continued central-engine activity; we discuss the implications of this ongoing emission and its impact on the afterglow evolution and predictions. We summarize the varied sample of historical GRBs with exceptionally long durations in gamma-rays (>~1000 s) and discuss the likelihood of these events being from a separate population; we suggest ultra-long GRBs represent the tail of the duration distribution of the long GRB population. The Astrophysical Journal 11/2013; 778(1):54. · 6.28 Impact Factor

##### Article: Hydrogen-Poor Superluminous Supernovae and Long-Duration Gamma-Ray Bursts Have Similar Host Galaxies

ABSTRACT: We present optical spectroscopy and optical/near-IR photometry of 31 host galaxies of hydrogen-poor superluminous supernovae (SLSNe), including 15 events from the Pan-STARRS1 Medium Deep Survey. Our sample spans the redshift range 0.1 < z < 1.6 and is the first comprehensive host galaxy study of this specific subclass of cosmic explosions. Combining the multi-band photometry and emission-line measurements, we determine the luminosities, stellar masses, star formation rates and metallicities. We find that as a whole, the hosts of SLSNe are a low-luminosity ( ~ -17.3 mag), low stellar mass (<M_*> ~ 2 x 10^8 Msun) population, with a high median specific star formation rate ( ~ 2 Gyr^{-1}). The median metallicity of our spectroscopic sample is low, 12 + log(O/H) ~ 8.35 ~ 0.45 Z_sun, although at least one host galaxy has solar metallicity. The host galaxies of H-poor SLSNe are statistically distinct from the hosts of GOODS core-collapse SNe (which cover a similar redshift range), but resemble the host galaxies of long-duration gamma-ray bursts (LGRBs) in terms of stellar mass, SFR, sSFR and metallicity. This result indicates that the environmental causes leading to massive stars forming either SLSNe or LGRBs are similar, and in particular that SLSNe are more effectively formed in low metallicity environments. We speculate that the key ingredient is large core angular momentum, leading to a rapidly spinning magnetar in SLSNe and an accreting black hole in LGRBs. The Astrophysical Journal 10/2013; 787(2). · 6.28 Impact Factor

##### Article: Slowly fading super-luminous supernovae that are not pair-instability explosions
ABSTRACT: Super-luminous supernovae that radiate more than 10^44 ergs per second at their peak luminosity have recently been discovered in faint galaxies at redshifts of 0.1-4. Some evolve slowly, resembling models of 'pair-instability' supernovae. Such models involve stars with original masses 140-260 times that of the Sun that now have carbon-oxygen cores of 65-130 solar masses. In these stars, the photons that prevent gravitational collapse are converted to electron-positron pairs, causing rapid contraction and thermonuclear explosions. Many solar masses of ^56Ni are synthesized; this isotope decays to ^56Fe via ^56Co, powering bright light curves. Such massive progenitors are expected to have formed from metal-poor gas in the early Universe. Recently, supernova 2007bi in a galaxy at redshift 0.127 (about 12 billion years after the Big Bang) with a metallicity one-third that of the Sun was observed to look like a fading pair-instability supernova. Here we report observations of two slow-to-fade super-luminous supernovae that show relatively fast rise times and blue colours, which are incompatible with pair-instability models. Their late-time light-curve and spectral similarities to supernova 2007bi call the nature of that event into question. Our early spectra closely resemble typical fast-declining super-luminous supernovae, which are not powered by radioactivity. Modelling our observations with 10-16 solar masses of magnetar-energized ejecta demonstrates the possibility of a common explosion mechanism. The lack of unambiguous nearby pair-instability events suggests that their local rate of occurrence is less than 6 × 10^-6 times that of the core-collapse rate. Nature 10/2013; 502(7471):346-9. · 42.35 Impact Factor

##### Article: The superluminous supernova PS1-11ap: bridging the gap between low and high redshift

ABSTRACT: We present optical photometric and spectroscopic coverage of the superluminous supernova (SLSN) PS1-11ap, discovered with the Pan-STARRS1 Medium Deep Survey at z = 0.524. This intrinsically blue transient rose slowly to reach a peak magnitude of M_u = -21.4 mag and bolometric luminosity of 8 x 10^43 erg s^-1 before settling onto a relatively shallow gradient of decline. The observed decline is significantly slower than those of the superluminous type Ic SNe which have been the focus of much recent attention. Spectroscopic similarities with the lower redshift SN2007bi and a decline rate similar to the 56Co decay timescale initially indicated that this transient could be a candidate for a pair instability supernova (PISN) explosion. Overall the transient appears quite similar to SN2007bi and the lower redshift object PTF12dam. The extensive data set, from 30 days before peak to 230 days after, allows a detailed and quantitative comparison with published models of PISN explosions. We find that the PS1-11ap data do not match these model explosion parameters well, supporting the recent claim that these SNe are not pair instability explosions. We show that PS1-11ap has many features in common with the faster declining superluminous Ic supernovae, and the lightcurve evolution can also be quantitatively explained by the magnetar spin down model. At a redshift of z = 0.524 the observer frame optical coverage provides comprehensive restframe UV data and allows us to compare it with the superluminous SNe recently found at high redshifts between z = 2-4. While these high-z explosions are still plausible PISN candidates, they match the photometric evolution of PS1-11ap and hence could be counterparts to this lower redshift transient. Monthly Notices of the Royal Astronomical Society 10/2013; 437(1). · 5.23 Impact Factor

##### Article: Cosmological Constraints from Measurements of Type Ia Supernovae discovered during the first 1.5 years of the Pan-STARRS1 Survey

ABSTRACT: We present griz light curves of 146 spectroscopically confirmed Type Ia Supernovae (0.03<z<0.65) discovered during the first 1.5 years of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We have investigated spatial and time variations in the photometry, and we find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the HST Calspec definition of the AB system. We discuss our efforts to minimize the systematic uncertainties in the photometry. A Hubble diagram is constructed with a subset of 112 SNe Ia (out of the 146) that pass our light curve quality cuts. The cosmological fit to 313 SNe Ia (112 PS1 SNe Ia + 201 low-z SNe Ia), using only SNe and assuming a constant dark energy equation of state and flatness, yields w = -1.015^{+0.319}_{-0.201}(Stat)^{+0.164}_{-0.122}(Sys). When combined with BAO+CMB(Planck)+H0, the analysis yields \Omega_M = 0.277^{+0.010}_{-0.012} and w = -1.186^{+0.076}_{-0.065} including all identified systematics, as spelled out in the companion paper by Scolnic et al. (2013a). The value of w is inconsistent with the cosmological constant value of -1 at the 2.4 sigma level. This tension has been seen in other high-z SN surveys and endures after removing either the BAO or the H0 constraint. If we include WMAP9 CMB constraints instead of those from Planck, we find w = -1.142^{+0.076}_{-0.087}, which diminishes the discord to <2 sigma. We cannot conclude whether the tension with flat \Lambda CDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 supernova sample will be 3 times as large as this initial sample, which should provide more conclusive results. 10/2013

##### Article: Systematic Uncertainties Associated with the Cosmological Analysis of the First Pan-STARRS1 Type Ia Supernova Sample

ABSTRACT: We probe the systematic uncertainties from 112 Type Ia supernovae (SNIa) in the Pan-STARRS1 (PS1) sample along with 201 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. (2013) describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the sample of observed Calspec standards from 7 to 10 used to define the PS1 calibration system. The PS1 and SDSS-II calibration systems are compared and discrepancies up to ~0.02 mag are recovered. We find uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference in distance residuals of the full sample of 0.040\pm0.031 mag for host galaxies with high and low masses.
Assuming flatness in our analysis of only SNe measurements, we find w = -1.015^{+0.319}_{-0.201} (Stat) ^{+0.164}_{-0.122} (Sys). With additional constraints from BAO, CMB(Planck) and H0 measurements, we find w = -1.186^{+0.076}_{-0.065} and \Omega_M = 0.277^{+0.010}_{-0.012} (statistical and systematic errors added in quadrature). One of the largest total systematic uncertainties when combining measurements from different cosmological probes is the choice of CMB data to include in the analysis. Using WMAP9 data instead of Planck, we see a shift of \Delta w = +0.044. 10/2013

#### Publication Stats

5k Citations · 1,008.17 Total Impact Points

#### Institutions

• Harvard-Smithsonian Center for Astrophysics, Smithsonian Astrophysical Observatory, Cambridge, Massachusetts, United States
• Queen's University Belfast, Astrophysics Research Centre (ARC), Béal Feirste, N Ireland, United Kingdom
• University of California, Berkeley: Department of Astronomy; Theoretical Astrophysics Center, Berkeley, California, United States
• Johns Hopkins University, Department of Physics and Astronomy, Baltimore, MD, United States
• Harvard University, Department of Astronomy, Cambridge, Massachusetts, United States
• Honolulu University, Honolulu, Hawaii, United States
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.848953902721405, "perplexity": 4815.624938983684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115862015.5/warc/CC-MAIN-20150124161102-00177-ip-10-180-212-252.ec2.internal.warc.gz"}
https://openstax.org/books/introductory-business-statistics/pages/10-3-test-for-differences-in-means-assuming-equal-population-variances
# 10.3 Test for Differences in Means: Assuming Equal Population Variances

Typically we can never expect to know any of the population parameters: mean, proportion, or standard deviation. When testing hypotheses concerning differences in means we are faced with the difficulty of two unknown variances that play a critical role in the test statistic. We have been substituting the sample variances just as we did when testing hypotheses for a single mean, and as we did before, we used a Student's t to compensate for this lack of information on the population variance. There may be situations, however, when we do not know the population variances, but we can assume that the two populations have the same variance. If this is true, then the pooled sample variance will be smaller than the individual sample variances. This will give more precise estimates and reduce the probability of discarding a good null. The null and alternative hypotheses remain the same, but the test statistic changes to:

$$t_c = \frac{(\bar{x}_1 - \bar{x}_2) - \delta_0}{\sqrt{S_p^2\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}$$

where $S_p^2$ is the pooled variance given by the formula:

$$S_p^2 = \frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}$$

### Example 10.5

A drug trial is attempted using a real drug and a pill made of just sugar. 18 people are given the real drug in hopes of increasing the production of endorphins. The increase in endorphins is found to be on average 8 micrograms per person, and the sample standard deviation is 5.4 micrograms. 11 people are given the sugar pill, and their average endorphin increase is 4 micrograms with a standard deviation of 2.4. From previous research on endorphins it is determined that the variances within the two samples can be assumed to be the same. Test at 5% to see if the population mean for the real drug had a significantly greater impact on the endorphins than the population mean with the sugar pill.
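A small Python sketch of Example 10.5 (my own worked check, not part of the OpenStax text), computing the pooled variance, the test statistic, and the one-tailed p-value with scipy:

```python
from scipy import stats

# Sample summaries from Example 10.5
n1, xbar1, s1 = 18, 8.0, 5.4   # real drug
n2, xbar2, s2 = 11, 4.0, 2.4   # sugar pill
delta0 = 0.0                   # hypothesized difference under H0

# Pooled sample variance
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)

# Test statistic and degrees of freedom
t_c = (xbar1 - xbar2 - delta0) / (sp2 * (1/n1 + 1/n2)) ** 0.5
df = n1 + n2 - 2

# One-tailed p-value, P(T > t_c), for the "greater" alternative
p_one_tailed = stats.t.sf(t_c, df)

print(f"S_p^2 = {sp2:.2f}")               # ~20.49
print(f"t_c   = {t_c:.2f}, df = {df}")    # ~2.31, df = 27
print(f"p     = {p_one_tailed:.3f}")      # ~0.014 < 0.05 -> reject H0
```

Since p < 0.05 (equivalently, t_c ≈ 2.31 exceeds the one-tailed critical value t_{0.05, 27} ≈ 1.703), we conclude that the real drug's mean endorphin increase is significantly greater than the sugar pill's.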
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949869692325592, "perplexity": 380.22347779995334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880014.26/warc/CC-MAIN-20201022170349-20201022200349-00361.warc.gz"}
http://tdunning.blogspot.ru/2010/04/
## Thursday, April 22, 2010 ### Hadoop user group AKA Mahout Users Anonymous At the Hadoop User Group meeting last evening it was quite the Mahout love fest.  First the Yahoo team described their spam fighting efforts which apparently are partially anchored by frequent item set mining using Mahout (thanks to Robin Anil's hard work as a GSoC student).  Then later Ken Krugler demonstrated some of what you can do with web-mining and Mahout. These are just two of a growing number of real-world uses of Mahout which is really beginning to grow up.  Related to that growing up, the Apache board just decided to make Mahout a top level project.  That will take a little while to make real, but the first steps are already happening. You can find out more about how Mahout is being used by perusing the Mahout wiki. Slide decks available here for the HUG presentations. ## Tuesday, April 20, 2010 ### Interview on Mahout available I just had an interchange with InfoQ (as did Grant).  The results are to be found in the article on Mahout status that resulted. ## Saturday, April 10, 2010 ### Sampling a Dirichlet Distribution (revised) Last year, I casually wrote a post about how to sample from a prior for Dirichlet distributions.  There were recently a few comments on this blog and it now occurs to me that there is a massive simplification that is possible for the problem. What I suggested back then was to sample the parameters of a Dirichlet distribution by sampling from a Dirichlet distribution and then multiplying that by a magnitude sampled from an exponential distribution.  As it turns out, this is a special case of a much nicer general method. The new method is to sample each parameter independently from a gamma distribution $\gamma_i \sim \mathrm{Gamma}(\alpha_i, \beta_i)$ This can be related to my previous method where we had the parameters expressed as a magnitude $$\alpha$$ multiplied by a vector $$\vec m$$ whose components sum to 1. Expressed in terms of the new method $\alpha = \sum_i \gamma_i$ and $m_i = \gamma_i / \alpha$ Moreover, if all of the $$\beta_i$$ are equal, then $$\alpha$$ is gamma distributed with shape parameter $$\sum_i \alpha_i$$. If this sum is 1, then we have an exponential distribution for $$\alpha$$ as in the original method. I am pretty sure that this formulation also makes MCMC sampling from the posterior distribution easier as well because the products inside the expression for the joint probability will get glued together in a propitious fashion. ## Tuesday, April 6, 2010 ### Sampling Dirichlet Distributions (chapter 2-b) In the previous post, I talked about how sampling in soft-max space can make Metropolis algorithms work better for Dirichlet distributions. Here are some pictures that show why. These pictures are of three parameter Dirichlet distributions with the simplex projected into two dimensions. Dirichlet distributions with different parameters can look like this: or this or this These were all generated using the standard algorithm for sampling from a Dirichlet distribution. 
In R, that algorithm is this:

```r
rdirichlet = function(n, alpha) {
  k = length(alpha)
  r = matrix(0, nrow = n, ncol = k)
  for (i in 1:k) {
    r[, i] = rgamma(n, alpha[i], 1)  # one gamma draw per component
  }
  r = r / rowSums(r)                 # normalize each row onto the simplex
  return(r)
}
```

That is, first sample $(y_1 \ldots y_n)$ using a gamma distribution $y_i \sim \mathrm {Gamma} (\alpha_i, 1)$. Then normalize to get the desired sample $\pi_i = {y_i \over \sum_j y_j}$. In contrast, here are 30,000 samples computed using a simple Metropolis algorithm to draw samples from a Dirichlet distribution with $\alpha = (1,1,1)$. This should give a completely uniform impression, but there is noticeable thinning in the corners where the acceptance rate of the sampling process drops. The thinning is only apparent. In fact, what happens is that the samples in the corners are repeated, but that doesn't show up in this visual presentation. Thus the algorithm is correct, but it exhibits very different mixing behavior depending on where you look. A particularly unfortunate aspect of this problem is that decreasing the size of the step distribution in order to increase the acceptance rate in the corners makes the sampler take much longer to traverse the region of interest and thus only partially helps the problem. Much of the problem is that when we have reached a point near the edge of the simplex, or especially in a corner, many candidate steps will be off the simplex entirely and will thus be rejected. When many parameters are small, this is particularly a problem because steps that are not rejected due to being off the simplex are likely to be rejected because the probability density drops dramatically as we leave the edge of the simplex. Both of these problems become much, much worse in higher dimensions. A much better alternative is to generate candidate points in soft-max space using a Gaussian step distribution. This gives us a symmetric candidate distribution that takes small steps near the edges and corners of the simplex and larger steps in the middle of the range. The sensitivity to the average step size (in soft-max space) is noticeably decreased and the chance of stepping out of the simplex is eliminated entirely. This small cleverness leads to the solution of the question posed in a comment some time ago by Andrei about how to sample from the posterior distribution where we observe counts sampled from a multinomial $\vec k \sim \mathrm {Multi} (\vec x)$ where that multinomial has a prior distribution defined by \begin{aligned} \vec x &= \alpha \vec m \\ \alpha &\sim \mathrm{Exp} (\beta_0) \\ \vec m &\sim \mathrm{Dir} (\beta_1 \ldots \beta_n) \end{aligned} More about that in my next post. ### Sampling Dirichlet Distributions (chapter 2-a) In a previous post on sampling from Dirichlet distributions, related to a comment by Andrew Gelman on the same topic, I talked a bit about how to sample the parameters of a Dirichlet (as opposed to sampling from a Dirichlet distribution with known parameters). In responding to some questions and comments on my post, I went back and reread Andrew's thoughts and think that they should be amplified a bit. Basically, what Andrew suggested is that when sampling from a Dirichlet, it is much easier to not sample from a Dirichlet at all, but rather to sample from some unbounded distribution and then reduce that sample back to the unit simplex. The transformation that Andrew suggested is the same as the so-called soft-max basis that David Mackay also advocates for several purposes.
This method is not well known, however, so it deserves a few more pixels. The idea is that to sample $(\pi_1 \ldots \pi_n) \sim \mathrm{Dir} (\beta_1 \ldots \beta_n)$ you instead sample $(x_1 \ldots x_n)$ from some unbounded distribution and then reduce to the sample you want $\pi_i = \frac {e^{x_i}} {\sum_j e^{x_j}}$ Clearly this gives you values on the unit simplex. What is not clear is that this distribution is anything that you want. In fact, if your original sample is from a normal distribution $(x_1 \ldots x_n) \sim \mathcal N (\vec \mu ,\Sigma)$ then you can come pretty close to whatever desired Dirichlet distribution you like. More importantly in practice, this idea of normally sampling $(x_1 \ldots x_n)$ and then transforming gives a very nice prior that serves as well as a real Dirichlet distribution in many applications. Where this comes in wonderfully handy is in MCMC sampling, where you don't really want to sample from the prior, but instead want to sample from the posterior. It isn't all that hard to use the Dirichlet distribution directly in these cases, but the results are likely to surprise you. For instance, if you take the trivial case of picking a uniformly distributed point on the simplex and then jumping around using Metropolis updates based on a Dirichlet distribution, you will definitely have the right distribution for your samples after just a bit of sampling. But if you plot those points, they won't look at all right if you are sampling from a distribution that has any significant density next to the boundaries of the simplex. What is happening is that the high probability of points near the boundaries is accounted for in the Metropolis algorithm by having a high probability of rejecting any jump if you are near the boundary. Thus, the samples near the boundary are actually repeated many times, leading to the visual impression of low density where there should be high density. On the other hand, if you were to do the Metropolis jumps in the soft-max basis, the problem is entirely avoided because there are no boundaries in soft-max space. You can even use the soft-max basis for the sole purpose of generating candidate points and do all probability calculations in terms of the simplex. In my next post, I will provide a concrete example and even include some pictures; a minimal code sketch of the sampler appears at the end of this page. ## Sunday, April 4, 2010 ### Big Data and Big Problems In talks recently, I have mentioned an ACM paper that showed just how bad random access to various storage devices can be. About half the time I mention this paper, somebody asks for the reference, which I never know off the top of my head. So here it is: The Pathologies of Big Data Of course, the big deal is not that disks are faster when read sequentially. We all know that, and most people I talk to know that the problem is worse than you would think. The news is that random access to memory is much, much worse than most people would imagine. In the slightly contrived example in this paper, for instance, sequential reads from a hard disk deliver data faster than randomized reads from RAM. That is truly horrifying. Even though you can pick holes in the example, it shows by mere existence that the problem of random access is much worse than almost anybody has actually internalized.
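As a concrete version of the soft-max trick described in the posts above, here is a minimal R sketch (the function name and default step size are my choices). Pinning the last coordinate at $0$ removes the translation degeneracy of soft-max, and the change of variables contributes a Jacobian of $\prod_i \pi_i$, which is why the log target below uses $\alpha_i$ rather than $\alpha_i - 1$.

```r
# Metropolis sampling of Dirichlet(alpha) with proposals in soft-max space.
rdirichlet_mh = function(n, alpha, step = 0.5) {
  k = length(alpha)
  x = rep(0, k)                        # current point in soft-max space
  out = matrix(0, nrow = n, ncol = k)
  logp = function(x) {                 # log target, Jacobian included
    p = exp(x - max(x)); p = p / sum(p)
    sum(alpha * log(p))
  }
  for (i in 1:n) {
    cand = x + c(rnorm(k - 1, 0, step), 0)   # last coordinate stays pinned
    if (log(runif(1)) < logp(cand) - logp(x)) x = cand
    p = exp(x - max(x))
    out[i, ] = p / sum(p)              # record the point on the simplex
  }
  out
}
```

Because the random walk lives in an unbounded space, no candidate is ever rejected for stepping off the simplex.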
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8808292746543884, "perplexity": 646.7056465465563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608004.38/warc/CC-MAIN-20170525063740-20170525083740-00022.warc.gz"}
https://nicocourts.com/talks/schur-weyl/
# Schur Algebras & Duality ### Nico Courts (UW Seattle) Slides for this presentation (as well as the master's thesis it was based on) are available online. ## Part I: Schur-Weyl Duality #### Representations of $\mathfrak{S}_n$ The representation theory for the symmetric group $\mathfrak{S}_n$ over $\mathbb{C}$ is completely controlled by its irreducible representations. These are in bijection with partitions of $n$. The connection is cycle type: e.g. $\sigma=(1\,5\,7\,2\,6)(9\,3\,8\,4)(10)\in\mathfrak{S}_{10}$ is of the form above. #### (Polynomial) Representations of $\mathrm{GL}_n(\mathbb{C})$ Given a representation $\rho:\operatorname{GL}_n\to \operatorname{GL}_m$, we say that $\rho$ is polynomial if (there exist bases such that) the maps are polynomials in the input coordinates. #### Homogeneous polynomial representations One can use the usual notion of homogeneity to define the homogeneous degree $d$ polynomial representations of $\operatorname{GL}_n$. We denote by $P_k(n,d)=P(n,d)$ the category of homogeneous degree $d$ polynomial representations of $\operatorname{GL}_n$ over $k$. #### Other ways to define the category • The module category for the Schur algebra $S(n,d)$ • Dual of a subcoalgebra of the ring of functions on the group scheme $\operatorname{GL}_n$ • Strict polynomial functors (Friedlander-Suslin) • Objects are endo"functors" on $\operatorname{Vect}_k,$ such that the maps of morphisms are (homogeneous degree $d$) polynomial maps. • Strict polynomial functors (Krause) • Let $\Gamma^d P_k$ be the category of vector spaces with $\operatorname{Hom}_{\Gamma^d P_k}(V,W)=\Gamma^d\operatorname{Hom}_k(V,W)$. Then $P(n,d)\simeq \operatorname{Rep}(\Gamma^dP_k)=\operatorname{Func}(\Gamma^d P_k,\operatorname{Vect}_k).$ #### The Schur-Weyl Functor One can develop a theory of weights for $P(n,d)$, and for $d\le n$ there is a particular weight $\omega$ such that taking the $\omega$ weight space induces a functor $$\mathcal{F}:P(n,d)\to\operatorname{Rep}(\mathfrak{S}_d)\quad\text{via}\quad \mathcal{F}(M)=M^\omega$$ along with an opposing functor $$\mathcal G:\operatorname{Rep}(\mathfrak{S}_d)\to P(n,d),$$ both of which preserve simples. #### More general fields If $k$ is any infinite field of characteristic 0 or $p>d$, we are in the semisimple case and the representation theory is mirrored in these two categories $P_k(n,d)$ and $\operatorname{Rep}_k(\mathfrak{S}_d)$. In the modular case, things get more interesting.... ## Part II: The Balmer Spectrum ### A question How do we have a chance of studying the category of representations of something that has complicated representation type (i.e. lots of non-isomorphic indecomposables)? Use a coarser notion of similarity than isomorphism type. #### Intuition from commutative algebra Objects like DVRs and Dedekind domains are nice to study because their spectra have nice structure. We don't need to compute things on the individual elements of these rings to say strong things about them. ### The Balmer spectrum If our representation category (or a suitable analog) is "enough like a ring" we can compute the Balmer spectrum of prime (thick tensor) ideals. It ends up that being "enough like a ring" can be justifiably interpreted as being a tensor triangulated category (TTC). There are several places these arise in nature.
### Examples of TTCs • The stable homotopy category with smash product • The stable module category $\operatorname{stab}(kG)$ • The bounded derived category $\operatorname{D}^\mathrm{b}(A)$ for an algebra $A$ (with derived $\otimes$) ### Cool Results • [Balmer, Thomason] If $X$ is a topologically Noetherian scheme, then $$\operatorname{Spec}_\mathrm{Bal}\mathrm{D}^\mathrm{perf}(\mathrm{coh}(X))\cong X$$ as schemes. • [Balmer, Friedlander-Pevtsova] If $G$ is a finite group scheme over $k$, $$\operatorname{Spec}_\mathrm{Bal}(\operatorname{stab}(kG))\cong \operatorname{Proj}(H^\bullet(G,k)).$$ ## Part III: Putting it together ### A leg up In 2013, Krause gave (pdf) a construction of $P(n,d)$ that more easily admitted a description of a monoidal structure. In 2017, Aquilino and Reischuk showed (pdf) that the Schur-Weyl functor was monoidal! #### A connecting idea If we are interested in computing the spectrum of $\mathrm{D}^\mathrm{b}(S(n,d))$, perhaps we can use the structure of $$\operatorname{Spec}_\mathrm{Bal}(\operatorname{stab}(k\mathfrak{S}_d))\cong \operatorname{Proj}(H^\bullet(\mathfrak{S}_d,k))$$ (which was proved first by Benson, Carlson, and Rickard: pdf) to say something about it, leveraging the monoidicity of the Schur-Weyl functor.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9615304470062256, "perplexity": 809.888545527605}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195198.31/warc/CC-MAIN-20201128070431-20201128100431-00081.warc.gz"}
https://www.geteasysolution.com/67.2222222222_as_a_fraction
# 67.2222222222 as a fraction ## 67.2222222222 as a fraction - solution and the full explanation with calculations. Below you can find the full step-by-step solution for your problem. We hope it will be very helpful for you and that it will help you to understand the solving process. If it's not what you are looking for, type your number into the box below and see the solution. ## What is 67.2222222222 as a fraction? To write 67.2222222222 as a fraction you have to write 67.2222222222 as the numerator and put 1 as the denominator. Then you multiply numerator and denominator by 10 until the numerator is a whole number. 67.2222222222 = 67.2222222222/1 = 672.222222222/10 = 6722.22222222/100 = 67222.2222222/1000 = 672222.222222/10000 = 6722222.22222/100000 = 67222222.2222/1000000 = 672222222.222/10000000 = 6722222222.22/100000000 = 67222222222.2/1000000000 = 672222222222/10000000000 And finally we have: 67.2222222222 as a fraction equals 672222222222/10000000000
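If you also want the fraction in lowest terms, here is a short R sketch (the helper `g` is just Euclid's algorithm, written out for illustration):

```r
# Reduce 672222222222/10000000000 by its greatest common divisor.
n = 672222222222; d = 10000000000          # both exact in double precision
g = function(a, b) if (b == 0) a else g(b, a %% b)   # Euclid's gcd
c(n, d) / g(n, d)                          # 336111111111 5000000000
```

Dividing both parts by their greatest common divisor, 2, gives 336111111111/5000000000.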
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8794252276420593, "perplexity": 1212.5762511895844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00457.warc.gz"}
http://recasarfati.com/tags/simulation-methods/
# Simulation Methods ## Estimating HANK: Macro Time Series and Micro Moments We show how to use sequential Monte Carlo methods to estimate a heterogeneous agent New Keynesian (HANK) model featuring both nominal and real rigidities. We demonstrate how the posterior distribution may be specified as the product of the standard … ## Sequential Monte Carlo Julia implementation of the sequential Monte Carlo algorithm for approximation of posterior distributions. ## Tempered Particle Filter Julia implementation of self-tuning tempered particle filter from Herbst and Schorfheide (2017) for likelihood evaluation of non-linear models with non-Gaussian innovations. Code is integrated into the NY Fed's package of filtering and smoothing routines for state-space models, StateSpaceRoutines.jl. ## Online Estimation of DSGE Models This paper illustrates the usefulness of sequential Monte Carlo (SMC) methods in approximating DSGE model posterior distributions. We show how the tempering schedule can be chosen adaptively, explore the benefits of an SMC variant we devise termed …
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9492022395133972, "perplexity": 2804.572262127347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131777.95/warc/CC-MAIN-20201001143636-20201001173636-00151.warc.gz"}
http://mathoverflow.net/questions/102989/skein-modules-on-closed-3-manifolds
# Skein Modules on closed 3-manifolds Hi all! Why are skein modules 1-dimensional on closed 3-manifolds? The result seems clear on closed manifolds with vanishing first Betti number (e.g. $S^3$), but I don't see how to prove it for, say, $\mathbb{T}^3$. Can anyone point me to an old reference (I'm rather new to this field)? - Maybe you mean the reduced skein module, defined by Roberts when $A$ is a primitive $4r^{\rm th}$ root of unity: in that case it depends only on $\partial M$ and is hence 1-dimensional when $\partial M = \emptyset$. The original paper of Roberts is unfortunately not available (as far as I can see) but you can find a proof in this paper of Sikora.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8875337243080139, "perplexity": 121.80952063411907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.92/warc/CC-MAIN-20150728002310-00126-ip-10-236-191-2.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2020_AMC_12B_Problems/Problem_22&oldid=129615
# 2020 AMC 12B Problems/Problem 22 ## Problem 22 What is the maximum value of $\frac{(2^t-3t)t}{4^t}$ for real values of $t$? ## Solution 1 We proceed by using AM-GM. We get $\frac{(2^t-3t)+3t}{2}\ge\sqrt{(2^t-3t)\cdot 3t}$. Thus, squaring gives us that $\frac{4^t}{4}\ge (2^t-3t)\cdot 3t$. Remembering what we want to find (divide by $3\cdot 4^t$), we get the maximal value as $\frac{1}{12}$, and we are done. ## Solution 2 Set $x=\frac{t}{2^t}$. Then the expression in the problem can be written as $x-3x^2=-3\left(x-\frac{1}{6}\right)^2+\frac{1}{12}$. It is easy to see that $x=\frac{1}{6}$ is attained for some value of $t$ between $0$ and $1$, thus the maximal value of the expression is $\frac{1}{12}$. ## Solution 3 (Calculus Needed) We want to maximize $f(t)=\frac{(2^t-3t)t}{4^t}=\frac{t\cdot 2^t-3t^2}{4^t}$. We can use the first derivative test. Use the quotient rule (or write $f(t)=t\,2^{-t}-3t^2\,4^{-t}$) to get the following: $f'(t)=(1-t\ln 2)\,4^{-t}\,(2^t-6t)$. The critical point $t=\frac{1}{\ln 2}$ gives a local minimum, so the maximum occurs where $2^t=6t$, i.e. $\frac{t}{2^t}=\frac{1}{6}$. Therefore, we plug this back into the original equation to get $\frac{(6t-3t)t}{(6t)^2}=\frac{3t^2}{36t^2}=\frac{1}{12}$. ~awesome1st ## Solution 4 First, substitute $x=\frac{t}{2^t}$ so that the given expression becomes $x-3x^2$. Notice that $x-3x^2=-3\left(x-\frac{1}{6}\right)^2+\frac{1}{12}$. When seen as a function, $x=\frac{t}{2^t}$ is a composite function that has $2^t$ as its inner function. Under this substitution, the given function becomes a quadratic function that has a maximum value of $\frac{1}{12}$ when $x=\frac{1}{6}$. Now we need to check whether $x$ can take the value $\frac{1}{6}$ over the real numbers. In the range of (positive) real numbers, $\frac{t}{2^t}$ is a continuous function whose value gets arbitrarily close to $0$ as $t$ gets closer to $0$ (as $\ln\frac{t}{2^t}$ also diverges toward negative infinity in the same condition). When $t=1$, $\frac{t}{2^t}=\frac{1}{2}$, which is larger than $\frac{1}{6}$. Therefore, we can assume that $\frac{t}{2^t}$ equals $\frac{1}{6}$ for some $t$ between $0$ and $1$ (at least), which means that the maximum value $\frac{1}{12}$ is attained. ## Solution 5 Let the maximum value of the function be $m$. Then we have $\frac{(2^t-3t)t}{4^t}=m$, which in terms of $x=\frac{t}{2^t}$ reads $-3x^2+x-m=0$. Solving for $x$, we see $x=\frac{1\pm\sqrt{1-12m}}{6}$. We see that a real solution requires $1-12m\ge 0$, i.e. $m\le\frac{1}{12}$. Therefore, the answer is $\frac{1}{12}$.
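As a quick numerical sanity check of the answer (the search interval is my choice; it brackets the smaller of the two critical points):

```r
# The maximum of f should agree with the algebraic answer 1/12.
f = function(t) (2^t - 3 * t) * t / 4^t
optimize(f, c(0, 1), maximum = TRUE)   # objective ~ 0.08333...
1 / 12                                 # 0.08333...
```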
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9707871079444885, "perplexity": 395.16626276779766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00655.warc.gz"}
https://www.lmfdb.org/L/2/2736/19.11/c1-0
## Results (1-50 of 529 matches)

| Label | $\alpha$ | $A$ | $d$ | $N$ | $\chi$ | $\nu$ | $w$ | $\operatorname{Arg}(\epsilon)$ | $r$ | First zero | Origin |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2-2736-1.1-c1-0-35 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.59639 | Modular form 2736.2.a.y.1.2 |
| 2-2736-1.1-c1-0-38 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.67111 | Modular form 2736.2.a.ba.1.2 |
| 2-2736-19.11-c1-0-46 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 19.11 | 1.0 | 1 | 0.383 | 0 | 1.62316 | Modular form 2736.2.s.x.1873.3 |
| 2-2736-19.11-c1-0-47 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 19.11 | 1.0 | 1 | -0.413 | 0 | 1.87752 | Modular form 2736.2.s.bb.1873.4 |
| 2-2736-1.1-c1-0-13 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.956996 | Modular form 2736.2.a.bf.1.3 |
| 2-2736-1.1-c1-0-16 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.04398 | Modular form 2736.2.a.bb.1.1 |
| 2-2736-1.1-c1-0-3 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.555857 | Modular form 2736.2.a.be.1.1 |
| 2-2736-1.1-c1-0-30 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.42837 | Modular form 2736.2.a.bd.1.1 |
| 2-2736-76.75-c1-0-42 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 76.75 | 1.0 | 1 | 0.416 | 0 | 1.62889 | Modular form 2736.2.k.p.2431.5 |
| 2-2736-76.75-c1-0-43 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 76.75 | 1.0 | 1 | 0.398 | 0 | 1.62946 | Modular form 2736.2.k.d.2431.1 |
| 2-2736-1.1-c1-0-0 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.478797 | Modular form 2736.2.a.bf.1.2 |
| 2-2736-1.1-c1-0-15 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.02783 | Modular form 2736.2.a.bb.1.2 |
| 2-2736-1.1-c1-0-1 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.480309 | Elliptic curve 2736.b, Modular form 2736.2.a.b, Modular form 2736.2.a.b.1.1 |
| 2-2736-1.1-c1-0-10 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.781777 | Elliptic curve 2736.k, Modular form 2736.2.a.k, Modular form 2736.2.a.k.1.1 |
| 2-2736-1.1-c1-0-11 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.799584 | Elliptic curve 2736.i, Modular form 2736.2.a.i, Modular form 2736.2.a.i.1.1 |
| 2-2736-1.1-c1-0-14 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.985714 | Elliptic curve 2736.w, Modular form 2736.2.a.w, Modular form 2736.2.a.w.1.1 |
| 2-2736-1.1-c1-0-17 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.07628 | Elliptic curve 2736.s, Modular form 2736.2.a.s, Modular form 2736.2.a.s.1.1 |
| 2-2736-1.1-c1-0-18 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.10108 | Elliptic curve 2736.q, Modular form 2736.2.a.q, Modular form 2736.2.a.q.1.1 |
| 2-2736-1.1-c1-0-2 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.537084 | Elliptic curve 2736.h, Modular form 2736.2.a.h, Modular form 2736.2.a.h.1.1 |
| 2-2736-1.1-c1-0-21 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.13058 | Elliptic curve 2736.u, Modular form 2736.2.a.u, Modular form 2736.2.a.u.1.1 |
| 2-2736-1.1-c1-0-22 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.14202 | Modular form 2736.2.a.bf.1.4 |
| 2-2736-1.1-c1-0-23 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.17122 | Modular form 2736.2.a.be.1.3 |
| 2-2736-1.1-c1-0-24 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.29739 | Elliptic curve 2736.x, Modular form 2736.2.a.x, Modular form 2736.2.a.x.1.1 |
| 2-2736-1.1-c1-0-25 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.32173 | Modular form 2736.2.a.y.1.1 |
| 2-2736-1.1-c1-0-26 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.32969 | Modular form 2736.2.a.ba.1.1 |
| 2-2736-1.1-c1-0-27 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 1.34947 | Elliptic curve 2736.v, Modular form 2736.2.a.v, Modular form 2736.2.a.v.1.1 |
| 2-2736-1.1-c1-0-29 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.40889 | Elliptic curve 2736.c, Modular form 2736.2.a.c, Modular form 2736.2.a.c.1.1 |
| 2-2736-1.1-c1-0-31 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.43340 | Elliptic curve 2736.f, Modular form 2736.2.a.f, Modular form 2736.2.a.f.1.1 |
| 2-2736-1.1-c1-0-32 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.44248 | Elliptic curve 2736.e, Modular form 2736.2.a.e, Modular form 2736.2.a.e.1.1 |
| 2-2736-1.1-c1-0-33 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.47847 | Elliptic curve 2736.j, Modular form 2736.2.a.j, Modular form 2736.2.a.j.1.1 |
| 2-2736-1.1-c1-0-34 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.56702 | Elliptic curve 2736.l, Modular form 2736.2.a.l, Modular form 2736.2.a.l.1.1 |
| 2-2736-1.1-c1-0-36 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.59940 | Elliptic curve 2736.n, Modular form 2736.2.a.n, Modular form 2736.2.a.n.1.1 |
| 2-2736-1.1-c1-0-37 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.60866 | Elliptic curve 2736.m, Modular form 2736.2.a.m, Modular form 2736.2.a.m.1.1 |
| 2-2736-1.1-c1-0-40 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.74595 | Elliptic curve 2736.t, Modular form 2736.2.a.t, Modular form 2736.2.a.t.1.1 |
| 2-2736-1.1-c1-0-41 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.75738 | Elliptic curve 2736.r, Modular form 2736.2.a.r, Modular form 2736.2.a.r.1.1 |
| 2-2736-1.1-c1-0-42 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.81707 | Elliptic curve 2736.p, Modular form 2736.2.a.p, Modular form 2736.2.a.p.1.1 |
| 2-2736-1.1-c1-0-43 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0.5 | 1 | 1.81801 | Elliptic curve 2736.o, Modular form 2736.2.a.o, Modular form 2736.2.a.o.1.1 |
| 2-2736-1.1-c1-0-44 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 2 | 2.43899 | Elliptic curve 2736.a, Modular form 2736.2.a.a, Modular form 2736.2.a.a.1.1 |
| 2-2736-1.1-c1-0-6 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.694080 | Elliptic curve 2736.d, Modular form 2736.2.a.d, Modular form 2736.2.a.d.1.1 |
| 2-2736-1.1-c1-0-8 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.743821 | Elliptic curve 2736.g, Modular form 2736.2.a.g, Modular form 2736.2.a.g.1.1 |
| 2-2736-1.1-c1-0-9 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 1.1 | 1.0 | 1 | 0 | 0 | 0.770470 | Modular form 2736.2.a.bf.1.1 |
| 2-2736-12.11-c1-0-0 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | 0.431 | 0 | 0.234244 | Modular form 2736.2.d.b.2015.20 |
| 2-2736-12.11-c1-0-1 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | 0.402 | 0 | 0.252118 | Modular form 2736.2.d.a.2015.12 |
| 2-2736-12.11-c1-0-10 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | -0.0979 | 0 | 0.582752 | Modular form 2736.2.d.a.2015.4 |
| 2-2736-12.11-c1-0-11 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | -0.0687 | 0 | 0.661759 | Modular form 2736.2.d.b.2015.4 |
| 2-2736-12.11-c1-0-12 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | -0.264 | 0 | 0.715690 | Modular form 2736.2.d.b.2015.24 |
| 2-2736-12.11-c1-0-13 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | -0.0687 | 0 | 0.742451 | Modular form 2736.2.d.b.2015.17 |
| 2-2736-12.11-c1-0-14 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | -0.0979 | 0 | 0.836740 | Modular form 2736.2.d.a.2015.11 |
| 2-2736-12.11-c1-0-15 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | -0.235 | 0 | 0.853643 | Modular form 2736.2.d.b.2015.18 |
| 2-2736-12.11-c1-0-16 | 4.67 | 21.8 | 2 | $2^{4} \cdot 3^{2} \cdot 19$ | 12.11 | 1.0 | 1 | -0.0687 | 0 | 0.889678 | Modular form 2736.2.d.b.2015.14 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9612517952919006, "perplexity": 704.6133456708213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00244.warc.gz"}
http://math.stackexchange.com/questions/35942/exercise-in-atiyah-localization
# Exercise in Atiyah (localization) Let me refer you to: http://www-users.math.umd.edu/~karpuk/chap3solns.pdf Page 2, ex. 4 Can you please explain the following step: $tb=f(s')b=s'b$ Why $f(s')=s'$ ? Thanks - @user6495: That is a 16-page document. You need to be more specific about where. –  Zev Chonoles Apr 29 '11 at 22:59 @Zev Chonoles: sorry, just edited it. –  user6495 Apr 29 '11 at 23:00 Hint: We have a map $f:A\rightarrow B$. This makes $B$ into an $A$-module. What is the definition of "$ab$", for $a\in A$ and $b\in B$?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667870759963989, "perplexity": 1272.6878385572586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375102712.76/warc/CC-MAIN-20150627031822-00073-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/85918/3-manifolds-with-solvable-fundamental-group?sort=votes
# 3-manifolds with solvable fundamental group Is there a nice reference for the classification of closed 3-manifolds with solvable (nilpotent, abelian, etc.) fundamental group, assuming the Geometrization Conjecture? - To follow up, I was hoping for a citable source with a theorem of the form: If $\pi_1(M)$ is abelian, $M$ is homeomorphic to one of the following ... If it is nilpotent, but not abelian... If it is solvable, but not nilpotent... The Evans-Moser paper comes pretty close, but one still has to argue that "equivalent" in that paper is the same as homeomorphic, due to the Poincaré Conjecture. –  Andy Hammerlindl Jan 18 '12 at 16:21 - While Scott's survey sure is a nice reference, does it really give a full classification of closed 3-manifolds with solvable fundamental group? I do not think it stated there. After brief search I found the classification of compact 3-manifolds with amenable fundamental group in section 4 of math.fsu.edu/~aluffi/archive/paper409.pdf –  Igor Belegradek Jan 17 '12 at 21:30 While it's not explicitly stated, I think you can extract it without too much fuss. The manifold will either be Seifert fibered or modeled on Sol. In the first case, the classification of Seifert fibered spaces kicks in, and in the second, it's a torus or Klein bottle bundle and the homotopy class of the monodromy determines the manifold. –  Richard Kent Jan 17 '12 at 22:47 OK, I guess you have to be a little careful since manifolds can admit different Seifert fiberings or bona fide fiberings. –  Richard Kent Jan 17 '12 at 22:53 It's probably not a hard exercise either to find the constant curvature manifolds (spherical, Euclidean) with solvable fundamental group. –  Ian Agol Jan 18 '12 at 0:20 • Charles B. Thomas, Nilpotent groups and compact $3$-manifolds, Proc. Cambridge Philos. Soc. 64 (1968), 303-306; MR0233359. • Benny Evans and Louise Moser, Solvable fundamental groups of compact $3$-manifolds, Trans. Amer. Math. Soc. 168 (1972), 189-210; MR0301742. • Peter Teichner, Maximal nilpotent quotients of $3$-manifold groups, Math. Res. Lett. 4 (1997), no. 2-3, 283-293; MR1453060. A little nugget (due to John Milnor): among the Brieskorn manifolds $\Sigma(p,q,r)$, the only nilmanifolds are $\Sigma(2,3,6)$, $\Sigma(2,4,4)$, and $\Sigma(3,3,3)$, which are circle bundles over the torus with Euler number $1$, $2$, and $3$, respectively. - If you have trouble with the links above, Teichner's paper can also be found here: math.berkeley.edu/~teichner/Papers/integral.pdf. –  Neil Hoffman Mar 25 '13 at 11:08 In the first chart, Thurston uses "almost" to mean virtually, i.e. has a finite index subgroup with some property. Considering compact manifolds with abelian fundamental group, we have the following classification. (The references are to Thurston's book although older references exist for nearly all of these statements.) If $\pi_1(M)$ is finite, then $M$ is a lens space with finite cyclic fundamental group (see Theorem 4.4.14). If $\pi_1(M)$ is not finite and abelian, then $\pi_1(M)$ is either $Z$ or $Z\times Z\times Z$. The first case implies that $M\cong S^1\times S^2$ (see Exercise 4.7.1) and the second implies that $M \cong T^3$ (see Theorem 4.3.4). Finally, to finish our classification for oriented manifolds, we also have to worry about the cusped manifolds. In general, the second chart has this information.
The one caveat is that the unknot and Hopf link complements ($S^1\times D^2$ and $T^2 \times I$) might not neatly fit on this list, because they do not admit cofinite $H^3$, $\widetilde{PSL(2,R)}$ or $H^2 \times R$ structures. However, unknot and Hopf link complements should be added to our list of manifolds having abelian fundamental groups as well, since their fundamental groups are $Z$ and $Z\times Z$, respectively. I believe they are not considered in this chart because they would not show up in a decomposition prescribed by the Geometrization. For example, if you attach a Hopf link complement to the boundary of a geometric piece via a boundary identification, the resulting manifold remains geometric. The non-orientable case includes $S^1 \times P^2$, which has fundamental group $Z \times Z/2Z$ (see Epstein's paper, Theorem 9.1).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9472960233688354, "perplexity": 458.73914618907514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
http://teaching.paulhartzer.com/hamtramck/other-cultures-and-math/
# Other Cultures and Math I’ve mentioned in class that German uses slightly different symbols than we do. Here’s a screen shot from a German video on linear functions. Most of this is recognizable. The 1s are different than ours, but there are two bigger differences: 1. The original problem is: Given the point $$P(1, 2)$$ and the slope $$m = -2$$, what is the graph? In this case, the point is given as $$P(1|2)$$. 2. In the last step of solving for b, instead of writing $$+2$$ on both sides, the instructor wrote $$| +2$$ to the right of the entire equation. I find it interesting to look at how other countries and cultures represent mathematics. The notation is indeed a language, and like any language there are variations around the world. The underlying concepts are universal, though.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581022024154663, "perplexity": 360.9275196587498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864546.30/warc/CC-MAIN-20180622143142-20180622163142-00567.warc.gz"}
http://en.wikipedia.org/wiki/Solar_mass
# Solar mass Size and mass of very large stars: Most massive example, the blue Pistol Star (150 M). Others are Rho Cassiopeiae (40 M), Betelgeuse (20 M), and VY Canis Majoris (17 M). The Sun (1 M), which is not visible in this thumbnail, is included to illustrate the scale of example stars. Earth's orbit (grey), Jupiter's orbit (red), and Neptune's orbit (blue) are also given. The solar mass (M) is a standard unit of mass in astronomy that is used to indicate the masses of other stars, as well as clusters, nebulae and galaxies. It is equal to the mass of the Sun, about two nonillion kilograms: $M_\odot=( 1.98855\ \pm\ 0.00025 )\ \times10^{30}\hbox{ kg}$[1][2] The above mass is about 332,946 times the mass of the Earth (M), or 1,048 times the mass of Jupiter (MJ). Because the Earth follows an elliptical orbit around the Sun, the solar mass can be computed from the equation for the orbital period of a small body orbiting a central mass.[3] Based upon the length of the year, the distance from the Earth to the Sun (an astronomical unit or AU), and the gravitational constant (G), the mass of the Sun is given by: $M_\odot=\frac{4 \pi^2 \times (1\ {\rm AU})^3}{G\times(1\ {\rm year})^2}$ The value of the gravitational constant was derived from measurements that were made by Henry Cavendish in 1798 with a torsion balance. The value he obtained differs by only 1% from the modern value.[4] The diurnal parallax of the Sun was accurately measured during the transits of Venus in 1761 and 1769,[5] yielding a value of 9″ (compared to the present 1976 value of 8.794148″). If we know the value of the diurnal parallax, we can determine the distance to the Sun from the geometry of the Earth.[6] The first person to estimate the mass of the Sun was Isaac Newton. In his work Principia, he estimated that the ratio of the mass of the Earth to the Sun was about 1/28,700. Later he determined that his value was based upon a faulty value for the solar parallax, which he had used to estimate the distance to the Sun (1 AU). He corrected his estimated ratio to 1/169,282 in the third edition of the Principia. The current value for the solar parallax is smaller still, yielding an estimated mass ratio of 1/332,946.[7] As a unit of measurement, the solar mass came into use before the AU and the gravitational constant were precisely measured. This is because the relative mass of another planet in the Solar System or the combined mass of two binary stars can be calculated in units of solar mass directly from the orbital radius and orbital period of the planet or stars using Kepler's third law, provided that orbital radius is measured in astronomical units and orbital period is measured in years. The mass of the Sun has decreased since the time it formed. This has occurred through two processes in nearly equal amounts. First, in the Sun's core hydrogen is converted into helium by nuclear fusion, in particular the pp chain, and this reaction converts some mass into energy in the form of gamma ray photons. Most of this energy eventually radiates away from the Sun. Second, high-energy protons and electrons in the atmosphere of the Sun are ejected directly into outer space as a solar wind. The original mass of the Sun at the time it reached the main sequence remains uncertain.
The early Sun had much higher mass-loss rates than at present, so it may have lost anywhere from 1–7% of its natal mass over the course of its main-sequence lifetime.[8] The Sun gains a very small amount of mass through the impact of asteroids and comets; however, the Sun already holds 99.86% of the Solar System's total mass, so these impacts cannot offset the mass lost by radiation and ejection. ## Related units One solar mass, M, can be converted to related units: as noted above, it is about 332,946 Earth masses (M) or about 1,048 Jupiter masses (MJ). It is also frequently useful in general relativity to express mass in units of length or time. $M_\odot \frac{G}{c^2} \approx 1.48~\mathrm{km};\ \ M_\odot \frac{G}{c^3} \approx 4.93~ \mathrm{\mu s}$
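Both of the displayed formulas are easy to check numerically; here is a short R sketch with hand-typed approximate constants (the constant values are inputs of this sketch, not taken from the article):

```r
G  = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AU = 1.495979e11      # astronomical unit, m
yr = 3.15576e7        # Julian year, s
cc = 2.99792458e8     # speed of light, m/s
M  = 4 * pi^2 * AU^3 / (G * yr^2)   # Kepler's third law
M                     # ~1.99e30 kg
M * G / cc^2 / 1000   # ~1.48 km
M * G / cc^3 * 1e6    # ~4.93 microseconds
```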
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795820116996765, "perplexity": 598.140385769687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121934081.85/warc/CC-MAIN-20150124175214-00025-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.lmfdb.org/L/rational/4/2166%5E2/1.1/c3e2-0
| Label | $\alpha$ | $A$ | $d$ | $N$ | $\chi$ | $\nu$ | $w$ | $\epsilon$ | $r$ | First zero | Origin |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 4-2166e2-1.1-c3e2-0-0 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 0 | 0.713766 | Modular form 2166.4.a.n |
| 4-2166e2-1.1-c3e2-0-1 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 2 | 0.903681 | Modular form 2166.4.a.j |
| 4-2166e2-1.1-c3e2-0-2 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 0 | 0.911773 | Modular form 2166.4.a.q |
| 4-2166e2-1.1-c3e2-0-3 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 2 | 1.10165 | Modular form 2166.4.a.l |
| 4-2166e2-1.1-c3e2-0-4 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 2 | 1.16734 | Modular form 2166.4.a.k |
| 4-2166e2-1.1-c3e2-0-5 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 2 | 1.19306 | Modular form 2166.4.a.m |
| 4-2166e2-1.1-c3e2-0-6 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 2 | 1.22479 | Modular form 2166.4.a.o |
| 4-2166e2-1.1-c3e2-0-7 | 11.3 | $1.63\times 10^{4}$ | 4 | $2^{2} \cdot 3^{2} \cdot 19^{4}$ | 1.1 | 3.0, 3.0 | 3 | 1 | 2 | 1.45363 | Modular form 2166.4.a.p |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9861559271812439, "perplexity": 307.73828600681327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363659.21/warc/CC-MAIN-20211209030858-20211209060858-00626.warc.gz"}
https://www.physicsforums.com/threads/can-someone-explain-this-to-me-please.786565/
# Can someone explain this to me please? 1. Dec 9, 2014 ### gcombina 1. The problem statement, all variables and given/known data Which equation is valid only when the angular measure is expressed in radians? a) α = Δω / Δt b) ω = Δθ / Δt c) ω^2 = ωo^2 + 2αθ d) ω = Vt/r (here T is a subscript) e) θ = 1/2αt^2 + ωot 2. Relevant equations I know angles can be measured in either degrees or radians. 3. The attempt at a solution angular velocity = ??? divided by radius. I guess I don't understand what Vt is. 2. Dec 9, 2014 ### Vagn How do you relate the length of an arc to the angle subtended by the arc and the radius? What should the units of each quantity be? 3. Dec 9, 2014 ### Staff: Mentor Presumably $V_t$ will be defined in the text surrounding the problem. In this context I would imagine that it represents the tangential velocity of a rotating object at distance r from the center of rotation.
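A quick numeric illustration of why only option (d) forces radians: the tangential-velocity relation comes from arc length $s = r\theta$, which holds only when $\theta$ is measured in radians.

```r
# Quarter turn on a circle of radius 2: arc length should be pi ~ 3.14.
r = 2
theta_deg = 90
theta_rad = theta_deg * pi / 180
r * theta_rad   # 3.1416, the correct arc length
r * theta_deg   # 180, meaningless -- degrees cannot be used directly
```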
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781540036201477, "perplexity": 1553.162888491747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104681.22/warc/CC-MAIN-20170818140908-20170818160908-00080.warc.gz"}
https://ai.stackexchange.com/questions/22558/is-a-with-an-admissible-but-inconsistent-heuristic-optimal/22600
# Is A* with an admissible but inconsistent heuristic optimal? I understand that, in tree search, an admissible heuristic implies that $$A*$$ is optimal. The intuitive way I think about this is as follows: Let $$P$$ and $$Q$$ be two costs from any respective nodes $$p$$ and $$q$$ to the goal. Assume $$P<Q$$. Let $$P'$$ be an estimation of $$P$$. $$P'\le P \Rightarrow P'<Q$$. It follows from uniform-cost search that the path through $$p$$ must be explored. What I don't understand is why the idea of an admissible heuristic does not apply as well to "graph-search". If a heuristic is admissible but inconsistent, would that imply that $$A*$$ is not optimal? Could you please provide an example of an admissible heuristic that results in a non-optimal solution? It depends on what you mean by optimal. 1. A* will always find the optimal solution (that is, the algorithm is admissible) as long as the heuristic is admissible. (Note that the definition of admissible is overloaded and means something slightly different for an algorithm and a heuristic.) 2. If you talk about the set of nodes expanded by A*, then it expands the minimal set of nodes up to tie-breaking even with an inconsistent heuristic. 3. If you talk about the number of expansions, then A* is not optimal. A* can perform up to $$2^N$$ expansions of $$N$$ states with an inconsistent heuristic. This result comes from Martelli, 1977. 4. Algorithm B has worst-case $$N^2$$ expansions and budgeted graph search (BGS) has worst-case $$N \log C$$, where $$C$$ is the optimal solution cost. You can see a demo that shows the worst-case performance of A* as well as the performance of BGS here: https://www.movingai.com/SAS/INC/
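To make the re-expansion behavior concrete, here is a minimal A* graph-search sketch in R on a small hypothetical graph of my own construction; the heuristic is admissible but inconsistent at B (since $$h(B) > c(B,C) + h(C)$$), which forces C to be expanded twice while the returned cost is still optimal.

```r
# Edges: S->A:2, S->B:1, A->C:2, B->C:1, C->G:3 (a made-up example).
edges = list(S = c(A = 2, B = 1), A = c(C = 2), B = c(C = 1), C = c(G = 3), G = c())
h = c(S = 0, A = 1, B = 4, C = 0, G = 0)   # admissible, but inconsistent at B

astar = function(edges, h, start, goal) {
  g = setNames(rep(Inf, length(h)), names(h))
  g[start] = 0
  open = start; closed = character(0); order = character(0)
  while (length(open) > 0) {
    cur = open[which.min(g[open] + h[open])]   # pop lowest f = g + h
    open = setdiff(open, cur); closed = union(closed, cur)
    order = c(order, cur)
    if (cur == goal) break
    for (nb in names(edges[[cur]])) {
      g2 = g[cur] + edges[[cur]][[nb]]
      if (g2 < g[nb]) {                        # found a cheaper path
        g[nb] = g2
        closed = setdiff(closed, nb)           # reopen if already closed
        open = union(open, nb)
      }
    }
  }
  list(cost = unname(g[goal]), expansions = order)
}

astar(edges, h, "S", "G")   # cost 5 (optimal); expansions S A C B C G
```

C appears twice in the expansion order, which is exactly the re-expansion phenomenon behind the $$2^N$$ worst case.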
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872768878936768, "perplexity": 475.0587144754807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00464.warc.gz"}
http://mathhelpforum.com/discrete-math/155636-interesting-combinatorics-question-print.html
Interesting Combinatorics Question • September 9th 2010, 02:24 AM Boysilver Interesting Combinatorics Question Say we are given $n,t \in \mathbb{N}$. In how many ways can we find positive integers $a_0,\ldots,a_n$ such that $a_0+\ldots+a_n < t$? I think the answer must be ${t+n\choose t-1}$ (as I'm trying to follow something in a book and this would make sense) but I can't see how it follows. • September 9th 2010, 02:35 AM yeKciM Quote: Originally Posted by Boysilver Say we are given $n,t \in \mathbb{N}$. In how many ways can we find positive integers $a_0,\ldots,a_n$ such that $a_0+\ldots+a_n < t$? I think the answer must be ${t+n\choose t-1}$ (as I'm trying to follow something in a book and this would make sense) but I can't see how it follows. perhaps I didn't quite understand what you are asking, but that can be checked very easily :D if you put $n=5$, $t=3$ you get $\displaystyle \binom{8}{2} = \frac {8\cdot 7 \cdot 6!}{6! \cdot 2} = 28 > n$? so if you note that $a_n \in \mathbb{N}$ you can then for $n=5$ have $1+2 < n$ and of course $2+1 < n$, but that's if their order is important ($a_0, a_1, \ldots, a_n$) • September 9th 2010, 03:01 AM Boysilver Yes, I didn't explain very clearly. In your example, what I'm saying is that I'd like to prove that the number of ways in which we can find non-negative integers $a_0,a_1,a_2,a_3,a_4,a_5$ (also, previously I said "positive integers" - I meant non-negative integers) such that $a_0+a_1+a_2+a_3+a_4+a_5<3$ is 28 possibilities. (I said "$\ldots < n$" last time - I meant $\ldots < t$. Sorry again!) Just for clarity, I'll list them here: $(a_0,a_1,a_2,a_3,a_4,a_5) =$ (0,0,0,0,0,0), (1,0,0,0,0,0), (1,0,0,0,0,1), (1,0,0,0,1,0), (1,0,0,1,0,0), (1,0,1,0,0,0), (1,1,0,0,0,0), (0,1,0,0,0,0), (0,1,0,0,0,1), (0,1,0,0,1,0), (0,1,0,1,0,0), (0,1,1,0,0,0), (0,0,1,0,0,0), (0,0,1,0,0,1), (0,0,1,0,1,0), (0,0,1,1,0,0), (0,0,0,1,0,0), (0,0,0,1,0,1), (0,0,0,1,1,0), (0,0,0,0,1,0), (0,0,0,0,1,1), (0,0,0,0,0,1), (2,0,0,0,0,0), (0,2,0,0,0,0), (0,0,2,0,0,0), (0,0,0,2,0,0), (0,0,0,0,2,0), (0,0,0,0,0,2). So there are indeed 28. I'd like a way of proving that the number of ways will be $t+n \choose t-1$ in the general case for any t and n. • September 9th 2010, 03:09 AM yeKciM aaaaaah that:D:D:D it's okay :D • September 9th 2010, 06:10 AM undefined Quote: Originally Posted by Boysilver Yes, I didn't explain very clearly. In your example, what I'm saying is that I'd like to prove that the number of ways in which we can find non-negative integers $a_0,a_1,a_2,a_3,a_4,a_5$ (also, previously I said "positive integers" - I meant non-negative integers) such that $a_0+a_1+a_2+a_3+a_4+a_5<3$ is 28 possibilities. (I said "$\ldots < n$" last time - I meant $\ldots < t$. Sorry again!) Just for clarity, I'll list them here: $(a_0,a_1,a_2,a_3,a_4,a_5) =$ (0,0,0,0,0,0), (1,0,0,0,0,0), (1,0,0,0,0,1), (1,0,0,0,1,0), (1,0,0,1,0,0), (1,0,1,0,0,0), (1,1,0,0,0,0), (0,1,0,0,0,0), (0,1,0,0,0,1), (0,1,0,0,1,0), (0,1,0,1,0,0), (0,1,1,0,0,0), (0,0,1,0,0,0), (0,0,1,0,0,1), (0,0,1,0,1,0), (0,0,1,1,0,0), (0,0,0,1,0,0), (0,0,0,1,0,1), (0,0,0,1,1,0), (0,0,0,0,1,0), (0,0,0,0,1,1), (0,0,0,0,0,1), (2,0,0,0,0,0), (0,2,0,0,0,0), (0,0,2,0,0,0), (0,0,0,2,0,0), (0,0,0,0,2,0), (0,0,0,0,0,2). So there are indeed 28. I'd like a way of proving that the number of ways will be $t+n \choose t-1$ in the general case for any t and n. I find it interesting and am not sure how to derive the expression you got, but there is a simple recurrence. Call the number we're looking for $f(n,t)$. Restrict $n\ge0,t>0$.
We have $\forall n:f(n,1)=1$, $\forall t:f(0,t)=t$, and $\displaystyle f(n,t)=\sum_{k=0}^{t-1} f(n-1,t-k)\ \ \ \ ,\ \ \ t > 1\ \&\ n > 0$ Edit: fixed mistake. • September 9th 2010, 07:16 AM Plato Here is an interesting connection. I would have counted the total by $\displaystyle \sum\limits_{k = 0}^{t - 1} \binom{n + k}{k}$. That happens to equal $\displaystyle \binom{n + t}{t-1}$. • September 10th 2010, 09:05 PM aman_cc @Plato - Is there a good way to prove that $\displaystyle \sum\limits_{k = 0}^{t - 1} \binom{n + k}{k} = \displaystyle \binom{n + t}{t-1}$ using combinatorial arguments? • September 11th 2010, 12:31 PM Boysilver It's easy to prove the above by induction but I agree that it looks like there should be some nicer, intuitive combinatorial argument. I'm struggling with the part before though - how do we know that the number of ways of finding non-negative integers $a_0,\ldots,a_n$ such that (for a given positive integer k) $a_0+\ldots+a_n=k$ is $n+k\choose k$? Sorry if I'm overlooking something obvious but have been thinking about this for a while. • September 11th 2010, 01:05 PM undefined Quote: Originally Posted by Boysilver [...] how do we know that the number of ways of finding non-negative integers $a_0,\ldots,a_n$ such that (for a given positive integer k) $a_0+\ldots+a_n=k$ is $n+k\choose k$? [...] See theorem 2 here Stars and bars (probability) - Wikipedia, the free encyclopedia • September 11th 2010, 01:06 PM Plato Quote: Originally Posted by Boysilver I'm struggling with the part before though - how do we know that the number of ways of finding non-negative integers $a_0,\ldots,a_n$ such that (for a given positive integer k) $a_0+\ldots+a_n=k$ is $n+k\choose k$? Sorry if I'm overlooking something obvious but have been thinking about this for a while. Sorry, but it took me quite some time to recall working on this. We actually have $n+1$ variables, $a_0,a_1,\cdots,a_n$. To put $k$ identical items into $n+1$ different cells can be done in $\displaystyle \binom{k+(n+1)-1}{k}=\binom{k+n}{k}$ ways.
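Both the recurrence and the closed form are easy to sanity-check numerically. The following brute-force script is my own addition (not from the thread), using only the standard library:

from itertools import product
from math import comb

def count_direct(n, t):
    # Count (n+1)-tuples of non-negative integers with sum < t.
    # Each a_i must itself be < t, so range(t) suffices.
    return sum(1 for a in product(range(t), repeat=n + 1) if sum(a) < t)

for n in range(0, 5):
    for t in range(1, 5):
        assert count_direct(n, t) == comb(n + t, t - 1)

print(count_direct(5, 3), comb(5 + 3, 3 - 1))  # both print 28, matching the listed tuples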
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 48, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265989661216736, "perplexity": 359.27980560749177}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158609.98/warc/CC-MAIN-20160205193918-00141-ip-10-236-182-209.ec2.internal.warc.gz"}
http://www.lofoya.com/Solved/1183/in-a-pocket-of-a-the-ratio-of-rs-1-coins-50p-coins-and-25p-coins
# Difficult Ratios & Proportion Solved Question Aptitude Discussion Q. In a pocket of A, the ratio of Rs. 1 coins, 50p coins and 25p coins can be expressed by three consecutive odd prime numbers that are in ascending order. The total value of the coins in the bag is Rs 58. If the numbers of Rs. 1, 50p and 25p coins are reversed, find the new total value of the coins in the pocket of A. ✖ A. Rs 68 ✖ B. Rs 43 ✖ C. Rs 75 ✔ D. Rs 82 Solution: Option (D) is correct Since the ratio of the numbers of Rs. 1, 50p and 25p coins can be represented by 3 consecutive odd prime numbers in ascending order, the only possibility for the ratio is $3:5:7$. Let the numbers of Rs. 1, 50p and 25p coins be $3k, 5k$ and $7k$ respectively. Hence, the total value of the coins in paise is $⇒ 100×3k+50×5k+25×7k$ $=725k$ $=5800$ $⇒ k=8.$ If the numbers of Rs. 1, 50p and 25p coins are reversed, the total value of the coins in the bag (in paise) $=100×7k+50×5k+25×3k=1025k$ (using the value of $k$ found above). $⇒ 8200 \text{ p} = \text{Rs. } 82.$
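The arithmetic is quick to verify mechanically; here is a two-line Python check (my addition, not part of the original solution):

# Sanity check of the solution's arithmetic (added for illustration).
k = 5800 // (100*3 + 50*5 + 25*7)            # 725k paise = Rs. 58  ->  k = 8
print(k, (100*7 + 50*5 + 25*3) * k / 100)    # 8 82.0  (Rs. 82 after reversing the counts)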
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9220365881919861, "perplexity": 1262.1489946176423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695726.80/warc/CC-MAIN-20170926122822-20170926142822-00249.warc.gz"}
http://hplgit.github.io/prog4comp/doc/pub/._p4c-bootstrap-Python030.html
# Rate of convergence With the methods above, we noticed that the number of iterations or function calls could differ quite substantially. The number of iterations needed to find a solution is closely related to the rate of convergence, which is how fast the error shrinks as we approach the root. More precisely, we introduce the error in iteration $$n$$ as $$e_n=|x-x_n|$$, and define the convergence rate $$q$$ as $$$$e_{n+1} = Ce_n^q, \tag{6.5}$$$$ where $$C$$ is a constant. The exponent $$q$$ measures how fast the error is reduced from one iteration to the next. The larger $$q$$ is, the faster the error goes to zero, and the fewer iterations we need to meet the stopping criterion $$|f(x)| < \epsilon$$. A single $$q$$ in (6.5) is defined in the limit $$n\rightarrow\infty$$. For finite $$n$$, and especially smaller $$n$$, $$q$$ will vary with $$n$$. To estimate $$q$$, we can compute all the errors $$e_n$$ and set up (6.5) for three consecutive experiments $$n-1$$, $$n$$, and $$n+1$$: \begin{align*} e_{n} &= Ce_{n-1}^q,\\ e_{n+1} &= Ce_n^q\thinspace . \end{align*} Dividing these two equations by each other and solving with respect to $$q$$ gives $$q = \frac{\ln (e_{n+1}/e_n)}{\ln(e_n/e_{n-1})}\thinspace .$$ Since this $$q$$ will vary somewhat with $$n$$, we call it $$q_n$$. As $$n$$ grows, we expect $$q_n$$ to approach a limit ($$q_n\rightarrow q$$). To compute all the $$q_n$$ values, we need all the $$x_n$$ approximations. However, our previous implementations of Newton's method, the secant method, and the bisection method returned just the final approximation. Therefore, we have extended the implementations in the module file nonlinear_solvers.py such that the user can choose whether the final value or the whole history of solutions is to be returned. Each of the extended implementations now takes an extra parameter return_x_list. This parameter is a boolean, set to True if the function is supposed to return all the root approximations, or False, if the function should only return the final approximation. As an example, let us take a closer look at Newton:

import sys

def Newton(f, dfdx, x, eps, return_x_list=False):
    f_value = f(x)
    iteration_counter = 0
    if return_x_list:
        x_list = []
    while abs(f_value) > eps and iteration_counter < 100:
        try:
            x = x - float(f_value)/dfdx(x)
        except ZeroDivisionError:
            print "Error! - derivative zero for x = ", x
            sys.exit(1)  # Abort with error
        f_value = f(x)
        iteration_counter += 1
        if return_x_list:
            x_list.append(x)
    # Here, either a solution is found, or too many iterations
    if abs(f_value) > eps:
        iteration_counter = -1  # i.e., lack of convergence
    if return_x_list:
        return x_list, iteration_counter
    else:
        return x, iteration_counter

The function is found in the file nonlinear_solvers.py. We can now make a call

x, iter = Newton(f, dfdx, x=1000, eps=1e-6, return_x_list=True)

and get a list x returned. With knowledge of the exact solution $$x$$ of $$f(x)=0$$ we can compute all the errors $$e_n$$ and all the associated $$q_n$$ values with the compact function

from math import log

def rate(x, x_exact):
    e = [abs(x_ - x_exact) for x_ in x]
    q = [log(e[n+1]/e[n])/log(e[n]/e[n-1])
         for n in range(1, len(e)-1, 1)]
    return q

The error model (6.5) works well for Newton's method and the secant method. For the bisection method, however, it works well in the beginning, but not when the solution is approached.
We can compute the rates $$q_n$$ and print them nicely,

def print_rates(method, x, x_exact):
    q = ['%.2f' % q_ for q_ in rate(x, x_exact)]
    print method + ':'
    for q_ in q:
        print q_,
    print

The result for print_rates('Newton', x, 3) is

Newton: 1.01 1.02 1.03 1.07 1.14 1.27 1.51 1.80 1.97 2.00

indicating that $$q=2$$ is the rate for Newton's method. A similar computation using the secant method gives the rates

secant: 1.26 0.93 1.05 1.01 1.04 1.05 1.08 1.13 1.20 1.30 1.43 1.54 1.60 1.62 1.62

Here it seems that $$q\approx 1.6$$ is the limit. Remark. If, in the bisection method, we think of the length of the current interval containing the solution as the error $$e_n$$, then (6.5) works perfectly since $$e_{n+1}=\frac{1}{2}e_n$$, i.e., $$q=1$$ and $$C=\frac{1}{2}$$, but if $$e_n$$ is the true error $$|x-x_n|$$, it is easily seen from a sketch that this error can oscillate between the current interval length and a potentially very small value as we approach the exact solution. The corresponding rates $$q_n$$ fluctuate widely and are of no interest.
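As a standalone illustration (my own sketch, not from the book), the $$q_n$$ estimator applied to a synthetic error sequence with $$e_{n+1}=e_n^2$$ recovers the quadratic rate exactly:

from math import log

# Synthetic errors with e_{n+1} = e_n**2, i.e. quadratic convergence by construction.
e = [0.5]
for _ in range(6):
    e.append(e[-1]**2)

# The q_n estimator from the text: log(e[n+1]/e[n]) / log(e[n]/e[n-1]).
q = [log(e[n+1]/e[n])/log(e[n]/e[n-1]) for n in range(1, len(e)-1)]
print(['%.2f' % q_ for q_ in q])  # every entry is 2.00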
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9772678017616272, "perplexity": 615.2989260093736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690211.67/warc/CC-MAIN-20170924205308-20170924225308-00130.warc.gz"}
http://mathhelpforum.com/calculus/15928-integration-parts.html
# Thread: integration by parts 1. ## integration by parts 1) $\int xe^{2x}dx$ $u = x, dv = e^{2x}dx$ $du = dx, v = e^{2x}$ $\int udv = uv-\int vdu$ $xe^{2x}-\int e^{2x}dx$ $xe^{2x}-e^{2x}+C$ what i got is wrong, i think im doing something wrong with $e^{2x}$ 2. Originally Posted by viet 1) $\int xe^{2x}dx$ Let $u=x \mbox{ and }v' = e^{2x}\Rightarrow u' = 1 \mbox{ and } v= \frac{1}{2}e^{2x}$ Thus, $\int u v' dx = uv - \int u'v dx$ And hence, $\int xe^{2x} dx = \frac{1}{2}xe^{2x} - \int \frac{1}{2} e^{2x} dx = \frac{1}{2}xe^{2x} - \frac{1}{4}e^{2x}+C$ 3. Hello, viet! I think im doing something wrong with $e^{2x}$. Yes! $\int e^{2x}dx \:=\:\frac{1}{2}\,e^{2x} + C$
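A quick symbolic check of the corrected result (my addition, assuming SymPy is available):

import sympy as sp

x = sp.symbols('x')
F = sp.integrate(x*sp.exp(2*x), x)
print(F)  # equivalent to x*exp(2*x)/2 - exp(2*x)/4, matching post 2
print(sp.simplify(sp.diff(F, x) - x*sp.exp(2*x)))  # 0: differentiating recovers the integrand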
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9944965243339539, "perplexity": 2272.038826010365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423681.33/warc/CC-MAIN-20170721022216-20170721042216-00100.warc.gz"}
http://math.stackexchange.com/questions/183256/notation-of-super-lie-algebras
# notation of super Lie algebras The classical super Lie algebras are of type A: $\operatorname{sl}(m+1 \mid n+1)$, $m\neq n$, $\operatorname{psl}(n+1 \mid n+1)$, type B: $\operatorname{osp}(2m+1 \mid 2n)$, type C: $\operatorname{osp}(2 \mid 2n)$, type D: $\operatorname{osp}(2m \mid 2n)$. But I found a paper where they study $\operatorname{psu}(2, 2\mid 4)$. What type is this super Lie algebra? Thank you very much. - The $\mathfrak{psu}(2,2\mid 4)$ is a real form of $\mathfrak{psl}(4\mid 4)$, i.e. there exists an involution $\sigma: \mathfrak{psl}(4\mid 4) \to \mathfrak{psl}(4\mid 4)$ (i.e. $\sigma^2 = \mathrm{id}$), and the generators of $\mathfrak{psu}(2,2\mid 4)$ are those linear combinations of generators that are fixed by $\sigma$. @user9791 I think it goes by the analogy with classical Lie algebras. $\mathfrak{su}$ is a real form of $\mathfrak{sl}$, and $\mathfrak{psu}$ is a real form of $\mathfrak{psl}$. The "p" in $\mathfrak{psl}$ stands for projective, I think. –  Sasha Aug 16 '12 at 16:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336439967155457, "perplexity": 312.1652098712592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131846.69/warc/CC-MAIN-20140914011211-00322-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://questioncove.com/updates/5175cdb0e4b0be9997e04a96
OpenStudy (anonymous): Find an equation for the nth term of the arithmetic sequence. -15, -6, 3, 12, ... 5 years ago OpenStudy (campbell_st): well you know the 1st term a = -15 and the common difference is d = 9 so the general form is $a_{n} = -15 + (n -1)\times 9$ distribute the 9 and simplify for the answer. 5 years ago
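As a quick check (my addition), the general form reproduces the given terms, and distributing the 9 simplifies it to $a_n = 9n - 24$:

# Verify a_n = -15 + (n-1)*9 against the sequence -15, -6, 3, 12, ...
a = lambda n: -15 + (n - 1) * 9     # simplifies to 9*n - 24
print([a(n) for n in range(1, 5)])  # [-15, -6, 3, 12]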
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.830527126789093, "perplexity": 833.8563232654712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746926.93/warc/CC-MAIN-20181121011923-20181121033923-00492.warc.gz"}
https://competitive-exam.in/questions/discuss/which-of-the-following-is-not-available-in-font
# Which of the following is not available in Font Spacing? Normal Loosely Condensed Expanded
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080387115478516, "perplexity": 3653.3788935111634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989914.60/warc/CC-MAIN-20210516201947-20210516231947-00314.warc.gz"}
https://doc.cgal.org/4.12/Algebraic_kernel_d/classAlgebraicKernel__d__2_1_1ApproximateRelativeX__2.html
CGAL 4.12 - Algebraic Kernel AlgebraicKernel_d_2::ApproximateRelativeX_2 Concept Reference ## Definition A model of AlgebraicKernel_d_2::ApproximateRelativeX_2 is an AdaptableBinaryFunction that computes an approximation of the $$x$$-coordinate of an AlgebraicKernel_d_2::Algebraic_real_2 value with respect to a given relative precision. Refines: AdaptableBinaryFunction AlgebraicKernel_d_2::ApproximateAbsoluteY_2 AlgebraicKernel_d_1::ApproximateAbsolute_1 AlgebraicKernel_d_1::ApproximateRelative_1 ## Types typedef std::pair< AlgebraicKernel_d_1::Bound, AlgebraicKernel_d_1::Bound > result_type typedef AlgebraicKernel_d_2::Algebraic_real_2 first_argument_type typedef int second_argument_type ## Operations result_type operator() (const first_argument_type &v, const second_argument_type &a) The function computes a pair $$p$$ of AlgebraicKernel_d_1::Bound, where $$p.first$$ represents the lower approximation and $$p.second$$ represents the upper approximation. ## operator()() result_type AlgebraicKernel_d_2::ApproximateRelativeX_2::operator() ( const first_argument_type & v, const second_argument_type & a ) The function computes a pair $$p$$ of AlgebraicKernel_d_1::Bound, where $$p.first$$ represents the lower approximation and $$p.second$$ represents the upper approximation. The pair $$p$$ approximates the $$x$$-coordinate $$x$$ of the AlgebraicKernel_d_2::Algebraic_real_2 value $$v$$ with respect to the relative precision $$a$$. Postcondition $$p.first <= x$$ $$x <= p.second$$ $$(x - p.first) <= 2^{-a} |x|$$ $$(p.second - x) <= 2^{-a} |x|$$
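The relative-precision postcondition is easy to illustrate outside CGAL. The sketch below is plain Python, not CGAL API; the function names and the bisection-based refinement step are my own assumptions, and the stopping test assumes the enclosing interval does not contain 0:

# Refine an enclosing interval [lo, hi] of a value x until the width is below
# 2**-a times |x|, which implies (x - lo) <= 2**-a * |x| and (hi - x) <= 2**-a * |x|.
def approximate_relative(refine, lo, hi, a):
    while hi - lo > 2.0**(-a) * min(abs(lo), abs(hi)):  # conservative: min |x| on [lo, hi]
        lo, hi = refine(lo, hi)
    return lo, hi

# Example refinement step: bisection on f(t) = t*t - 2, keeping the sign change.
f = lambda t: t*t - 2.0
def bisect(lo, hi):
    mid = 0.5 * (lo + hi)
    return (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)

print(approximate_relative(bisect, 1.0, 2.0, a=10))  # tight bounds around sqrt(2)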
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560535550117493, "perplexity": 4732.620241440308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00760.warc.gz"}
https://physics.stackexchange.com/questions/441546/why-does-maxwells-equations-partial-mu-f-mu-nu-0-have-3-independent
# Why does Maxwell's equations $\partial_{\mu} F^{\mu \nu} = 0$ have 3 independent components (DOF) in $D = 4$? And how can we generalize this to the statement that it has $$D-1$$ independent components in dimension $$D$$? I know that $$F_{\mu \nu}$$ has six independent components (because of antisymmetry), how do I go from there? First, I'll address a language issue: "independent components" can mean two different things: • The statement that $$F_{\mu\nu}$$ has six independent components is a statement about the number of parameters required to specify an antisymmetric matrix. • The statement that $$\partial^\mu F_{\mu\nu}=0$$ has three independent components is a statement about the number of functions required to specify a solution of this differential equation. The question about the number of independent functions required to specify a solution is answered in https://physics.stackexchange.com/a/20072/206691, so this could potentially be marked as a duplicate question. That other answer uses a different notation, though, so I'll offer a version that uses the current OP's notation. I'll write $$j,k$$ for spatial indices and $$0$$ for the time index. Suppose that we have some solution $$F_{\mu\nu}$$ to $$\partial^\mu F_{\mu k}=0 \tag{1}$$ but that we have not made any attempt yet to enforce $$\partial^\mu F_{\mu 0}=0. \tag{2}$$ Equation (2) is equivalent to $$\partial^k F_{0k}=0. \tag{3}$$ Equation (3) is a constraint: it is a condition that the initial data must satisfy, rather than a condition on how that data evolves in time. Applying $$\partial^k$$ to equation (1) gives $$\partial^\mu \partial^k F_{\mu k}=0. \tag{4}$$ In equation (4), the terms with $$\mu\neq 0$$ are identically zero because $$F$$ is antisymmetric, so equation (4) is equivalent to $$\partial^0 \partial^k F_{0 k}=0. \tag{5}$$ In words, this says that if the functions $$F_{\mu\nu}$$ that we chose to satisfy equations (1) for all times are also chosen to satisfy the constraint (2) at some initial time, then they automatically satisfy the constraint (2) for all times. In this sense, a solution to $$\partial^\mu F_{\mu\nu}=0$$ in $$D$$-dimensional spacetime has only $$D-1$$ independent components. Summary: Equation (1), which involves time-derivatives, has $$D-1$$ components (the number of values of the spatial index $$k$$). Equation (3) does not involve time-derivatives; it imposes a constraint on the initial data rather than an additional condition on how that data evolves in time. There are 3 dofs for the three spatial directions. Then there are two directions - time reversal symmetry. Taking polarisation into account one has 12 independent solutions, per frequency. That is a lot of dofs.
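As a small counting aid (my addition, just tabulating the bookkeeping used in the accepted answer), one can compare the number of independent entries of the antisymmetric $$F_{\mu\nu}$$ with the $$D-1$$ evolution equations and the single constraint:

# Component counts in D spacetime dimensions, matching the discussion above.
for D in (3, 4, 5):
    antisym = D * (D - 1) // 2   # independent entries of the antisymmetric F_{mu nu}
    evolution = D - 1            # spatial components of d^mu F_{mu k} = 0, eq. (1)
    constraint = 1               # d^k F_{0k} = 0, eq. (3), preserved in time by eq. (5)
    print(D, antisym, evolution, constraint)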
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8492600321769714, "perplexity": 159.49673249007787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890105.39/warc/CC-MAIN-20200706042111-20200706072111-00087.warc.gz"}
https://geo.libretexts.org/Bookshelves/Sedimentology/Book%3A_Introduction_to_Fluid_Motions_and_Sediment_Transport_(Southard)/04%3A_Flow_in_Channels
# 4: Flow in Channels This chapter focuses on two of the most important aspects of channel flow: boundary resistance to flow, and the velocity structure of the flow. The discussion is built around two reference cases: steady uniform flow in a circular pipe, and steady uniform flow down an inclined plane. Flow in a circular pipe is clearly of great practical and engineering importance, and it is given lots of space in fluid-dynamics textbooks. Flow down a plane is more relevant to natural Earth-surface settings (sheet floods come to mind), and it serves as a good reference for river flows. • 4.1: Introduction to Flow in Channels Flows in conduits or channels are of interest in science, engineering, and everyday life. Flows in closed conduits or channels, like pipes or air ducts, are entirely in contact with rigid boundaries. • 4.2: Laminar Flow Down an Inclined Plane We apply Newton’s second law to steady and uniform flow down an inclined plane; the strategy is to look at a block of the flow, bounded by imaginary planes normal to the bottom, with unit cross-stream width and unit streamwise distance. Such a block of fluid is said to be a “free body”. Because the flow is assumed to be steady and uniform, all of the forces in the streamwise direction that are exerted upon the fluid within the free body at any given time must add up to be to zero. • 4.3: Turbulent Flow in Channels - Initial Material If you made experiments with pipe flows and channel flows at very low Reynolds numbers, before the transition to turbulent flow, you would find beautiful agreement between theory and observation—something that is always satisfying for both the theoretician and the experimentalist. But for turbulent flows, which is the situation in most flows that are of practical interest, the story is different. • 4.4: Turbulent Shear Stress If the material or property passively associated with the fluid is on the average unevenly distributed—if its average value varies in a direction normal to the mean motion—then the balanced turbulent transfer of fluid across the planes causes a diffusive transport or “flow” of this property, usually referred to as a flux, in the direction of decreasing average value. This kind of transport is called turbulent diffusion. • 4.5: Structure of Turbulent Boundary Layers In the following section are some of the most important facts and observations on the turbulence structure of turbulent boundary layers—with steady uniform flow down a plane as a reference case, but the differences between this and other kinds of boundary-layer flow lie only in minor details and not in important effects. • 4.6: Flow Resistance This section takes account of what is known about the mutual forces exerted between a turbulent flow and its solid boundary. You have already seen that flow of real fluid past a solid boundary exerts a force on that boundary, and the boundary must exert an equal and opposite force on the flowing fluid. It is thus immaterial whether you think in terms of resistance to flow or drag on the boundary. • 4.7: Velocity Profiles You have already seen that the profile of time-average local fluid velocity from the bottom to the surface in turbulent flow down a plane is much blunter over most of the flow depth than the corresponding parabolic profile for laminar flow. This is the place to amplify and quantify the treatment of velocity profiles in turbulent boundary-layer flows. • 4.8: More on the Structure of Turbulent Boundary Layers- Coherent Structures in Turbulent Shear Flow
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8399102091789246, "perplexity": 650.634403776754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00558.warc.gz"}
https://www.aanda.org/articles/aa/full/2001/24/aah2666/aah2666.right.html
A&A 372, 945-951 (2001) DOI: 10.1051/0004-6361:20010549 ## UBVRI-polarimetry of the AM Herculis star RX J0203.8+2959 S. Katajainen1 - F. Scaltriti3 - V. Piirola1 - H. J. Lehto1,2 - E. Anderlucci3 1 - Tuorla Observatory, Väisäläntie 20, 21500 Piikkiö, Finland 2 - Department of Physics, 21400, Turku University, Finland 3 - Osservatorio Astronomico di Torino, 10025 Pino Torinese, Italy Received 15 December 1999 / Accepted 9 April 2001 Abstract We present results of the first simultaneous UBVRI photopolarimetric observations of the long-period (4.6 hrs) AM Herculis system RX J0203.8+2959. The observed circular polarization shows both negative and positive polarization, from to in the B-band. A rise of the positive circular polarization is observed near the same orbital phase as a change in the HR1 hardness ratio from soft to hard in the earlier ROSAT X-ray observations. The negative circular polarization is observed near the orbital phases where a soft X-ray component was seen in the ROSAT observations. Linear polarization pulses coincide with the sign reversal of the circular polarization, and also with changes in the X-ray hardness ratio in the earlier ROSAT data. The observed polarization variations favour two-pole accretion. Least-squares fits to the position angle curves during the linear pulses suggest a high orbital inclination, with estimates from to = 73. The UBVRI light curves show smooth variations with a broad maximum and minimum. RX J0203.8+2959 was in an intermediate accretion state during the observations in November 1998 ( V= 15.7 - 16.7). The observed flux and circular polarization variations are reproduced by using constant-temperature cyclotron emission models having two accretion regions, with parameters , and colatitude for the negative pole, and , and for the main accreting positive pole, and assuming orbital inclination . Key words: stars: magnetic fields - novae, cataclysmic variables - stars: individual: RX J0203.8+2959 - accretion, accretion disks - polarization ### 1 Introduction The object RX J0203.8+2959 belongs to a sample of very soft X-ray sources in the ROSAT All-Sky Survey (RASS). It was preliminarily identified as an AM Herculis star by Beuermann & Thomas (1993). Schwarz et al. (1998) confirmed that RX J0203.8+2959 has properties typical for an AM Herculis star (a subclass of cataclysmic variables, CVs), with an orbital period of 4.6 hrs and brightness variations between V=18-15.5. Schwarz et al. (1998) detected cyclotron emission harmonics and estimated the magnetic field strength  MG. The orbital period of RX J0203.8+2959 is one of the longest among all the AM Herculis stars; only two systems, V1309 Ori (7.98 hrs) and V895 Cen (4.76 hrs), have longer orbital periods. Most of the AM Herculis stars have periods below the "period gap", which occurs between 2 and 3 hours. Being one of the longest-period AM Herculis systems, RX J0203.8+2959 is an important object for studying the synchronization process between the primary and the secondary and the accretion mechanism in the long-period (  hrs) AM Herculis stars. Figure 1: Simultaneous (UBVRI) light curves of RX J0203.8+2959 between the nights 13/14 and 19/20 November, 1998. Each point represents a single photometric measurement. ### 2 Observations Our observations were carried out during 7 nights, from 13/14 to 19/20 November 1998 (JD 2451131 - 2451137), with the Turin photopolarimeter at the 2.15 metre telescope of Complejo Astronomico El Leoncito, CASLEO, in Argentina (latitude ).
The multichannel double image chopping instrument, with dichroic filters to split the light into five spectral regions for photomultipliers, follows the design principles of Piirola (1973, 1988). It provides simultaneous circular and linear polarimetry in the U-, B-, V-, R- and I-bands, as well as photometry, in λ/4-mode. The efficiency for circular polarimetry is 70%, and 50% for linear polarimetry, with the λ/4-retarder, and 100% for linear polarimetry with the λ/2-retarder. During the nights of 15/16, 16/17 and 18/19 November, 1998, the instrument was used in λ/2-mode for maximum efficiency for the linear polarization data, and on the other nights (13/14, 14/15, 17/18 and 19/20 November, 1998) λ/4-mode was used. The sky background polarization is directly eliminated by using a calcite plate as polarizing beam splitter. The instrumental polarization and the zero-point of the position angle are determined through observations of high and low polarization standard stars. The data have been folded in Figs. 1-3 by using the ephemeris determined from the V-band minima by Schwarz et al. (1998): (1) #### 2.1 Photometric UBVRI-observations Photometric UBVRI data were obtained during 7 nights (from 13/14 to 19/20 November, 1998) with a time resolution of 26 s. Figure 1 presents simultaneous light curves (UBVRI) folded over the 4.6 hr orbital period. The light curves in Fig. 1 have been plotted twice (from phase to ) to illustrate the variations over the orbital period. The brightness varies between magnitudes 16 and 17 in the B- and V-bands. The range of variations is about one magnitude in all bands. The brightness level in the medium accretion state is V=17-16 and in the high state V=16-15, according to Schwarz et al. (1998). The broad maxima in the U- and the B-bands occur between the orbital phases , whereas the V-, R-, and the I-bands have their maximum 0.3 phase later ( ). The minima in the light curves occur between the phases in the U- and the B-bands, whereas the V-, R-, and I-bands have their minima centered at the phase . Schwarz et al. (1998) also reported phase-dependent minima: during their observations the I-band light curve minimum lagged the minimum in the B-band by 0.12 phase units. Small variability in the shape of the light curves from night to night, accompanied by some flickering, complicates accurate determination of the times of minima. The predicted timing errors of the ephemeris given by Schwarz et al. (1998) are, in November 1998, 10 min (0.04 phase units). Due to the Northern declination, the maximum duration of the nightly runs was slightly greater than 4 hours (almost one complete cycle), and the values of the air masses were between 2.1 and 2.7. We estimate that the accuracy of our photometry is 0.04 mag for the UBVR bands and 0.08 mag for the I-band. The full average spread of the observations in the light curves is: 0.45 mag (U), 0.30 (B), 0.30 (V), 0.27 (R), 0.55 (I). Even though we consider some uncertainty arising from the estimate of the extinction coefficients (due to the low altitude of the object), we believe that the observed spread in the curves reflects real changes in the physical phenomena in the source. #### 2.2 Polarimetry The polarimetry data, plotted from orbital phase to , are shown in Fig. 2 (circular) and Fig. 3 (linear). Individual observations show only small variations from night to night, and thus combining the data from the separate nights and averaging into phase bins over the orbital cycle is appropriate in order to increase the S/N ratio. Circular polarimetry (Fig.
2) shows both negative and positive polarization. Positive circular polarization is observed in the orbital phase interval in the B-band and between in the V-, R- and I-bands. The circular polarization reaches +7% near the phase in the B-band, +6% in the V- and I-bands, and +5% in the U- and R-bands. The negative circular polarization reaches in the B-band (), in the I-band (), in the V- and (quite marginally) in the R-band (). In the U-band the negative circular polarization is almost negligible. Figure 3 shows the phase-binned linear polarization data, computed by vectorially averaging the individual observations from the B-, V-, R-, and I-bands in order to increase the S/N ratio. The U-band data are not combined with the data from the other wavelengths due to the higher noise level in this band. The degree of the linear polarization is low: 3% at maximum, and during most of the orbital cycle the polarization is less than 1%. Figure 2: Simultaneous circular polarization curves (UBVRI) of RX J0203.8+2959. Data obtained during 7 nights between 13/14 and 19/20 November 1998 have been averaged into 40 phase bins. Error bars correspond to the standard error of the mean ( ). The linear polarization shows pulses with a defined position angle between the orbital phases and . Both of these peaks coincide with the change of the sign of the circular polarization. A pulse-like feature is also seen near the phase , but there are no zero crossings in the circular polarization near the same orbital phase. The linear polarization drops to a low level at the phase . A similar drop in polarization is also seen in the circular data in the U-, B-, and V-bands, at nearly the same phase. This may be due to some unpolarized material (an accretion stream?) crossing our line of sight to the cyclotron emission area between orbital phases . Earlier ROSAT X-ray observations by Schwarz et al. (1998) showed that during the short interval near photometric phase the HRI detectors had almost zero counts. The optical light curves (Fig. 1) do not reveal any drop in the flux level during the same orbital phase, however. Comparing our polarization data to the earlier ROSAT PSPC and HRI data and the HR1 hardness ratio of Schwarz et al. (1998) (Fig. 16 in their paper; the HR1 hardness ratio is defined as (H-S)/(H+S), where H and S are the counts above and below 0.4 keV), it can be noticed that the soft X-ray component ( ) is observed between the photometric phases and . The HR1 ratio changes rapidly after the phase from -0.8 to 0.0 and reaches its maximum, 0.5, at the phases , at the same time as the positive circular polarization has its maximum peak value. The soft X-ray component thus either disappears, or is strongly reduced, when a linear pulse is seen and the circular polarization changes its sign. After photometric phase , the HR1 hardness ratio drops from hard to soft. This coincides with the linear polarization pulse observed between the phases , and may be connected to the disappearance of the main emission region (positive pole). The two X-ray components, soft and hard, seen at different orbital phases and coinciding with a change from negative to positive circular polarization, suggest two-pole accretion. Figure 3: Simultaneous linear polarization (top) and position angle (bottom) curves of RX J0203.8+2959, calculated by vectorially averaging the individual observations of the B-, V-, R-, and I-bands. Data have been averaged into 28 phase bins. Error bars correspond to the standard error of the mean ( ).
The continuous line is a Fourier fit with 5 harmonics of the orbital period. The wavelength dependences of the negative and the positive circular polarization are also slightly different. Furthermore, the positive circular polarization is observed when the V-, R-, and I-bands have their maxima in brightness, whereas the negative circular polarization is seen during the phases ( ) when the U- and B-light curves have their maxima. This also favours a model with two accreting poles. ### 3 The system geometry The direction of the linearly polarized component of the cyclotron emission is perpendicular to the field. This information can be used for computing the inclination of the rotation axis with respect to our line of sight in the dipole model approximation (Meggit & Wickramasinghe 1982; Liebert et al. 1982): (2) We have applied a standard least-squares fit method to calculate the rate of change of the position angle during the two pulses, in order to estimate the orbital inclination i. Fits in the phase interval , weighted according to the inverse square of the standard error of each normal point, give inclination = . The broad pulse can be connected either with the disappearance of the negative pole behind the limb of the white dwarf or with the appearance of the positive pole. This pulse may also have been affected by effects arising from a possible overlap of the linear polarization from another accretion pole, which may also increase the uncertainties in our inclination estimates. During the other linear polarization pulse the least-squares fit (weighted according to errors) to the position angle data between orbital phases gives . The observed variations in position angle are noisy, but not randomly scattered. They suggest (within the given error estimates) that a high orbital inclination is a more probable solution than a low inclination. The evidence of two accreting poles (see Sect. 2.2) also supports a high inclination, according to Schwarz et al. (1998). Assuming orbital inclination , the lack of self-eclipses in the light curves would require a colatitude for the main accreting region. However, since more than one region is accreting, the main accreting region may have self-eclipses if another emission region is seen during that time interval. By studying the circular polarization curves (Fig. 2) it is obvious that a possible self-eclipse of the main accretion region has a quite short duration, because positive circular polarization is observed during most of the orbital cycle. By using the formula given by Bailey & Axon (1981): (3) where is the length of the phase interval during which the sign of the circular polarization is reversed and the active pole is on the far side of the white dwarf, we can estimate . The circular polarization data (Fig. 2) show values of 0.4 (B-band) and 0.2 (V-, R-, and I-bands) for , which would then result in values in the range from 50 to 24 for , and correspondingly, for the negative pole, values in the range from 155 to . ### 4 Comparison with cyclotron model computations We have reproduced the observed light curves and polarization variations by using the numerical modelling codes for cyclotron emission described in Piirola et al. (1987a,b, 1990, 1993). For the cyclotron fluxes and the polarization dependence on the viewing angle , the angle between the line of sight and the magnetic field, we have used the model grids given by Wickramasinghe & Meggit (1985) for a constant high temperature  keV (originally in 10-degree divisions), which we have further interpolated over the whole range of viewing angle in one-degree divisions.
These models predict the intensity of the cyclotron emission and the linear and circular polarization from a point source for different viewing angles, for each harmonic, for a given optical depth parameter and electron shock temperature . The height of the emission region(s) is assumed to be very small compared to the radius of the white dwarf. The observations of ST LMi (Cropper 1986), BL Hyi (Piirola et al. 1987a), and VV Pup (Piirola et al. 1990), for example, have shown that the height of the emitting regions, , is very small in units of the white dwarf radius, 0.01  . The orbital inclination is adopted as the value , which is near the estimates from the rate of change of the position angle during the two pulses (Sect. 3). The colatitudes and longitudes of the emission regions, and the extensions of the emission regions in longitude and in colatitude, are kept as free parameters. The amount of polarization given by the constant-temperature Wickramasinghe & Meggit (1985) models is often too high (see Cropper 1985; Piirola et al. 1987a,b, 1990). The inclusion of an unpolarized background emission is necessary in order to bring the polarization values into better agreement with the observations. We have adopted a colour dependence of the background comparable to the relative flux ratios near in each band (lowest states during the orbital cycle) to separate unpolarized background effects. Schwarz et al. (1998) found that the faint and the bright phases during the orbital cycle in RX J0203.8+2959 were dominated by a blue continuum, to which at least three separate components may contribute: optically thick cyclotron emission from the accretion region, photospheric radiation of the white dwarf itself, and possibly a large fraction of reprocessed continuum radiation from the line-emitting regions. Schwarz et al. (1998) also estimated that only 25% of the total visual light observed during the high state originates from the measured cyclotron flux. In our modelling, the adopted background fluxes in the B-, V-, and R-bands are 3-4 times larger than the peak emission from the pure cyclotron flux (depending on wavelength). The calculations are made by dividing the extended emission arcs into three separate strips, each consisting of 20 equidistant points along a line on the surface of the white dwarf, for which the Stokes parameters Q, U, V and I are calculated independently, as an approximation for regions where volume elements do not obscure each other; their contributions to the total polarization and flux are summed together as a function of orbital phase. The geometric model which gives acceptable fits to our data for different temperatures , plasma parameter values , and different harmonic numbers , using the value for the orbital inclination, contains two extended emission regions (separated in longitude): a positive (main) circular polarization region between colatitudes and 30, and the other region (negative polarization) between and 140. Figure 4: Calculated circular polarization curves from orbital phase to , using parameters = 70, two extended emission regions between colatitudes and , separated by 90 in longitude and both consisting of three strips, centered at colatitudes 20, 30, and 40 (positive region) and 120, 130, and 140 (negative region). Values , and harmonic numbers from 8 (top), 7, 6, 5 and 4 (bottom) are used for the dominant positive circular polarization region. Correspondingly, values for from 7 (top), 6, 5, 4, and 3 (bottom), , are applied for the negative circular polarization emission region.
The left panel shows curves for the individual emission strips and the right panel the combined polarization curves. The general pattern of the circular polarization variations over the orbital cycle (Fig. 2) is fairly well reproduced by the model, using cyclotron model parameters  keV, , and the harmonics 8, 7, 6, 5, and 4 in the UBVRI bands, respectively (Fig. 4), for the main accretion region. The low S/N ratio in the U and I bands (Fig. 2) does not allow any detailed comparison of the phase-dependent variations in U and I, but the amplitude level in these bands is roughly in accordance with the predictions from the model. These harmonics correspond to a magnetic field of about 32-33 MG. The relative amount of negative circular polarization is largest in the blue, which points to the possibility of a stronger magnetic field in the second, less active, pole. Our best fit, with  keV and the harmonics (7), 6, 5, 4, and (3) in the (U)BVR(I) bands, respectively, for the negative circular polarization, corresponds to a magnetic field of about 40-41 MG. These field estimates are in reasonable agreement with the value  MG obtained by Schwarz et al. (1998). Tight constraints on the field strength cannot be set based on our cyclotron model fits, but having varied a large range of harmonics with different temperatures and plasma parameters, we can say that harmonic numbers 3 or lower are not able to predict well the variations observed in the I-band, while harmonics clearly higher than 8 or 10 do not seem to give high enough polarization in the B-band, setting the field range at about 25-43 MG. Confirmed two-pole accretors, like VV Pup (Wickramasinghe et al. 1989), UZ For (Schwope et al. 1990), DP Leo (Cropper & Wickramasinghe 1993) and QS Tel (Schwope et al. 1995), have shown significantly different fields in the two accreting regions. In some cases, nearly a factor of 2 difference has been observed between the magnetic field strengths in the different poles. The more strongly accreting pole usually has the weaker magnetic field. The models predict brightness variations (Fig. 5) which are about three times smaller than observed and not as smooth. If we reduce the amount of unpolarized background, the relative flux variations increase, but then the polarization predicted by the model will be too high. The observed linear polarization pulses at phases 0.5 and 0.95 are well reproduced by the model (Fig. 6). The negative pole is near the limb at the phase 0.5. The minimum in the brightness near the phase is due to both emission regions being self-eclipsed by the white dwarf limb. The length of the extended accretion regions is not very strongly constrained. The fit to the observed smooth and asymmetric variations in circular polarization and flux using small, very short accretion regions is worse compared to fitting with longitudinally extended emission regions. The short emission regions also produce a short-term, sharp linear pulse with too high a peak value, whereas the longer emission regions give flatter and weaker linear polarization peaks (consistent with the observations) when the angle between the magnetic field and our line of sight, , is close to . Figure 5: The calculated light curves, using the same parameters as in Fig. 4 (for the circular polarization). The best fit to the observed data is achieved by assuming relatively long emission regions (tens of degrees in longitude). The length of the main emission region in our model shown in Figs.
4, 5, and 6 (60 in longitude) corresponds to in units of the white dwarf radius. The asymmetry seen in our light (Fig. 1) and polarization curves (Fig. 2) of RX J0203.8+2959 also suggests that the cyclotron emission regions are longitudinally extended. This is in accordance with earlier modelling of cyclotron emission in AM Herculis stars. The assumption that the accretion takes place over a small spot near the magnetic pole in a centered dipole field has been shown to be too simplified. Longitudinally extended, arc-shaped accretion region(s), slightly offset from the magnetic pole, have been proposed to explain the circular polarization curves in AM Herculis stars; see e.g. Wickramasinghe & Ferrario (1988) and Ferrario & Wickramasinghe (1990). Figure 6: Calculated linear polarization and position angle curves for harmonic number for the positive pole and harmonic for the negative pole. The other parameters are the same as for the calculations in Figs. 4 and 5. Observed polarization and brightness variations have been successfully modelled by using extended emission regions, for example, in ST LMi (Ferrario et al. 1989; Ferrario et al. 1993; Cropper & Horne 1994), DP Leo (Schmidt 1988; Cropper & Wickramasinghe 1993), and V347 Pav (Bailey et al. 1995; Ramsay et al. 1996; Potter et al. 2000). The extension of the emission regions has been estimated to be, for example: (Wickramasinghe 1988), (UZ For in the high state, Schwope et al. 1990), (DP Leo, Schmidt 1988), or extended from 15 to 100 in longitude (V834 Cen, Cropper 1989; ST LMi, Cropper & Horne 1994; VV Pup, Piirola et al. 1990; DP Leo, Cropper & Wickramasinghe 1993; EV UMa, Hakala et al. 1994). Recent X-ray observations have also shown that the hard X-ray emission regions in the systems BL Hyi (Wolff et al. 1999) and V2301 Oph (Steinman-Cameron et al. 1999) have an extension of 45-50 in longitude. Although our model is able to reproduce the observed variations in polarization acceptably well (Figs. 4, 5, and 6), we cannot assert that our solution is unique. The model fits depend on several parameters, such as the orbital inclination (the data favour a high inclination, see Sect. 2.2) and the choice of the correct electron shock temperature and plasma parameter (the values kT = 20 keV are adopted from the Schwarz et al. 1998 fits to the ROSAT data). ### 5 Conclusions We have observed RX J0203.8+2959 and collected a large amount of circular and linear polarimetric and photometric data covering the orbital phase well. A rise of positive circular polarization is observed near the same orbital phases as the change in the X-ray HR1 hardness ratio during the ROSAT observations in 1994. Negative circular polarization is observed at the same orbital phase at which the soft X-ray component is seen (Schwarz et al. 1998). The linear polarization shows pulses at the orbital phases where the sign of the circular polarization is reversed. Two pulses coincide with the change in the X-ray spectrum. The observed polarization variations favour two-pole accretion instead of a one-pole accretion model. The position angle variations suggest relatively high values for the inclination angle of the white dwarf spin axis ( ). Acknowledgements This work was supported by the Finnish Academy of Sciences and Letters (Academia Scientiarum Fennica). We thank the referee for profound comments, which have improved our manuscript. We also thank Dr. C. Flynn for reading the manuscript.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9085397720336914, "perplexity": 2569.751023635205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572174.8/warc/CC-MAIN-20220815115129-20220815145129-00653.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=0422562
MathSciNet bibliographic data MR422562 (54 #10548) 28A40 (60B05) Heidemann, Dirk On the extension of cylinder measures to $\tau$-smooth measures in linear spaces. Proc. Amer. Math. Soc. 61 (1976), no. 1, 59–65 (1977).
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797462224960327, "perplexity": 4086.221295120608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462202.69/warc/CC-MAIN-20150226074102-00240-ip-10-28-5-156.ec2.internal.warc.gz"}
https://dkfzsearch.kobv.de/simpleSearch.do?query=journal%3A%7BConstructive+approximation%7D&plv=2
• 1 Electronic Resource Springer Constructive approximation 11 (1995), S. 85-106 ISSN: 1432-0940 Keywords: Primary 41A20, 41A21 ; Secondary 41A60, 30C15 ; Padé approximation ; Incomplete rationals ; Incomplete polynomials ; Steepest descent ; Zeros ; Poles Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract We consider rational approximations of the form $$\left\{ {(1 + z)^{\alpha n + 1} \frac{{p_{cn} (z)}}{{q_n (z)}}} \right\}$$ in certain natural regions in the complex plane where p_cn and q_n are polynomials of degree cn and n, respectively. In particular we construct natural maximal regions (as a function of α and c) where the collection of such rational functions is dense in the analytical functions. So from this point of view we have rather complete analog theorems to the results concerning incomplete polynomials on an interval. The analysis depends on an examination of the zeros and poles of the Padé approximants to (1+z)^{αn+1}. This is effected by an asymptotic analysis of certain integrals. In this sense it mirrors the well-known results of Saff and Varga on the zeros and poles of the Padé approximant to exp. Results that, in large measure, we recover as a limiting case. In order to make the asymptotic analysis as painless as possible we prove a fairly general result on the behavior, in n, of integrals of the form $$\int_0^1 {[t(1 - t)f_z (t)]^n {\text{ }}dt,}$$ where f_z(t) is analytic in z and a polynomial in t. From this we can and do analyze automatically (by computer) the limit curves and regions that we need. Type of Medium: Electronic Resource • 2 Electronic Resource Springer Constructive approximation 11 (1995), S. 439-453 ISSN: 1432-0940 Keywords: 41A15 ; 41A36 ; 41A63 ; Bernstein-Schoenberg operator ; simplex spline ; asymptotic error Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract We study an approximation operator of Bernstein-Schoenberg type which employs multivariate B-splines and was introduced in [2]. By first considering its action on convex functions, we derive a general formula of Voronovskaya type for the asymptotic error as the degree tends to infinity. Type of Medium: Electronic Resource • 3 Electronic Resource Springer Constructive approximation 11 (1995), S. 423-438 ISSN: 1432-0940 Keywords: 41A63 ; 41A25 ; 42B ; 65D10 ; Approximation order ; Fourier transform ; Quasi-interpolation ; Cardinal interpolation ; Shift-invariant space Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract A Fourier analysis approach is taken to investigate the approximation order of scaled versions of certain linear operators into shift-invariant subspaces of L_2(R^d). Quasi-interpolants and cardinal interpolants are special operators of this type, and we give a complete characterization of the order in terms of some type of ellipticity condition for a related function. We apply these results by showing that the L_2-approximation order of a closed shift-invariant subspace can often be realized by such an operator. Type of Medium: Electronic Resource • 4 Electronic Resource Springer Constructive approximation 11 (1995), S.
455-476 ISSN: 1432-0940 Keywords: 41A21 ; 65F05 ; (matrix) Padé approximation ; (vector) Padé-Hermite approximation ; (block) Hankel systems of equations ; look-ahead ; perfect points Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract In this paper, we present an algorithm to compute vector Padé-Hermite approximants along a sequence of perfect points in the vector Padé-Hermite table. We show the connection to matrix Padé approximants. The algorithm is used to compute the solution of a block Hankel system of linear equations. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 5 Electronic Resource Springer Constructive approximation 11 (1995), S. 513-515 ISSN: 1432-0940 Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 6 Electronic Resource Springer Constructive approximation 11 (1995), S. 477-512 ISSN: 1432-0940 Keywords: 42C05 ; 33D15 ; Orthogonal polynomials ; Orthogonal Laurent polynomials ; Recurrence relation ; Hahn-Extonq-Bessel function ; q-Lommel polynomials ; Al-Salam-Chihara polynomials ; Continuousq-Hermite polynomials ; Chebyshev polynomials ; Perturbation Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract Laurent polynomials related to the Hahn-Extonq-Bessel function, which areq-analogues of the Lommel polynomials, have been introduced by Koelink and Swarttouw. The explicit strong moment functional with respect to which the Laurentq-Lommel polynomials are orthogonal is given. The strong moment functional gives rise to two positive definite moment functionals. For the corresponding sets of orthogonal polynomials, the orthogonality measure is determined using the three-term recurrence relation as a starting point. The relation between Chebyshev polynomials of the second kind and the Laurentq-Lommel polynomials and related functions is used to obtain estimates for the latter. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 7 Electronic Resource Springer Constructive approximation 12 (1996), S. 1-30 ISSN: 1432-0940 Keywords: Primary 28A75, 42B10, 46L55 ; Secondary 05B45 ; Iterated function system ; Affine maps ; Fractional measure ; Harmonic analysis ; Hilbert space ; Operator algebras Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract We consider affine systems inR n constructed from a given integral invertible and expansive matrixR, and a finite setB of translates,σ bx:=R–1x+b; the corresponding measure μ onR n is a probability measure fixed by the self-similarity $$\mu = \left| B \right|^{ - 1} \sum\nolimits_{b \in B} {\mu o\sigma _b^{ - 1} }$$ . There are twoa priori candidates for an associated orthogonal harmonic analysis: (i) the existence of some subset Λ inR n such that the exponentials {eiλ·x}Λ form anorthogonal basis forL 2(μ); and (ii) the existence of a certaindual pair of representations of theC *-algebraO N wheren is the cardinality of the setB. (For eachN, theC *-algebraO N is known to be simple; it is also called the Cuntz algebra.) We show that, in the “typical” fractal case, the naive version (i) must be rejected; typically the orthogonal exponentials inL 2(μ) fail to span a dense subspace. Instead we show that theC *-algebraic version of an orthogonal harmonic analysis, namely (ii), is a natural substitute. 
It turns out that this version is still based on exponentialse iλ·x, but in a more indirect way. (See details in Section 5 below.) Our main result concerns the intrinsic geometric features of affine systems, based onR andB, such that μ has theC *-algebra property (ii). Specifically, we show that μ has an orthogonal harmonic analysis (in the sense (ii)) if the system (R, B) satisfies some specific symmetry conditions (which are geometric in nature). Our conditions for (ii) are stated in terms of two pieces of data: (a) aunitary generalized Hadamard matrix, and (b) a certainsystem of lattices which must exist and, at the same time, be compatible with the Hadamard matrix. A partial converse to this result is also given. Several examples are calculated, and a new maximality condition for exponentials is identified. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 8 Electronic Resource Springer Constructive approximation 12 (1996), S. 67-94 ISSN: 1432-0940 Keywords: 41A10 ; 41A25 ; 41A28 ; Simultaneous approximation ; Timan-Gopengauz type estimates ; Ditzian-Totik moduli of smoothness Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract Some estimates for simultaneous polynomial approximation of a function and its derivatives are obtained. These estimates are exact in a certain sense. In particular, the following result is derived as a corollary: Forf∈C r[−1,1],m∈N, and anyn≥max{m+r−1, 2r+1}, an algebraic polynomialP n of degree ≤n exists that satisfies $$\left| {f^{\left( k \right)} \left( x \right) - P_n^{\left( k \right)} \left( {f,x} \right)} \right| \leqslant C\left( {r,m} \right)\Gamma _{nrmk} \left( x \right)^{r - k} \omega ^m \left( {f^{\left( r \right)} ,\Gamma _{nrmk} \left( x \right)} \right),$$ for 0≤k≤r andx ∈ [−1,1], where ωυ(f(k),δ) denotes the usual vth modulus of smoothness off (k), and Moreover, for no 0≤k≤r can (1−x 2)( r−k+1)/(r−k+m)(1/n2)(m−1)/(r−k+m) be replaced by (1-x2)αkn2αk-2, with αk〉(r-k+a)/(r-k+m). Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 9 Electronic Resource Springer Constructive approximation 12 (1996), S. 95-110 ISSN: 1432-0940 Keywords: 41A05 ; 41A10 ; 65D05 ; Wavelets ; Chebyshev polynomials ; Interpolation ; Reconstruction ; Decomposition Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract We investigate a polynomial wavelet decomposition of theL 2(−1, 1)-space with Chebyshev weight, where the wavelets fulfill certain interpolatory conditions. For this approach we obtain the two-scale relations and decomposition formulas. Dual functions and Riesz-stability are discussed. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... • 10 Electronic Resource Springer Constructive approximation 12 (1996), S. 31-65 ISSN: 1432-0940 Keywords: Primary 41A46, 68U10 ; Secondary 41A99, 54D35 ; Hausdorff distance ; Hausdorff continuity ; Completed graph ; Pixel function ; Image compression ; Fractal transform operator Source: Springer Online Journal Archives 1860-2000 Topics: Mathematics Notes: Abstract The problem of finding appropriate mathematical objects to model images is considered. Using the notion of acompleted graph of a bounded function, which is a closed and bounded point set in the three-dimensional Euclidean spaceR 3, and exploring theHausdorff distance between these point sets, a metric spaceIM D of functions is defined. 
The main purpose is to show that the functionsf∈IM D, defined on the squareD=[0,1]2, are appropriate mathematical models of real world images. The properties of the metric spaceIM D are studied and methods of approximation for the purpose of image compression are presented. The metric spaceIM D contains the so-calledpixel functions which are produced through digitizing images. It is proved that every functionf∈IM D may be digitized and represented by a pixel functionp n, withn pixels, in such a way that the distance betweenf andp n is no greater than 2n −1/2. It is advocated that the Hausdorff distance is the most natural one to measure the difference between two pixel representations of a given image. This gives a natural mathematical measure of the quality of the compression produced through different methods. Type of Medium: Electronic Resource Signatur Availability Others were also interested in ... Close ⊗
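A concrete feel for the pixel-function metric in entry 10 can be had with a few lines of code. The sketch below is not from the paper: it approximates the completed graph of an image by the 3-D point set of its pixel centers and intensities, and compares two such sets with the symmetric Hausdorff distance; the function names and test data are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def pixel_point_set(img):
    """Map an n-by-n grayscale image to the 3-D point set of its
    pixel-center coordinates (scaled to [0, 1]^2) and intensities."""
    n = img.shape[0]
    xs, ys = np.meshgrid((np.arange(n) + 0.5) / n, (np.arange(n) + 0.5) / n)
    return np.column_stack([xs.ravel(), ys.ravel(), img.ravel()])

def hausdorff(img_a, img_b):
    """Symmetric Hausdorff distance between two pixel representations."""
    a, b = pixel_point_set(img_a), pixel_point_set(img_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

rng = np.random.default_rng(0)
f = rng.random((32, 32))                                     # a "pixel function"
g = np.clip(f + 0.05 * rng.standard_normal(f.shape), 0, 1)   # mild perturbation
print(hausdorff(f, g))   # small for nearby images, 0 for identical ones
```

The distance is small for a mild perturbation, which is the continuity property the paper exploits when comparing compressed and original images.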
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9589042067527771, "perplexity": 3184.6459744795116}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540508599.52/warc/CC-MAIN-20191208095535-20191208123535-00114.warc.gz"}
http://www.boundaryvalueproblems.com/content/2013/1/277
Research Article

# Attractions and repulsions of parallel plates partially immersed in a liquid bath: III

Rajat Bhatnagar1 and Robert Finn2*

Author Affiliations: 1 Department of Biochemistry and Biophysics, California Institute for Quantitative Biosciences, University of California, San Francisco, CA 94143, USA. 2 Mathematics Department, Stanford University, Stanford, CA 94305-2125, USA.

Boundary Value Problems 2013, 2013:277. doi:10.1186/1687-2770-2013-277

Received: 30 September 2013. Accepted: 30 September 2013. Published: 13 December 2013.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

We continue earlier studies on the indicated configuration, improving previous estimates, providing explicit expressions for the relevant forces and a formal algorithmic procedure for their calculation, and sharpening and extending the predictions for qualitative distinctions among the varying types of behavior that can occur. We include graphical representations of some of the more significant relations, as an aid to interpretation and for eventual design of experiments to test the physical relevance of the new material.

### 1 Background remarks

The present work is a continuation of [1] and of [2], where the behavior of the solutions of the capillary equation for the surface height of liquid in an infinite tank is described in terms of the contact angles of the liquid with two infinite parallel plates that are partially immersed into the liquid and held rigidly. We describe here an algorithmic procedure for explicitly calculating the forces of attraction or repulsion between the two plates, depending on the respective contact angles with the fluid on the sides of the plates that face each other. As pointed out in [1] and in [2], the net forces in question do not depend on the contact angles at the triple interfaces on the opposite (outer) sides of the plates; that is a consequence of the hypothesis that the fluid surface extends to infinity in the two directions exterior to the plate configuration and orthogonal to it. We may thus concentrate attention on the integral curves for the fluid height on a section of the channel joining the plates; these curves are determined as solutions of the 'capillary equation' (1.1), where ψ is the inclination of the curve with the horizontal and u is the height above the level at infinity. This equation asserts geometrically that the planar curvature of the interface is proportional to the height above the (uniquely determined) level at infinite distance from the plates (see, e.g., Theorem 2.1 of [2]). We assume in this work that the proportionality factor κ > 0, as occurs for a non-zero gravity acceleration g directed downward toward the fluid. Physically, κ = ρg/σ, where ρ is the density change across the interface and σ is the surface tension arising from the fluid/fluid interface. We are interested in categorizing the ranges of qualitatively distinct behavior that can occur. In accord with engineering practice and in cognizance of relevant uniqueness properties, the distinctions are best displayed in terms of non-dimensional parameters: setting ξ = x/a and B = κa², where 2a is the distance between the plates, (1.1) takes the non-dimensional form (1.1+). In the non-dimensional coordinates, the plates are always two units apart.
The physical concept of plate separation is replaced by the magnitude B. We proved in [1] that, for arbitrary B, there is a unique solution of (1.1) in the interval between the plates achieving prescribed inclinations (equivalently, contact angles) on the respective plates. Correspondingly, there is a unique solution of (1.1+) for arbitrary B and contact angles. We obtain an explicit representation in terms of ψ as parameter by rewriting (1.1+) in the form (1.2a), (1.2b). The latter relation separates, (1.3), from which we obtain (1.4) for any solution of (1.3) with given inclination at a given height. From (1.2a) it now follows that, for any two points on the solution (1.4), the relation (1.5) holds on any interval on which ψ is monotonic in ξ. Note that (1.5) relates three distinct points on the solution curve, any two of which may coincide. We shall have to take pains in each instance to ensure that the correct branches for the roots are used. One sees easily that, aside from the trivial solution of (1.1+), inflections of a solution curve occur exactly at crossing points of that curve with the ξ-axis, that at most one such point occurs on any solution, and that the sense of monotonicity of ψ reverses at every such point. Thus, when such a crossing occurs, the integral in (1.5) must be split into two parts with the senses of integration reversed. We observe that only a single inflection can occur on any solution curve; see the assertions (1) to (5) in Sec. II of [1]. Solutions in the infinite intervals exterior to the plates are uniquely determined by the contact angle on the plate facing the respective interval and by the requirement of being defined in an infinite interval; see Theorem 2.1 in [2]. These solutions are asymptotic to the ξ-axis but do not contact it; they admit the representations (1.6). Here the reference position is that of the plate, and the reference inclination is the inclination on that plate. Note that four distinct solutions appear, depending on the signs of the non-constant roots. The constant root is taken as positive. In Figures 1, 2, 3, particular integral curves T, I, II, III, IV, IV0, and V are sketched for the subset of solutions of (1.1) in the interval between the plates that meet the first plate in a prescribed angle, for successively decreasing plate separations; a convenient value of that angle has been chosen. In the figures, some of the curves are extended beyond the plates as solutions, at least to the extent to which they can be represented as graphs. The sketched curves serve as barriers for distinguishing the qualitative structures of general solutions. No two of the curves in this family can cross each other within the interval in which both are graphs, and the regions between adjacent barriers serve to distinguish specific global behaviors.

Figure 1. Large plate separation.

Figure 2. Intermediate plate separation.

Figure 3. Small plate separation.

In the ensuing context, we discuss in detail the specific roles of the indicated curves. For convenience, we provide here a preliminary outline of essential features.

• The curves I and V are determined by (1.6), choosing respectively the positive and negative signs for the non-constant roots. IV0 is the unique curve of the family that meets the first plate at its crossing point with the ξ-axis. These three curves are rigidly attached to the first plate and are independent both of the position of the second plate and of the contact angle there. T is the 'top' barrier, in the sense that there are no higher solution curves in the family. II is the unique curve of the family that meets the second plate at its crossing point with the ξ-axis. III is the symmetric solution, meeting the two plates in supplementary angles.
It crosses the ξ-axis at the midpoint between the plates, independent of the plate separation. IV is the unique curve of the family that meets the second plate in angle π. When the plates are sufficiently far apart (B large enough), neither IV0 nor V extends to the second plate, and IV lies above both these curves between the plates. As the plates come together, a critical separation is attained at which IV passes through the intersection of the first plate with the ξ-axis and can then be shown to coincide with IV0. Figure 1 prevails when the plates exceed this separation, so that IV lies above IV0; we refer to such configurations as large separations. Closing the gap further, IV moves below IV0, remaining at first above V, and the relevant picture becomes Figure 2. Further gap closure leads to a value at which IV and V coincide, again with the common curve extending exactly to the second plate. For still smaller separations, V will lie above IV in the interval, connected to the first plate, in which both curves are graphs, and Figure 2 must be replaced by Figure 3. From (1.6) we find that the critical separation for this change is given by (1.7), using positive roots. The corresponding value for the earlier transition is determined by (3.1). The two crucial values are illustrated in Figure 4.

Figure 4. The two critical half plate separations.

We established in [1] and in [2] that solution curves joining the plates are attracting when the extended curve has a positive minimum or negative maximum. Denoting that extremal height appropriately and the (horizontal) attracting force by F, we are led to a non-dimensional attracting force ℱ, given by (1.8) in units of σ. When the extended curve is asymptotic to the ξ-axis at infinity, there is no force between the plates. The final alternative is that the extended curve meets the axis at some point. When that occurs, the curve is either the trivial solution, or else it crosses the axis in a definite angle at a uniquely determined point, and there will be a repelling force, given by (1.9) in units of σ, tending to separate the plates. We note that an immediate universal bound holds for every repelling configuration.

### 2 Configurations

We consider the family of solutions in the interval between the plates. We have from (1.4) that if the initial height is large enough, the solution will not cross the ξ-axis; the curve will then attain a minimum height at a point where ψ vanishes, and in further continuation the curve becomes vertical at a determined height. If in (1.5) we set ξ to be the position of the second plate, then we find (2.1). For given dimensionless plate separation, relation (2.1) uniquely determines the (positive) height for which the solution meets the second plate in the contact angle and the first plate in the prescribed angle. The value thus found is the highest initial position for which the solution in the family extends to meet the plate. We designate this solution by T. It forms an upper barrier for all solutions in the family that extend as graphs to meet the second plate. The dependence of the plate heights on the separation is illustrated in Figure 5.

Figure 5. The intersection heights with the plates, as functions of the half plate separation.

Referring to any of Figures 1, 2, 3, we see that the two plates together with T and I determine a non-null closed region, topologically a disk, which is simply covered by a subset of solutions in the family, all of which yield attracting forces (or zero force in the unique case of I). We obtain such a region for every choice of 'plate separation'. If the separation exceeds the critical value of (1.7), then for the given data no further attracting solutions can be found; all further solutions in the family will be repelling. But if the separation equals that critical value, then IV and V coincide, providing a negative solution yielding zero force; if it is smaller, then IV moves below V and there is a new region of negative solutions providing attracting forces (Figure 3).
Thus, when the plates are close enough to each other, two complementary regions appear, one of positive solutions and the other of negative solutions, both of which yield attracting forces between the plates.

The curve II is the unique element of the family whose height vanishes on the second plate. The region between I and II is again simply covered by solutions in the family. All these curves lie in the upper half-plane between the plates but intercept the ξ-axis when extended, and thus provide repelling forces. III is the symmetric solution, achieving on the second plate the supplementary contact angle. This curve has special properties, as we shall see below. Within the region between II and III, all curves of the family are repelling; all curves cross the axis, and the sense of monotonicity of ψ in x reverses on crossing, so that special precautions must be taken in the representations. Corresponding comments apply below III for large separations. In that event, there are no solutions in the family below IV, and IV then plays to some extent the role at the bottom that T plays at the top, the adjacent solutions being, however, repelling rather than attracting. If the separation is small enough, a further region of repelling solutions is created, as is the new region of attracting solutions. IV, however, retains its property of being a lower barrier below which there are no elements of the family.

### 3 Barrier curves

All barrier curves have the common inclination on the first plate. They are distinguished by the angles with which they intersect the second plate.

#### 3.1 The barriers T and IV

By definition, for the upper curve T the inclination becomes vertical on the second plate; equation (2.1) determines the corresponding height. The counterpart for negative solutions is the curve IV at the bottom, which needs a bit more discussion. We introduce the further barrier IV0, which meets the first plate at its crossing point with the ξ-axis. This curve becomes vertical at a critical 'dimensionless separation', and we find (3.1), using positive roots. Below this separation, IV lies below IV0, and we may write (3.2), a relation uniquely determining the (negative) height at which IV meets the plate. Above it, IV contains both positive and negative heights, and account must be taken of the change in the sense of its curvature at the crossing point with the ξ-axis, where u = 0. Denoting the inclination at that point appropriately, we obtain, using (1.3) separately on the negative and positive portions of the curve, (3.3), which can be used to determine that inclination. The (negative) and (positive) heights on the plates can then be determined from the analogues (3.4) of (1.4).

#### 3.2 The barriers I and V

These are determined by (1.6), using the appropriate signs for the roots. Note that the reference point is the same for both curves. Under our choice of contact angle, I always extends to meet both plates; V, however, does so only if the separation is small enough.

#### 3.3 The barrier II

II is the particular curve in the family with zero height on the second plate. The crossing angle is determined, as in (3.3), by (3.5), since there is no contribution from below the axis. We then obtain the height on the first plate from the second of relations (3.4).

#### 3.4 The barrier III

III is the symmetric curve with the supplementary contact angle on the second plate. It cuts the ξ-axis at the midpoint between the plates, and thus, by analogy with (3.5), the angle of that intercept can be obtained from (3.6). Again we find the plate heights from the second equation in (3.4); by symmetry, they are opposite in sign. The barrier III has the unique property that, if the contact angle is fixed and the separation decreases, the configuration remains repelling, with III asymptotic to the symmetric linear segment joining the plates.

### 4 Force calculations

We proceed to calculate the forces between the plates in the varying configurations.
In practice, the accessible parameters will generally be the contact angles on the sides of the plates facing each other and the dimensionless plate separation. Other parameters, such as the heights of the contact points with the plates, the height of a local extremum, or the position of the crossing point with the ξ-axis, can be substituted via relations (1.4)-(1.6). Our basic force relations are (1.8) and (1.9), corresponding respectively to the attracting and repelling cases. The results for the varying configurations are illustrated in Figures 6-13.

Figure 6. The crossing angle from (4.2).

Figure 7. The net attracting force, from (4.4); the force vanishes in the limiting configuration.

Figure 8. Attracting forces, negative solutions: plots of (4.5) and (4.6). Solid lines indicate forces at the specified angles (right vertical axis); the dashed line indicates the crossing angle (left vertical axis).

Figure 9. Repelling forces, positive solutions: plots of (4.2), (4.7), (4.8). Solid lines indicate forces at the specified angles (right vertical axis); dashed lines indicate the specified angles (left vertical axis).

Figure 10. Equations (4.7) and (4.9).

Figure 11. Equation (4.10).

Figure 12. Forces on the symmetric curve III.

Figure 13. Plots based on (4.18) and (4.19).

#### 4AP Attracting forces, positive solutions

These are encountered only in the region between T and I (see Section 2). As shown in [1], the dimensionless force is determined, in units of σ, from (4.1); see (1.8). For T the limiting inclination holds. For I, we find using (1.6) the relation (4.2), which uniquely determines the inclination at the crossing with the second plate for any prescribed separation 2a. Thus, for the given data, attracting solutions prevail precisely in the corresponding range of inclinations on the second plate. To calculate the net force between the plates, we return to (1.5), replacing the reference point by the (positive) minimizing point. We find, since ψ = 0 at the local minimum, the relation (4.3). In view of (1.8), we may rewrite this relation in the form (4.4), which determines ℱ uniquely in terms of the contact angles on the plates and the separation.

#### 4AN Attracting forces, negative solutions

If the separation exceeds the critical value of (1.7), there are no such solutions. When the plates are close enough, a new region appears (Figure 3) in which the (extended) solutions have negative maxima, and the net force will again be attracting. We may emulate the 4AP discussion. The crossing angle of V with the second plate is now determined by (4.5). Attracting solutions can be found for any contact angle in the corresponding range, and the net attracting force is obtained from (4.6), using the positive root.

#### 4RP Repelling forces, positive solutions

Repelling solutions all cross the ξ-axis and thus change sign; however, we may characterize those that are positive between the plates as those lying in the region between I and II. For the inclination of II at its crossing point with the second plate, we find, by procedures analogous to those above, the relation (4.7), which determines that inclination in terms of the separation. Solutions will be positive between the plates and repelling whenever the contact angle lies in the corresponding range. In view of (1.9), the repelling force ℱ in this range will be determined by (4.8).

#### 4RPN Repelling forces, changing sign

In the region between II and III, the solutions continue to repel; however, their heights change sign between the plates. We shall see below that this has significant effects on the limiting behavior as the plates approach each other. Since the orientations change at the crossing points, we must split the integration in (4.8) into two parts.
Applying (1.5) separately over each of the two segments into which the crossing point divides the interval 2a, and adding, we find (4.9), which determines the crossing angle. According to (1.9), the normalized force ℱ can then be computed directly from (4.10).

#### 4S The symmetric curve III

This curve has a special interest. Regardless of the plate separation, it crosses the axis at the midpoint between the plates, and thus yields a repelling force for every separation. Relation (4.9) simplifies to (4.11). From (4.11) follows (4.12). In the other direction, we obtain (4.13), leading to bounds in both directions for the repelling force. These bounds are precise asymptotically as the plates approach each other. For large separations, (4.13) loses precision. We improve it by observing that the actual solution in the positive portion of the interval between the plates lies below the line segment joining the crossing point to the height on the plate. Thus (4.14) holds. Using (1.4), we obtain (4.15), from which follows (4.16), yielding a perfunctory but conceptually useful bound for ℱ that could be improved in detail by using again the left-hand side of (4.13). Actually, the force for large a vanishes exponentially in a, as follows from the general estimates of Siegel [3]. If B is small, we find from (4.13) and the monotonicity of ψ in x on each side of the halfway point between the plates that, as the plates approach each other, the inclination of III between the plates tends uniformly to its limiting value. As a consequence, we find that the normalized repelling force of the solution III tends in the limit to the magnitude (4.17) as the plates approach each other. Corresponding to fixed contact angles on the plates, no other solution curve shares this behavior.

The above force calculations apply for any choice of the separation 2a. To continue with solutions joining the plates but situated below III, we distinguish cases according to the plate separation.

#### 4.1 Large separation

In this event IV lies above IV0; see Figure 1. All solution curves in the corresponding region cross the x-axis between the plates, and the force calculation proceeds as in (4.9), (4.10), with an extended range for the crossing angle. We shall see in Section 5, however, that from the point of view of the limiting behavior as the plates approach each other, it would not be appropriate to join this region with the preceding one. No solutions meeting both plates and achieving the prescribed contact angle exist below IV.

#### 4.2 Intermediate separation

IV lies below IV0 but above V; see Figure 2. IV0 meets the second plate in an inclination determined by (4.18). In the region above IV0, the solution curves cross the x-axis between the plates, and we may again use the procedure indicated by (4.9), (4.10). In the remaining region there is no crossing point between the plates; the fluid level is negative, with ψ decreasing in ξ, and the force is obtained from the modified version (4.19) of (4.8).

#### 4.3 Small separation

Now IV lies below V; see Figure 3. We obtain a region which falls in the range of 4.2 above, then a region yielding repelling solutions with no axis crossing between the plates; the relevant limiting inclination is determined from (4.20). The net force arising from each curve in this region is then obtained from (4.19).

### 5 Limiting behavior for small separation

With given contact angles on the two plates (corresponding to prescribed materials), we investigate the consequences of varying the separation of the plates. We effect the change conveniently by holding one plate fixed and displacing the other in either direction.
It is crucial to observe that in such a displacement the barriers I, IV0, and V are rigidly attached to the fixed plate (and hence remain fixed), and the barrier III continues to pass through the midpoint on the x-axis. II and IV move downward as a decreases. The set of solutions examined is rigidly attached to the fixed plate and does not change as the other plate is shifted; however, the choice of elements within the family must change to maintain the prescribed conditions on the moving plate. Note that the geometric locus of II - when considered as an element of the family determined by its contact angles with the plates - moves upward in the family with decreasing a, but when considered as defined by its property of passing through the intersection of the moving plate with the x-axis, it moves downward. We distinguish the initial ℛ-regions and examine what happens to a typical solution curve in each such region with decreasing a.

#### 5.1 Curves above T

In a given configuration, there are no solutions above T that meet the fixed plate in the prescribed angle and extend to the moving plate. If we allow a to decrease with the angles fixed, a new barrier T+ appears, lying above T. There are no solutions above T+, but the original attracting region is extended with the new (attracting) solutions between T and T+, consisting of solutions that previously did not extend to the moving plate. Every curve above T meeting the fixed plate in the prescribed angle eventually falls into this category as a decreases toward zero.

#### 5.2 Curves above I

We consider a particular such curve of the family, displace the moving plate toward the fixed one, and ask what must be done to preserve the original angle on the moving plate. Since the curve is convex upward, the angle with the moving plate will increase when that plate is displaced inward. To retain the original angle, one must move to a curve above the original one. As a consequence, every curve of the family lying above I moves upward and remains attracting following the displacement. Additionally, new attracting curves will appear above T, as noted in 5.1. Each of the curves considered has a positive minimum, or else achieves (in the particular case of I) a minimum zero at infinity. We can determine the net attracting force by estimating that minimum and using (4.1). Adapting (1.5) to the configurations considered, we find (5.1). Here the inclination on the moving plate can be arbitrary in an interval bounded by the inclination of the barrier I at its intersection with that plate. We distinguish the cases in which the minimizing point occurs outside the interval between the plates from those for which it occurs within that interval. In the former case, by inserting the end values into the root under the integral sign, we get (5.2a). In the latter case, cos ψ achieves its maximum (= 1) interior to the interval, and the indicated procedure yields instead the slightly weaker estimate (5.2b). The crossover value is uniquely determined by the particular case (4.5) of (5.1). The normalized attracting force ℱ can now be estimated using (4.1). Relations (5.2a), (5.2b) provide an explicit version of Laplace's discovery [4] that the attracting forces remain attracting and become unbounded as the inverse square of the distance between the plates, as the separation decreases to zero.

#### 5.3 Negative attractors

For sufficiently small separation, a second interval of (negative) attracting solutions appears above IV and below V. For contact angles in the range we have chosen, the discussion for these solutions is analogous to, and somewhat simpler than, the one just given, as in no case does the (negative) maximum appear between the plates. Again attracting solutions remain attracting as the plate separation decreases; the estimate (5.2a) prevails, albeit with the two contact angles interchanged.
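The growth of the attraction as the gap closes can be observed numerically with a shooting method. The sketch below is illustrative only and is not the authors' algorithm: since the displayed equations were lost in extraction, it assumes from the stated geometric meaning of (1.1) that a graph u(x) satisfies d(sin ψ)/dx = κ·u with tan ψ = du/dx; it further assumes, as a labeled assumption and not a quotation of (1.8), that the attracting force scales like κ·u_min², where u_min is the positive minimum of the joining curve. All parameter values are hypothetical. The loop checks that this force measure times a² settles toward a constant as the half-separation a shrinks, in the spirit of Laplace's inverse-square law cited in 5.2.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

kappa = 1.0                     # capillary constant (assumed value)
gamma = np.deg2rad(60)          # equal contact angles on the facing sides (assumed)
psi1, psi2 = -(np.pi / 2 - gamma), (np.pi / 2 - gamma)   # plate inclinations

def rhs(x, y):
    # y = (u, psi); assumed form: u' = tan(psi), (sin psi)' = kappa * u
    u, psi = y
    return [np.tan(psi), kappa * u / np.cos(psi)]

def end_inclination(u_left, a):
    """Inclination reached at the far plate when starting from height u_left."""
    vertical = lambda x, y: abs(y[1]) - 1.5   # stop before the curve turns vertical
    vertical.terminal = True
    sol = solve_ivp(rhs, (0.0, 2 * a), [u_left, psi1], events=vertical,
                    rtol=1e-9, atol=1e-11)
    # status 0: reached the far plate; otherwise report a large signed inclination
    return sol.y[1, -1] if sol.status == 0 else np.sign(sol.y[1, -1]) * np.pi

for a in (0.5, 0.25, 0.125, 0.0625):          # shrinking half-separations
    hi = 2 * np.cos(gamma) / (kappa * a) + 1.0   # generous upper bracket
    u0 = brentq(lambda u: end_inclination(u, a) - psi2, 1e-6, hi)
    sol = solve_ivp(rhs, (0.0, 2 * a), [u0, psi1],
                    rtol=1e-9, atol=1e-11, max_step=a / 50)
    u_min = sol.y[0].min()                    # positive minimum: attracting curve
    print(a, kappa * u_min**2 * a**2)         # roughly constant => force ~ 1/a^2
```

Equal contact angles are used here only to make the attracting configuration easy to bracket; the shooting step itself is the generic ingredient.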
#### 5.4 Repelling case, positive solutions

This case is discussed in explicit detail in [2]; we include here, in outline form, some essential features, returning for convenience to direct physical notation. To begin, let us look at the point q on the moving plate, as in Figure 4. When the plate is displaced an amount δ toward the fixed plate, the horizontally displaced point will encounter too large an inclination from the element of the family passing through that point, as that element will have the same inclination on the fixed plate as does the indicated solution, while at every position x between the plates its height u is smaller than that of that solution, and thus by (1.1) its curvature is smaller. Thus the inclination of the field element at the displaced point exceeds that at q, which in turn exceeds the prescribed one. Therefore, in order to attain the initial slope again, one will have to move upward on the displaced plate. Since high enough points on the displaced plate yield inclinations beyond the prescribed one, a (unique) such point can always be found.

We observe now that on the original vertical segment of the plate joining I and II, the relevant inequality between the inclinations holds (see Figures 1, 2, 4 for notation). Thus the angle in question is attained at some intermediate point on I between the plates. We choose δ so that the displaced plate passes through that point. The solution curve then has data on the two vertical plates identical to those of I, and by the uniqueness theorem (see [1]) it must coincide with I. Looking more closely, we see that by moving the plate continuously inward we obtain a continuous family of solutions joining the plates, with left-hand end points rising in the motion, and such that at some intermediate position strictly between the plates the solution will coincide with the portion of I to the right of that point. Once that happens, all further motion in the same direction leads to attracting solutions, to which the material of the preceding sections applies. Note that for the given angle on the fixed plate, this 'crossover' behavior occurs for all data in the indicated range. The 'crossover position' between the plates, where the solution curve joins with I and yields zero net force, is determined explicitly from the relation (5.3).

For any admissible choice of the data, we obtain a picture of a succession of solution curves, defined over successively shorter subintervals between the plates, each successively closer to I and yielding successively smaller repelling forces, until the solution curve coincides with a non-null portion of I, providing zero force. See Figure 14.

Figure 14. Behavior of solution curves with changing plate separation; contact angles prescribed.

The force magnitude is obtained using (1.9). We adapt (1.5) to obtain the crossing inclination from (5.4). Having determined it, the position of the crossing with the x-axis can be found from the expression (5.5) for its distance from the plate.

#### 5.5 Repelling case, changing sign

These are still repelling solutions, as they continue to cross the x-axis. Nevertheless, there are significant changes from the case just considered, as the initial heights on the moving plate are negative. The sense of curvature of the solution curves reverses in the negative region, and account must be taken of that change in two senses. To determine the force arising from a given solution curve, we need only determine, according to (1.9), the angle of intercept with the x-axis. Using (1.5) separately in the positive and in the negative regions and adding, we obtain (5.6), which determines that angle and hence the force.
We may then obtain the distance of the intercept from the plate, from the relation (5.7). A further change in behavior occurs in that, if one moves the plate a small distance δ inward, one finds that to maintain the same initial angle one must look downward instead of upward as before. As a consequence, the new solution curve lies below the previous one; it will be further from I than was the previous one and will increase the repelling force rather than decreasing it as above. Nevertheless, it turns out that the sequence of solutions thus constructed converges to a segment of I, just as did the previous one. This assertion may at first seem in conflict with the behavior just described; however, one can show that although the solution curves at first diverge from I, their starting points on the intersections with the successive plate positions actually rise and become positive before the plates meet. Once that happens, the discussion of 5.4 applies again without change, and the corresponding behavior is observed. A complete proof of this behavior appears in [2]. We see that if the initially chosen datum lies in the indicated range, then, as the plates are brought together, the repelling force will initially strengthen to a maximum, then weaken to a critical separation at which the solution coincides with I and yields zero force, and will finally become attracting, with force increasing as the inverse square of the separation, according to (5.2a), (5.2b).

We can characterize these critical configurations explicitly. The maximum repelling force is achieved corresponding to a starting point lying on the x-axis. We thus set the corresponding values in (1.4), and in (1.5) we introduce the coordinate of the crossing point. We find (5.8), which yields a unique value such that a solution with the given inclination at that point will achieve the prescribed inclination on the plate. From (1.9) we find for the maximum force in this procedure the expression (5.9), a remarkable formula yielding explicitly the maximum repelling force achievable by bringing the plates together, whenever the prescribed datum is chosen from the indicated range. As a corollary, we see that the absolute maximum repelling force for all configurations on or above the symmetric one III appears with III itself.

#### 5.6 The symmetric curve III

As we move downward through the range between II and III, the crossing angle increases from the angle with which II cuts the x-axis to the angle achieved by III. The curve III itself fails, however, to become attracting with decreasing separation; we see that immediately since, due to its symmetry, it cuts the x-axis for every separation. In Section 4S we have already established upper and lower bounds for the repelling force in this case, notably the non-zero limit as the plates approach each other. The material above, together with what is to follow, shows that III is isolated in this respect; every other configuration with fixed angles on the plates becomes attracting as the plates come together, with magnitude rising to infinity as the inverse square of the separation distance. Thus there is a very striking singular limiting behavior in configurations adjacent to the symmetric one. Physically, this corresponds to liquid going to positive infinity when it is initially above III, and to negative infinity when it is initially below III. It should be of interest to observe this transition experimentally. We continue to discuss the remaining cases that occur; to this purpose we return to non-dimensional notation.

#### 5.6.1 Large separation

This is illustrated in Figure 1.
A new region appears, with initial inclinations in a new range. Since all these angles exceed the earlier ones, they cannot reappear on the curve I, as happens for initial data above that of III, and thus the convergence to a segment of I does not recur here. We observe that the range of inclinations that appears is identical to the range of ψ on the portion of V between the plates. Since ψ is monotonic on V, there is a unique point on this arc at which the initially chosen inclination appears; see Figure 4. When the moving plate is displaced inward, one finds one must move downward from the initial height in order to achieve again the same initial inclination. Thus the succession of solution curves moves toward V, with the repelling force decreasing. The exotic behavior noted in 5.5 above, with repelling force initially increasing as the plates come together, does not reappear for the region below III. When the moving plate is situated at the x-coordinate of that point on V, the data of the relevant solution curve at its two endpoints coincide with those of V at those points, and thus the two curves coincide on the interval, with vanishing force. Further approach of the plates yields attracting solutions, with forces controlled by (5.2a), (5.2b). The crossover position is determined from the relation (5.10). For any closer position, the force ℱ will be attracting, and we may determine it from (5.11). In the present case of large separation, there are no solutions below IV joining the plates and meeting the fixed plate in the prescribed angle. For the characterization of IV, see Section 3.1.

#### 5.6.2 Intermediate separation

The relevant picture for the initial configuration is now Figure 2. We obtain two new regions of repelling solutions.

#### 5.6.2.1 Crossing between the plates

All solutions are repelling and cross the axis between the plates. The configuration is fully analogous to that of 5.5, and analogous considerations apply; see Section 5.5. The repelling forces successively increase to a maximum, as in (5.9); then the solutions move to V and proceed to cross over and become attracting.

#### 5.6.2.2 Crossing outside the plates

All solutions are repelling and cross the axis outside the plates. The situation is essentially that of 5.4. As the plates move together, the repelling force decreases monotonically to zero, and then attracting forces prevail; see Section 5.4.

#### 5.6.3 Small separation

The situation is now essentially analogous to the initial discussion for curves lying above III. We remark the technical distinction that the minimizing point on the upper barrier arc T always lies between the plates; those for the corresponding lower barriers lie to the right of the moving plate, although they approach that plate with decreasing separation 2a.

### 6 Notes added in proof

1. After completing this work, we were informed by John McCuan of an earlier paper [5] in which some of the material relates closely to the topic of the present study. Our contribution can be regarded as an improvement on Section 4 of [5], in the sense that we study the question in the context of the fully nonlinear equations, in preference to the linearization adopted in that reference. The particular geometry of the configuration permits us to integrate the equations explicitly in original form, yielding expressions that describe general physical laws. Beyond the evident improvements in precision and detail, we were led to the discovery that the net attracting (repelling) force on the plates is independent of the contact angles that occur on their outer sides; thus the restriction made in [5] to plates with identical angles on the two sides is superfluous.
We find also the general theorem that the net force is repelling or attracting according as the (extended) solution curve joining the plates in a vertical section does or does not contain a zero of the height on its traverse, the net force being then provided respectively by the elementary formulas (1.9) or (1.8). We obtain additionally a more complete description of the limiting behavior as given plates approach each other (this behavior becomes dramatically singular for solutions close to the symmetric one; see Section 5 of the present work).

2. The exact formal theory was additionally a help for us toward avoiding misleading inferences suggested by the linearization, among them the erroneous statement in [5] opening the final paragraph on p. 819: 'This result shows that vertical plates will attract if they have like menisci and otherwise repel…'. In fact (as shown in Section 4AP), for any plate separation and acute contact angle, the solutions in a non-null subset of the family have unlike menisci at the plates, and for these solutions the plates nevertheless attract each other.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

Both authors contributed equally to this work, in all respects.

### Acknowledgements

The latter author is indebted to the Mathematische Abteilung der Universität and to the Max-Planck-Institut für Mathematik in den Naturwissenschaften, in Leipzig, for invaluable support during preparation of this work.

### References

1. Finn, R: On Young's paradox, and the attractions of immersed parallel plates. Phys. Fluids 22, Article ID 017103 (2010)
2. Finn, R, Lu, D: Mutual attractions of partially immersed parallel plates. J. Math. Fluid Mech. 15(2), 273–301 (2013)
3. Siegel, D: Height estimates for capillary surfaces. Pac. J. Math. 88(2), 471–515 (1980)
4. Laplace, PS: Traité de mécanique céleste, oeuvres complète, vol. 4, Gauthier-Villars, Paris, 1805, Supplément 1, livre X, pp. 771-777; Supplément 2, livre X, pp. 909-945. See also the annotated English translation by N. Bowditch (1839); Chelsea, New York (1966)
5. Vella, D, Mahadevan, L: The cheerios effect. Am. J. Phys. 73, 817–825 (2005)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928606390953064, "perplexity": 828.3897484629186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037662882.4/warc/CC-MAIN-20140930004102-00148-ip-10-234-18-248.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/276471-inequality.html
1. ## inequality

Hello, I tried to solve the following inequality but ran into difficulties; I tried AM-GM, but it didn't seem to work.

a, b, c are positive real numbers with a + b^2 + c^4 = 28. Prove that abc ≤ 64.

Any help or advice is highly appreciated. By AM-GM I had 28/3 ≥ cubicroot(a b^2 c^4), but didn't know how to advance further. I made some other attempts, but I failed.

2. ## Re: inequality

I am not sure that this helps because it uses calculus.

$d > 0,\ e > 0,\ f > 0, \text { and } d + e + f = 28.$

$z = def - y(28 - d - e - f) \implies$

$\dfrac{\delta z}{\delta d} = ef - y \implies ef = y \text { if } \dfrac{\delta z}{\delta d} = 0.$

$\dfrac{\delta z}{\delta e} = df - y \implies df = y \text { if } \dfrac{\delta z}{\delta e} = 0.$

$\dfrac{\delta z}{\delta f} = de - y \implies de = y \text { if } \dfrac{\delta z}{\delta f} = 0.$

$\dfrac{\delta z}{\delta y} = 28 - d - e - f \implies d + e + f = 28 \text { if } \dfrac{\delta z}{\delta y} = 0.$

$ef = y \implies y > 0 \text { and } f = \dfrac{y}{e}.$

$df = y \implies d * \dfrac{y}{e} = y \implies \dfrac{d}{e} = 1 \implies d = e.$

$de = y \text { and } f = \dfrac{y}{e} \implies f = \dfrac{de}{e} = d = e \implies$

$d = e = f = \dfrac{28}{3}.$

In other words, to maximize (d * e * f) subject to (d + e + f) = 28, take d = e = f = 28/3. How you discover that with algebra, I am not sure, but I suppose you can confirm it with algebra.

$\therefore a = \dfrac{28}{3} < 9.5,\ b = \sqrt{\dfrac{28}{3}} < 3.1, \text { and } c = \sqrt[4]{\dfrac{28}{3}} < 1.8.$

$\therefore abc < 9.5 * 3.1 * 1.8 = 53.01 \le 64.$

3. ## Re: inequality

Originally Posted by JeffM

There is something wrong with the last part of this argument. If $a=16,\ b=2\sqrt{2},\ c=\sqrt{2}$, then $abc=64$, giving the maximum of $abc$, so $abc$ cannot be less than 53.01.

4. ## Re: inequality

You are correct. I think I had better give it up. I need to be looking at $d + e^2 + f^4 = 28$ as the constraint if dealing with $z = def.$ I'll work on that.

5. ## Re: inequality

I found $abc \le 28^2/(8c)$, because $(a+b^2+c^4)^2 \ge 4a(b^2+c^4) \ge 8abc^2$.

6.
## Re: inequality

$a > 0,\ b > 0,\ c > 0, \text { and } a + b^2 + c^4 = 28.$

$z = abc - y(28 - a - b^2 - c^4) \implies$

$\dfrac{\delta z}{\delta a} = bc - y \implies bc = y \text { if } \dfrac{\delta z}{\delta a} = 0.$

$\dfrac{\delta z}{\delta b} = ac - 2by \implies ac = 2by \text { if } \dfrac{\delta z}{\delta b} = 0.$

$\dfrac{\delta z}{\delta c} = ab - 4c^3y \implies ab = 4c^3y \text { if } \dfrac{\delta z}{\delta c} = 0.$

$\dfrac{\delta z}{\delta y} = 28 - a - b^2 - c^4 \implies a + b^2 + c^4 = 28 \text { if } \dfrac{\delta z}{\delta y} = 0.$

$bc = y \implies y > 0 \text { and } c = \dfrac{y}{b}.$

$ac = 2by \implies a * \dfrac{y}{b} = 2by \implies a = 2b^2.$

$ab = 4c^3y \implies 2b^3 = 4c^3 * bc \implies 0.5b^2 = c^4 \implies c = \sqrt[4]{0.5b^2}.$

$\therefore 28 = a + b^2 + c^4 = 2b^2 + b^2 + 0.5b^2 = 3.5b^2 \implies$

$0.5 * 7b^2 = 28 \implies 0.5b^2 = 4 \implies b^2 = 8 = 4 * 2 \implies b = 2\sqrt{2} \implies$

$a = 2 * 8 = 16 \text { and } c = \sqrt[4]{0.5 * 8} = \sqrt{2}.$

And $16 + 8 + 4 = 28 \text { and } 16 * 2\sqrt{2} * \sqrt{2} = 16 * 2 * 2 = 64.$

$\therefore abc \le 64.$

7. ## Re: inequality

Thank you, sir.

8. ## Re: inequality

Originally Posted by aray

Thank you, sir.

You are welcome. Sorry for the initial mistake. The idea came to the rescue.

9. ## Re: inequality

Originally Posted by JeffM

$a > 0,\ b > 0,\ c > 0, \text { and } a + b^2 + c^4 = 28.$

$z = abc - y(28 - a - b^2 - c^4) \implies$

$\color{red}{\dfrac{\delta z}{\delta a} = bc - y \implies bc = y \text { if } \dfrac{\delta z}{\delta a} = 0.}$

$\color{red}{\dfrac{\delta z}{\delta b} = ac - 2by \implies ac = 2by \text { if } \dfrac{\delta z}{\delta b} = 0.}$

$\color{red}{\dfrac{\delta z}{\delta c} = ab - 4c^3y \implies ab = 4c^3y \text { if } \dfrac{\delta z}{\delta c} = 0.}$

…

For the three lines in red, it appears a negative sign was missed. It does not affect the end result (as the negatives seem to come in pairs, bringing the result back to positive).
$\dfrac{\partial z}{\partial a} = bc+y \Longrightarrow bc = -y \text{ if } \dfrac{\partial z}{\partial a} = 0$

$\dfrac{\partial z}{\partial b} = ac+2by \Longrightarrow ac = -2by \text{ if } \dfrac{\partial z}{\partial b} = 0$

$\dfrac{\partial z}{\partial c} = ab+4c^3y \Longrightarrow ab = -4c^3y \text{ if } \dfrac{\partial z}{\partial c} = 0$

$bc = -y \Longrightarrow y<0$ and $c=-\dfrac{y}{b}$.

$ac=-2by \Longrightarrow -a\dfrac{y}{b} = -2by \Longrightarrow a=2b^2$

$ab=-4c^3y \Longrightarrow 2b^3=-4c^3(-bc) \Longrightarrow 0.5b^2=c^4 \Longrightarrow c = \sqrt[4]{0.5b^2}$

The rest follows the same.

10. ## Re: inequality

Originally Posted by SlipEternal

For the three lines in red, it appears a negative sign was missed. It does not affect the end result (as the negatives seem to come in pairs, bringing the result back to positive). …

Umm, I set up z = abc MINUS y * (28 and so on). As you say, it makes no difference.

11. ## Re: inequality

Originally Posted by JeffM

Umm, I set up z = abc MINUS y * (28 and so on). As you say, it makes no difference.

Correct: $abc - y(28-a-b^2-c^4) = abc-28y \color{red}{+} ay \color{red}{+} b^2y \color{red}{+} c^4y$ because of the distributive property.

12. ## Re: inequality

We can also solve the problem using AM-GM:

$\frac{\frac{a}{4}+\frac{a}{4}+\frac{a}{4}+\frac{a}{4}+\frac{b^2}{2}+\frac{b^2}{2}+c^4}{7}\geq \sqrt[7]{\frac{a^4 b^4 c^4}{4^4 \cdot 2^2}}$

The left side equals 28/7 = 4, so $4^7 \ge \dfrac{(abc)^4}{4^5}$, hence $(abc)^4 \le 4^{12}$ and $abc \le 4^3 = 64$, with equality when $a/4 = b^2/2 = c^4 = 4$, i.e. $(a, b, c) = (16,\ 2\sqrt{2},\ \sqrt{2})$.
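As a quick numerical cross-check of the thread's conclusion (not part of the original discussion), one can hand the constrained maximization to an off-the-shelf optimizer; the starting point and bounds below are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize a*b*c subject to a + b^2 + c^4 = 28 with a, b, c > 0,
# by minimizing the negated objective under an equality constraint.
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1]**2 + v[2]**4 - 28}
res = minimize(lambda v: -v[0] * v[1] * v[2], x0=[9.0, 3.0, 1.5],
               bounds=[(1e-9, None)] * 3, constraints=[constraint],
               method="SLSQP")
print(res.x, -res.fun)   # ~ [16.0, 2.8284, 1.4142] and 64.0,
                         # matching a = 16, b = 2*sqrt(2), c = sqrt(2)
```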
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104468822479248, "perplexity": 1505.930414192796}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813322.19/warc/CC-MAIN-20180221024420-20180221044420-00582.warc.gz"}
https://kyushu-u.pure.elsevier.com/en/publications/nuclear-equation-of-state-for-core-collapse-supernova-simulations
# Nuclear equation of state for core-collapse supernova simulations with realistic nuclear forces

H. Togashi, K. Nakazato, Y. Takehara, S. Yamamuro, H. Suzuki, M. Takano

Research output: Contribution to journal › Article › peer-review. 99 Citations (Scopus)

## Abstract

A new table of the nuclear equation of state (EOS) based on realistic nuclear potentials is constructed for core-collapse supernova numerical simulations. Adopting the EOS of uniform nuclear matter constructed by two of the present authors with the cluster variational method, starting from the Argonne v18 and Urbana IX nuclear potentials, the Thomas–Fermi calculation is performed to obtain the minimized free energy of a Wigner–Seitz cell in non-uniform nuclear matter. As a preparation for the Thomas–Fermi calculation, the EOS of uniform nuclear matter is modified so as to remove the effects of deuteron cluster formation in uniform matter at low densities. Mixing of alpha particles is also taken into account following the procedure used by Shen et al. (1998, 2011). The critical densities with respect to the phase transition from the non-uniform to the uniform phase with the present EOS are slightly higher than those with the Shen EOS at small proton fractions. The critical temperature with respect to the liquid–gas phase transition decreases with the proton fraction in a more gradual manner than in the Shen EOS. Furthermore, the mass and proton numbers of the nuclides appearing in non-uniform nuclear matter at small proton fractions are larger than those of the Shen EOS. These results are consequences of the fact that the density derivative coefficient of the symmetry energy of our EOS is smaller than that of the Shen EOS.

Original language: English
Pages (from-to): 78-105
Number of pages: 28
Journal: Nuclear Physics A
Volume: 961
DOI: https://doi.org/10.1016/j.nuclphysa.2017.02.010
Publication status: Published - May 1 2017

## All Science Journal Classification (ASJC) codes

• Nuclear and High Energy Physics
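EOS tables of this kind are typically published on a grid in temperature, baryon density, and proton fraction, and interpolated at run time inside a simulation code. The sketch below is generic and hypothetical: the grid, the placeholder values, and the query are illustrative only and are unrelated to the actual table of this paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical axes: temperature (MeV), baryon density (fm^-3), proton fraction
T   = np.linspace(0.1, 30.0, 16)
rho = np.logspace(-8, 0, 32)
yp  = np.linspace(0.0, 0.65, 14)

# Stand-in tabulated values on the grid (random placeholder, NOT the
# published table); a real application would load the table from disk.
rng = np.random.default_rng(0)
F = rng.random((T.size, rho.size, yp.size))

# Interpolate in log10(rho) so the wide density range is sampled evenly.
interp = RegularGridInterpolator((T, np.log10(rho), yp), F)
print(interp([[1.0, -3.0, 0.3]]))   # query at T = 1 MeV, rho = 1e-3 fm^-3, Yp = 0.3
```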
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680722951889038, "perplexity": 1795.2212665129484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00012.warc.gz"}
http://openmx-square.org/openmx_man3.7/node104.html
Interpolation of the effect by the bias voltage

Since for large-scale systems it is very time-consuming to perform the SCF calculation at each bias voltage, an interpolation scheme is available to reduce the computational cost in the calculations by the NEGF method. The interpolation scheme is performed in the following way: (i) the SCF calculations are performed for a few bias voltages which are selected in the regime of the bias voltage of interest. (ii) when the transmission and current are calculated, linear interpolation is made for the Hamiltonian block elements of the central scattering region and the right lead, and for the chemical potential of the right lead, by the weighted average

X = w1 X^(1) + w2 X^(2),  with w1 + w2 = 1,

where the indices 1 and 2 in the superscript mean that the quantities are calculated or used at the corresponding bias voltages where the SCF calculations are performed beforehand. In general, the weights should range from 0 to 1 for a moderate interpolation. In the calculation of the step 3, the interpolation is made by adding the following keywords in the input file:

NEGF.tran.interpolate on # default=off, on|off
NEGF.tran.interpolate.file1 c1-negf-0.5.tranb
NEGF.tran.interpolate.file2 c1-negf-1.0.tranb
NEGF.tran.interpolate.coes 0.7 0.3 # default=1.0 0.0

When you perform the interpolation, the keyword 'NEGF.tran.interpolate' should be 'on'. In this case, files 'c1-negf-0.5.tranb' and 'c1-negf-1.0.tranb' specified by the keywords 'NEGF.tran.interpolate.file1' and 'NEGF.tran.interpolate.file2' are the results under bias voltages of 0.5 and 1.0 V, respectively, and the transmission and current are evaluated by the interpolation scheme with the weights of 0.7 and 0.3 specified by the keyword 'NEGF.tran.interpolate.coes'.

A comparison between the fully self-consistent and the interpolated results is shown with respect to the current and transmission in the linear carbon chain in Figs. 32(a) and (b). In this case, the SCF calculations at three bias voltages of 0, 0.5, and 1.0 V are performed, and the results at the other bias voltages are obtained by the interpolation scheme. For comparison we also calculate the currents via the SCF calculations at all the bias voltages. It is confirmed that the simple interpolation scheme gives notably accurate results for both the calculations of the current and transmission. Although the proper selection of bias voltages used for the SCF calculations may depend on systems, the result suggests that the simple scheme is very useful to interpolate the effect of the bias voltage while keeping the accuracy of the calculations.
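As an illustration of the weighted average described above, here is a small Python sketch (not part of OpenMX; the two stand-in transmission curves and all values are invented for the example, and no attempt is made to parse the actual .tranb format):

```python
# Sketch: interpolate transmission curves computed at two SCF bias voltages,
# using weights w1 + w2 = 1 as in 'NEGF.tran.interpolate.coes 0.7 0.3'.
import numpy as np

energies = np.linspace(-2.0, 2.0, 401)      # illustrative energy grid (eV)
T1 = np.exp(-(energies - 0.3)**2)           # stand-in for the 0.5 V result
T2 = np.exp(-(energies + 0.1)**2)           # stand-in for the 1.0 V result

w1, w2 = 0.7, 0.3                           # weights from the keyword
T_interp = w1 * T1 + w2 * T2                # linear interpolation

# The same weighted average applies to the chemical potential of the right lead:
mu1, mu2 = -0.25, -0.5                      # illustrative values (eV)
print(w1 * mu1 + w2 * mu2)
```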
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.87233966588974, "perplexity": 1177.011712097774}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864191.74/warc/CC-MAIN-20180621153153-20180621173153-00286.warc.gz"}
http://dlmftables.uantwerpen.be/HY.jsp
# 15. Hypergeometric Function

$F\left(a,b;c;x\right)$ denotes the hypergeometric function $F$ (also known as the generalized hypergeometric function ${}_{2}F_{1}\left(a,b;c;x\right)$); see DLMF (15.2.1). This function is not defined when $c$ is a non-positive integer.

Computations are for real $a,b,c$ and $x$, which currently must satisfy all of the conditions:

• $|x|<1$,
• either $a$ or $b$ must be integer,
• if $a,b$ are non-negative integers and $c$ is an integer, then $c>\mathrm{min}\left(a,b\right)$,
• if $a,c$ are integers but $b$ is not, then $c>a$,
• if $b,c$ are integers but $a$ is not, then $c>b$.

$F\left(a,b;c;x\right)$ can be evaluated either as a tabulation over a grid of the parameters and argument, or as a comparison against values supplied in a data file. The lines in the data file must contain values for $a$, $b$, $c$, $x$ and $F\left(a,b;c;x\right)$ in that order, separated by commas or spaces.
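For an independent numerical cross-check of such tabulations, ${}_2F_1$ is available in common libraries; for example, mpmath provides hyp2f1 (this is just an external check, not the backend of this page):

```python
# Evaluate 2F1(a, b; c; x) for a few real arguments with mpmath.
from mpmath import mp, hyp2f1

mp.dps = 25                        # work with 25 significant digits
print(hyp2f1(1, 2, 3, 0.5))        # b = 2 is an integer, |x| < 1
print(hyp2f1(-3, 2.5, 4, 0.9))     # a a negative integer: a polynomial case
```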
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 1, "mathml": 28, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868980407714844, "perplexity": 843.2365902790505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251684146.65/warc/CC-MAIN-20200126013015-20200126043015-00550.warc.gz"}
https://www.physicsforums.com/threads/mesh-analysis-and-power-dissipation.778852/
Mesh analysis and power dissipation

1. Oct 29, 2014

Andriy B

Hi, I'm taking an intro course to electrical and computer engineering and I'm having trouble figuring out part c of number one. I have attached a copy of my solutions to parts a and b, which I believe are right (correct me if I'm wrong please); however, when I calculate the power dissipation of each component and add them up I get a large number. Thank you in advance.

2. Oct 29, 2014

zoki85

"Determine the power dissipation of each component and show that the sum of the power dissipated equals zero"

Somebody was drunk while writing that assignment? The dissipative components in that mesh are resistors. The sum of the powers dissipated in them must be equal to the sum of the powers delivered by the voltage sources.

3. Oct 30, 2014

Andriy B

So what's the correct way to solve part c?

4. Oct 31, 2014

zoki85

After you find the current of each branch, calculate the power dissipated by every resistor (P_R = I_R^2 R). Sum these powers up to find the total power dissipated in the network. Calculate the power contribution of a voltage source as P_S = V_S·I, where I is the current through the source (pay attention to the voltage source orientation and the current flowing through it; depending on network parameters it may be negative). Then show that ∑P_R = ∑P_S holds.
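To see the power balance zoki85 describes in action, here is a tiny Python sketch for a hypothetical single-loop circuit (component values invented for the illustration, not taken from the attached homework):

```python
# Sketch: verify sum(P_R) == sum(P_S) for one 10 V source driving two
# series resistors (illustrative values only).
V = 10.0                    # source voltage (V)
R1, R2 = 4.0, 6.0           # resistances (ohm)

I = V / (R1 + R2)           # loop current from KVL / mesh analysis
P_resistors = I**2 * R1 + I**2 * R2   # total dissipated power
P_sources = V * I                     # power delivered by the source

print(P_resistors, P_sources)         # both 10.0 W
assert abs(P_resistors - P_sources) < 1e-12
```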
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9204358458518982, "perplexity": 1225.8051384424125}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517350.12/warc/CC-MAIN-20171212153808-20171212173808-00238.warc.gz"}
https://kyushu-u.pure.elsevier.com/en/publications/pic-simulation-of-a-quasi-parallel-collisionless-shock-interactio
# PIC simulation of a quasi-parallel collisionless shock: Interaction between upstream waves and backstreaming ions

F. Otsuka, S. Matsukiyo, T. Hada

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

## Abstract

We perform a one-dimensional full particle-in-cell (PIC) simulation of a quasi-parallel collisionless shock with Alfvén Mach number 6.6 and the shock angle 20 degrees between the upstream magnetic field and the shock normal direction. The FAB far upstream is generated by a reflection of incoming ions at the shock. They excite a right-handed Alfvén wave in the plasma rest frame through resonant beam instability. Both pitch-angle and energy diffusions of the FAB take place, as one approaches the shock. The maximum energy of the FAB near the shock is about 10 times larger than the energy in which the specular reflection at the shock is assumed. Nonlinear phase bunches of the FAB and the background ions are found near the shock due to the resonant and non-resonant trappings by the excited wave, respectively.

Original language: English
Article number: 100709
Journal: High Energy Density Physics
Volume: 33
DOI: https://doi.org/10.1016/j.hedp.2019.100709
Publication status: Published - Nov 2019

## All Science Journal Classification (ASJC) codes

• Radiation
• Nuclear and High Energy Physics
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072145223617554, "perplexity": 4197.802448619546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061820.19/warc/CC-MAIN-20210411085610-20210411115610-00170.warc.gz"}
http://mathoverflow.net/questions/118867/a-certain-coniveau-like-filtration-for-cohomology-what-can-one-say-about-the
# A certain 'coniveau-like' filtration for cohomology: what can one say about the intersection of $Ker H^i(X)\to H^i(Z)$ for $Z$ running through subvarieties of $X$ of dimension $m$? Let $X$ be a smooth variety (say, a complex one; denote its dimension by $n$). What can one say about the intersection of $Ker (H^i(X)\to H^i(Z))$ for $Z$ running through (closed, not necessarily smooth!!) subvarieties of $X$ of dimension $m$ (here $H^\ast$ is singular or etale cohomology)? Did anybody study this filtration (for fixed $X,i$, when $m$ varies) before? What properties can one prove for it? Some remarks. 1. If $Z$ is a generic multiple hyperplane section of $X$, $i\le m$, then a theorem of Beilinson yields that the map $H^i(X)\to H^i(Z)$ is injective. 2. On the other hand, I believe that $H^{m+1}(X)\to H^{m+1}(Z)$ cannot be injective if $X$ is 'too complicated'; yet I have no idea how to construct any more or less general examples here. 3. Certainly, it suffices to consider only 'large enough' $Z$ here. 4. The filtration in question is closely related with the one given by $\cap Ker (H^i(f):H^i(X)\to H^i(Z))$ for $f$ running through proper morphisms with $Z$ of dimension $m$ (and smooth). 5. If $X$ is proper, then one can replace $H^\ast$ here with $H_c^\ast$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9614590406417847, "perplexity": 242.79110207911407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824319.59/warc/CC-MAIN-20160723071024-00150-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/138830-matrix-rotation.html
# Math Help - Matrix Rotation 1. ## Matrix Rotation What matrix represents the anticlockwise rotation through $\theta$...I think it perhaps might be $(\begin{array}{cc}cos\theta&-sin\theta\\sin\theta&cos\theta\end{array})$ but i'm not at all sure. Many thanks 2. Originally Posted by Lightningepsilon What matrix represents the anticlockwise rotation through $\theta$...I think it perhaps might be $(\begin{array}{cc}cos\theta&-sin\theta\\sin\theta&cos\theta\end{array})$ but i'm not at all sure. Many thanks Yes that's right. If you rotate (1,0) through x anticlockwise , you get the point (cos x, sin x) so that's your first column in the matrix If you rotate (0,1) through x anticlockwise, you get (-sin x, cos x) so that is the second column in your matrix. Best way to think of your transformation matrices is to consider the image of (1,0) and (0,1) and these form the columns of your matrix.
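A quick numerical confirmation of the column rule described above (a small NumPy sketch, added for illustration):

```python
# Sketch: the anticlockwise rotation matrix acting on the basis vectors,
# confirming that its columns are the images of (1,0) and (0,1).
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(R @ np.array([1.0, 0.0]))   # -> (cos t, sin t): first column
print(R @ np.array([0.0, 1.0]))   # -> (-sin t, cos t): second column
```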
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.961916446685791, "perplexity": 1092.8825066157021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036618.23/warc/CC-MAIN-20150601214356-00048-ip-10-180-206-219.ec2.internal.warc.gz"}
https://openturns.github.io/openturns/latest/auto_data_analysis/distribution_fitting/plot_estimate_normal.html
# Fit a parametric distribution

In this example we estimate the parameters of a distribution from a given sample. Once we are settled on a good candidate, we use the corresponding factory to fit the distribution. Each distribution factory has one or several estimators available. They are all derived from either the Maximum Likelihood method or from the method of moments (see Parametric Estimation).

from __future__ import print_function
import openturns as ot
import openturns.viewer as viewer
from matplotlib import pylab as plt
ot.Log.Show(ot.Log.NONE)

## The Normal distribution

The parameters are estimated by the method of moments. We consider a sample, here created from a standard normal distribution:

sample = ot.Normal().getSample(1000)

We can estimate a normal distribution with NormalFactory:

distribution = ot.NormalFactory().build(sample)

We take a look at the estimated parameters with the getParameter method:

print(distribution.getParameter())

Out: [0.00320214,1.02733]

We draw the fitted distribution:

graph = distribution.drawPDF()
graph.setTitle("Fitted Normal distribution")
view = viewer.View(graph)

## The Student distribution

The parameters of the Student law are estimated by a mixed method of moments and reduced MLE. We generate a sample from a Student distribution with parameters nu = 5, mu = -0.5 and a scale parameter sigma = 2.

sample = ot.Student(5.0, -0.5, 2.0).getSample(1000)

We use the factory to build an estimated distribution:

distribution = ot.StudentFactory().build(sample)

We can obtain the estimated parameters with the getParameter method:

print(distribution.getParameter())

Out: [3.65576,-0.515215,1.84614]

Draw the fitted distribution:

graph = distribution.drawPDF()
graph.setTitle("Fitted Student distribution")
view = viewer.View(graph)

## The Pareto distribution

By default the parameters of the Pareto distribution are estimated by least squares. We use a sample from a Pareto distribution with a scale parameter beta = 1, a shape parameter alpha = 1 and a location parameter gamma = 0.

sample = ot.Pareto(1.0, 1.0, 0.0).getSample(1000)

Draw the fitted distribution:

distribution = ot.ParetoFactory().build(sample)
print(distribution.getParameter())
graph = distribution.drawPDF()
graph.setTitle("Fitted Pareto distribution")
view = viewer.View(graph)
plt.show()

Out: [0.787856,0.944192,0.246677]

Total running time of the script: ( 0 minutes 0.328 seconds)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8324670791625977, "perplexity": 2752.545705003109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141735600.89/warc/CC-MAIN-20201204101314-20201204131314-00039.warc.gz"}
https://www.riemannhypothesis.info/tags/zeros/
## Applying the Explicit Formula

It’s quite some time since we arrived at Riemann’s main result, the explicit formula $J(x)=\mathrm{Li}(x)-\sum_{\Im\varrho>0}\left(\mathrm{Li}(x^\varrho)+\mathrm{Li}(x^{1-\varrho})\right)+\int_x^\infty\frac{\mathrm{d}t}{t(t^2-1)\log t}-\log2,$ where $$J(x)$$ is the prime power counting function introduced even earlier. It’s high time we applied this! First, let’s take a look at $$J(x)$$ when calculating it exactly: You see how this jumps by one unit at prime values ($$2$$, $$3$$, $$5$$, $$7$$, $$11$$, $$13$$, $$17$$, $$19$$), by half a unit at squares of primes ($$4$$, $$9$$), by a third at cubes ($$8$$), and by a quarter at fourth powers ($$16$$), but is constant otherwise.

We’ve seen the calculus version $J(x)=\frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}\log\zeta(s)x^s\frac{\mathrm{d}s}{s},$ of the Euler product, and we know how to express $$\xi(s)$$ as a product over its roots $\xi(s)=\xi(0)\prod_\varrho\left(1-\frac{s}{\varrho}\right),$ where $\xi(s) = \frac{1}{2} \pi^{-s/2} s(s-1) \Pi(s/2-1) \zeta(s) \newline = \pi^{-s/2} (s-1) \Pi(s/2) \zeta(s).$ High time we put everything together – the reward will be the long expected explicit formula for counting primes! First, let’s bring the two formulae for $$\xi(s)$$ together and rearrange them such that we obtain a formula for $$\zeta(s)$$:

We’ve seen that $$\zeta(s)$$ satisfies the functional equation $\zeta(1-s)=2^{1-s}\pi^{-s}\cos(\pi s/2)\Pi(s-1)\zeta(s).$ (Well, it still needs to be proved, but let’s just assume it’s correct for now.) The goal of this post is an even more symmetrical form that will yield the function $$\xi(s)$$ which we can develop into an incredibly useful product expression. On our wish list for $$\xi(s)$$ we find three items: It’s an entire function, i.e., a function that’s holomorphic everywhere in $$\mathbb{C}$$ without any poles.
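The jumps described above can be checked by computing $$J(x)$$ straight from its definition, $J(x)=\sum_{p^k\le x}\frac{1}{k},$ summing over prime powers. A small Python sketch (using SymPy's primerange; added here for illustration, it is not from the original posts):

```python
# Sketch: evaluate the prime power counting function J(x) directly,
# e.g. J(20) = 8 + 2*(1/2) + 1/3 + 1/4 from the jumps listed above.
from sympy import primerange

def J(x):
    total, k = 0.0, 1
    while 2**k <= x:
        # primes p with p^k <= x, i.e. p <= x^(1/k); epsilon guards rounding
        bound = int(x**(1.0 / k) + 1e-9) + 1
        total += sum(1.0 / k for _ in primerange(2, bound))
        k += 1
    return total

print(J(20))   # 9.5833..., with jumps of 1, 1/2, 1/3, 1/4 as described
```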
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.982568085193634, "perplexity": 315.21580726363186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00784.warc.gz"}
http://math.stackexchange.com/questions/147348/is-this-proof-that-su2-cannot-be-isomorphic-to-so1-3-valid
# Is this proof that SU(2) cannot be isomorphic to SO(1,3) valid?

It seems intuitively obvious to me that there cannot be an isomorphism between $\mathrm{SU}(2)$ and $\mathrm{SU}(2)\times\mathrm{SU}(2)$ where SU(2) is the Lie Group with the Pauli matrices as generators and $\mathrm{SU}(2)\times\mathrm{SU}(2)$ is the Direct Product whose generators are formed from the generators of $\mathrm{SU}(2)$ by putting them into $2\times2$ matrices in block diagonal form. (I was told this is a correct method of taking the Direct Product in an answer to a prior question; see the response "that is one method of taking the Direct Product".)

And, if so, is the following argument valid? $\mathrm{SU}(2)$ is not isomorphic to $\mathrm{SO}(1,3)$ because $\mathrm{SO}(1,3)$ is isomorphic to $\mathrm{SU}(2)\times \mathrm{SU}(2)$. If $\mathrm{SU}(2)$ is isomorphic to $\mathrm{SO}(1,3)$, then $\mathrm{SU}(2)$ is isomorphic to $\mathrm{SU}(2)\times \mathrm{SU}(2)$. But, that is false, therefore $\mathrm{SU}(2)$ cannot be isomorphic to $\mathrm{SO}(1,3)$.

-

$SO(3,1)$ is not isomorphic to $SU(2)\times SU(2)$ (notice e.g. that $SU(2)\times SU(2)$ is compact while $SO(3,1)$ is not), but rather to $SL(2,\mathbb{C})/\pm1$. (However, $SL(2,\mathbb{C})$ and $SU(2)\times SU(2)$ are two real versions of the same complex group $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$.) Anyway, $SO(3,1)$ is not isomorphic to $SU(2)$ simply by dimension count: the first has dimension $6$ while the second has $3$.

- But isn't it the case that any Lorentz Transformation can be found by exponentiating the generators of SU(2)XSU(2)? If any transformation can be described by a group action in SU(2)XSU(2), then how can there not be an isomorphism? How would you characterize the relationship between SU(2)XSU(2) and the Lorentz Group? –  bb6 May 20 '12 at 13:03 And could SU(2) be isomorphic to SO(2,1) or O(2,1)? and are these representations of the Lorentz Group of Transformations with only one space component? –  bb6 May 20 '12 at 13:11 Also, $SU(2)$ is connected, whereas $SO(3,1)$ isn't. –  M Turgeon May 20 '12 at 15:06 @barry I'm not an expert in these things, but here is what I found on Wikipedia. Indeed, you can get generators of $SO(3,1)$ by exponentiating (suitable multiples of) the Pauli matrices. But that does not mean you get isomorphic Lie groups. Another example is the Lie algebras $\mathfrak{su}_2$ and $\mathfrak{so}_3$; they are isomorphic as Lie algebras, but $SU(2)$ is not isomorphic to $SO(3)$ (this has to do with the fact that $SU(2)$ is a double cover of $SO(3)$). –  M Turgeon May 20 '12 at 15:16 Thanks for these replies. They were very helpful. I thought groups with the same algebra must be isomorphic to each other. It seemed obvious to me at the time. I'll be more careful to distinguish Lie Groups from their Algebras in the future. I've been working from a physics textbook that does not provide much information about group theory. Could anyone recommend a good group theory book? Or a good source on the internet? –  bb6 May 20 '12 at 22:56 barry, perhaps some more comments about the differences between mathematicians and physicists are in order. In my experience, physicists frequently do not distinguish between Lie groups and their Lie algebras, whereas mathematicians are generally careful to make this distinction. There are many pairs of Lie groups which are not isomorphic but which have isomorphic Lie algebras, and for many purposes this is "close enough."
How would you characterize the relationship between SU(2)XSU(2) and the Lorentz Group? They have isomorphic Lie algebras. -
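The dimension count in the accepted answer is easy to verify by machine. A small SymPy sketch (my own illustration, not from the thread): $\mathfrak{so}(3,1)$ consists of $4\times 4$ matrices $X$ with $X^T\eta+\eta X=0$ for $\eta=\mathrm{diag}(1,-1,-1,-1)$, and the solution space has dimension $6$, against $3$ for $\mathfrak{su}(2)$.

```python
# Sketch: dim so(3,1) = number of independent 4x4 matrices X with
# X^T*eta + eta*X = 0. Expect 6, whereas dim su(2) = 3.
import sympy as sp

eta = sp.diag(1, -1, -1, -1)
xs = sp.symbols('x0:16')
X = sp.Matrix(4, 4, xs)

constraints = list(X.T * eta + eta * X)   # 16 linear expressions in xs
M = sp.Matrix([[sp.diff(e, v) for v in xs] for e in constraints])
print(16 - M.rank())                      # -> 6
```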
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164276123046875, "perplexity": 158.34093526444337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768529.27/warc/CC-MAIN-20141217075248-00094-ip-10-231-17-201.ec2.internal.warc.gz"}
https://brilliant.org/problems/simple-looking-combinatorics-problem-not-as-simple/
# Don't monte carlo this

A fair coin is tossed until it lands heads up. What is the probability that heads first appears on an odd-numbered toss?
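Per the title, no simulation is needed: the event splits over odd toss numbers, giving the geometric series $\sum_{k\ \text{odd}}(1/2)^k=\frac{1/2}{1-1/4}=\frac{2}{3}$. A short exact-arithmetic sketch (added for illustration):

```python
# Sketch: exact probability that heads first appears on an odd toss.
# P = sum over odd k of (1/2)^k = (1/2) / (1 - 1/4) = 2/3.
from fractions import Fraction

P = sum(Fraction(1, 2**k) for k in range(1, 60, 2))   # partial sum, odd k
print(P, float(P))          # converges to 2/3
print(Fraction(2, 3) - P)   # tiny remainder from truncating the series
```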
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564387559890747, "perplexity": 346.2229144154698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824912.16/warc/CC-MAIN-20181213145807-20181213171307-00566.warc.gz"}
http://mathhelpforum.com/algebra/27804-algebra.html
# Math Help - Algebra

1. ## Algebra

Solve for $m$

$5m^4+24m^2-36=0$. Let $m^2=y\Rightarrow 5y^2+24y-36=0$

Then I plugged it into the quadratic formula:

$y=\frac{-24\pm\sqrt{24^2-(4)(5)(-36)}}{10}\Rightarrow y=\frac{-24\pm36}{10}\Rightarrow y=\frac{6}{5}\ \text{or}\ y=-6$

From there I did $m^2=\frac{6}{5}\Rightarrow m=\pm\sqrt\frac{6}{5}$

$m^2=-6$ gives an imaginary answer, so I really can't use that as an answer, right?

P.S. I used $m=\pm\sqrt\frac{6}{5}$ as my answer and I got marked off 1 point. Anyone know why?

2. That's good, subbing y for m^2. Therefore, you get a quadratic. Just factor it.

$5y^{2}+24y-36=0$

$(y+6)(5y-6)=0$

Now, sub m^2 back in:

$(m^{2}+6)(5m^{2}-6)=0$

Now, the $m^{2}+6$ factor results in 2 non-real solutions. The $5m^{2}-6$ factor will give two real solutions.

3. Do you think I should put the non-real answers as well as the real answers on my test as the answers for what m equals?

4. Hello, OzzMan!

Solve for $m$

$5m^4+24m^2-36\:=\:0$

Let $m^2\,=\,y\quad\Rightarrow\quad 5y^2+24y-36\:=\:0$

Then i plugged it into the quadratic formula: $y\:=\:\frac{-24\pm\sqrt{24^2-(4)(5)(-36)}}{10}\quad\Rightarrow\quad y\:=\:\frac{-24\pm36}{10} \;=\;\frac{6}{5},\;-6$

From there i did $m^2\:=\:\frac{6}{5}\quad\Rightarrow\quad m\:=\:\pm\sqrt\frac{6}{5}$

$m^2\:=\:-6$ which gives an imaginary answer.

P.S. I used $m\:=\:\pm\sqrt\frac{6}{5}$ as my answer and I got marked off 1 point. Anyone know why?

Just a guess . . . maybe you were expected to rationalize that square root?

. . $\pm\sqrt{\frac{6}{5}} \;=\;\pm\sqrt{\frac{6}{5}\cdot\frac{5}{5}} \;=\;\pm\sqrt{\frac{30}{25}} \;=\;\pm\frac{\sqrt{30}}{\sqrt{25}} \;=\;\pm\frac{\sqrt{30}}{5}$

Oh yes, your reasoning and your work are correct . . . Nice going!

5. oh that makes a lot of sense thanks
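The roots can also be double-checked mechanically; a short SymPy sketch (added for illustration) returns both real roots, already in rationalized form, and the two purely imaginary ones the poster set aside:

```python
# Sketch: all four roots of 5m^4 + 24m^2 - 36 = 0.
import sympy as sp

m = sp.symbols('m')
print(sp.solve(5*m**4 + 24*m**2 - 36, m))
# -> [-sqrt(30)/5, sqrt(30)/5, -sqrt(6)*I, sqrt(6)*I]
```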
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9332810640335083, "perplexity": 1240.273465994964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988650.6/warc/CC-MAIN-20150728002308-00306-ip-10-236-191-2.ec2.internal.warc.gz"}
http://www.ntrand.com/normal-distribution-single/
# Normal distribution (Single variable)

## Where do you meet this distribution?

• Standard score
• Finance, Economics : changes in the logarithm of exchange rates, price indices, and stock market indices are assumed normal
• Average of stochastic variables : Central Limit Theorem
• Statistical mechanics : Velocities of the molecules in the ideal gas
• Quantum physics : Probability density function of a ground state in a quantum harmonic oscillator
• Error analysis

## Shape of Distribution

### Basic Properties

• Two parameters $m, \sigma$ are required, with $\sigma>0$. These parameters are the Mean and Standard Deviation of the distribution respectively.
• Continuous distribution defined on the entire range
• This distribution is always symmetric.

### Probability

• Cumulative distribution function $F(x)=\int_{-\infty}^{x}\frac{1}{\sigma}\,\phi\left(\frac{t-m}{\sigma}\right)\text{d}t$ where $\phi(\cdot)$ is the probability density function of the standard normal distribution.
• Probability density function $f(x)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(x-m)^2}{2\sigma^2}\right]$
• How to compute these on Excel:
  • A2 = 0.5 : value for which you want the distribution
  • A3 = 8 : value of parameter M
  • A4 = 2 : value of parameter Sigma
  • =NTNORMDIST((A2-A3)/A4,TRUE) : cumulative distribution function for the terms above
  • =NTNORMDIST((A2-A3)/A4,FALSE) : probability density function for the terms above
• Function reference : NTNORMDIST
• The NtRand function NTNORMDIST is the same as the Excel function NORMSDIST when the 2nd argument is TRUE.

### Quantile

• Inverse function of the cumulative distribution function $F^{-1}(P)=\sigma\Phi^{-1}(P)+m$ where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution.
• NORMSINV is an Excel function.
• How to compute this on Excel:
  • A2 = 0.7 : probability associated with the distribution
  • A3 = 1.7 : value of parameter M
  • A4 = 0.9 : value of parameter Sigma
  • =A4*NORMSINV(A2)+A3 : inverse of the cumulative distribution function for the terms above

## Characteristics

### Mean – Where is the “center” of the distribution? (Definition)

• Mean of the distribution is given as $m$.

### Standard Deviation – How wide does the distribution spread? (Definition)

• Standard deviation of the distribution is given as $\sigma$.

### Skewness – Which side is the distribution distorted into? (Definition)

• Skewness of the distribution is given as $0$.

### Kurtosis – Sharp or Dull, consequently Fat Tail or Thin Tail (Definition)

• Kurtosis of the distribution is given as $0$.

## Random Numbers

• How to generate random numbers on Excel:
  • A2 = 0.5 : value of parameter M
  • A3 = 0.5 : value of parameter Sigma
  • =A3*NTRANDNORM(100)+A2 : 100 Normal deviates based on the Mersenne-Twister algorithm for the parameters above

Note: The formula in the example must be entered as an array formula. After copying the example to a blank worksheet, select the range A5:A104 starting with the formula cell. Press F2, and then press CTRL+SHIFT+ENTER.

## NtRand Functions

• Generating random numbers based on Mersenne Twister algorithm: NTRANDNORM
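The three spreadsheet computations above have direct SciPy equivalents; a sketch (added for illustration, assuming scipy.stats is available):

```python
# Sketch: SciPy counterparts of the three Excel examples above.
from scipy.stats import norm

# CDF at x = 0.5 for m = 8, sigma = 2: matches NTNORMDIST((A2-A3)/A4, TRUE)
print(norm.cdf(0.5, loc=8, scale=2))
# Density of N(m, sigma) at x = 0.5. Note NTNORMDIST(z, FALSE) returns the
# *standard* normal density at z = (x-m)/sigma, i.e. sigma times this value.
print(norm.pdf(0.5, loc=8, scale=2))

# Quantile for P = 0.7, m = 1.7, sigma = 0.9: matches A4*NORMSINV(A2)+A3
print(norm.ppf(0.7, loc=1.7, scale=0.9))

# Random deviates for m = 0.5, sigma = 0.5 (cf. A3*NTRANDNORM(100)+A2)
print(norm.rvs(loc=0.5, scale=0.5, size=5))
```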
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9311434626579285, "perplexity": 3766.7090577766717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607862.71/warc/CC-MAIN-20170524211702-20170524231702-00614.warc.gz"}
http://www.oalib.com/search?kw=Robert%20Main&searchField=authors
Search Results: 1 - 10 of 27598 matches for "Robert Main" (all listed articles are free for downloading, OA Articles)

BMC Research Notes, 2008, DOI: 10.1186/1756-0500-1-46 Abstract: HIV-HCV coinfected patients were retrospectively identified from the HIV clinic. ART was classified as none, nucleoside reverse transcriptase inhibitors (NRTIs) only, highly active antiretroviral therapy (HAART) only, and sequential therapy (initial NRTIs followed by HAART). Fibrosis stage and necroinflammation grade were assessed by the modified HAI (Ishak) scoring method. Steatosis was graded as 0–3. Sixty patients were identified. The overall prevalence of hepatic steatosis was 58%. Those that received HAART only had a lower prevalence of steatosis (41%) compared to those on NRTIs only (70%) or sequential therapy (82%). Independent predictors of hepatic steatosis were absence of HAART only therapy, OR 2.9, p = 0.09, and presence of cirrhosis, OR 4.6, p = 0.044. Forty-five percent of the patients had advanced fibrosis (fibrosis stage ≥ 3). NI grade (OR 1.9, p = 0.030), and steatosis grade (OR 3.6, p = 0.045), were independent predictors of advanced fibrosis. Hepatic steatosis is associated with more advanced hepatic fibrosis in the HIV-HCV coinfected population. HAART only therapy (rather than NRTIs only or sequential therapy) appears to be associated with a lower prevalence of hepatic steatosis. This may be one of the mechanisms by which HAART could attenuate hepatic fibrosis in such a cohort. Highly active antiretroviral therapy (HAART) has significantly improved survival in patients with human immunodeficiency virus (HIV) infection [1]. Increasing attention is now being focused on coinfection with other viruses like hepatitis C (HCV). Because of similar routes of transmission, approximately 25–30% of patients with HIV are also coinfected with HCV [2]. Factors associated with more advanced hepatic fibrosis in HCV infection include HIV coinfection [3] and hepatic steatosis (prevalence of 47%–79%) [4-6]. Patients with HIV are also at increased risk of developing hepatic steatosis due to multiple factors including antiretroviral therapy (ART), obesity, hyperglycemia,

Physics, 2015, Abstract: We derive X-ray mass, luminosity, and temperature profiles for 45 galaxy clusters to explore relationships between halo mass, AGN feedback, and central cooling time. We find that radio-mechanical feedback power (referred to here as "AGN power") in central cluster galaxies correlates with halo mass, but only in halos with central atmospheric cooling times shorter than 1 Gyr. This timescale corresponds approximately to the cooling time (entropy) threshold for the onset of cooling instabilities and star formation in central galaxies (Rafferty et al. 2008). No correlation is found in systems with central cooling times greater than 1 Gyr. The trend with halo mass is consistent with self-similar scaling relations assuming cooling is regulated by feedback. The trend is also consistent with galaxy and central black hole co-evolution along the $M_{BH} - \sigma$ relation. AGN power further correlates with X-ray gas mass and the host galaxy's K-band luminosity.
AGN power in clusters with central atmospheric cooling times longer than ~1 Gyr typically lies two orders of magnitude below those with shorter central cooling times. Galaxies centred in clusters with long central cooling times nevertheless experience ongoing and occasionally powerful AGN outbursts. We further investigate the impact of feedback on cluster scaling relations. We find L-T, and M-T relations, excluding regions directly affected by AGN, that are consistent with the cluster population as a whole. While the gas mass rises, the stellar mass remains nearly constant with rising total mass, consistent with earlier studies. This trend is found regardless of central cooling time, implying tight regulation of star formation in central galaxies as their halos grew, and long-term balance between AGN heating and atmospheric cooling. Our scaling relations are presented in forms that can be incorporated easily into galaxy evolution models.

Philippe Mainçon, Mathematical Problems in Engineering, 2011, DOI: 10.1155/2011/414702 Abstract: Slender structures immersed in a cross flow can experience vibrations induced by vortex shedding (VIV), which cause fatigue damage and other problems. VIV models that are used in structural design today tend to assume harmonic oscillations in some way or other. A time domain model would make it possible to capture the chaotic nature of VIV and to model interactions with other loads and nonlinearities. Such a model was developed in the present work: for each cross section, recent velocity history is compressed using Laguerre polynomials. The compressed information is used to enter an interpolation function to predict the instantaneous force, allowing the dynamic analysis to be stepped forward. An offshore riser was modeled in this way: some analyses provided an unusually fine level of realism, while in other analyses, the riser fell into an unphysical pattern of vibration. It is concluded that the concept is promising, yet that more work is needed to understand orbit stability and related issues, in order to produce an engineering tool.

Eric Main, Journal of Educational Media & Library Sciences, 2004, Abstract: As internet-based technologies increasingly colonize learning environments in higher education, they allow purposes contrary to learning to have direct access to students. The internet as a governing metaphor for transparent connectivity and equal access is a red herring because the power relations across the connections are unequal. The internet also functions as a mechanism for the operant conditioning of students by commercial interests and for surveillance and control by political authorities, purposes which can, if not restrained, undermine the intentions of teachers using technology. Teachers should resist fully automating their course management, especially grading and assessment, because too much mechanization can only produce reductive thinking. A related trend is the gradual replacement of liberal studies by vocational courses that feature technology as the subject. This cooperates with the aforementioned trend to effectively censor the creative and critical thinking that instructors strive to teach.

J. Main, Physics, 1999, DOI: 10.1016/S0370-1573(98)00131-8 Abstract: Harmonic inversion is introduced as a powerful tool for both the analysis of quantum spectra and semiclassical periodic orbit quantization.
The method makes it possible to circumvent the uncertainty principle of the conventional Fourier transform and to extract dynamical information from quantum spectra which was unattainable before, such as bifurcations of orbits, the uncovering of hidden "ghost" orbits in complex phase space, and the direct observation of symmetry breaking effects. The method also solves the fundamental convergence problems in semiclassical periodic orbit theories - for both the Berry-Tabor formula and Gutzwiller's trace formula - and can therefore be applied as a novel technique for periodic orbit quantization, i.e., to calculate semiclassical eigenenergies from a finite set of classical periodic orbits. The advantage of periodic orbit quantization by harmonic inversion is the universality and wide applicability of the method, which will be demonstrated in this work for various open and bound systems with underlying regular, chaotic, and even mixed classical dynamics. The efficiency of the method is increased, i.e., the number of orbits required for periodic orbit quantization is reduced, when the harmonic inversion technique is generalized to the analysis of cross-correlated periodic orbit sums. The method provides not only the eigenenergies and resonances of systems but also allows the semiclassical calculation of diagonal matrix elements and, e.g., for atoms in external fields, individual non-diagonal transition strengths. Furthermore, it is possible to include higher-order terms of the hbar-expanded periodic orbit sum to obtain semiclassical spectra beyond the Gutzwiller and Berry-Tabor approximation.

J. Main, Physics, 1999, DOI: 10.1103/PhysRevA.60.1726 Abstract: In a recent paper Robicheaux and Shaw [Phys. Rev. A 58, 1043 (1998)] calculate the recurrence spectra of atoms in electric fields with non-vanishing angular momentum L_z not equal to 0: "Features are observed at scaled actions an order of magnitude shorter than for any classical closed orbit of this system." We investigate the transition from zero to nonzero angular momentum and demonstrate the existence of short closed orbits with L_z not equal to 0. The real and complex "ghost" orbits are created in bifurcations of the "uphill" and "downhill" orbit along the electric field axis, and can serve to interpret the observed features in the quantum recurrence spectra.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8382234573364258, "perplexity": 1863.115502904442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00342.warc.gz"}
http://mathhelpforum.com/algebra/122104-negative-root.html
1. ## negative root?

Why is it that when you take the square root of a number you get a +/-? And why is it that when you evaluate $-4^2$ you get $-16$ and not $16$? How can you tell whether it is +/- in the second question?

2. Hi there

$-4^2=-(4)(4)=-16$ but $(-4)^2=(-4)(-4)=16$

You get a $\pm$ when you take the square root of a number because both the positive and negative square roots could be the factors: e.g. $4\times4=16$ and $(-4)\times(-4)=16$

3. So, if it is not in parentheses, I assume it is $-(4)(4)$? Why is it then that in some cases $-4^2$ is taken to be $16$?

4. $-(4)^2=-(-4)^2=-16$

$(4)^2=16,\ -(4^2)=-16$

$(-4)^2=16,\ -(-4)^2=-16$

$4(4)=4+4+4+4=16$

$-4(4)=-4-4-4-4=-16$

$Hence,\ negative\ by\ positive\ means\ successive\ subtraction.$

$(-4)(-4)=-[4(-4)]=\ opposite\ of\ subtract\ 4\ four\ times,$

$which\ is\ add\ 4\ four\ times.\ Add\ and\ subtract\ are\ opposites.$

$This\ is\ why\ negative\ by\ negative\ is\ really\ positive\ by\ positive.$

$To\ write\ -4^2\ is\ ambiguous.$

$We\ must\ distinguish\ between\ -[(4)^2]\ and\ [(-4)^2]$

5. Originally Posted by Barthayn
So, if it is not in parentheses, I assume it is $-(4)(4)$? Why is it then that in some cases $-4^2$ is taken to be $16$?
In my studies $-4^2$ has always been taken to equal $-16$. Maybe sometimes people get lazy and leave out the parentheses, or don't see a distinction between the two...
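Python's grammar happens to follow the same convention, so it makes a handy sanity check (a small sketch, added for illustration):

```python
# Sketch: unary minus binds more loosely than exponentiation in Python,
# mirroring the convention -4^2 = -(4^2) discussed above.
print(-4**2)     # -16, parsed as -(4**2)
print((-4)**2)   # 16

import math
print(math.sqrt(16))   # 4.0: sqrt returns the principal (non-negative) root;
                       # the +/- appears when *solving* x**2 == 16
```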
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8880199193954468, "perplexity": 417.3368822223579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425737.60/warc/CC-MAIN-20170726002333-20170726022333-00140.warc.gz"}
http://mathhelpforum.com/algebra/126792-find-value-x-y-satisfying-given-equation.html
# Math Help - Find the value of x and y satisfying the given equation: 1. ## Find the value of x and y satisfying the given equation: x^2 + y^2 = 4 and x/2 + y =1 2. Hi Substitute x = 2-2y into equation x²+y²=4
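Following the substitution hint, a short SymPy sketch (added for illustration) produces both intersection points:

```python
# Sketch: solve x^2 + y^2 = 4 together with x/2 + y = 1.
import sympy as sp

x, y = sp.symbols('x y')
print(sp.solve([x**2 + y**2 - 4, x/2 + y - 1], [x, y]))
# -> [(-6/5, 8/5), (2, 0)]
```

Substituting x = 2-2y indeed gives 5y² - 8y = 0, so y = 0 or y = 8/5, matching the two solutions printed above.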
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160489201545715, "perplexity": 1101.9804086101926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829754.11/warc/CC-MAIN-20140820021349-00262-ip-10-180-136-8.ec2.internal.warc.gz"}
https://asmedigitalcollection.asme.org/appliedmechanics/article-abstract/52/3/559/390787/Transient-Loads-Analysis-by-Dynamic-Condensation?redirectedFrom=fulltext
This paper investigates an alternative finite-difference time-integration procedure in structural dynamics. The integral form of the equation of motion, instead of the differential form, is used to develop the finite-difference time-integration method. An expression called dynamic condensation is presented to relate the dynamic response and unknown force at a limited number of degrees of freedom. A stable and accurate method of analysis is derived to determine the transient loads of structural components in a structural system without constructing the system equation of motion.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8098016977310181, "perplexity": 355.4477293601299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330962.67/warc/CC-MAIN-20190826022215-20190826044215-00285.warc.gz"}
http://mathoverflow.net/questions/64129/how-to-study-the-size-of-basins-of-attraction-in-a-discrete-dynamical-system
# How to study the size of basins of attraction in a discrete dynamical system?

I have a discrete dynamical system in $[0,1]^n$. Specifically, I am studying the dynamics of a probability distribution under a certain operator $\phi$ such that $\mathbf{q}[t+1]=\phi(\mathbf{q}[t])$. The probability distribution is specified by $n$ parameters in the $n$-dimensional vector $\mathbf{q}$. What I need to know is whether a specific starting point $\mathbf{q}_0$ is in the basin of attraction of another specific point $\mathbf{q}'$. The only way that I have to check this is by running a computer program. I would like to know how to analyze this type of global behavior of the system mathematically. More generally, studying the size of the basin of attraction of the point $\mathbf{q}'$ would be very important. What type of mathematical tools are needed to study these kinds of issues?

-
In essence, the systems are formed by products among the parameters $q_i \in \mathbf{q}$ and summations. I am not sure if this is relevant. In any case I have to study Lyapunov functions. –  Navcar May 6 '11 at 19:27 Recipe: Find an equilibrium point. Linearize the dynamical system around it. Find a quadratic Lyapunov function for the linearized system. Compute its rate of change along the trajectories of the original nonlinear system. A compact region where the rate of change is negative is inside the basin of attraction of the equilibrium point. Many things can go wrong. Maybe the linearization is not asymptotically stable. Maybe it is, but the basin of attraction computed is too small to be useful. Then you need to search for Lyapunov functions that are not quadratic. That is a harder task, and doesn't lend itself to simple recipes as above. But is is a beginning. This is called "Lyapunov's indirect method" to study stability, because it is based on linearization. It is described in many dynamical systems and nonlinear controls textbooks. - Thank you for the recipe! It contains a lot of information in few words. Certainly, I need to think carefully about this. –  Navcar May 6 '11 at 19:49 The long version appears in many control theory texts. Let me know if you'd like pointers. –  Pait May 8 '11 at 22:49 I do not know if it could be possible to find a reference with a simple example of the procedure that you describe. Thanks! –  Navcar May 12 '11 at 13:14
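To complement the analytic recipes in the answers, basin size can also be estimated by brute force: grid the cube, iterate, and count. A Python sketch follows (the map below is a made-up stand-in built from products and sums of the $q_i$, since the actual $\phi$ is not specified in the question):

```python
# Sketch: estimate the size of a basin of attraction by gridding [0,1]^2,
# iterating the map, and counting convergence to a target fixed point.
import numpy as np

def phi(q):
    # Illustrative stand-in map on [0,1]^2; replace with the operator
    # under study. It maps the square into itself and is contracting here.
    return np.array([0.5 * q[0] * q[1] + 0.25, 0.5 * q[1]**2 + 0.3])

q = np.array([0.5, 0.5])
for _ in range(2000):          # locate a fixed point by straight iteration
    q = phi(q)
q_fixed = q

hits = total = 0
for q0x in np.linspace(0, 1, 41):
    for q0y in np.linspace(0, 1, 41):
        q = np.array([q0x, q0y])
        for _ in range(500):
            q = phi(q)
        total += 1
        hits += np.linalg.norm(q - q_fixed) < 1e-6
print(hits / total)            # fraction of the grid lying in the basin
```

For this contracting toy map the whole square converges, so the printed fraction is 1.0; a map with competing attractors would give a fraction strictly between 0 and 1.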
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959832191467285, "perplexity": 103.16063463640519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647778.29/warc/CC-MAIN-20141024030047-00105-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.semanticscholar.org/paper/TOPOLOGY-ON-COHOMOLOGY-OF-LOCAL-FIELDS-Cesnavicius/21f9f2f58cb2a46c22dcbd9aec8061a0932e4c2a
# TOPOLOGY ON COHOMOLOGY OF LOCAL FIELDS @article{Cesnavicius2015TOPOLOGYOC, title={TOPOLOGY ON COHOMOLOGY OF LOCAL FIELDS}, author={Kestutis Cesnavicius}, journal={Forum of Mathematics, Sigma}, year={2015}, volume={3} } Arithmetic duality theorems over a local field $k$ are delicate to prove if $\text{char}\,k>0$. In this case, the proofs often exploit topologies carried by the cohomology groups $H^{n}(k,G)$ for commutative finite type $k$-group schemes $G$. These ‘Čech topologies’, defined using Čech cohomology, are impractical due to the lack of proofs of their basic properties, such as continuity of connecting maps in long exact sequences. We propose another way to topologize $H^{n}(k,G)$: in the key case…
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9601107239723206, "perplexity": 1942.5364008069268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622113.11/warc/CC-MAIN-20210625054501-20210625084501-00380.warc.gz"}
https://www.mytech-info.com/2016/06/common-emitter-configuration.html
# Common Emitter Configuration

Consider an n-p-n transistor in common emitter configuration. In this configuration the base current IB is the input current and the collector current IC is the output current. We know

IE = IC + IB and IC = αIE + ICBO = α(IC + IB) + ICBO

or IC (1-α) = αIB + ICBO

IC = [α/(1-α)]IB + ICBO/(1-α)

#### Current Amplification Factor

The ratio of the collector current to the base current is called the dc forward current transfer ratio or the dc current gain. It is designated by βdc and is also called the dc beta.

Common emitter dc current gain βdc = IC/IB

Where IC and IB are the collector and base currents at a particular operating point in the linear region. As the base current is low (in microamperes), the value of βdc lies in the range of 10 to 500 depending on the type of the transistor.

The common emitter ac current gain βac is defined as the ratio of a small change in collector current ΔIC to the corresponding small change in base current ΔIB at a constant collector to emitter voltage VCE.

βac = ΔIC/ΔIB    at  VCE = constant

#### Relation between αdc and βdc

We know that                IE = IC + IB  ----> i

Dividing by IC: IE/IC = 1 + (IB/IC)

1/α = 1 + (1/β)  =  (β+1)/β                  [∵ α = IC/IE and β = IC/IB]

α = β/(β+1)  ------>  ii

Cross multiplying the above equation, we have α(β+1) = β, so αβ + α = β and α = β – αβ = β(1 – α)

β = α/(1–α)  -------> iii

From equation (ii), 1–α = 1 – [β/(β+1)] = 1/(β+1)  ------> iv

From equation (iii), it is seen that as α approaches unity, β approaches infinity. That is, the current gain of a transistor in common emitter configuration is very high. It is because of this reason that transistors are used in common emitter configuration.

We know that IC = [αIB/(1-α)] + ICBO/(1-α)   ----->v

Since β = α/(1-α) and β+1 = 1/(1-α), we have IC = βIB + (β+1)ICBO  ------->vi

The term "(β+1)ICBO" is the reverse leakage current in common emitter configuration. It is denoted by ICEO.

ICEO = (β+1)ICBO   ------>vii

Since β is much greater than 1, ICEO >> ICBO. Substituting the value of ICEO in equation vi, we get

IC = βIB + ICEO    ------->viii

Since β+1 = 1/(1-α), we can also write ICEO = ICBO/(1-α)  ------->ix

• The leakage current in common emitter configuration is larger than that of common base configuration. ICEO is the collector current which flows when the base-emitter circuit is left open and the collector-base junction is reverse biased. It is in the same direction as the normal collector current through the transistor and is temperature sensitive.

## Characteristic of Common Emitter Configuration

The figure above shows the experimental setup for determining the static characteristics of an n-p-n transistor used in a common emitter configuration circuit. Two variable dc regulated power supplies VBB and VCC are connected to the base and collector terminals of the transistor.

• A microammeter and a voltmeter are connected to measure the base current IB and VBE, and a milliammeter and a voltmeter are connected to measure IC and VCE in the circuit.

#### Input Characteristic (VBE vs IB) at VCE = Constant

The input characteristic curves are obtained by plotting base-emitter voltage VBE vs base current IB keeping VCE constant. The characteristic curves are plotted for various values of collector to emitter voltage VCE. From the input characteristic we observe the following important points:

• There exists a threshold or cut-in voltage Vγ below which the base current IB is very small. 
The value of the cut-in voltage is 0.5 V for Si and 0.1 V for Ge transistors.

• After the cut-in voltage the base current IB increases rapidly with a small increase in base-emitter voltage VBE. However, it may be noted that the base current does not increase as rapidly as in the input characteristic of a common base configuration. This means the dynamic input resistance is small in common emitter configuration, though a little higher than in the CB configuration.

• For a fixed value of VBE, IB increases as VCE is decreased. A large value of VCE results in a larger reverse bias at the collector-base PN junction. This increases the depletion region and reduces the effective width of the base, which in turn reduces the base current.

• The dynamic or ac input resistance can be determined from the input characteristic curve. It is defined as the ratio of a small change in the base-to-emitter voltage to the resulting change in the base current at a constant collector-to-emitter voltage.

Input resistance Rin = [ΔVBE/ΔIB]      at VCE = constant

The value of Rin is typically 1 kΩ, but can range from 600 Ω to 4 kΩ.

#### Output Characteristics (VCE vs IC) at IB = Constant

• The output characteristic curves of the common emitter configuration are obtained by plotting VCE vs. IC for different values of IB. The collector current varies with VCE for values between 0 V and 1 V; after this the collector current IC becomes almost constant and reaches a saturation value. Transistors are operated in the region above this knee voltage, which is called the active region. The experiment is repeated for different values of IB.

• The output characteristic curves may be divided into three regions. They are 1. Saturation region 2. Cut-off region 3. Active region

• In the above figure the active region is the area to the right of the ordinate VCE = a few tenths of a volt and above IB = 0.

• Ideally, when VCE exceeds 0.7 V, the base-collector junction becomes reverse biased and the transistor goes into the active or linear region of its operation. Once the base-collector junction is reverse biased, IC levels off and remains almost constant for a given value of IB as VCE continues to increase. Actually IC increases very slightly as VCE increases, due to widening of the base-collector depletion region. This phenomenon is called the Early effect.

• When the base current IB is zero, a small collector current exists. This is called leakage current. However, for all practical purposes, the collector current is zero when the base current is zero. Under this condition the transistor is said to be cut off. The small collector current is called the collector cut-off current.

• When VCE reaches a sufficiently high voltage, the reverse biased collector junction goes into breakdown and therefore the collector current increases rapidly. If VCE is greater than 40 V, the collector diode breaks down and normal transistor action is lost. The transistor is not intended to operate in the breakdown region. This effect is commonly known as the punch-through effect.

• From the output characteristics, the dynamic output resistance can be determined. It is given by

Dynamic output resistance Rout = [ΔVCE/ΔIC]      at IB = constant

• The reciprocal of the slope of the output characteristic in the active region gives the output resistance. The value of Rout ranges from 10 kΩ to 50 kΩ. 
• The output characteristic may be used to determine the small-signal common emitter current gain or ac beta (βac) of a transistor. This can be done by selecting two points M and N on the characteristic and noting the corresponding values of ΔIC and ΔIB: βac = ΔIC/ΔIB = 10 mA / 40 μA = 250.
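As a quick numerical cross-check of the relations derived above, here is a small sketch; the values of α and ICBO are illustrative assumptions, not measurements from the article.

```python
# Numeric check of the alpha/beta relations and the CE leakage current.
# alpha and ICBO below are assumed example values, not data from the article.
alpha = 0.99                      # common-base dc current gain
ICBO = 10e-9                      # collector-base leakage current, amperes
beta = alpha / (1 - alpha)        # eq. (iii): beta = 99
ICEO = (beta + 1) * ICBO          # eq. (vii): equals ICBO/(1 - alpha) = 1e-6 A
IB = 40e-6                        # base current of 40 microamperes
IC = beta * IB + ICEO             # eq. (viii)
print(beta, ICEO, IC)             # 99.0, 1e-06, ~3.961e-03
```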
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8246040940284729, "perplexity": 3220.116438927411}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104512702.80/warc/CC-MAIN-20220705022909-20220705052909-00797.warc.gz"}
https://mathoverflow.net/questions/15795/the-infinitesimal-topos-in-positive-characteristic
# The Infinitesimal topos in positive characteristic This question was inspired by and is somewhat related to this question. In his article "Crystals and the de Rham cohomology of schemes" in the collection "Dix exposes sur la cohomologie des schemas" Grothendieck defines the (small) infinitesimal site of an $S$-scheme $X$ using thickenings of usual opens. He then proceeds to prove that in characteristic $0$ the cohomology with coefficients in $\mathcal{O}_{X}$ computes the algebraic de Rham cohomology of the underlying scheme. This is remarkable, because the definition of the site does not use differential forms and it is not necessary for $X/S$ to be smooth. This fails in positive characteristic, and as a remedy, Grothendieck suggests adding the additional data of divided power structures to the site, which he then calls the "crystalline site of $X/S$". This site then has good cohomological behaviour (e.g. if $X$ is liftable to char. $0$, then cohomology computed with the crystalline topos is what it "should be"). The theory of the crystalline topos was of course worked out very successfully by Pierre Berthelot. My question is: Even though the infinitesimal site is in some sense not nicely behaved in positive characteristic, have people continued to study it in this context? What kind of results have been obtained, and has it still been useful? I'm particularly interested in results about $D$-modules in positive characteristic (i.e. crystals in the infinitesimal site if $X/S$ is smooth), but I am also curious to see in which other directions progress has been made. - There is a paper of Ogus from 1975, "The cohomology of the infinitesimal site", in which he shows that if $X$ is proper over an algebraically closed field $k$ of char. $p$, and embeds into a smooth scheme over $k$, then the infinitesimal cohomology of $X$ coincides with etale cohomology with coefficients in $k$ (or more generally $W_n(k)$ if we work with the infinitesimal site of $X$ over $W_n(k)$). Note that, since $k$ has char. $p$, we are talking about etale cohomology with mod $p^n$ coefficients, so this is "smaller" than the usual etale cohomology; it just picks up the "unit roots" of Frobenius. So the infinitesimal cohomology gives the unit root part of the crystalline cohomology. Dear BZ, Since pulling back via Frobenius takes one from $\mathcal D^{(m)}$-modules to $\mathcal D^{(m+1)}$-modules, and the structure sheaf pulls back to itself, does this mean that the answer is always the same for finite $m$ (i.e. always the crystalline cohomology)? –  Emerton Feb 19 '10 at 15:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040361046791077, "perplexity": 313.75793340837515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096061.71/warc/CC-MAIN-20150627031816-00236-ip-10-179-60-89.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/101208/how-does-one-prove-that-the-spectral-norm-is-less-than-or-equal-to-the-frobenius
# How does one prove that the spectral norm is less than or equal to the Frobenius norm? How does one prove that the spectral norm is less than or equal to the Frobenius norm? The given definition for the spectral norm of $A$ is the square root of the largest eigenvalue of $A^*A$. I don't know how to use that. Is there another definition I could use? We also have $\displaystyle{\max\frac{\|Ax\|_p}{\|x\|_p}}$, but if $p=2$ it's the Frobenius norm, right? - ## 2 Answers Any matrix norm induced by a norm on your vector space (over $\mathbb{R}$ or $\mathbb{C}$) that also satisfies $\|A^{*}\|=\|A\|$ will be greater than or equal to the spectral norm. Let $\lambda$ denote the largest eigenvalue of $A^*A$ (so $\sqrt{\lambda}$ is the largest singular value of $A$) and $v$ the corresponding eigenvector. Let $\|A\|$ denote the matrix norm induced by a norm on the vector space: $$\|A\|^2=\|A^{*}\|\cdot\|A\|\geq\|A^{*}A\|=\max\frac{\|A^{*}Ax\|}{\|x\|}\geq\frac{\|A^{*}Av\|}{\|v\|}=\lambda$$ and so $\|A\|\geq\sqrt{\lambda}$. For the 2-norm you actually have equality, which you can show by singular value decomposition. We can take an orthonormal basis of eigenvectors for $A^*A$ (with respect to the usual scalar product that also induces the 2-norm). Denote this basis by $v_1,\ldots,v_n$ with eigenvalues $\lambda=\lambda_1,\ldots,\lambda_n$. For any vector $x=\sum x_i v_i$ we have $$\|Ax\|_2^2=\overline{x}^TA^{*}Ax=\sum\lambda_i |x_i|^2\leq \lambda \|x\|_2^2$$ So $\|A\|_2\leq \sqrt{\lambda}$, and both inequalities together show $\|A\|_2=\sqrt{\lambda}$. - I think that Michalis' proof only works for self-adjoint matrix norms. This happens to apply in the case of the Frobenius norm, but doesn't hold for all norms (consider the max column-sum norm, $\|A\|_1 = \displaystyle\max_{1\le j \le n} \displaystyle\sum_{i=1}^n |a_{ij}|$.) –  user17794 Jan 22 '12 at 2:32 (1) But the Frobenius norm is not induced by a norm on your vector space, is it? (2) In your proof that $\|A\|\geq \sqrt{\lambda}$ you assumed that $\|A\|=\|A^*\|$, which is not true for many norms. (E.g., see the example in Tim Duff's answer/comment.) –  Jonas Meyer Jan 22 '12 at 3:34 You are obviously right! –  Michalis Jan 22 '12 at 8:29 Thanks, $\|A\|=\|A^{*}\|$ is really an implicit assumption. Now that I read your comment I see that I had the wrong definition of the Frobenius norm! –  Michalis Jan 22 '12 at 8:30 @Jonas Meyer: Thanks :) Some people don't seem to think so unfortunately. –  Michalis Jan 22 '12 at 20:48 The Frobenius norm of $A$ is the square root of the sum of all of the eigenvalues (the trace) of $A^*A$, and since all eigenvalues of $A^*A$ are nonnegative, it follows that the largest eigenvalue is less than or equal to the sum of all of the eigenvalues. If $p=2$ it's the Frobenius norm, right? No, if $p=2$ that is another characterization of the spectral norm. A proof of this is sketched in Michalis's answer. -
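A quick numerical illustration of the inequality, and of the rank-one equality case, as a NumPy sketch (my addition, not part of the original thread):

```python
import numpy as np

# Random check that ||A||_2 <= ||A||_F: the spectral norm is the largest
# singular value, while the Frobenius norm is the square root of the sum
# of all squared singular values, so the inequality is immediate.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
spectral = np.linalg.norm(A, 2)          # largest singular value
frobenius = np.linalg.norm(A, 'fro')
print(spectral <= frobenius)             # True

# Equality holds exactly when A has rank one (a single nonzero singular value).
B = np.outer(rng.normal(size=5), rng.normal(size=5))
print(np.isclose(np.linalg.norm(B, 2), np.linalg.norm(B, 'fro')))  # True
```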
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9840508699417114, "perplexity": 180.14894642325328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997893881.91/warc/CC-MAIN-20140722025813-00011-ip-10-33-131-23.ec2.internal.warc.gz"}
https://stacks.math.columbia.edu/tag/0FBL
Lemma 42.53.3. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be a scheme locally of finite type over $S$. Let $\mathcal{N}$ be a virtual normal sheaf for a closed subscheme $Z$ of $X$. Suppose that we have a short exact sequence $0 \to \mathcal{N}' \to \mathcal{N} \to \mathcal{E} \to 0$ of finite locally free $\mathcal{O}_Z$-modules such that the given surjection $\sigma : \mathcal{N}^\vee \to \mathcal{C}_{Z/X}$ factors through a map $\sigma ' : (\mathcal{N}')^\vee \to \mathcal{C}_{Z/X}$. Then $c(Z \to X, \mathcal{N}) = c_{top}(\mathcal{E}) \circ c(Z \to X, \mathcal{N}')$ as bivariant classes. Proof. Denote $N' \to N$ the closed immersion of vector bundles corresponding to the surjection $\mathcal{N}^\vee \to (\mathcal{N}')^\vee$. Then we have closed immersions $C_Z X \to N' \to N$ Thus the desired relationship between the bivariant classes follows immediately from Lemma 42.43.2. $\square$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9946264624595642, "perplexity": 191.95109183671192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882051.19/warc/CC-MAIN-20200703122347-20200703152347-00415.warc.gz"}
http://nbviewer.jupyter.org/url/jakevdp.github.io/downloads/notebooks/FreqBayes3.ipynb
# Frequentism and Bayesianism III: Confidence, Credibility and why Frequentism and Science Don't Mix This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed. This post is part of a 5-part series: Part I Part II Part III Part IV Part V See also Frequentism and Bayesianism: A Python-driven Primer, a peer-reviewed article partially based on this content. In Douglas Adams' classic Hitchhiker's Guide to the Galaxy, hyper-intelligent pan-dimensional beings build a computer named Deep Thought in order to calculate "the Answer to the Ultimate Question of Life, the Universe, and Everything". After seven and a half million years spinning its hyper-dimensional gears, before an excited crowd, Deep Thought finally outputs the answer: 42 The disappointed technicians, who trained a lifetime for this moment, are stupefied. They probe Deep Thought for more information, and after some back-and-forth, the computer responds: "once you do know what the question actually is, you'll know what the answer means." An answer does you no good if you don't know the question. I find this story to be an apt metaphor for statistics as sometimes used in the scientific literature. When trying to estimate the value of an unknown parameter, the frequentist approach generally relies on a confidence interval (CI), while the Bayesian approach relies on a credible region (CR). While these concepts sound and look very similar, their subtle difference can be extremely important, as they answer essentially different questions. Like the poor souls hoping for enlightenment in Douglas Adams' universe, scientists often turn the crank of frequentism hoping for useful answers, but in the process overlook the fact that in science, frequentism is generally answering the wrong question. This is far from simple philosophical navel-gazing: as I'll show, it can have real consequences for the conclusions we draw from observed data. ## Confidence vs. Credibility In part I of this series, we discussed the basic philosophical difference between frequentism and Bayesianism: frequentists consider probability a measure of the frequency of (perhaps hypothetical) repeated events; Bayesians consider probability as a measure of the degree of certainty about values. As a result of this, speaking broadly, frequentists consider model parameters to be fixed and data to be random, while Bayesians consider model parameters to be random and data to be fixed. These philosophies fundamentally affect the way that each approach seeks bounds on the value of a model parameter. Because the differences here are subtle, I'll go right into a simple example to illustrate the difference between a frequentist confidence interval and a Bayesian credible region. ## Example 1: The Mean of a Gaussian Let's start by again examining an extremely simple problem; this is the same problem we saw in part I of this series: finding the mean of a Gaussian distribution. Previously we simply looked at the (frequentist) maximum likelihood and (Bayesian) maximum a posteriori estimates; here we'll extend this and look at confidence intervals and credible regions. Here is the problem: imagine you're observing a star that you assume has a constant brightness. Simplistically, we can think of this brightness as the number of photons reaching our telescope in one second. 
Any given measurement of this number will be subject to measurement errors: the source of those errors is not important right now, but let's assume the observations $x_i$ are drawn from a normal distribution about the true brightness value with a known standard deviation $\sigma_x$. Given a series of measurements, what are the 95% (i.e. $2\sigma$) limits that we would place on the brightness of the star? ### 1. The Frequentist Approach The frequentist approach to this problem is well-known, and is as follows: For any set of $N$ values $D = \{x_i\}_{i=1}^N$, an unbiased estimate of the mean $\mu$ of the distribution is given by $$\bar{x} = \frac{1}{N}\sum_{i=1}^N x_i$$ The sampling distribution describes the observed frequency of the estimate of the mean; by the central limit theorem we can show that the sampling distribution is normal; i.e. $$f(\bar{x}~|~\mu) \propto \exp\left[\frac{-(\bar{x} - \mu)^2}{2\sigma_\mu^2}\right]$$ where we've used the standard error of the mean, $$\sigma_\mu = \sigma_x / \sqrt{N}$$ The central limit theorem tells us that this is a reasonable approximation for any generating distribution if $N$ is large; if our generating distribution happens to be Gaussian, it also holds for $N$ as small as 2. Let's quickly check this empirically, by looking at $10^6$ samples of the mean of 5 numbers: In [20]: import numpy as np N = 5 Nsamp = 10 ** 6 sigma_x = 2 np.random.seed(0) x = np.random.normal(0, sigma_x, size=(Nsamp, N)) mu_samp = x.mean(1) sig_samp = sigma_x * N ** -0.5 print("{0:.3f} should equal {1:.3f}".format(np.std(mu_samp), sig_samp)) 0.894 should equal 0.894 It checks out: the standard deviation of the observed means is equal to $\sigma_x N^{-1/2}$, as expected. From this normal sampling distribution, we can quickly write the 95% confidence interval by recalling that two standard deviations is roughly equivalent to 95% of the area under the curve. So our confidence interval is $$CI_{\mu} = \left(\bar{x} - 2\sigma_\mu,~\bar{x} + 2\sigma_\mu\right)$$ Let's try this with a quick example: say we have three observations with an error (i.e. $\sigma_x$) of 10. What is our 95% confidence interval on the mean? We'll generate our observations assuming a true value of 100: In [21]: true_B = 100 sigma_x = 10 np.random.seed(1) D = np.random.normal(true_B, sigma_x, size=3) print(D) [ 116.24345364 93.88243586 94.71828248] Next let's create a function which will compute the confidence interval: In [22]: from scipy.special import erfinv def freq_CI_mu(D, sigma, frac=0.95): """Compute the confidence interval on the mean""" # we'll compute Nsigma from the desired percentage Nsigma = np.sqrt(2) * erfinv(frac) mu = D.mean() sigma_mu = sigma * D.size ** -0.5 return mu - Nsigma * sigma_mu, mu + Nsigma * sigma_mu print("95% Confidence Interval: [{0:.0f}, {1:.0f}]".format(*freq_CI_mu(D, 10))) 95% Confidence Interval: [90, 113] Note here that we've assumed $\sigma_x$ is a known quantity; this could also be estimated from the data along with $\mu$, but here we kept things simple for sake of example. ### 2. The Bayesian Approach The Bayesian approach starts from Bayes' rule for the posterior on $\mu$: $$P(\mu~|~D) = \frac{P(D~|~\mu)P(\mu)}{P(D)}$$ We'll use a flat prior on $\mu$ (i.e. 
$P(\mu) \propto 1$ over the region of interest) and use the likelihood $$P(D~|~\mu) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi\sigma_x^2}}\exp\left[\frac{-(\mu - x_i)^2}{2\sigma_x^2}\right]$$ Computing this product and manipulating the terms, it's straightforward to show that this gives $$P(\mu~|~D) \propto \exp\left[\frac{-(\mu - \bar{x})^2}{2\sigma_\mu^2}\right]$$ which is recognizable as a normal distribution with mean $\bar{x}$ and standard deviation $\sigma_\mu$. That is, the Bayesian posterior on $\mu$ in this case is exactly equal to the frequentist sampling distribution for $\mu$. From this posterior, we can compute the Bayesian credible region, which is the shortest interval that contains 95% of the probability. Here, it looks exactly like the frequentist confidence interval: $$CR_{\mu} = \left(\bar{x} - 2\sigma_\mu,~\bar{x} + 2\sigma_\mu\right)$$ For completeness, we'll also create a function to compute the Bayesian credible region: In [23]: def bayes_CR_mu(D, sigma, frac=0.95): """Compute the credible region on the mean""" Nsigma = np.sqrt(2) * erfinv(frac) mu = D.mean() sigma_mu = sigma * D.size ** -0.5 return mu - Nsigma * sigma_mu, mu + Nsigma * sigma_mu print("95% Credible Region: [{0:.0f}, {1:.0f}]".format(*bayes_CR_mu(D, 10))) 95% Credible Region: [90, 113] ### So What's the Difference? The above derivation is one reason why the frequentist confidence interval and the Bayesian credible region are so often confused. In many simple problems, they correspond exactly. But we must be clear that even though the two are numerically equivalent, their interpretation is very different. Recall that in Bayesianism, the probability distributions reflect our degree of belief. So when we computed the credible region above, it's equivalent to saying "Given our observed data, there is a 95% probability that the true value of $\mu$ falls within $CR_\mu$" - Bayesians In frequentism, on the other hand, $\mu$ is considered a fixed value and the data (and all quantities derived from the data, including the bounds of the confidence interval) are random variables. So the frequentist confidence interval is equivalent to saying "There is a 95% probability that when I compute $CI_\mu$ from data of this sort, the true mean will fall within $CI_\mu$." - Frequentists Note the difference: the Bayesian solution is a statement of probability about the parameter value given fixed bounds. The frequentist solution is a probability about the bounds given a fixed parameter value. This follows directly from the philosophical definitions of probability that the two approaches are based on. The difference is subtle, but, as I'll discuss below, it has drastic consequences. First, let's further clarify these notions by running some simulations to confirm the interpretation. #### Confirming the Bayesian Credible Region To confirm what the Bayesian credible region is claiming, we must do the following: 1. sample random $\mu$ values from the prior 2. sample random sets of points given each $\mu$ 3. select the sets of points which match our observed data 4. ask what fraction of these $\mu$ values are within the credible region we've constructed. In code, that looks like this: In [24]: # first define some quantities that we need Nsamples = int(2e7) N = len(D) sigma_x = 10 # if someone changes N, this could easily cause a memory error if N * Nsamples > 1E8: raise ValueError("Are you sure you want this many samples?") # eps tells us how close to D we need to be to consider # it a matching sample. 
The value encodes the tradeoff # between bias and variance of our simulation eps = 0.5 # Generate some mean values from the (flat) prior in a reasonable range np.random.seed(0) mu = 80 + 40 * np.random.random(Nsamples) # Generate data for each of these mean values x = np.random.normal(mu, sigma_x, (N, Nsamples)).T # find data which matches our "observed" data x.sort(1) D.sort() i = np.all(abs(x - D) < eps, 1) print("number of suitable samples: {0}".format(i.sum())) number of suitable samples: 528 In [25]: # Now we ask how many of these mu values fall in our credible region mu_good = mu[i] CR = bayes_CR_mu(D, 10) within_CR = (CR[0] < mu_good) & (mu_good < CR[1]) print("Fraction of means in Credible Region: {0:.3f}".format(within_CR.sum() * 1. / within_CR.size)) Fraction of means in Credible Region: 0.949 We see that, as predicted, roughly 95% of $\mu$ values with data matching ours lie in the Credible Region. The important thing to note here is which of the variables is random, and which are fixed. In the Bayesian approach, we compute a single credible region from our observed data, and we consider it in terms of multiple random draws of $\mu$. #### Confirming the frequentist Confidence Interval Confirmation of the interpretation of the frequentist confidence interval is a bit less involved. We do the following: 1. draw sets of values from the distribution defined by the single true value of $\mu$. 2. for each set of values, compute a new confidence interval. 3. determine what fraction of these confidence intervals contain $\mu$. In code, it looks like this: In [26]: # define some quantities we need N = len(D) Nsamples = int(1e4) mu = 100 sigma_x = 10 # Draw datasets from the true distribution np.random.seed(0) x = np.random.normal(mu, sigma_x, (Nsamples, N)) # Compute a confidence interval from each dataset CIs = np.array([freq_CI_mu(Di, sigma_x) for Di in x]) # find which confidence intervals contain the mean contains_mu = (CIs[:, 0] < mu) & (mu < CIs[:, 1]) print("Fraction of Confidence Intervals containing the mean: {0:.3f}".format(contains_mu.sum() * 1. / contains_mu.size)) Fraction of Confidence Intervals containing the mean: 0.951 We see that, as predicted, 95% of the confidence intervals contain the true value of $\mu$. Again, the important thing to note here is which of the variables is random. We use a single value of $\mu$, and consider it in relation to multiple confidence intervals constructed from multiple random data samples. ### Discussion We should remind ourselves again of the difference between the two types of constraints: • The Bayesian approach fixes the credible region, and guarantees 95% of possible values of $\mu$ will fall within it. • The frequentist approach fixes the parameter, and guarantees that 95% of possible confidence intervals will contain it. Comparing the frequentist confirmation and the Bayesian confirmation above, we see the distinctions which stem from the very definition of probability mentioned above: • Bayesianism treats parameters (e.g. $\mu$) as random variables, while frequentism treats parameters as fixed. • Bayesianism treats observed data (e.g. $D$) as fixed, while frequentism treats data as random variables. • Bayesianism treats its parameter constraints (e.g. $CR_\mu$) as fixed, while frequentism treats its constraints (e.g. $CI_\mu$) as random variables. In the above example, as in many simple problems, the confidence interval and the credibility region overlap exactly, so the distinction is not especially important. 
But scientific analysis is rarely this simple; next we'll consider an example in which the choice of approach makes a big difference. ## Example 2: Jaynes' Truncated Exponential For an example of a situation in which the frequentist confidence interval and the Bayesian credibility region do not overlap, I'm going to turn to an example given by E.T. Jaynes, a 20th century physicist who wrote extensively on statistical inference in Physics. In the fifth example of his Confidence Intervals vs. Bayesian Intervals (pdf), he considers a truncated exponential model. Here is the problem, in his words: A device will operate without failure for a time $\theta$ because of a protective chemical inhibitor injected into it; but at time $\theta$ the supply of the chemical is exhausted, and failures then commence, following the exponential failure law. It is not feasible to observe the depletion of this inhibitor directly; one can observe only the resulting failures. From data on actual failure times, estimate the time $\theta$ of guaranteed safe operation... Essentially, we have data $D$ drawn from the following model: $$p(x~|~\theta) = \left\{ \begin{array}{lll} \exp(\theta - x) &,& x > \theta\\ 0 &,& x < \theta \end{array} \right\}$$ where $p(x~|~\theta)$ gives the probability of failure at time $x$, given an inhibitor which lasts for a time $\theta$. Given some observed data $D = \{x_i\}$, we want to estimate $\theta$. Let's start by plotting this model for a particular value of $\theta$, so we can see what we're working with: In [27]: %matplotlib inline import numpy as np import matplotlib.pyplot as plt def p(x, theta): return (x > theta) * np.exp(theta - x) x = np.linspace(5, 18, 1000) plt.fill(x, p(x, 10), alpha=0.3) plt.ylim(0, 1.2) plt.xlabel('x') plt.ylabel('p(x)'); Imagine now that we've observed some data, $D = \{10, 12, 15\}$, and we want to infer the value of $\theta$ from this data. We'll explore four approaches to this below. ### 1. Common Sense Approach One general tip that I'd always recommend: in any problem, before computing anything, think about what you're computing and guess what a reasonable solution might be. We'll start with that here. Thinking about the problem, the hard cutoff in the probability distribution leads to one simple observation: $\theta$ must be smaller than the smallest observed value. This is immediately obvious on examination: the probability of seeing a value less than $\theta$ is zero. Thus, a model with $\theta$ greater than any observed value is impossible, assuming our model specification is correct. Our fundamental assumption in both Bayesianism and frequentism is that the model is correct, so in this case, we can immediately write our common sense condition: $$\theta < \min(D)$$ or, in the particular case of $D = \{10, 12, 15\}$, $$\theta < 10$$ Any reasonable constraint on $\theta$ given this data should meet this criterion. With this in mind, let's go on to some quantitative approaches based on Frequentism and Bayesianism. ### 2. Frequentist approach #1: Sampling Distribution via the Normal Approximation In the frequentist paradigm, we'd like to compute a confidence interval on the value of $\theta$. 
We can start by observing that the population mean is given by $$\begin{array}{ll} E(x) &= \int_0^\infty xp(x)dx\\ &= \theta + 1 \end{array}$$ So, using the sample mean as the point estimate of $E(x)$, we have an unbiased estimator for $\theta$ given by $$\hat{\theta} = \frac{1}{N} \sum_{i=1}^N x_i - 1$$ The exponential distribution has a standard deviation of 1, so in the limit of large $N$, we can use the standard error of the mean (as above) to show that the sampling distribution of $\hat{\theta}$ will approach normal with variance $\sigma^2 = 1 / N$. Given this, we can write our 95% (i.e. 2$\sigma$) confidence interval as $$CI_{\rm large~N} = \left(\hat{\theta} - 2 N^{-1/2},~\hat{\theta} + 2 N^{-1/2}\right)$$ Let's write a function which will compute this, and evaluate it for our data: In [28]: from scipy.special import erfinv def approx_CI(D, sig=0.95): """Approximate truncated exponential confidence interval""" # use erfinv to convert percentage to number of sigma Nsigma = np.sqrt(2) * erfinv(sig) D = np.asarray(D) N = D.size theta_hat = np.mean(D) - 1 return [theta_hat - Nsigma / np.sqrt(N), theta_hat + Nsigma / np.sqrt(N)] In [29]: D = [10, 12, 15] print("approximate CI: ({0:.1f}, {1:.1f})".format(*approx_CI(D))) approximate CI: (10.2, 12.5) We immediately see an issue. By our simple common sense argument, we've determined that it is impossible for $\theta$ to be greater than 10, yet the entirety of the 95% confidence interval is above this range! Perhaps this issue is due to the small sample size: the above computation is based on a large-$N$ approximation, and we have a relatively paltry $N = 3$. Maybe this will be improved if we do the more computationally intensive exact approach. Let's try it: ### 3. Frequentist approach #2: Exact Sampling Distribution Computing the confidence interval from the exact sampling distribution takes a bit more work. For small $N$, the normal approximation will not apply, and we must instead compute the confidence interval from the actual sampling distribution, which is the distribution of the mean of $N$ variables each distributed according to $p(\theta)$. The sum of random variables is distributed according to the convolution of the distributions for individual variables, so we can exploit the convolution theorem and use the method of characteristic functions to find the following sampling distribution for the sum of $N$ variables distributed according to our particular $p(x~|~\theta)$: $$f(\theta~|~D) \propto \left\{ \begin{array}{lll} z^{N - 1}\exp(-z) &,& z > 0\\ 0 &,& z < 0 \end{array} \right\} ;~ z = N(\hat{\theta} + 1 - \theta)$$ To compute the 95% confidence interval, we can start by computing the cumulative distribution: we integrate $f(\theta~|~D)$ from $0$ to $\theta$ (note that we are not actually integrating over the parameter $\theta$, but over the estimate of $\theta$. Frequentists cannot integrate over parameters). This integral is relatively painless if we make use of the expression for the incomplete gamma function: $$\Gamma(a, x) = \int_x^\infty t^{a - 1}e^{-t} dt$$ which looks strikingly similar to our $f(\theta)$. 
Using this to perform the integral, we find that the cumulative distribution is given by $$F(\theta~|~D) = \frac{1}{\Gamma(N)}\left[ \Gamma\left(N, \max[0, N(\hat{\theta} + 1 - \theta)]\right) - \Gamma\left(N,~N(\hat{\theta} + 1)\right)\right]$$ A contiguous 95% confidence interval $(\theta_1, \theta_2)$ satisfies the following equation: $$F(\theta_2~|~D) - F(\theta_1~|~D) = 0.95$$ There are in fact an infinite set of solutions to this; what we want is the shortest of these. We'll add the constraint that the probability density is equal at either side of the interval: $$f(\theta_2~|~D) = f(\theta_1~|~D)$$ (Jaynes claims that this criterion ensures the shortest possible interval, but I'm not sure how to prove that). Solving this system of two nonlinear equations will give us the desired confidence interval. Let's compute this numerically: In [30]: from scipy.special import gammaincc from scipy import optimize def exact_CI(D, frac=0.95): """Exact truncated exponential confidence interval""" D = np.asarray(D) N = D.size theta_hat = np.mean(D) - 1 def f(theta, D): z = theta_hat + 1 - theta return (z > 0) * z ** (N - 1) * np.exp(-N * z) def F(theta, D): return gammaincc(N, np.maximum(0, N * (theta_hat + 1 - theta))) - gammaincc(N, N * (theta_hat + 1)) def eqns(CI, D): """Equations which should be equal to zero""" theta1, theta2 = CI return (F(theta2, D) - F(theta1, D) - frac, f(theta2, D) - f(theta1, D)) guess = approx_CI(D, 0.68) # use 1-sigma interval as a guess result = optimize.root(eqns, guess, args=(D,)) if not result.success: print("warning: CI result did not converge!") return result.x As a sanity check, let's make sure that the exact and approximate confidence intervals match for a large number of points: In [31]: np.random.seed(0) Dlarge = 10 + np.random.random(500) print("approx: ({0:.3f}, {1:.3f})".format(*approx_CI(Dlarge))) print("exact: ({0:.3f}, {1:.3f})".format(*exact_CI(Dlarge))) approx: (9.409, 9.584) exact: (9.408, 9.584) As expected, the approximate solution is very close to the exact solution for large $N$, which gives us confidence that we're computing the right thing. In [32]: print("approximate CI: ({0:.1f}, {1:.1f})".format(*approx_CI(D))) print("exact CI: ({0:.1f}, {1:.1f})".format(*exact_CI(D))) approximate CI: (10.2, 12.5) exact CI: (10.2, 12.2) The exact confidence interval is slightly different than the approximate one, but still reflects the same problem: we know from common-sense reasoning that $\theta$ can't be greater than 10, yet the 95% confidence interval is entirely in this forbidden region! The confidence interval seems to be giving us unreliable results. We'll discuss this in more depth further below, but first let's see if Bayes can do better. ### 4. Bayesian Credibility Interval For the Bayesian solution, we start by writing Bayes' rule: $$p(\theta~|~D) = \frac{p(D~|~\theta)p(\theta)}{P(D)}$$ Using a constant prior $p(\theta)$, and with the likelihood $$p(D~|~\theta) = \prod_{i=1}^N p(x~|~\theta)$$ we find $$p(\theta~|~D) \propto \left\{ \begin{array}{lll} N\exp\left[N(\theta - \min(D))\right] &,& \theta < \min(D)\\ 0 &,& \theta > \min(D) \end{array} \right\}$$ where $\min(D)$ is the smallest value in the data $D$, which enters because of the truncation of $p(x~|~\theta)$. 
Because $p(\theta~|~D)$ increases exponentially up to the cutoff, the shortest 95% credibility interval $(\theta_1, \theta_2)$ will be given by $$\theta_2 = \min(D)$$ and $\theta_1$ given by the solution to the equation $$\int_{\theta_1}^{\theta_2} N\exp[N(\theta - \theta_2)]d\theta = f$$ this can be solved analytically by evaluating the integral, which gives $$\theta_1 = \theta_2 + \frac{\log(1 - f)}{N}$$ Let's write a function which computes this: In [33]: def bayes_CR(D, frac=0.95): """Bayesian Credibility Region""" D = np.asarray(D) N = float(D.size) theta2 = D.min() theta1 = theta2 + np.log(1. - frac) / N return theta1, theta2 Now that we have this Bayesian method, we can compare the results of the four methods: In [34]: print("common sense: theta < {0:.1f}".format(np.min(D))) print("frequentism (approx): 95% CI = ({0:.1f}, {1:.1f})".format(*approx_CI(D))) print("frequentism (exact): 95% CI = ({0:.1f}, {1:.1f})".format(*exact_CI(D))) print("Bayesian: 95% CR = ({0:.1f}, {1:.1f})".format(*bayes_CR(D))) common sense: theta < 10.0 frequentism (approx): 95% CI = (10.2, 12.5) frequentism (exact): 95% CI = (10.2, 12.2) Bayesian: 95% CR = (9.0, 10.0) What we find is that the Bayesian result agrees with our common sense, while the frequentist approach does not. The problem is that frequentism is answering the wrong question. I'll discuss this more below, but first let's do some simulations to make sure the CI and CR in this case are correct. ### Numerical Confirmation To try to quell any doubts about the math here, I want to repeat the exercise we did above and show that the confidence interval derived above is, in fact, correct. We'll use the same approach as before, assuming a "true" value for $\theta$ and sampling data from the associated distribution: In [35]: from scipy.stats import expon Nsamples = 1000 N = 3 theta = 10 np.random.seed(42) data = expon(theta).rvs((Nsamples, N)) CIs = np.array([exact_CI(Di) for Di in data]) # find which confidence intervals contain the mean contains_theta = (CIs[:, 0] < theta) & (theta < CIs[:, 1]) print("Fraction of Confidence Intervals containing theta: {0:.3f}".format(contains_theta.sum() * 1. / contains_theta.size)) Fraction of Confidence Intervals containing theta: 0.953 As is promised by frequentism, 95% of the computed confidence intervals contain the true value. The procedure we used to compute the confidence intervals is, in fact, correct: our data just happened to be among the 5% where the method breaks down. But here's the thing: we know from the data themselves that we are in the 5% where the CI fails. The fact that the standard frequentist confidence interval ignores this common-sense information should give you pause about blind reliance on the confidence interval for any nontrivial problem. For good measure, let's check that the Bayesian credible region also passes its test: In [38]: np.random.seed(42) N = int(1e7) eps = 0.1 theta = 9 + 2 * np.random.random(N) data = (theta + expon().rvs((3, N))).T data.sort(1) D.sort() i_good = np.all(abs(data - D) < eps, 1) print("Number of good samples: {0}".format(i_good.sum())) Number of good samples: 65 In [39]: theta_good = theta[i_good] theta1, theta2 = bayes_CR(D) within_CR = (theta1 < theta_good) & (theta_good < theta2) print("Fraction of thetas in Credible Region: {0:.3f}".format(within_CR.sum() * 1. / within_CR.size))
Fraction of thetas in Credible Region: 0.954 Again, we have confirmed that, as promised, ~95% of the suitable values of $\theta$ fall in the credible region we computed from our single observed sample. ## Frequentism Answers the Wrong Question We've shown that the frequentist approach in the second example is technically correct, but it disagrees with our common sense. What are we to take from this? Here's the crux of the problem: The frequentist confidence interval, while giving the correct answer, is usually answering the wrong question. And this wrong-question approach is the result of a probability definition which is fundamental to the frequentist paradigm! Recall the statements about confidence intervals and credible regions that I made above. From the Bayesians: "Given our observed data, there is a 95% probability that the true value of $\theta$ falls within the credible region" - Bayesians And from the frequentists: "There is a 95% probability that when I compute a confidence interval from data of this sort, the true value of $\theta$ will fall within it." - Frequentists Now think about what this means. Suppose you've measured three failure times of your device, and you want to estimate $\theta$. I would assert that "data of this sort" is not your primary concern: you should be concerned with what you can learn from those particular three observations, not the entire hypothetical space of observations like them. As we saw above, if you follow the frequentists in considering "data of this sort", you are in danger of arriving at an answer that tells you nothing meaningful about the particular data you have measured. Suppose you attempt to change the question and ask what the frequentist confidence interval can tell you given the particular data that you've observed. Here's what it has to say: "Given this observed data, the true value of $\theta$ is either in our confidence interval or it isn't" - Frequentists That's all the confidence interval means – and all it can mean! – for this particular data that you have observed. Really. I'm not making this up. You might notice that this is simply a tautology, and can be put more succinctly: "Given this observed data, I can put no constraint on the value of $\theta$" - Frequentists If you're interested in what your particular, observed data are telling you, frequentism is useless. #### Hold on... isn't that a bit harsh? This might be a harsh conclusion for some to swallow, but I want to emphasize that it is not simply a matter of opinion or ideology; it's an undeniable fact based on the very philosophical stance underlying frequentism and the very definition of the confidence interval. If what you're interested in are conclusions drawn from the particular data you observed, frequentism's standard answers (i.e. the confidence interval and the closely-related $p$-values) are entirely useless. Unfortunately, most people using frequentist principles in practice don't seem to realize this. I'd point out specific examples from the astronomical literature, but I'm not in the business of embarrassing people directly (and they're easy enough to find now that you know what to look for). Many scientists operate as if the confidence interval is a Bayesian credible region, but it demonstrably is not. This oversight can perhaps be forgiven for the statistical layperson, as even trained statisticians will often mistake the interpretation of the confidence interval. 
I think the reason this mistake is so common is that in many simple cases (as I showed in the first example above) the confidence interval and the credible region happen to coincide. Frequentism, in this case, correctly answers the question you ask, but only because of the happy accident that Bayesianism gives the same result for that problem. Now, I should point out that I am certainly not the first person to state things this way, or even this strongly. The Physicist E.T. Jaynes was known as an ardent defender of Bayesianism in science; one of my primary inspirations for this post was his 1976 paper, Confidence Intervals vs. Bayesian Intervals (pdf). More recently, statistician and blogger W.M. Briggs posted a diatribe on arXiv called It's Time To Stop Teaching Frequentism to Non-Statisticians which brings up this same point. It's in the same vein of argument that Savage, Cornfield, and other outspoken 20th-century Bayesian practitioners made throughout their writings, talks, and correspondence. So should you ever use confidence intervals at all? Perhaps in situations (such as analyzing gambling odds) where multiple data realizations are the reality, frequentism makes sense. But in most scientific applications where you're concerned with what one particular observed set of data is telling you, frequentism simply answers the wrong question. Edit, November 2014: to appease several commentors, I'll add a caveat here. The unbiased estimator $\bar{x}$ that we used above is just one of many possible estimators, and it can be argued that such estimators are not always the best choice. Had we used, say, the Maximum Likelihood estimator or a sufficient estimator like $\min(x)$, our initial misinterpretation of the confidence interval would not have been as obviously wrong, and may even have fooled us into thinking we were right. But this does not change our central argument, which involves the question frequentism asks. Regardless of the estimator, if we try to use frequentism to ask about parameter values given observed data, we are making a mistake. For some choices of estimator this mistaken interpretation may not be as manifestly apparent, but it is mistaken nonetheless. ## The Moral of the Story: Frequentism and Science Do Not Mix The moral of the story is that frequentism and Science do not mix. Let me say it directly: you should be suspicious of the use of frequentist confidence intervals and p-values in science. In a scientific setting, confidence intervals, and closely-related p-values, provide the correct answer to the wrong question. In particular, if you ever find someone stating or implying that a 95% confidence interval is 95% certain to contain a parameter of interest, do not trust their interpretation or their results. If you happen to be peer-reviewing the paper, reject it. Their data do not back up their conclusion. If you have made it this far, I thank you for reading! I hope that this exploration succeeded in clarifying that the philosophical assumptions of frequentism and Bayesianism lead to real, practical consequences in the analysis and interpretation of data. If this post leads just one researcher to stop using frequentist confidence intervals in their scientific research, I will consider it a success. This post was written entirely in the IPython notebook. You can download this notebook, or see a static view on nbviewer. Many thanks to David W. Hogg for some helpful critiques on an early version of this post.
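As a footnote to the November 2014 caveat, here is a small sketch (my own addition, not part of the original post) showing that a frequentist interval built on the sufficient statistic $\min(x)$ behaves much better for this problem. Since $\min(x) - \theta$ follows an Exponential distribution with rate $N$, the resulting interval happens to coincide numerically with the Bayesian credible region computed above.

```python
import numpy as np

def suff_CI(D, frac=0.95):
    """CI from the sufficient statistic min(x): since min(x) - theta is
    Exponential with rate N, P(min - theta < c) = 1 - exp(-N * c), so
    taking c = -log(1 - frac) / N gives coverage frac."""
    D = np.asarray(D)
    return D.min() + np.log(1 - frac) / D.size, D.min()

D = [10, 12, 15]
print("sufficient-statistic CI: ({0:.1f}, {1:.1f})".format(*suff_CI(D)))
# -> (9.0, 10.0), matching the Bayesian credible region above
```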
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9054281711578369, "perplexity": 1011.7102688652963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128322275.28/warc/CC-MAIN-20170628014207-20170628034207-00308.warc.gz"}
https://keplerlounge.com/number-theory/2021/06/14/erdos-bertrand.html
What follows is Erdős’ elegant proof of Bertrand’s postulate that builds upon two previous analyses I have shared on my blog: Given those analyses, we may use the following lemmas: ## Lemma 1: $$\forall n \in \mathbb{N}^*, {2n \choose n} \geq \frac{4^n}{2n+1}$$ ## Lemma 2: $$\forall p \in \mathbb{P}, \frac{2n}{3} < p \leq n \implies v_p\big({2n \choose n}\big) = 0$$ ## Lemma 3: $$v_p \big({2n \choose n}\big) \geq 1 \implies v_p \big({2n \choose n}\big) \leq \log_p (2n)$$ ## Lemma 4: $$e^{\vartheta(n)} \leq 4^n$$ ## Proof of Bertrand’s postulate: Bertrand’s postulate may be verified algorithmically for $$n \leq 467$$. For $$n \geq 468$$, we may proceed with a proof by contradiction. Let’s assume that there is no prime $$p, n < p \leq 2n$$. Given that $${2n \choose n}$$ has at most $$\sqrt{2n}$$ prime factors smaller than $$\sqrt{2n}$$, we may draw two inferences from Lemma 3: $$1.$$ The contribution of each of those factors to $${2n \choose n}$$ does not exceed $$2n$$. $$2.$$ For every prime $$p > \sqrt{2n}, v_p \big({2n \choose n}\big) \leq 1$$. Combining Lemmas 2, 3 and 4 we obtain: $${2n \choose n} \leq (2n)^{\sqrt{2n}} \cdot \prod_{\sqrt{2n} < p \leq 2n/3} p \leq (2n)^{\sqrt{2n}} \cdot \prod_{p=2}^{2n/3} p \leq (2n)^{\sqrt{2n}} \cdot 4^{2n/3}$$ Now, due to Lemma 1 this simplifies to: $$\frac{4^n}{2n+1} \leq (2n)^{\sqrt{2n}} \cdot 4^{2n/3} \implies 4^n \leq (2n+1) \cdot (2n)^{\sqrt{2n}} \cdot 4^{2n/3}$$ and it may be checked by computer that this last inequality fails for every $$n \geq 468$$, contradicting our assumption; hence a prime $$p$$ with $$n < p \leq 2n$$ must exist. ## References: 1. Aigner, Martin; Ziegler, Günter (2009). Proofs from THE BOOK (4th ed.). Berlin, New York: Springer-Verlag. 2. P. Erdős, Beweis eines Satzes von Tschebyschef, Acta Sci. Math. (Szeged) 5 (1930–1932)
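A short computational check of the two facts the proof leans on; this is a sketch assuming plain trial division is fast enough for this range (it is):

```python
from math import log, sqrt

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Direct verification of Bertrand's postulate for 1 <= n <= 467.
for n in range(1, 468):
    assert any(is_prime(p) for p in range(n + 1, 2 * n + 1)), n

# The final inequality 4^n <= (2n+1) * (2n)^sqrt(2n) * 4^(2n/3),
# compared via logarithms to avoid huge integers. It holds at n = 467
# and fails from n = 468 onward; the margin at the crossover is well
# above floating-point error.
def inequality_holds(n):
    lhs = n * log(4)
    rhs = log(2 * n + 1) + sqrt(2 * n) * log(2 * n) + (2 * n / 3) * log(4)
    return lhs <= rhs

print(inequality_holds(467), inequality_holds(468))   # True False
```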
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 6, "x-ck12": 0, "texerror": 0, "math_score": 0.9913933277130127, "perplexity": 857.8555509052024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153515.0/warc/CC-MAIN-20210727233849-20210728023849-00573.warc.gz"}
https://www.physicsforums.com/threads/work-done-on-a-spring-by-an-ideal-gas.673689/
# Homework Help: Work done on a spring by an ideal gas

1. Feb 22, 2013

### physninj

1. The problem statement, all variables and given/known data
Find the work done by an ideal gas on a Hooke's-law spring when heat is added until volume and pressure triple

2. Relevant equations
PV=nRT
Q=ncΔT
W=(1/2)kx^2
F=kx=P*A

3. The attempt at a solution
I took the work equation for the spring and subbed in P*A to get W=(1/2)(PA)Δx, and then I believe I can state that the work done on the spring is W=.5P2V2-.5P1V1

Would you agree with this? I skipped some of the process, so let me know if you need clarification.

Last edited: Feb 22, 2013

2. Feb 22, 2013

### Andrew Mason

I am not sure what the question is. Do we assume the spring is providing 0 pressure initially and during expansion the spring is compressed (or stretched)? How does the spring length change with the volume? It depends on how the spring is connected to the gas container.

AM

3. Feb 23, 2013

### physninj

It's pretty much a piston problem, where the spring is above the container of the gas connected to the lid and is initially at rest; then heat is added to the gas until it expands, working against the spring. It's mostly conjecture, I'm trying to anticipate one of the problems that is likely to be on my exam.

4. Feb 23, 2013

### Andrew Mason

The change in potential energy of the spring is equal to the work done on the spring by the gas. But in order to determine how much work is done by the gas on the spring, one has to know how much work is done on other bodies: e.g. the atmosphere; the piston. Can you give us the exact question?

AM

Last edited: Feb 23, 2013

5. Feb 23, 2013

### physninj

Yeah, for sure, I've attached one. Thanks AM for being patient and responding to my incomplete posts. I think I can solve it, I just need help with relating the mechanical work to the thermodynamic work.

Last edited: Feb 23, 2013

6. Feb 23, 2013

### ehild

Originally the gas has pressure Po and volume Vo. Assume slow heating, so the pressure of the gas is balanced by the atmospheric pressure and the pressure from the compressed spring. What is the pressure and the volume when the piston moves to the right by Δx, assuming spring constant k and cross-section area A of the cylinder? How is work calculated? How much work does the gas do while its volume increases to three times the original volume? When the volume is 3Vo the pressure is 3Po. What does that mean for the spring constant?

ehild

(Attached: springgas.JPG)

7. Feb 24, 2013

### physninj

Well, I think I'm starting to get there. I can state the following:

Wgas=PΔV=P*A*Δx=Wspring=-(1/2)k(Δx)^2

or PAΔx=(1/2)k(Δx)^2

I don't know where to go with this exactly; I know Δx is needed, right? Work is calculated as pressure multiplied by the change in volume, but how do I use that when both pressure and volume are tripled?

8. Feb 24, 2013

### ehild

The gas does work against the surrounding atmosphere, too. The work is ∫PdV from Vo to 3Vo. You need to express P in terms of V. What is the volume of the gas if the piston moves to the right by Δx? What is the pressure, balanced by the spring and the atmospheric pressure, if the spring is compressed by Δx?

ehild

9. Feb 24, 2013

### physninj

Hmm. I used the ideal gas equation to get the integral into one variable and get the work done on the surroundings as W=nRTln(V2/V1), so that must be the total work done by the gas in expanding.
The volume of the gas moved by the piston is V+(A*Δx), or V2 = 3V1.

I don't understand where any of this is getting me, I'm sorry. I don't mean to be high maintenance.

Is it just nRTln(V2/V1)=(1/2)kΔx^2 or what? That's the work done in the expansion of the gas, but it doesn't take into account the extra work the gas had to do to push the spring.

10. Feb 24, 2013

### ehild

The formula you use for work is for an isothermal process. Why should it be an isothermal expansion when both volume and pressure increase three times? Think of the Ideal Gas Law.

You need the work done on the spring, but for that you need both k and Δx. You know that V = Vo + AΔx. How is the pressure related to Δx? What force is exerted on the piston from the spring?

ehild

11. Feb 24, 2013

### physninj

I do not know what kind of process it is; that is the problem, it's not given. It's not isobaric because the pressure triples. It's not isochoric because the volume triples. It's not adiabatic because heat is added. I know this stuff, but nowhere in any of my thermo chapters is there a problem like this. So how am I supposed to calculate work?? Unfortunately this cryptic stuff is just leading me down nonsense paths... anyways, why would I need to deal with k if it's constant and given in the problem? And yeah, I need Δx. Not sure how to get it though. I keep trying to set things equal different ways but not really seeing any progress.

Pressure is related to work which is related to Δx, I think I established that, although I don't know if I did so correctly. And the force exerted by the spring is going to be F=kΔx

Last edited: Feb 24, 2013

12. Feb 24, 2013

### ehild

You wrote already that the work done on the spring is 0.5 kΔx^2. But neither k nor Δx are given. You can assume a quasi-static process where the total force of the gas exerted on the piston is equal to the external force. Yes, the spring exerts a kΔx force on the piston, but the outside atmosphere also exerts some force. Initially the pressure of the gas was equal to the atmospheric pressure.

ehild

13. Feb 24, 2013

### physninj

Oh wow, you're right, it is not provided. Goodness, I think it's time I went to sleep; it's 1:30 a.m. here. Thank you for your help this evening, I'll be back to trying to figure this out tomorrow.

14. Feb 24, 2013

### physninj

Ok, let me go ahead and try to do a force summation of the internal and external and see if I can get down to two equations, two unknowns. If anybody is willing to tell me what the bigger picture is, it would really help me out. I haven't had a clue what I've been working towards this whole time. Let me just show you the process I tried to use before; this guessing game is getting me lost I think. I suck at being led to water, I guess.

So W=(1/2)kx^2 and kx=F
W=(1/2)(F)*x=(1/2)(P*A)x
and since Δx is the change in height of the chamber I can state the following, letting x represent the height of the container:
W=(1/2)P*A*(x+Δx)-(1/2)P*A*(x)
W=(1/2)P2V2-(1/2)P1V1
And if it has to do work against the outside pressure as well, I'm not sure how to account for it. This could very well be wrong.

Here's the force summation I was talking about; I found it in this thread https://www.physicsforums.com/showthread.php?t=256682

Ftot=k*v+Fatm

which forms a linear equation going through both my known points. Can I then solve for k using the slope formula? I don't understand why the spring force would be k*V.

Last edited: Feb 24, 2013

15. Feb 24, 2013

### ehild

The spring force is not equal to PA. Initially, the gas was in equilibrium and the spring was unstretched.
The initial pressure was equal to the atmospheric pressure, Po. The piston moves to the right by Δx, and the spring gets shorter by Δx during the process. The piston exerts a kΔx force on the spring, and the spring exerts a kΔx force back on the piston. The atmospheric pressure means a force of PoA on the piston. So the external force on the piston is PoA + kΔx. The gas also exerts a force on the piston: it is PA. The piston is stationary at the end: the inward and outward forces balance. PoA + kΔx = P(final)A and P(final) = 3Po. You know that the final volume is Vf = Vi + AΔx, and Vf = 3Vi. From these two equations you get expressions for k and Δx in terms of A and the pressure Po and volume Vi. Use them to determine the spring energy.

ehild

Last edited: Feb 24, 2013

16. Feb 24, 2013

### physninj

I've got it!! Science be praised. Here's what I did.

Forces: PA=Patm*A+kx

Divide both sides by A to get a linear function for pressure in terms of volume:
P=Patm+(k/A)x

Then I can take the equation for volume and solve for x to eliminate it in the above equation:
V=V0+Ax
x=(V-V0)/A

So I have P=Patm+(k/A^2)(V-V0)

Now this is a linear function, and the slope is defined by the term (k/A^2). SO I can use the given initial and final pressures and volumes to calculate the slope m=ΔP/ΔV (getting a numerical value), thereby eliminating the not-given values for k and A. Then the above equation can be integrated with respect to volume to give the work:

∫PdV=∫(Patm+(m)(V-V0))dV
W=Patm(V-V0)+(1/2)m(V-V0)^2

Man, it seems so simple now that I see it all worked out like this. Oh well, as my instructor says, you gotta embrace the struggle. Thanks for your help ehild.

17. Feb 25, 2013

### ehild

It is even simpler than that. You do not need to integrate to get the work on the spring; it is 1/2 kx^2. As you wrote correctly, P=Patm+(k/A)x, and x=(V-Vo)/A.

P-Patm=kx/A = k(V-Vo)/A^2, that is, k=(P-Patm)A^2/(V-Vo)

Substitute for x and k in W=1/2 kx^2. A cancels.

ehild
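A quick symbolic cross-check of that closing recipe (my own addition, not from the thread; it assumes sympy is installed, and the symbol names Po, Vo, A are mine):

```python
# Spring work via ehild's recipe, and via the integral route of post #16,
# for the concrete case P_final = 3*Po, V_final = 3*Vo.
import sympy as sp

Po, Vo, A = sp.symbols('P_o V_o A', positive=True)
P_f, V_f = 3 * Po, 3 * Vo

x = (V_f - Vo) / A                    # spring compression: x = (V - Vo)/A
k = (P_f - Po) * A**2 / (V_f - Vo)    # k = (P - Patm) * A^2 / (V - Vo)
W_spring = sp.simplify(sp.Rational(1, 2) * k * x**2)
print(W_spring)                       # -> 2*P_o*V_o; the area A cancels

V = sp.symbols('V', positive=True)
P = Po + (P_f - Po) / (V_f - Vo) * (V - Vo)   # the linear P(V) of post #16
W_total = sp.integrate(P, (V, Vo, V_f))       # total work done by the gas
W_atm = Po * (V_f - Vo)                       # work pushing the atmosphere
print(sp.simplify(W_total - W_atm))           # -> 2*P_o*V_o again
```

Both routes give 2PoVo for the spring; together with the 2PoVo spent pushing the atmosphere, this accounts for the full 4PoVo area under the linear P(V) line.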
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930764377117157, "perplexity": 602.6269796075497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593438.33/warc/CC-MAIN-20180722174538-20180722194538-00366.warc.gz"}
http://openturns.github.io/openturns/latest/user_manual/_generated/openturns.HypothesisTest_TwoSamplesKolmogorov.html
HypothesisTest_TwoSamplesKolmogorov

HypothesisTest_TwoSamplesKolmogorov(sample1, sample2, level=0.05)

Test whether two samples follow the same distribution. If the p-value is high, then we cannot reject the hypothesis that the distributions of the two samples are the same.

Parameters

sample1 : 2-d float array
A continuous distribution sample.

sample2 : 2-d float array
Another continuous distribution sample; it can be of a different size.

level : float, optional
This is the risk of committing a Type I error, that is, an incorrect rejection of a true null hypothesis. Default value is 0.05.

Returns

test_result : TestResult
Test result.

Notes

This statistical test may be used to compare two samples (of sizes not necessarily equal). The goal is to determine whether these two samples come from the same probability distribution or not, without any information on the underlying distribution under the null hypothesis. As an application, if the null hypothesis cannot be rejected, the two samples could be aggregated in order to increase the robustness of further statistical analysis.

Examples

>>> import openturns as ot
>>> ot.RandomGenerator.SetSeed(0)
>>> sample1 = ot.Normal().getSample(20)
>>> sample2 = ot.Normal(0.1, 1.1).getSample(30)
>>> ot.HypothesisTest.TwoSamplesKolmogorov(sample1, sample2)
class=TestResult name=Unnamed type=TwoSamplesKolmogorov binaryQualityMeasure=true p-value threshold=0.05 p-value=0.554765 statistic=0.216667 description=[sampleNormal vs sample Normal]
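A short follow-up sketch on reading the result (my addition; it assumes the TestResult accessors getBinaryQualityMeasure(), getPValue() and getThreshold(), which OpenTURNS test results expose):

>>> result = ot.HypothesisTest.TwoSamplesKolmogorov(sample1, sample2)
>>> result.getBinaryQualityMeasure()
True
>>> print('p-value: %.4f (threshold %.2f)' % (result.getPValue(), result.getThreshold()))
p-value: 0.5548 (threshold 0.05)

A True binary quality measure means the null hypothesis is not rejected at the given level, so the two samples may reasonably be pooled.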
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849384188652039, "perplexity": 2383.6428344434116}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00076.warc.gz"}
http://mathhelpforum.com/advanced-algebra/214645-show-t-diagonalizable-print.html
# Show that T is diagonalizable

• March 12th 2013, 10:57 AM
dave52

Show that T is diagonalizable

Suppose T: V----> V has only two distinct eigenvalues k1 and k2. Show that T is diagonalizable iff V = E_{k1} + E_{k2}.

• March 12th 2013, 11:08 AM
ILikeSerena

Re: Show that T is diagonalizable

Quote:

Originally Posted by dave52
Suppose T: V----> V has only two distinct eigenvalues 1 and 2. Show that T is diagonalizable iff V = E1 + E2.

Hi dave52! :)

It seems to me that it is not true.

$\begin{bmatrix}1&0&0 \\ 0&1&0 \\ 0&0&2\end{bmatrix}$

This matrix is diagonal, has distinct eigenvalues 1 and 2, but $\mathbb R^3 \ne E_1 + E_2$.

• March 12th 2013, 11:27 AM
jakncoke

Re: Show that T is diagonalizable

I think you meant to say that your vector space has dimension 2.

• March 12th 2013, 11:43 AM
dave52

Re: Show that T is diagonalizable

sorry, I just fixed the problem

• March 13th 2013, 01:09 PM
ILikeSerena

Re: Show that T is diagonalizable

Let's first prove T is diagonalizable $\Rightarrow$ V = E1 + E2.

If T is diagonalizable, it can be written as $PDP^{-1}$, where D is a diagonal matrix with its eigenvalues on the main diagonal, and P represents a basis of eigenvectors. That basis spans V. Therefore V is the direct sum of the eigenspaces.

Now let's prove $V = E_1 + E_2$ $\Rightarrow$ T is diagonalizable.

This means that the basis of $E_1$ together with the basis of $E_2$ spans V. Put those bases next to each other in a matrix P, and put the eigenvalues diagonally in a corresponding matrix D, and you have that

$TP = PD$

$T = PDP^{-1}$

So T is diagonalizable.

Therefore T is diagonalizable iff $V = E_1 + E_2 \qquad \blacksquare$
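A small numerical illustration of the identity $TP = PD$ used above (my own sketch, not part of the thread; it assumes numpy and an arbitrary 2x2 example with two distinct eigenvalues):

```python
# Build P from eigenvectors, D from eigenvalues, and check T P = P D,
# i.e. T = P D P^{-1}, which is exactly the diagonalizability criterion.
import numpy as np

T = np.array([[3.0, 1.0],
              [0.0, 2.0]])           # distinct eigenvalues 3 and 2

eigvals, eigvecs = np.linalg.eig(T)  # columns of eigvecs span the eigenspaces
P = eigvecs
D = np.diag(eigvals)

assert np.allclose(T @ P, P @ D)
assert np.allclose(T, P @ D @ np.linalg.inv(P))
```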
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9738843441009521, "perplexity": 979.7854515270533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736678818.13/warc/CC-MAIN-20151001215758-00012-ip-10-137-6-227.ec2.internal.warc.gz"}
https://socratic.org/questions/what-are-all-the-rational-zeros-of-2x-3-15x-2-9x-22
# What are all the rational zeros of 2x^3-15x^2+9x+22?

Aug 29, 2016

Use the rational roots theorem to find the possible rational zeros.

#### Explanation:

$f \left(x\right) = 2 {x}^{3} - 15 {x}^{2} + 9 x + 22$

By the rational roots theorem, the only possible rational zeros are expressible in the form $\frac{p}{q}$ for integers $p , q$ with $p$ a divisor of the constant term $22$ and $q$ a divisor of the coefficient $2$ of the leading term.

So the only possible rational zeros are:

$\pm \frac{1}{2} , \pm 1 , \pm 2 , \pm \frac{11}{2} , \pm 11 , \pm 22$

Evaluating $f \left(x\right)$ for each of these, we find that none work, so $f \left(x\right)$ has no rational zeros.

We can find out a little more without actually solving the cubic...

The discriminant $\Delta$ of a cubic polynomial in the form $a {x}^{3} + b {x}^{2} + c x + d$ is given by the formula:

$\Delta = {b}^{2} {c}^{2} - 4 a {c}^{3} - 4 {b}^{3} d - 27 {a}^{2} {d}^{2} + 18 a b c d$

In our example, $a = 2$, $b = - 15$, $c = 9$ and $d = 22$, so we find:

$\Delta = 18225 - 5832 + 297000 - 52272 - 106920 = 150201$

Since $\Delta > 0$ this cubic has $3$ Real zeros.

Using Descartes' rule of signs, we can determine that two of these zeros are positive and one negative.
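A computational double-check of the candidate list (my own sketch, assuming plain Python; Fraction keeps the arithmetic exact):

```python
# Enumerate the rational-root candidates +/- p/q (p | 22, q | 2) and test
# each one exactly against f(x) = 2x^3 - 15x^2 + 9x + 22.
from fractions import Fraction

def f(x):
    return 2 * x**3 - 15 * x**2 + 9 * x + 22

candidates = {Fraction(s * p, q)
              for p in (1, 2, 11, 22)
              for q in (1, 2)
              for s in (1, -1)}
print(sorted(c for c in candidates if f(c) == 0))   # -> []: no rational zeros
```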
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 22, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8991425633430481, "perplexity": 222.1924014231517}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00325.warc.gz"}
http://cool--math.blogspot.in/
## Sunday, September 29, 2013

### Cool Math

Cool math gives you cool math problems. We supply math problems that are for the geeks. Cool math provides math problems ranging from the high school level to the math researcher level. Cool math problems are simpler to understand and grasp, but difficult to solve. We, at Cool math, strive to add new challenging problems to this set every day. Try solving them; solving is very much appreciated. And you know what, you get scores and cool tags for solving these; it is coming soon at Cool math. Some of these problems/solutions are surprising, some are awesome, and some are downright insane. These cool math problems will tell everyone who-da-man! Welcome to cool math, where the coolest math geeks hang out!

## Wednesday, October 10, 2012

### Cool math Olympics - II

Cool math Olympics - 1960

1. Determine all three-digit numbers N having the property that N is divisible by 11, and N / 11 is equal to the sum of the squares of the digits of N.

2. For what values of the variable x does the following inequality hold: (4x^{2})/(1 - \sqrt{1 + 2x})^{2} < 2x + 9.

3. In a given right triangle ABC the hypotenuse BC of length a is divided into n equal parts (n an odd integer). Let \alpha be the acute angle subtending, from A, that segment which contains the midpoint of the hypotenuse. Let h be the length of the altitude to the hypotenuse of the triangle. Prove: tan(\alpha) = (4nh) / ((n^{2} - 1) * a).

4. Construct triangle ABC given h_{a}, h_{b} (the altitudes from A and B) and m_{a}, the median from vertex A.

5. Consider the cube ABCDA^{1}B^{1}C^{1}D^{1} (with face ABCD directly above face A^{1}B^{1}C^{1}D^{1}). (a) Find the locus of the midpoints of segments XY where X is any point of AC and Y is any point of B^{1}D^{1}. (b) Find the locus of points Z which lie on the segments XY of part (a) with ZY = 2XZ.

6. Consider a cone of revolution with an inscribed sphere tangent to the base of the cone. A cylinder is circumscribed about this sphere so that one of its bases lies in the base of the cone. Let V1 be the volume of the cone and V2 the volume of the cylinder. (a) Prove that V1 != V2. (b) Find the smallest number k for which V1 = kV2; for this case, construct the angle subtended by a diameter of the base of the cone at the vertex of the cone.

7. An isosceles trapezoid with bases a and c and altitude h is given. (a) On the axis of symmetry of this trapezoid, find all points P such that both legs of the trapezoid subtend right angles at P. (b) Calculate the distance of P from either base. (c) Determine under what conditions such points P actually exist. (Discuss various cases that might arise.)

### Cool Math Olympics - 1959 Key

1. Solution: GCD of a and b is the greatest common divisor of a and b. It is known that what divides both a and b also divides a - b. So, the GCD of a and b should be the same as that of a and a - b (when a > b), or that of b and a - b. Considering that, and also the observation that a rational number is irreducible iff its numerator and denominator have a GCD of 1, it follows that:

GCD(21n + 4, 14n + 3) = 1 <=>
GCD(21n + 4, (21n + 4) - (14n + 3)) = 1 <=>
GCD(21n + 4, 7n + 1) = 1

Now, calculate the GCD normally, like we do: divide, take the remainder, and repeat the procedure with the remainder and the divisor. The remainder of 21n + 4 when divided by 7n + 1 is 1, which means the GCD is 1. It immediately follows that the GCD of the original numbers is also 1; hence, those fractions are irreducible.
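A quick empirical check of that conclusion (my own addition; plain Python):

```python
# The Euclidean-algorithm argument says gcd(21n + 4, 14n + 3) = 1 for all n.
from math import gcd

assert all(gcd(21 * n + 4, 14 * n + 3) == 1 for n in range(1, 10_000))
```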
2. Square the expression on both sides, noting that A >= 0 (since the square root is a function, and results of that function are always non-negative).

\sqrt{(x + \sqrt{2x - 1})} + \sqrt{(x - \sqrt{2x - 1})} = A <=>
(\sqrt{(x + \sqrt{2x - 1})} + \sqrt{(x - \sqrt{2x - 1})})^{2} = A^{2}, (A >= 0) <=>
2x + 2 * \sqrt{(x + \sqrt{2x - 1})(x - \sqrt{2x - 1})} = A^{2}, (A >= 0)

But here, notice that the expression inside the square root is nothing but x^{2} - 2x + 1, which is the square (x - 1)^{2}. Now, again:

2x + 2|x - 1| = A^{2}, (A >= 0, x >= 1/2)

(|x| is the absolute value of x). The condition x >= 1/2 comes from \sqrt{2x - 1}: the quantity inside a square root can't be negative. Now substitute values for A.

(i) A = 1, so x + |x - 1| = 1/2. Notice that |x - 1| can't be 1 - x: if it were, then x + |x - 1| = 1, so A^{2}/2 = 1 always, regardless of the actual value of x. So, this yields another condition: if A^{2} / 2 != 1, then x >= 1. Applying these conditions to the equation in (i), you get x = 3/4, which does not satisfy the conditions. So, (i) does not have a solution.

(ii) A = 1/2, so x + |x - 1| = 1/8. Here, we get x = 9/16; since this is also less than 1, (ii) also has no solution.

(iii) A = 2, so x + |x - 1| = 2. Here you get x = 3/2, which satisfies all conditions. So, only this combination has a solution for x, which is given by 3/2.

3. Remember cos(2x) = 2cos^{2}(x) - 1. And, in a quadratic equation ax^{2} + bx + c = 0, the sum of the roots is given by -b/a and the product is given by c/a.

First, derive formulae for cos(2x_{1}) + cos(2x_{2}) and cos(2x_{1})cos(2x_{2}) in terms of cos(x_{1}) + cos(x_{2}) and cos(x_{1})cos(x_{2}), where x_{1} and x_{2} are roots of the original equation.

cos(2x_{1}) + cos(2x_{2}) = 2cos^{2}(x_{1}) - 1 + 2cos^{2}(x_{2}) - 1 <=>
= 2(cos(x_{1}) + cos(x_{2}))^{2} - 4cos(x_{1})cos(x_{2}) - 2. (Since x^{2} + y^{2} = (x + y)^{2} - 2xy.)

cos(2x_{1})cos(2x_{2}) = (2cos^{2}(x_{1}) - 1) * (2cos^{2}(x_{2}) - 1) <=>
= 4cos^{2}(x_{1}) * cos^{2}(x_{2}) - 2cos^{2}(x_{1}) - 2cos^{2}(x_{2}) + 1 <=>
= 4(cos(x_{1})cos(x_{2}))^{2} - 2(cos(x_{1}) + cos(x_{2}))^{2} + 4cos(x_{1})cos(x_{2}) + 1.

Substituting cos(x_{1}) + cos(x_{2}) = -b/a and cos(x_{1})cos(x_{2}) = c/a, you get the following (primes denote the coefficients of the new equation):

-b'/a' = 2 * (-b/a)^{2} - 4 * (c/a) - 2 = (2b^{2} - 4ca - 2a^{2})/a^{2}

c'/a' = 4 * (c/a)^{2} - 2 * (-b/a)^{2} + 4 * (c/a) + 1 = (4c^{2} - 2b^{2} + 4ca + a^{2}) / a^{2}

So, one can use: a' = a^{2}, b' = (-2b^{2} + 4ca + 2a^{2}) and c' = (4c^{2} - 2b^{2} + 4ca + a^{2}). The resulting expression is,

a^{2} cos^{2}(2x) + (-2b^{2} + 4ca + 2a^{2}) cos(2x) + (4c^{2} - 2b^{2} + 4ca + a^{2}) = 0.

When you substitute a = 4, b = 2, c = -1, you get the roots for the original equation: \alpha = (-1 + \sqrt{5}) / 4, and \beta = (-1 - \sqrt{5}) / 4. Surprisingly, the equation in cos(2x) also yields the same roots, and is in fact the same equation when simplified by eliminating common factors between coefficients. However, there is no need to be surprised: calculate cos(2x) for \alpha, and it will be \beta, and cos(2x) for \beta will be \alpha. So, both the equations are right, and compatible. (It's a neat substitution trick.)

4. Consider an angle in the right triangle to be \theta, where \theta != 90^{o}. The median divides the hypotenuse into 2 equal-length parts. Since the length of the hypotenuse is c, one side will be c * cos(\theta) and another will be c * sin(\theta).
The median divides the right triangle into two other triangles. Consider the triangle inside which we have the angle \theta. Using the cosine rule on the lengths of its sides gives the following. (To brush up: the cosine rule says that, if the angle between sides of lengths a and b is \theta, the third side of the triangle is given by sqrt{a^{2} + b^{2} - 2ab * cos(\theta)}.)

The geometric mean of the sides is, sqrt{c * cos(\theta) * c * sin(\theta)} = c * sqrt{sin(2 * \theta) / 2}

Requiring the median to equal the geometric mean, and computing the median by the cosine rule:

c^{2} * sin(2 * \theta) / 2 = c^{2} / 4 + c^{2} * cos^{2} (\theta) - 2 * c/2 * c * cos(\theta) * cos(\theta) <=>
c^{2} * sin(2 * \theta) / 2 = c^{2} / 4 + c^{2} * cos^{2} (\theta) - c^{2} * cos^{2} (\theta) <=>
c^{2} * sin(2 * \theta) / 2 = c^{2} / 4,

So the median's length is c / 2 as well (note that the two parts of the hypotenuse cut off by the median also have length c/2). Now, to the angle \theta:

(sin(2 * \theta)) / 2 = 1/4 <=> sin(2 * \theta) = 1/2 <=>
2 * \theta = 30^{o} or 2 * \theta = 150^{o} (since sin(30^{o}) = sin(150^{o}) = 1/2)
=> \theta = 15^{o} or \theta = 75^{o} (in fact, both angles are complementary in a right-angled triangle).

Basically, to construct such a triangle, with any c, choose an angle of the right triangle to be 15^{o}.

5. Now to the last problem; this is a geometry problem. I will present the analytical geometry solution except for the last part (which is too easy in co-ordinate geometry). Consider this theorem, well known in circles (pun is intended) of analytical geometry: if a chord AB in a circle makes an angle \theta at the center C of the circle (namely, <ACB is \theta), then at any point on the circle, on the far side of C, it makes an angle of \theta / 2.

With the help of this theorem, you can easily solve this problem. First of all, notice that <ANM and <MNB are both 45^{o}, since <APM and <MQB are 90^{o} (P and Q are the centers of the circumcircles of the squares). Consider <NAM = \theta. Then <NMA = 135^{o} - \theta, <NMB = 45^{o} + \theta and <NBM = 90^{o} - \theta. Apply the sine rule now, using NM, which is the same in both triangles \Delta NMA and \Delta NMB:

(NM)/sin(\theta) = (AM)/sin(45^{o}) and (NM)/sin(90^{o} - \theta) = (MB)/sin(45^{o})

This gives sin(\theta) = (NM)/(sqrt{2} * AM) and cos(\theta) = (NM)/(sqrt{2} * MB), which can both be combined to give tan(\theta) = (MB)/(AM). But notice that (MB)/(AM) = (MB)/(MC) = (FM)/(AM). So this, along with the facts that <NAM = \theta and <NBM = 90^{o} - \theta, gives that A, N and F are co-linear, as well as B, C and N being co-linear.

(ii) Consider for now that A is the origin, and AB is the x axis. Consider AM = l_{1} and AB = l. Consider the point R at (l/2, -l/2). The angle <BMR = 135^{o} - \theta, since the opposite angle <AMN is of the same measure.

tan(<BMR) = tan(135^{o} - \theta) = -tan(45^{o} + \theta) = (1 + tan(\theta))/(tan(\theta) - 1).

We also know that tan(\theta) = (l - l_{1}) / l_{1}; substituting it,

tan(<BMR) = l / (l - 2 * l_{1})

and notice that, with the mid-point S of AB, the length MS = l/2 - l_{1}. This gives the length SR = (MS) * tan(<BMR) = (l/2 - l_{1}) * l / (l - 2 * l_{1}) <=>
SR = (l - 2 * l_{1}) / 2 * l / (l - 2 * l_{1}) = l / 2

which is irrespective of any \theta, or equivalently l_{1}.

(iii) Using co-ordinate geometry, P is (l_{1} / 2, l_{1} / 2) and Q is (l_{1} + (l - l_{1})/2, (l - l_{1})/2), which is the same as ((l + l_{1})/2, (l - l_{1})/2). So, their mid-point is given by ((l + (2 * l_{1}))/4, l/4). So, that means the locus is y = l/4.
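A quick numeric sanity check of part (iii) (my own addition; plain Python, with A at the origin and both squares above AB):

```python
# The midpoint of PQ should sit on y = l/4 no matter where M is placed.
l = 8.0
for l1 in (1.0, 2.5, 4.0, 6.0):          # several positions of M on AB
    P = (l1 / 2, l1 / 2)                 # center of square AMCD
    Q = ((l + l1) / 2, (l - l1) / 2)     # center of square MBEF
    mid_y = (P[1] + Q[1]) / 2
    assert abs(mid_y - l / 4) < 1e-12
```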
## Monday, October 1, 2012

### Cool Math Olympics

International Math Olympiad problems will be presented along with solutions here in "Cool Math Olympics". Olympiad problems from 1959 (the first time the Math Olympiad was held) to the current year will be presented and solved here. The first set of Olympiad problems and solutions (one day gap) for the 1959 Olympiad is here at "1959 Cool Math Olympics". Olympians, or wannabe Olympians, participate in our "Cool Math Olympics". Your performance is much appreciated.

1. Prove that the fraction (21n+4) / (14n+3) is irreducible for every natural number n. (Hint: What is the definition of GCD? And if a and b have a GCD of 1, what about a and a - b, or b and a - b?)

2. For what real values of x is \sqrt{(x + \sqrt{2x - 1})} + \sqrt{(x - \sqrt{2x - 1})} = A, given (a) A = 1, (b) A = 1/2, (c) A = 2, where only non-negative real numbers are admitted for square roots? (Hint: Too many square roots; looks like it requires raising to the power two multiple times, or does it?)

3. Let a, b, c be real numbers. Consider the quadratic equation in cos(x): a * cos^{2}(x) + b * cos(x) + c = 0. Using the numbers a, b, c, form a quadratic equation in cos(2x) whose roots are the same as those of the original equation. Compare the equations in cos(x) and cos(2x) for a = 4, b = 2, c = -1. (Hint: Use formulas for the sum and product of roots; also apply the same for cos(2x).)

3a. Bonus question: What happened when you used a = 4, b = 2, c = -1, and why?

4. Construct a right triangle with given hypotenuse c such that the median drawn to the hypotenuse is the geometric mean of the two legs of the triangle. (Hint: Parameterize by \theta.)

4a. What is the angle \theta you got? Is it dependent on c?

4b. What is the geometric mean? What is it the same as?

5. An arbitrary point M is selected in the interior of the segment AB. The squares AMCD and MBEF are constructed on the same side of AB, with the segments AM and MB as their respective bases. The circles circumscribed about these squares, with centers P and Q, intersect at M and also at another point N. Let N^{1} denote the point of intersection of the straight lines AF and BC. (a) Prove that the points N and N^{1} coincide. (b) Prove that the straight lines MN pass through a fixed point S independent of the choice of M. (c) Find the locus of the midpoints of the segments PQ as M varies between A and B. (Hint: Use Cartesian co-ordinates. Like many such problems, choose the origin and axes to minimize the paraphernalia.)

5d. Bonus problem: There is a way to solve 5a using analytical geometry.

5e. Prove that AF and BC intersect making an angle of 90^{o}.

5f. Prove also using analytical geometry (use \theta, where tan(\theta) = (AM)/(MB), as well) that there is such a fixed point like S.

5g. Argue that such a fixed point would be on the perpendicular bisector of AB (use symmetry arguments, possibly?).

6. Two planes, P and Q, intersect along the line p. The point A is given in the plane P and the point C in the plane Q; neither of these points lies on the straight line p. Construct an isosceles trapezoid ABCD (with AB parallel to CD) in which a circle can be inscribed, and with vertices B and D lying in the planes P and Q respectively. (Don't know what this problem is about. What's the idea of this problem, anyway? What is "in which a circle can be inscribed"? Totally vague.)

1959 Cool Math Olympics complete. Boy, these things take time. Phew.

I personally prefer the analytical geometry approach to problems over the co-ordinate geometry approach; that's just dry. What do you people think?
Though these problems are from way back in 1959, solving these Olympics took time for me, partly because I dealt with these math subjects a long, long time ago (15 years). However, these problems rank as "brilliant" for their formulation, where the results are pretty surprising (especially 4 and 5). Hats off to those people who had such great geometric insights. (Spoiler: Pretty much everything in geometry has an interesting locus, or passes through a fixed point, I think. What do you people think?)

People who could crack these (with or without hints) can start telling their friends that they are good enough to do it. Start the celebrations, and the geekery of you telling others about these problems. Next time, with the 1960 Cool Math Olympics.

## Saturday, September 29, 2012

### Cool Math - I

Current set of cool math problems (solutions later). See if you can see through them. Best of luck.

Note: I have solutions to a lot of these questions, almost all. But, it could be that some solution slipped by time, since I used these a long time ago. In that case, I might not have some solutions.

*1. Consider a 4n * 4n square. Now, consider rectangles of integer dimensions which would fit inside this square (their max dimension can be 4n). What is the probability (considering very large n) that such a rectangle, chosen over all possible dimensions that would fit inside the square, has an area less than or equal to 4n^{2}?

*2. Prove that, if a number n has the prime number expansion form of p_{1}^{k_{1}}*p_{2}^{k_{2}}*p_{3}^{k_{3}}..., the sum of its factors (where 1 and n are also considered factors) is given by (p_{1}^{k_{1}+1} - 1) / (p_{1} - 1) * (p_{2}^{k_{2}+1} - 1) /(p_{2} - 1) * ...

*3. From the above definition of factors, prove that a perfect number has the sum of its factors equal to twice the number.

**4. Prove Fermat's little theorem: If p is a prime, a^{p} \equiv a (mod p). (Hint: Use the binomial expansion to the power p, and mathematical induction on a.)

5. Prove that if p is not a prime, then 2^{p} - 1 is not a prime.

**6. Prove that all even perfect numbers are of the form 2^{p−1} * (2^{p}−1), where (2^{p}−1) is a prime.

**7. Prove that if both p and (2p + 1) are primes, (2p + 1) will be a factor of (2^{p}−1).

8. A triangular number (the n-th) is a number which is got by summing the first n positive integers. Prove that every number of the form 2^{p−1} * (2^{p}−1) is a triangular number.

9. Derive a formula for the 'sum of the first n odd cubes', and prove that every number of the form 2^{p−1} * (2^{p}−1) is a sum of the first n odd cubes.

10. Even perfect numbers (except 6) give a remainder of 1 when divided by 9.

11. Subtracting 1 from a perfect number, and dividing it by 9, always gives a perfect number.

12. Summing all digits in a number, then summing all the digits in the resulting number, and repeating the process till a single digit is obtained, is called getting the "digital root" of a number. Prove that a positive number is divisible by 9 iff its digital root is 9.

13. Any number of the form 2^{m−1} * (2^{m}−1), where m is any odd integer, leaves a remainder of 1 when divided by 9.

*14. An Ore's harmonic number is a number whose factors have an integer harmonic mean. Prove that every perfect number is an Ore's harmonic number.

*15. Prove that, for any perfect number, the sum of the reciprocals of its factors is equal to two.

*16. Prove that for any integer M, the product of the arithmetic mean of its factors and the harmonic mean of its factors is the number M itself.

*17.
Any odd perfect number N is of the form N = q^{\alpha} * p_{1}^{2e_{1}} * p_{2}^{2e_{2}} * p_{3}^{2e_{3}} ...p_{k}^{2e_{k}}, where q, p_{1}, p_{2}, ... are all distinct primes, with q \equiv \alpha \equiv 1 (mod 4).

18. Continuing on 17, the smallest prime factor of N is less than (2k + 8) / 3.

19. Continuing on 18, prove also that N < 2^{4^{k+1}}.

20. Continuing on 19, prove also that the largest prime factor of N is greater than 10^{8}.

21. Continuing on 20, prove also that q^{\alpha} > 10^{62} or p_{j}^{2e_{j}} > 10^{62} for some j.

22. Continuing on 21, prove also that the largest prime factor of N is greater than 10^{8}.

23. Continuing on 22, prove also that the second largest prime factor is greater than 10^{4}, and the third largest prime factor is greater than 100.

24. Continuing on 23, prove also that N has at least 101 prime factors and at least 9 distinct prime factors. If 3 is not one of the factors of N, then N has at least 12 distinct prime factors.

25. Prove that an odd perfect number is not divisible by 105.

26. Every odd perfect number is of the form N ≡ 1 (mod 12), N ≡ 117 (mod 468), or N ≡ 81 (mod 324).

27. The only even perfect number of the form x^{3} + 1 is 28.

28. 28 is also the only even perfect number that is a sum of two positive integral cubes.

29. Prove that no perfect number can be a square number.

30. The number of perfect numbers less than n is less than c\sqrt{n}, where c > 0 is a constant.

31. Numbers where the sum of factors (as defined in problem 2) is less than twice the number are called deficient, and where it is greater, abundant. Give an example of each.

32. In number theory, a practical number or panarithmic number is a positive integer n such that all smaller positive integers can be represented as sums of distinct divisors of n. Prove that any even perfect number and any power of two is a practical number.

33. Prove that prime numbers are infinite. (Use proof by contradiction.)

34. Prove that ^{2n}C_{n} is not divisible by a prime number p iff n's p-ary representation contains all digits less than p / 2.

35. Prove that if N is a perfect number, none of its multiples or factors is a perfect number.

36. When written in decimal notation, how many trailing zeros are present in 32!? Can you generalize it for n? (A computational sketch appears at the end of this list.)

37. Prove that only square numbers have an odd number of factors.

38. Prove that the number of factors of a number n is always not greater than 2 * \lfloor\sqrt{n}\rfloor.

39. Continuing on 38, is there any number for which the number of factors is equal to the upper bound? Why?

40. When written in decimal notation, what digit do powers of 5 end with? What about powers of 6?

41. Prove that in x^{4} + y^{4} = z^{4} (x, y and z are positive integers), at least one of x or y is divisible by 5.

42. Consider a number with the maximum number of factors between 1 and n^{2}. Prove that such a number is not divisible by any prime number p >= n, for n > 4.

43. Prove that n! can't be a square number for any n > 1. (My proof involves the proven conjecture that there exists a prime number between 2n and 3n, for n > 1.)

44. Every prime number p > 3 satisfies the following: p^{2} = 24*k + 1 for some positive number k.

45. Also, every prime number p > 30 satisfies the following: p^{4} = 240*k + 1 for some positive number k.

46. Prove that a composite number N definitely has a prime factor <= \lfloor\sqrt{N}\rfloor.

47. Prove that the fraction of numbers divisible by none of the first n prime numbers, among any N consecutive numbers with N a multiple of the product of those primes, is the same as (1 - 1/2) * (1 - 1/3) * (1 - 1/5) * ...
(The denominators are the first n prime numbers. Hint: Generalize the Eratosthenes sieve.)

48. Using +, -, *, /, \sqrt, . (decimal), ! (factorial) and ^ (to the power of), find a way to represent 73 using four fours.

49. Do the same as 48, for 77.

50. Do the same as 48, for 87.

51. Do the same as 48, for 93.

52. Do the same as 48, for 99.

53. There is a way to represent all positive numbers using repeated trigonometric functional application on a single 4. Could you get it?

54. Prove that successive terms of the fibonacci sequence are relatively prime.

55. Prove that if the n th fibonacci number is denoted as F_{n}, then F_{n} | F_{mn} for any m, n.

56. Consider the fibonacci sequence modulo some positive number k. Prove that modulo k, the fibonacci sequence has zeros equally spaced (periodically arranged).

57. Prove that the n th prime number is less than n^{2}, for n > 1.

58. Prove that, in every 6 numbers (the numbers themselves greater than 6), there can only be 2 prime numbers.

59. Continuing 58, also prove that in every 15 numbers, there can only be 4 primes.

60. Continuing 59, also prove that in every 35 numbers, only 8 can be primes.

61. Continuing 60, can you generalize it?

62. Continuing 61, can you generalize it as a formula representing the number of prime numbers from 1 to N?

63. Continuing 62, we seem to sort of know when the number of primes exceeds the known prime numbers so far. So, does it give a way of finding where the next prime would lie? (Well, it did not give a precise enough formula for me. Looked very promising though.)

64. Prove that the fraction of numbers relatively prime to 3 is 2/3. Prove that the fraction of numbers relatively prime to 5 is 4/5. Also, prove that the fraction of numbers which are relatively prime to 15 is 8/15.

65. Continuing on 64, what do these results indicate?

66. Continuing on 65, can you generalize it (the fraction of numbers relatively prime to a number n) for an arbitrary number n? (Consider the prime power expansion of n.)

67. Show that if a number is deficient, all of its factors are deficient too.

68. Show that if a number is abundant, all of its multiples are abundant too.

69. Based on 67 and 68, devise a method to stop searching for perfect numbers, when starting the search from 1 and going upwards.

70. Prove that no prime number is a perfect number.

71. Prove that no number of the form p_{1}^{k_{1}}, where p_{1} is a prime, can be a perfect number.

72. Prove that no number of the form p_{1}^{k_{1}} * p_{2}^{k_{2}}, where p_{1} > 2 and p_{2} > p_{1} are primes, can be a perfect number.

73. Prove that, if all prime factors of a perfect number N are more than p, then N must have at least log_{p/(p-1)}^{2} number of prime factors.

74. Prove that, for any prime number p, a^{(p-1)*n} + b^{(p-1)*n} = c^{(p-1)*n}, for any positive n and positive numbers a, b and c, implies that at least one of a, b and c is divisible by p.

75. Prove that any prime factor of (a^{5} - b^{5}) / (a - b) is of the form 10*k + 11, for some non-negative k, or it is 5.

76. Generalizing 75, what can you say about any prime factor of (a^{p} - b^{p}) / (a - b), where p is a prime number?

77. Continuing on 75, also prove that 5 is a factor of that expression iff (a - b) is divisible by 5.

78. Generalize similarly for a generic prime exponent p.

79. Depending on 75-78, consider an equality of the form a^{5} - b^{5} = c^{5} - d^{5}. Prove that if (a - b) is divisible by a prime number p not of the form 10*k + 11, then (c - d) is also divisible by p. (Taxicab problem related.)

80.
Can we say something about a (or b) in the equation a^{p} + b^{p} = c^{p}? What observation can be made about it?

81. How many 2-digit numbers are divisible by the digits in them? (If the same digit occurs multiple times, divide multiple times with the digit. Exclude zeroes from divisors.)

82. How many three-digit numbers are divisible by all digits in them?

83. If n! + 1 = m^{2} has solutions, then n! (consider n beyond the trivial ones for which we know some solution, like 4, 5 and 7) can be factorized into two numbers a and b such that a - b = 2, and ab = n!.

84. Continuing on 83, a and b are such that one of them is divisible by 2 but not by 4.

85. Continuing on 84, a and b are such that every prime number p with 2 < p <= n is a factor of exactly one of them, but not both.

86. Continuing on 85, a is of the form 2 * op_{1}^{k_{1}} * op_{2}^{k_{2}} * op_{3}^{k_{3}} * ... and b is of the form 2^{k} * op_{a}^{k_{a}} * op_{b}^{k_{b}} * op_{c}^{k_{c}} * ..., where the op_{i} are all distinct odd primes.

87. Continuing on 86, you have to prove that for large k, op_{1}^{k_{1}} * op_{2}^{k_{2}} * op_{3}^{k_{3}} ... + 1 (or -1), where the op_{i} are odd primes, cannot have high powers of 2 as a factor (for solving Brocard's problem). Proof, anyone?

88. Continuing on 87, prove that 3^{n} + 1 is divisible by either 2 or 4, but not by higher powers of 2.

89. Continuing on 88, prove that 5^{n} + 1 is always divisible by 2, but not by higher powers of 2.

90. Continuing on 89, prove that p^{2n} + 1, where p is an odd prime number, is always divisible by 2, but not by 4.

91. Continuing on 90, prove that p^{2n+1} + 1 is divisible by the same power of 2 that divides p + 1.

92. Continuing on 91, prove that 3^{2n+1} - 1 is always divisible by 2, but not by higher powers of 2.

93. Continuing on 92, prove that 5^{2n+1} - 1 is always divisible by 4, but not by higher powers of 2.

94. Continuing on 93, prove that p^{2n + 1} - 1, where p is a prime number, is always divisible by the same power of 2 that divides p - 1.

95. Continuing on 94, prove that p^{2n} - 1, where p is a prime number, is always divisible by the same power of 2 (r) for every odd n.

96. Continuing on 95, prove that p^{2^{n}} - 1, where p is a prime number, is always divisible by 2^{n + k - 1}, where 2^{k} is the largest power of 2 dividing p^{2i} - 1 for odd i.

97. Continuing on 96, one can give a simpler formulation of the problem in 83. For this, prove that 2^{n} is never a factor of n!.

98. Continuing on 97, prove that n! always has a factor of the form 2^{(1 - \delta) * n}, where \delta decreases with increasing n.

99. Continuing on 98 and 83, for such an n to exist, you have to prove that for large n, op_{1}^{k_{1}} * op_{2}^{k_{2}} * op_{3}^{k_{3}} ... + 1 (or -1), where the op_{i} are odd primes, should be divisible by 2^{(7 * n)/8} (the exponent (7 * n) / 8 is just an instance).

100. Prove that 2^{p−1} * (2^{p}−1) is an even perfect number whenever 2^{p}−1 is a prime (Euclid).

101. Prove that, if p is an odd number such that p + 1 = 2^{k_{1}}*o_{1}, where o_{1} is an odd number and k_{1} > 1, and q is another similar number (like p), then pq + 1 is divisible by 2 and not by 4.

102. Prove that, if p is an odd number such that p + 1 = 2*o_{1}, where o_{1} is an odd number, and q is another similar number (like p), then pq + 1 is divisible by 2 and not by 4.

103.
Prove that, if p is an odd number such that p - 1 = 2^{k_{1}}*o_{1}, where o_{1} is an odd number and k_{1} >= 1, and q is another similar number (like p), then pq - 1 is divisible by 2^{min(k_{1}, k_{2})}, and not by a higher power of 2, unless k_{1} = k_{2}.

104. Prove that, if p is an odd number such that p + 1 = 2^{k_{1}}*o_{1}, where o_{1} is an odd number, and q is another similar number (like p), then pq + 1 is divisible by higher powers of 2 than 2^{1} iff exactly one of k_{1} and k_{2} is equal to 1.

105. Prove that op_{1}^{k_{1}} * op_{2}^{k_{2}} * op_{3}^{k_{3}} * ... + 1, where the op_{i} are odd primes, can be reduced to something that is divisible by 2 but not by 4, or to something of the form (2*o_{1} - 1) * (2^{k}*o_{2} - 1) + 1 for some odd numbers o_{1} and o_{2}.

106. State and prove the conditions under which the expression in 105 is divisible by 2 but not by 4.

107. Consider the case in 105 where o_{1} = 1. In this case, prove that it does not result in an n such that n! + 1 = m^{2}. (k is not as big as (7*n)/8.)

108. Continuing on 103, prove that all prime numbers between n/i and n/(i+1) (where i >= 1) have the same power in n!.

109. Continuing on 103, prove that only the prime numbers between n/2 and n have an exponent of 1 in the prime number expansion of n!.

110. Continuing on 105, consider (2^{k}*o_{2} - 1) = 3^{k_{1}} (since any such prime number would do). Prove that if 3^{k_{1}} divides 2^{l} - 1, then l is such that l = (4 * k_{1} - 2).

111. Prove that \Pi p_{i}^{1 / (p_{i} - 1)} (here, \Pi means the product over i), where the p_{i} are prime numbers (starting from 2), does not converge as i increases.

112. Continuing on 111, prove that the lower bound of the expression in 111 is (2\pi n)^{1/(2n)} * n/e, where e is the natural logarithm base. (Hint: Remember something, yeah!)

113. Prove that if 2n numbers are present, where the minimum difference between them is \delta and the maximum difference is a polynomial function of \delta with constant term zero, then any difference between two terms formed by multiplying n numbers each is always divisible by \delta.

114. Prove that the exponent of a prime number p in n! is always not less than (n + 1)/(p - 1) * (1 - 1/(p^{\lfloor log_{p}^{n} \rfloor})) - \lfloor log_{p}^{n} \rfloor.

115. Prove that the exponent of a prime number p in n! is always not greater than n/(p - 1) * (1 - 1/(p^{\lfloor log_{p}^{n} \rfloor})).

116. Prove that all prime numbers between n/i and n/(i + 1) have the same power in the prime number expansion of n!, whenever i < \sqrt{n}.

117. Continuing on 116, prove that, for two prime numbers p_{1} and p_{2} belonging to the same interval (p_{1} > p_{2}), and having exponent k in the prime number expansion of n!, the ratio of p_{1}^{k} to p_{2}^{k} is always less than e (the natural log base).

118. Also prove that, when a solution to Brocard's problem exists, the prime terms in the expansion of n! should be distributed across two terms (A and B) such that A - B = 1, and AB * 4 = n!.

119. Continuing on 118, prove that either A or B contains 2^{(k_{1} - 2)}, where k_{1} is the exponent of 2 in n! (meaning, its prime number expansion).

120. Continuing on 119, prove that if an equal number of prime numbers are segregated into A and B, a solution to Brocard's problem is not possible. (Use 113.)

121. Continuing on 117, if prime numbers within the same interval are factored such that an equal number of them are in A and in B, then give an estimate of their ratio, in terms of e. (Reduce the ratio as much as possible.)

122. Prove that for large n, the smallest term in the prime number expansion of n!
is always not less than (n + 1) / 2.

123. The MRB constant is given by \Sigma_{n=1}^{\infty} (-1)^{n} * 1/(n^{n}). Prove that, for any first n terms of the MRB constant (denoted by MRB(n)), (MRB(n))^{n!^{n}} is a rational number.

124. Prove that the sum of any finite sub-sequence of terms of the MRB constant is not a transcendental number.
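For problem 36, here is the promised computational sketch (my own addition; plain Python, counting factors of 5 per Legendre's formula):

```python
# Trailing zeros of n! = number of factors of 5 in n!, since factors of 2
# are always more plentiful: n//5 + n//25 + n//125 + ...
def trailing_zeros(n):
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

print(trailing_zeros(32))   # -> 7, i.e. 32//5 + 32//25 = 6 + 1
```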
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8326090574264526, "perplexity": 1027.7425501750902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691977.66/warc/CC-MAIN-20170925145232-20170925165232-00668.warc.gz"}
http://fm.mizar.org/1996-5/fm5-2.html
Formalized Mathematics (ISSN 1426-2630)

Volume 5, Number 2 (1996)

1. Grzegorz Bancerek. Ideals, Formalized Mathematics 5(2), pages 149-156, 1996. MML Identifier: FILTER_2

Summary: The dual concept to filters (see \cite{FILTER_0.ABS}, \cite{FILTER_1.ABS}), i.e. ideals of a lattice, is introduced.

2. Grzegorz Bancerek. Categorial Categories and Slice Categories, Formalized Mathematics 5(2), pages 157-165, 1996. MML Identifier: CAT_5

Summary: By categorial categories we mean categories with categories as objects and morphisms of the form $(C_1, C_2, F)$, where $C_1$ and $C_2$ are categories and $F$ is a functor from $C_1$ into $C_2$.

3. Yatsuka Nakamura, Piotr Rudnicki, Andrzej Trybulec, Pauline N. Kawamoto. Preliminaries to Circuits, I, Formalized Mathematics 5(2), pages 167-172, 1996. MML Identifier: PRE_CIRC

Summary: This article is the first in a series of four articles (continued in \cite{MSAFREE2.ABS}, \cite{CIRCUIT1.ABS}, \cite{CIRCUIT2.ABS}) about modelling circuits by many-sorted algebras. \par Here, we introduce some auxiliary notations and prove auxiliary facts about many sorted sets, many sorted functions and trees.

4. Miroslava Kaloper, Piotr Rudnicki. Minimization of finite state machines, Formalized Mathematics 5(2), pages 173-184, 1996. MML Identifier: FSM_1

Summary: We have formalized deterministic finite state machines closely following the textbook \cite{ddq}, pp. 88--119 up to the minimization theorem. In places, we have changed the approach presented in the book as it turned out to be too specific and inconvenient. Our work also revealed several minor mistakes in the book. After defining a structure for an outputless finite state machine, we have derived the structures for the transition assigned output machine (Mealy) and state assigned output machine (Moore). The machines are then proved similar, in the sense that for any Mealy (Moore) machine there exists a Moore (Mealy) machine producing essentially the same response for the same input. The rest of the work is then done for Mealy machines. Next, we define equivalence of machines, equivalence and $k$-equivalence of states, and characterize a process of constructing, for a given Mealy machine, the machine equivalent to it in which no two states are equivalent. The final, minimization theorem states: \begin{quotation} \noindent {\bf Theorem 4.5:} Let {\bf M}$_1$ and {\bf M}$_2$ be reduced, connected finite-state machines. Then the state graphs of {\bf M}$_1$ and {\bf M}$_2$ are isomorphic if and only if {\bf M}$_1$ and {\bf M}$_2$ are equivalent. \end{quotation} and it is the last theorem in this article.

5. Grzegorz Bancerek. Subtrees, Formalized Mathematics 5(2), pages 185-190, 1996. MML Identifier: TREES_9

Summary: The concepts of root tree, the set of successors of a node in a decorated tree, and sets of subtrees are introduced.

6. Grzegorz Bancerek. Terms Over Many Sorted Universal Algebra, Formalized Mathematics 5(2), pages 191-198, 1996. MML Identifier: MSATERM

Summary: Pure terms (without constants) over a signature of many sorted universal algebra and terms with constants from algebra are introduced. Facts on evaluation of a term in some valuation are proved.

7. Marian Przemski. On the Decomposition of the Continuity, Formalized Mathematics 5(2), pages 199-204, 1996. MML Identifier: DECOMP_1

Summary: This article is devoted to functions of general topological spaces.
A function from $X$ to $Y$ is $A$-continuous if the counterimage of every open set $V$ of $Y$ belongs to $A$, where $A$ is a collection of subsets of $X$. We give the following characteristics of the continuity, called decomposition of continuity: A function $f$ is continuous if and only if it is both $A$-continuous and $B$-continuous.

8. Andrzej Trybulec. A Scheme for Extensions of Homomorphisms of Manysorted Algebras, Formalized Mathematics 5(2), pages 205-209, 1996. MML Identifier: MSAFREE1

Summary: The aim of this work is to provide a bridge between the theory of context-free grammars developed in \cite{LANG1.ABS}, \cite{DTCONSTR.ABS} and universally free manysorted algebras (\cite{MSAFREE.ABS}). The third scheme proved in the article allows one to prove that two homomorphisms equal on the set of free generators are equal. The first scheme is a slight modification of the scheme in \cite{DTCONSTR.ABS} and the second is rather technical, but since it was useful for me, perhaps it might be useful for somebody else. The concept of flattening of a many sorted function $F$ between two manysorted sets $A$ and $B$ (with common set of indices $I$) is introduced for $A$ with mutually disjoint components (pairwise disjoint function -- the concept introduced in \cite{PROB_2.ABS}). This is a function on the union of $A$, that is equal to $F$ on every component of $A$. A trivial many sorted algebra over a signature $S$ is defined with sorts being singletons of corresponding sort symbols. It has mutually disjoint sorts.

9. Adam Grabowski. The Correspondence Between Homomorphisms of Universal Algebra \& Many Sorted Algebra, Formalized Mathematics 5(2), pages 211-214, 1996. MML Identifier: MSUHOM_1

Summary: The aim of the article is to check the compatibility of the homomorphism of universal algebras introduced in \cite{ALG_1.ABS} and the corresponding concept for many sorted algebras introduced in \cite{MSUALG_3.ABS}.

10. Yatsuka Nakamura, Piotr Rudnicki, Andrzej Trybulec, Pauline N. Kawamoto. Preliminaries to Circuits, II, Formalized Mathematics 5(2), pages 215-220, 1996. MML Identifier: MSAFREE2

Summary: This article is the second in a series of four articles (started with \cite{PRE_CIRC.ABS} and continued in \cite{CIRCUIT1.ABS}, \cite{CIRCUIT2.ABS}) about modelling circuits by many sorted algebras. \par First, we introduce some additional terminology for many sorted signatures. The vertices of such signatures are divided into input vertices and inner vertices. A many sorted signature is called {\em circuit like} if each sort is a result sort of at most one operation. Next, we introduce some notions for many sorted algebras and many sorted free algebras. Free envelope of an algebra is a free algebra generated by the sorts of the algebra. Evaluation of an algebra is defined as a homomorphism from the free envelope of the algebra into the algebra. We define depth of elements of free many sorted algebras. \par A many sorted signature is said to be monotonic if every finitely generated algebra over it is locally finite (finite in each sort). Monotonic signatures are used (see \cite{CIRCUIT1.ABS}, \cite{CIRCUIT2.ABS}) in modelling backbones of circuits without directed cycles.

11. Artur Kornilowicz. On the Group of Automorphisms of Universal Algebra \& Many Sorted Algebra, Formalized Mathematics 5(2), pages 221-226, 1996.
MML Identifier: AUTALG_1
Summary: The aim of the article is to check the compatibility of the automorphisms of universal algebras introduced in \cite{ALG_1.ABS} and the corresponding concept for many sorted algebras introduced in \cite{MSUALG_3.ABS}.

12. Yatsuka Nakamura, Piotr Rudnicki, Andrzej Trybulec, Pauline N. Kawamoto. Introduction to Circuits, I, Formalized Mathematics 5(2), pages 227-232, 1996.
MML Identifier: CIRCUIT1
Summary: This article is the third in a series of four articles (preceded by \cite{PRE_CIRC.ABS}, \cite{MSAFREE2.ABS} and continued in \cite{CIRCUIT2.ABS}) about modelling circuits by many sorted algebras.\par A circuit is defined as a locally-finite algebra over a circuit-like many sorted signature. For circuits we define notions of input function and of circuit state which are later used (see \cite{CIRCUIT2.ABS}) to define circuit computations. For circuits over monotonic signatures we introduce notions of vertex size and vertex depth that characterize certain graph properties of the circuit's signature in terms of elements of its free envelope algebra. The depth of a finite circuit is defined as the maximal depth over its vertices.

13. Alexander Yu. Shibakov, Andrzej Trybulec. The Cantor Set, Formalized Mathematics 5(2), pages 233-236, 1996.
MML Identifier: CANTOR_1
Summary: The aim of the paper is to define some basic notions of the theory of topological spaces, like basis and prebasis, and to prove their simple properties. The definition of the Cantor set is given in terms of a countable product of $\{0,1\}$ and a collection of its subsets to serve as a prebasis.

14. Oleg Okhotnikov. Logical Equivalence of Formulae, Formalized Mathematics 5(2), pages 237-240, 1996.
MML Identifier: CQC_THE3

15. Czeslaw Bylinski. Some Properties of Restrictions of Finite Sequences, Formalized Mathematics 5(2), pages 241-245, 1996.
MML Identifier: FINSEQ_5
Summary: The aim of the paper is to define some basic notions of restrictions of finite sequences.

16. Czeslaw Bylinski, Yatsuka Nakamura. Special Polygons, Formalized Mathematics 5(2), pages 247-252, 1996.
MML Identifier: SPPOL_2

17. Jozef Bialas. The One-Dimensional Lebesgue Measure, Formalized Mathematics 5(2), pages 253-258, 1996.
MML Identifier: MEASURE7
Summary: The paper is the crowning of a series of articles written in the Mizar language, being a formalization of notions needed for the description of the one-dimensional Lebesgue measure. The formalization of a notion as classical as the Lebesgue measure demonstrates the power of the PC Mizar system as a tool for the strict, precise notation and verification of the correctness of deductive theories. Following the successive articles \cite{SUPINF_1.ABS}, \cite{SUPINF_2.ABS}, \cite{MEASURE1.ABS}, \cite{MEASURE6.ABS}, constructed so that the final one should include the definition and the basic properties of the Lebesgue measure, we follow one of the paths that is relatively simple in the sense of the definition, enabling the formal introduction of this notion. This way, although toilsome, since such is the nature of formal theories, is greatly instructive. It brings home the proper succession of the introduction of the definitions of intermediate notions and points out those elements of the theory which determine the essence of the complexity of the notion being introduced.\par The paper includes the definition of the $\sigma$-field of Lebesgue measurable sets, the definition of the Lebesgue measure and the basic set of theorems describing its properties.
18. Andrzej Trybulec. Categories without Uniqueness of \bf cod and \bf dom, Formalized Mathematics 5(2), pages 259-267, 1996.
MML Identifier: ALTCAT_1
Summary: Category theory had been formalized in Mizar quite early \cite{CAT_1.ABS}. This had been done following closely the handbook of S. Mac Lane \cite{MACLANE:1}. In this paper we use a different approach. A category is a triple $$\langle O, {\{ \langle o_1,o_2 \rangle \}}_{o_1,o_2 \in O}, {\{ \circ_{o_1,o_2,o_3}\}}_{o_1,o_2,o_3 \in O} \rangle$$ where $\circ_{o_1,o_2,o_3}: \langle o_2,o_3 \rangle \times \langle o_1,o_2 \rangle \to \langle o_1, o_3 \rangle$ satisfies the usual conditions (associativity and the existence of the identities). This approach is closer to the way in which categories are presented in homological algebra (e.g. \cite{BALCERZYK}, pp. 58-59). We do not assume that the $\langle o_1,o_2 \rangle$'s are mutually disjoint. If $f$ is simultaneously a morphism from $o_1$ to $o_2$ and from $o_1'$ to $o_2$ ($o_1 \neq o_1'$) then different compositions are used ($\circ_{o_1,o_2,o_3}$ or $\circ_{o_1',o_2,o_3}$) to compose it with a morphism $g$ from $o_2$ to $o_3$. The operation $g \cdot f$ thus actually has six arguments (two visible and four hidden: three objects and the category).\par We introduce some simple properties of categories, perhaps more than necessary; this is partially caused by the formalization. The functional categories are characterized by the following properties: \begin{itemize} \item quasi-functional, meaning that morphisms are functions (rather meaningless if it stands alone), \item semi-functional, meaning that the composition of morphisms is the composition of functions, provided they are functions, \item pseudo-functional, meaning that the composition of morphisms is the composition of functions. \end{itemize} For categories, pseudo-functional is just quasi-functional and semi-functional, but we work in a slightly more general setting. Similarly, the concept of a discrete category is split into two: \begin{itemize} \item quasi-discrete, meaning that $\langle o_1,o_2 \rangle$ is empty for $o_1 \neq o_2$, and \item pseudo-discrete, meaning that $\langle o, o \rangle$ is trivial, i.e. consists of the identity only, in a category. \end{itemize}\par We plan to follow the Semadeni-Wiweger book \cite{SEMAD} in the development of category theory in Mizar. However, the beginning is not very close to \cite{SEMAD}, because of the approach adopted and because we work in Tarski-Grothendieck set theory.
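Because the hom-sets $\langle o_1,o_2 \rangle$ in entry 18 need not be disjoint, composition must be indexed by all three objects, not just by the two morphisms. The following Python sketch is only my own illustration of that indexing (all names are invented; it is not the ALTCAT_1 formalization):

```python
# Composition keyed by a triple of objects: comp[(o1, o2, o3)] maps a pair
# (g, f) with f in hom(o1, o2) and g in hom(o2, o3) to g.f in hom(o1, o3).
# Illustrative sketch only; not taken from ALTCAT_1.

hom = {("A", "B"): {"f"}, ("B", "C"): {"g"}, ("A", "C"): {"gf"}}
comp = {("A", "B", "C"): {("g", "f"): "gf"}}

def compose(o1, o2, o3, g, f):
    """Compose g after f; the three objects are explicit 'hidden' arguments."""
    assert f in hom[(o1, o2)] and g in hom[(o2, o3)]
    return comp[(o1, o2, o3)][(g, f)]

print(compose("A", "B", "C", "g", "f"))  # gf
```

If the same $f$ also lay in $\langle o_1',o_2 \rangle$, the call would select a different composition table through the object arguments, which is exactly the point of dropping uniqueness of cod and dom.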
19. Artur Kornilowicz. Extensions of Mappings on Generator Set, Formalized Mathematics 5(2), pages 269-272, 1996.
MML Identifier: EXTENS_1
Summary: The aim of the article is to prove the fact that if extensions of mappings on a generator set are equal, then these mappings are equal. The article contains the properties of epimorphisms and monomorphisms between many sorted algebras.

20. Yatsuka Nakamura, Piotr Rudnicki, Andrzej Trybulec, Pauline N. Kawamoto. Introduction to Circuits, II, Formalized Mathematics 5(2), pages 273-278, 1996.
MML Identifier: CIRCUIT2
Summary: This article is the last in a series of four articles (preceded by \cite{PRE_CIRC.ABS}, \cite{MSAFREE2.ABS}, \cite{CIRCUIT1.ABS}) about modelling circuits by many sorted algebras.\par The notion of a circuit computation is defined as a sequence of circuit states. For a state of a circuit, the next state is given by executing operations at circuit vertices in the current state, according to denotations of the operations. The values at input vertices at each state of a computation are provided by an external sequence of input values. The process of how input values propagate through a circuit is described in terms of a homomorphism of the free envelope algebra of the circuit into itself. We prove that every computation of a circuit over a finite monotonic signature and with constant input values stabilizes after executing the number of steps equal to the depth of the circuit.

21. Artur Kornilowicz. Definitions and Basic Properties of Boolean \& Union of Many Sorted Sets, Formalized Mathematics 5(2), pages 279-281, 1996.
MML Identifier: MBOOLEAN
Summary: In the first part of this article I have proved theorems about the boolean of many sorted sets which correspond to theorems about the boolean of sets, whereas the second part of this article contains propositions about the union of many sorted sets. The boolean as well as the union of many sorted sets are defined as the boolean and the union on every sort.

22. Yatsuka Nakamura, Grzegorz Bancerek. Combining of Circuits, Formalized Mathematics 5(2), pages 283-295, 1996.
MML Identifier: CIRCCOMB
Summary: We continue the formalisation of circuits started in \cite{PRE_CIRC.ABS}, \cite{MSAFREE2.ABS}, \cite{CIRCUIT1.ABS}, \cite{CIRCUIT2.ABS}. Our goal was to work out a notation for combining circuits which could be employed to prove the properties of real circuits.
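The circuit articles (entries 3, 10, 12, 20 and 22) treat a computation as a sequence of states. As a rough illustration only, and not the many sorted algebra development itself (all names below are invented), one transition step recomputes every inner vertex from the previous state while input vertices receive externally supplied values:

```python
# One step of a circuit computation: inner vertices fire on the OLD state,
# input vertices are overwritten from outside. Invented illustrative sketch.

def next_state(state, inputs, gates):
    """gates maps an inner vertex to (operation, argument_vertices);
    inputs maps each input vertex to its externally supplied value."""
    new = dict(inputs)
    for v, (op, args) in gates.items():
        new[v] = op(*(state[a] for a in args))
    return new

gates = {"n1": (lambda a, b: a and b, ("x", "y")),
         "n2": (lambda a: not a, ("n1",))}
state = {"x": True, "y": False, "n1": False, "n2": True}
state = next_state(state, {"x": True, "y": True}, gates)
print(state)  # {'x': True, 'y': True, 'n1': False, 'n2': True}
```

Iterated with constant inputs on a cycle-free circuit, such a step function settles after at most depth-many steps, which is the stabilization result described in entry 20.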
https://brilliant.org/practice/lhopitals-rule-problem-solving/
# L'Hôpital's Rule

When you've got a limit that looks like 0/0 or ∞/∞, L'Hôpital's rule can often find its value -- and make it clear that not all infinities are equal!

# L'Hopital's Rule - Problem Solving

If $$\displaystyle f(x) = \sum_{k=1}^{n} x^k$$, what is the value of $\displaystyle \lim_{n \to \infty} \frac{9n^2 + 10n + 11}{f'(1)}?$

Evaluate $$\displaystyle \lim_{x \to 1} \frac{x^{13} + 10x - 11}{x-1}$$.

If $$\displaystyle \lim_{x \to 0^+} (1+\sin 5x)^{\cot x} = a$$, what is the value of $$\ln a$$?

Evaluate $$\displaystyle \lim_{x \to 0} \frac{1 - \cos (6x)}{x^2}$$.

Evaluate $$\displaystyle \lim_{x \to 0^+} x \ln x$$.
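As a worked illustration of the technique (solutions are not shown on the original page; this one is mine), the fourth limit above yields to two applications of the rule, since each stage is again of the form $0/0$:

\[
\lim_{x \to 0} \frac{1 - \cos (6x)}{x^2}
= \lim_{x \to 0} \frac{6 \sin (6x)}{2x}
= \lim_{x \to 0} \frac{36 \cos (6x)}{2} = 18.
\]

The last problem needs a rewrite first: $x \ln x = \dfrac{\ln x}{1/x}$ turns the $0 \cdot (-\infty)$ product into a quotient of infinities to which the rule applies, giving the limit $0$.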
https://physicsworld.com/a/new-isotope-doubles-up/
# New isotope doubles up

20 May 2005 Belle Dumé

An international team of nuclear physicists has created zinc-54 for the first time. Moreover, they have confirmed that it can undergo two-proton decay, a rare process that has only been seen in one other isotope before (nucl-ex/0505016). The results should shed more light on how protons are bound together in the nucleus.

Nuclei decay when they contain too many neutrons or too many protons to be stable. The most common forms of decay are nuclear fission and alpha, beta and gamma decay. However, some nuclei that contain more protons than neutrons can also decay by emitting a proton — a process that was first observed about 20 years ago.

Single-proton emission is observed in nuclei with an odd number of protons. However, theorists also predicted that some nuclei that contain an even number of protons and/or neutrons could undergo two-proton emission. This process was seen for the first time in 2002 in iron-45, which contains 26 protons and 19 neutrons. The iron-45 nuclei were created by firing a beam of nickel-58 ions onto a nickel or beryllium target.

Now, Bertram Blank of the CENBG laboratory in France and colleagues have created zinc-54 — which contains 30 protons and 24 neutrons — in a similar experiment involving nickel-58 ions and a nickel target at the GANIL laboratory. The zinc-54 nuclei were created in about 1 in 10^17 of the collisions, and Blank and co-workers found that the proton energy and decay half-life of about 3.7 milliseconds both agreed with predictions.

"Having a second two-proton emitter allows for a first real comparison between theory and experiment, which is somewhat difficult to make with just one case," Blank told PhysicsWeb. "Two is better than one and hopefully there will be more."

The group now plans to search for other two-proton emitters. It will also study iron-45 and zinc-54 in more detail with a time projection chamber that will be able to follow the paths of the two protons. "In this way we can study the correlations between the two protons and better understand the emission process itself," says Blank.
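For a sense of scale (a back-of-envelope conversion of my own, not a figure from the article), a half-life of 3.7 ms corresponds to a decay constant of

\[
\lambda = \frac{\ln 2}{t_{1/2}} \approx \frac{0.693}{3.7 \times 10^{-3}\ \mathrm{s}} \approx 190\ \mathrm{s}^{-1}.
\]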