url: stringlengths 15 to 1.13k
text: stringlengths 100 to 1.04M
metadata: stringlengths 1.06k to 1.1k
https://www.johndcook.com/blog/2012/12/22/spotting-sensitivity-in-an-equation/
# Spotting sensitivity in an equation The new book Heavenly Mathematics describes in the first chapter how the medieval scholar Abū Rayḥān al-Bīrūnī calculated the earth’s radius. The derivation itself is interesting, but here I want to expand on a parenthetical remark about the calculation. The earth’s radius r can be found by solving the following equation: cos θ = r / (r + 305.1). The constant in the denominator comes from a mountain which is 305.1 meters tall. The angle θ is known to be 34 minutes, i.e. 34/60 degrees. Here is the remark that caught my eye as someone more interested in numerical analysis than trigonometry: There is a delicate matter hidden in this solution however: a minute change in the value of θ results in a large change in the value of r. How can you tell that the solution is sensitive to changes (i.e. measurement errors) in θ? That doesn’t seem obvious. Think of r as a function of θ and differentiate both sides of the equation with respect to θ. We’ll convert θ to radians because that’s what we do. (Explanation at the bottom of this post.) We get 305.1 (dr/dθ) / (r + 305.1)^2 = -sin θ, or dr/dθ = -(r + 305.1)^2 sin θ / 305.1. Now let’s get a feel for the size of the terms in this equation. θ is approximately 0.01 radians, and so sin θ is approximately 0.01 as well. (See explanation here.) The radius of the earth is about 6.4 million meters. So the right side of the equation above is about 1.3 billion meters in magnitude, i.e. it’s big. A tiny increase in θ leads to a large decrease in r. For example, if our measurement of θ increased by 1%, from 0.01 to 0.0101, our measurement of the earth’s radius would decrease by 130,000 meters. I’d like to point out a couple things about this analysis. First, it shows how it can be useful to think of constants as variables. After measuring θ we could think that we know its value with certainty and treat it as a constant. But a more sophisticated analysis takes into account that while θ might not change, our measurement of θ has changed from the true value. Second, we used the radius of the earth to determine how sensitive our estimate of the earth’s radius is to changes in θ. Isn’t that circular reasoning? Not really. We can use a very crude estimate of the earth’s radius to estimate how sensitive a new estimate is to changes in its parameters. You always have some idea how big a value is before you measure it. If you want to measure the distance to the moon, you know not to pick up a yard stick. ## 4 thoughts on “Spotting sensitivity in an equation” 1. It’s worth noting that it’s not just the radius which is sensitive but also the log of the radius. That is, the derivative normalized by the radius, (1/r) dr/dθ, is still proportional to something big (r/300m or so). This is the more important metric, as it is dimensionless. 2. Francois Also note the using simple second- and first-order approximations (resp. on the left and right-hand sides) leads to the simple equation r = 305.1 (2/theta^2) giving r=6.23824*10^6 (accurate to more than 0.005% if compared to the true solution r=6.23799*10^6). 3. Francois the -> that 4. Nick Craig-Wood Your analysis seems to be saying that a 1% error in θ leads to a 2% error in r. That shouldn’t be a surprise should it? If you knew nothing about numerical analysis then you might guess that a 1% error in the input made a 1% error in the output, so 2% doesn’t seem so surprising. Or is the surprise that 2% * a big number is still a big number?
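The numbers quoted in the post can be reproduced in a few lines. This is a sketch of my own, not code from the post; it assumes the relation cos θ = r/(r + h) with mountain height h = 305.1 m, which is the form implied by the "constant in the denominator" remark and by the figures 1.3 billion and 130,000 meters.

```python
import numpy as np

h = 305.1                      # mountain height in meters (assumed from the post)
theta = np.radians(34 / 60)    # 34 arcminutes converted to radians

# Solve cos(theta) = r / (r + h) for r.
r = h * np.cos(theta) / (1 - np.cos(theta))

# Differentiating the same relation gives dr/dtheta = -(r + h)^2 * sin(theta) / h.
drdtheta = -(r + h) ** 2 * np.sin(theta) / h

# Propagate a 1% measurement error in theta linearly.
delta_r = drdtheta * (0.01 * theta)

print(f"r          ≈ {r:.4g} m")         # ≈ 6.238e6 m, matching Francois' comment
print(f"dr/dtheta  ≈ {drdtheta:.3g} m/rad")  # ≈ -1.26e9; the post's rounded figures give -1.3e9
print(f"1% error in theta shifts r by ≈ {delta_r:.3g} m")  # ≈ -1.25e5; the post quotes 130,000 m
```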
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9384313225746155, "perplexity": 436.0842851862107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155529.97/warc/CC-MAIN-20210805095314-20210805125314-00246.warc.gz"}
https://as-seminars.quantum-spacetime.net/2018/10/15/Carlos-M-Nieto.html
# Carlos M. Nieto: In search of a UV completion of the Standard Model Asymptotically safe extensions of the Standard Model have been searched for by adding vector-like fermions charged under the Standard Model gauge group and having Yukawa-like interactions with new scalar fields. We study the corresponding renormalization group $\beta$-functions to next-to-leading and next-to-next-to-leading order in the perturbative expansion, varying the number of extra fermions and the representations they carry. We test the fixed points of the $\beta$-functions against various criteria of perturbativity to single out those that are potentially viable. We show that all the candidate ultraviolet fixed points are unphysical for these models: either they are unstable under radiative corrections, or they cannot be matched to the Standard Model at low energies. Seminar Date:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9102725386619568, "perplexity": 636.1273151789694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153531.10/warc/CC-MAIN-20210728060744-20210728090744-00608.warc.gz"}
https://export.arxiv.org/abs/2108.02536
math.DS # Title: Sofic mean dimension of typical actions and a comparison theorem Abstract: We refine two results in the paper entitled "Sofic mean dimension" by Hanfeng Li, improving two inequalities to equalities, respectively, for sofic mean dimension of typical actions. On the one hand, we study sofic mean dimension of full shifts, for which Li provided an upper bound that, however, is not optimal. We prove a more delicate estimate from above, which is optimal for sofic mean dimension of full shifts over arbitrary alphabets (i.e. compact metrizable spaces). Our refinement, together with the techniques (in relation to an estimate from below) in the paper entitled "Mean dimension of full shifts" by Masaki Tsukamoto, eventually allows us to get the exact value of sofic mean dimension of full shifts over any finite dimensional compact metrizable space. On the other hand, we investigate finite group actions. In contrast to the case that the acting group is infinite (and amenable), Li showed that if a finite group acts continuously on a finite dimensional compact metrizable space, then sofic mean dimension may be different from (strictly less than) the classical (i.e. amenable) mean dimension (an explicitly known value in this case). We strengthen this result by proving a sharp lower bound, which, combined with the upper bound, gives the exact value of sofic mean dimension for all the actions of finite groups on finite dimensional compact metrizable spaces. Furthermore, this equality leads to a satisfactory comparison theorem for those actions, deciding when sofic mean dimension coincides with classical mean dimension. Moreover, our two results, in particular, verify for a typical class of sofic group actions that sofic mean dimension does not depend on sofic approximation sequences. Subjects: Dynamical Systems (math.DS) Cite as: arXiv:2108.02536 [math.DS] (or arXiv:2108.02536v1 [math.DS] for this version) ## Submission history From: Lei Jin [v1] Thu, 5 Aug 2021 11:47:52 GMT (15kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.961834728717804, "perplexity": 1084.1755662785795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00459.warc.gz"}
http://mathoverflow.net/questions/141693/are-there-insane-families-in-l
Are there insane families in $L$? Let $A,B\subseteq\omega$. We write $A\subseteq^*B$ if $A\setminus B$ is finite, if additionally $B\setminus A$ is infinite then we write $A\subsetneq^*B$, otherwise we write $A=^*B$. We say that a $\cal A\subseteq P(\omega)$ is almost disjoint if for every two distinct $A,B\in\cal A$ we have $A\cap B=^*\varnothing$. We say that $\cal A$ is maximal almost disjoint, or MAD, if there is no $\cal B$ strictly containing $\cal A$ which is almost disjoint. At the other end of the spectrum we say that $\cal A\subseteq P(\omega)$ is a tower if $\cal A$ is well-ordered by $\subsetneq^*$. Finally, we define $\mathcal B=\{B_\alpha\mid\alpha<\kappa\}$ to be insane if it is MAD, and there exists a tower $\mathcal A=\{A_\alpha\mid\alpha<\kappa\}$ with the following property: $$\beta<\alpha\implies B_\beta\subseteq^*A_\alpha\\ \beta\geq\alpha\implies B_\beta\cap A_\alpha=^*\varnothing.$$ In that case we say that $\cal A$ is an associated tower for $\cal B$. Note, for example, that if $\cal B$ is insane and $\cal A$ is an associated tower then $A_{\alpha+1}\setminus A_\alpha=^*B_\alpha$. Questions. 1. Is the existence of insane families consistent with $\sf ZFC$? 2. If the answer is yes to the previous question, is there an insane family in $L$? 3. If the answer is yes to the previous question, can this notion be extended to every regular cardinal $\kappa$? (replacing "finite" by ${<}\kappa$ in the definition of $\subseteq^*$ and so on.) - I would be glad to hear constructive remarks, in additional to the less-constructive downvotes! –  Asaf Karagila Sep 9 '13 at 17:13 +1: seems like you are having too much fun with your math, how non serious is that, tsk tsk... :-) –  Suvrit Sep 9 '13 at 17:32 @survit: Well, MAD families is a common term (which I can't, in good conscience, claim as my own) and insane families are just... madder than usual, because that tower thingie is not at all obviously definable from every mad family. :-) –  Asaf Karagila Sep 9 '13 at 17:37 Someone downvoted this? Weird. +1 from me, anyways. –  Noah S Sep 9 '13 at 18:51 If only you could have found a way to replace "L" with "the membrane" ... –  Yemon Choi Sep 9 '13 at 23:13 Suppose towards contradiction that we have an insane family $\mathcal{B}=\{B_\alpha\mid\alpha\lt\kappa\}$, witnessed by tower $\langle A_\alpha\mid\alpha\lt\kappa\rangle$. For finite $k$, let $b_k$ be any element in $[(A_\omega-A_k)\cap B_k]-\bigcup_{j\lt k}B_j$. There are such elements, since $B_k$ is almost disjoint from $A_k$ and from the earlier $B_j$ for $j\lt k$, and $B_k$ is almost contained in $A_\omega$. Note that the $b_k$ are distinct, and so $B=\{b_k\mid k\lt\omega\}$ is infinite. By maximality, $B$ must have infinite intersection with some $B_\beta$. Note that $B$ has exactly one element from each $B_k$ for $k\lt\omega$. So it must be that $\beta\geq\omega$. But in this case, since $B\subset A_\omega$, we have infinitely many elements in $B_\beta\cap A_\omega$, which violates the second insanity clause. Oh. I suspected as much. Drats. This means that I am going to have so much more work... Oh well. Thanks. I suppose this will be the same argument if we require $\leq$ and $>$ instead... –  Asaf Karagila Sep 9 '13 at 20:33 In the case of $P(\kappa)$, if you use an ideal $I$ for which $P(\kappa)/I$ is a complete Boolean algebra, then you can realize any maximal antichain as the difference antichain of a tower in the Boolean algebra. But you've got to work modulo $I$ instead of modulo finite. 
–  Joel David Hamkins Sep 9 '13 at 21:53 This question reminds me of your previous question about the countable completeness of $\mathcal{P}(\omega)$/fin. If an insane family exists, then $A_{\omega}$ (for example) is a least upper bound for the countable chain $\langle A_n: n < \omega \rangle$. For if $A_{\omega}' \subsetneq_* A_{\omega}$ was a strictly smaller upper bound, the difference set $X=A_{\omega} \setminus A_{\omega}'$ would be almost disjoint from all the $B_{\beta}$, contradicting maximality. But countable chains never have least upper bounds in $\mathcal{P}(\omega)$/fin, so it must be that there is no insane family. –  Garrett Ervin Sep 10 '13 at 4:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.959216833114624, "perplexity": 278.65443145192177}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463608.25/warc/CC-MAIN-20150226074103-00038-ip-10-28-5-156.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/320352/relation-between-mse-and-bias-variance?noredirect=1
# Relation between MSE and Bias-Variance If MSE is $$\mathrm{MSE}(\hat Y) = \mathrm{Var}(\hat Y) + \mathrm{Bias}^2(\hat Y)$$ and the Bias-Variance decomposition is given by $$\mathrm{Err}(\hat Y\,|\,X=x_0) = \mathrm{Var}(\hat Y) + \mathrm{Bias}^2(\hat Y) + \sigma^2$$ then it looks like they're related, the second one being for a single event, while the first one is a mean. But it's not clear how to get from one to the other. So what is the relation? (I have copied the equations from different sources, and there are slight notational differences, e.g. $\mathrm{Bias}(\hat Y)$ vs $\mathrm{Bias}(\hat Y, Y)$, and it's not clear to me if these are equivalent or not, so please point out/correct any misuse of notation.) • The extra term $\sigma^2$ is due to the randomness in $Y$ itself, i.e., the "error" in $\hat{Y}$ for any given observed value of $Y$. – jbowman Dec 25 '17 at 22:42 • The bias should be squared in your equation. – Michael Chernick Dec 25 '17 at 23:46 • Please give sources for your equations; it might help to have the context there (especially on the second equation). It might pay to double check your equations (or your definition of bias). – Glen_b Dec 26 '17 at 0:36 • @MichaelChernick I forgot, it's been corrected now. Thanks! – Frank Vel Dec 26 '17 at 9:04 • I agree that this question is not quite the same as the linked question. Therefore, I am voting to reopen. – gung Dec 29 '17 at 0:49 The two formulas express the bias-variance trade-off in two different contexts (actually related to each other). It is confusing to use the letter $Y$ in both cases where they actually stand for different things. For the first case, I'm going to use the letter $F$ instead. For an estimator $$MSE(\hat F)=Var(\hat F)+Bias^2(\hat F)$$ In this case, you have a model with a parameter $\theta$ giving you the distribution of your data. You want to make an inference on this parameter, more precisely about a property of it called $F$ that is a function of $\theta$. For this you have an estimator $\hat F$ that is a function of your data. Then the formula is: $$E_\theta((F-\hat F)^2)=V_\theta(\hat F)+E_\theta(\hat F-F)^2$$ This formula gives you the MSE of your estimator, or, if you prefer, measures the quality of the estimator in terms of squared distance to what it is supposed to estimate. For a predictor $$MSE(\hat Y|X=x_0)=Var(\hat Y)+Bias^2(\hat Y)+\sigma^2$$ In this case you have a model with additive noise. $X$ is the input, $Y$ is the output and the model is: $Y=f_\theta(X)+\epsilon$ where the noise is assumed to have mean 0 and known variance $\sigma^2$. You had some data to learn from and now you take a new $x_0$ and want to predict the unknown outcome $y_0$. Since the noise has mean 0, it is natural to use $f_\theta(x_0)$ as a guess for $y_0$. You don't know $\theta$ and thus you don't know the value of the function (of $\theta$) $f_\theta(x_0)$. So you need to use an estimator of it called $\hat f(x_0)$.
Using the previous formula for this estimator ($F=f(x_0)$) you know that: $$E_\theta((f(x_0)-\hat f(x_0))^2)=V_\theta(\hat f(x_0))+E_\theta(\hat f(x_0)-f(x_0))^2$$ On the other hand, because of the additive noise: $$E_\theta((f(x_0)-y_0)^2)=\sigma^2$$ Assuming the noise is independent of your training data, you can finally combine the two formulas (i'm skipping the technical details): $$E_\theta((y_0-\hat f(x_0))^2)=V_\theta(\hat f(x_0))+E_\theta(\hat f(x_0)-f(x_0))^2+\sigma^2$$ In a less formal way: • take $x_0$ you want to predict the outcome $y_0$ • you will predict it with the predictor $\hat Y=\hat f(x_0)$ • you make two errors: the first one is because of the imprecision of your estimator $\hat f(x_0)$ of $f(x_0)$. The second is because of the noise. • they sum up because the noise is additive (and independent)
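A small simulation makes the two decompositions concrete. This is an illustrative sketch, not part of the answer above: it estimates a Gaussian mean with a deliberately shrunken (hence biased) estimator, checks that MSE ≈ Var + Bias², and then shows that predicting a fresh observation adds the irreducible σ² term.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 2.0, 1.0            # true mean and noise standard deviation
n, n_trials = 20, 200_000

# Shrinkage estimator of the mean: biased, but with lower variance.
samples = rng.normal(theta, sigma, size=(n_trials, n))
theta_hat = 0.8 * samples.mean(axis=1)

mse  = np.mean((theta_hat - theta) ** 2)
var  = np.var(theta_hat)
bias = np.mean(theta_hat) - theta
print(mse, var + bias**2)                   # the two agree up to Monte Carlo error

# Prediction error for a new observation y0 ~ N(theta, sigma^2):
y0 = rng.normal(theta, sigma, size=n_trials)
pred_err = np.mean((y0 - theta_hat) ** 2)
print(pred_err, var + bias**2 + sigma**2)   # picks up the extra sigma^2 noise term
```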
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8927609920501709, "perplexity": 231.2355789344841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986717235.56/warc/CC-MAIN-20191020160500-20191020184000-00295.warc.gz"}
https://es.coursera.org/lecture/neural-networks-deep-learning/binary-classification-Z8j0R
Binary Classification Skills you will learn: Deep Learning, Artificial Neural Network, Backpropagation, Python Programming, Neural Network Architecture Reviews 4.9 (116,340 ratings) • 5 stars 89.69% • 4 stars 9.24% • 3 stars 0.80% • 2 stars 0.11% • 1 star 0.12% AS Jul 10, 2021 I have learned a lot of things in deep learning, such as neural networks, deep neural networks, forward propagation, backward propagation, broadcasting and vectorization. This is very important for me. AK May 13, 2020 One of the best courses I have taken so far. The instructor has been very clear and precise throughout the course. The homework section is also designed in such a way that it helps the student learn. From the lesson: Neural Networks Basics Set up a machine learning problem with a neural network mindset and use vectorization to speed up your models. Taught by: • Kian Katanforoosh, Senior Curriculum Developer • Younes Bensouda Mourri, Curriculum Developer
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826934814453125, "perplexity": 4101.290460224254}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00153.warc.gz"}
http://math.stackexchange.com/questions/7401/probability-distribution-with-different-probabilities
# Probability Distribution with different probabilities Suppose there are 9 events that have a probability of 10%, 20%, 30%, ..., 90% of being a success. How would I find the probability of exactly n of these events succeeding? For n = 1, I'm thinking it is something like (first succeeds) -> (1-0.9) * 0.8 * 0.7 * 0.6 ... 0.1 (second succeeds) -> 0.9 * (1-0.8) * 0.7 * 0.6 ... 0.1 (...) (last) The probabilities might not always be an Arithmetic Progression, so I'm hoping to find a solution that doesn't depend on that. - The generating function for the probabilities $a_n$ for $n$ successes is $$\sum_{n=0}^9 a_n t^n=\prod_{k=1}^9(p_k t+(1-p_k))$$ where $p_k$ is the probability of the $k$-th event occurring, in this example $p_k=k/10$. Maybe in this case the product won't simplify so nicely, but the method is quite general. - I tried searching, but couldn't understand anything. What does t here mean? – Adam Oct 22 '10 at 22:43 Take the events in order. After the first, you have 0.1 chance of 1 success and 0.9 chance of 0 successes. Then add on the second. The chance of one success after two events is 0.2*the chance of failure on the first + 0.8*the chance of success on the first. A spreadsheet is your friend for this. After all the events, you will have a column with the chance of each possible number of successes. Adding to see if you still get 1 is a good check that your equations are right. If you use the absolute/relative reference codes correctly, you can copy down and right most of the equations. - Let the probability that the $i$th event occurs be $p_i = i/10$ and let the probability that exactly $n$ events occur be $P(n).$ So the probability that no events occur is $P(0) = \prod_{i=1}^9 (1-p_i) = 3.6288 \times 10^{-4}.$ Let $S= \lbrace 1,2,3,\ldots,9 \rbrace$ then for $n \ge 1$ we have $$P(n) = P(0) \times \sum_{i_1,i_2,\ldots,i_n \in S, \quad i_k \textrm { distinct} } \frac{p_{i_1} p_{i_2} \ldots p_{i_n} } {(1-p_{i_1})(1-p_{i_2}) \ldots (1-p_{i_n}) },$$ where there are ${9 \choose n}$ terms in the summation. -
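The generating-function answer translates directly into a short computation: multiplying out the factors (p_k t + 1 - p_k) one at a time is exactly the running tally the spreadsheet answer describes. A sketch of my own, not from the thread:

```python
import numpy as np

def poisson_binomial(ps):
    """Return [P(0 successes), P(1), ..., P(len(ps))] for independent events
    with success probabilities ps, i.e. the coefficients of prod (p*t + 1-p)."""
    dist = np.array([1.0])                    # the empty product
    for p in ps:
        dist = np.convolve(dist, [1 - p, p])  # multiply by (1-p) + p*t
    return dist

ps = [k / 10 for k in range(1, 10)]           # 10%, 20%, ..., 90%
dist = poisson_binomial(ps)

print(dist[0])     # P(no successes) = 0.9*0.8*...*0.1 ≈ 3.6288e-04, as in the answer
print(dist[1])     # P(exactly one success)
print(dist.sum())  # sanity check suggested in the spreadsheet answer: sums to 1
```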
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9173578023910522, "perplexity": 305.2809923627512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448399455473.15/warc/CC-MAIN-20151124211055-00026-ip-10-71-132-137.ec2.internal.warc.gz"}
http://www.ueda.info.waseda.ac.jp/hydla/index.php?cmd=diff&page=Syntax
# Syntax [#r2da58d9] - The abstract syntax of HydLa is given in the following BNF. (figure: HydLa_BNF.png, the BNF of HydLa) - A HydLa program consists of definitions of constraints and declarations of constraint hierarchies. ** Definition [#s649ac18] - In a definition, we define a named constraint or a named constraint hierarchy, possibly with arguments, using the operator " ''<=>'' ". If the definition has no arguments, we can omit parentheses. INIT <=> y = 0 & y' = 0. // definition of a constraint for the initial state of a ball. FALL <=> [](y'' = -10). // definition of a constraint for falling of the ball. BOUNCE(x) <=> [](x- = 0 => x' = -x'-). // definition of a constraint for bouncing of the ball. BALL{FALL << BOUNCE}. // definition of a constraint hierarchy for the ball. - $Constraints$ allow conjunctions of constraints and implications. - The antecedents of implications are called $guards$. - " [] " denotes a temporal operator which means that the constraint always holds from the time point at which the constraint is enabled. - Each variable is denoted by a string starting with a lower-case letter ($vname$). - The notation $vname′$ means the derivative of $vname$, and $vname−$ means the left-hand limit of $vname$. ** Declaration of Constraint Hierarchies [#oc2bb594] - In a declaration, we declare constraints with priorities between them. - The operator "$<<$" is a concrete notation of the operator "$\ll$" and it describes a weak composition of constraints. For example, $A << B$ means that the constraint A is weaker than B. - If we declare a constraint without "$<<$", it means that there is no priority about the constraint. - The operator "$<<$" has a higher precedence than "$,$", that is, $A, B << C$ is equivalent to $A, (B << C)$. - The unit of constraints that is declared with a priority is called a module or a constraint module. - The meaning of a HydLa program is a set of trajectories that satisfy maximal consistent sets of candidate constraint modules at each time point. - Each candidate constraint module set $MS$ must satisfy the conditions below: $\forall M_1, M_2((M_1 \ll M_2 \land M_1 \in {MS} \,) \Rightarrow M_2 \in {MS}\,)$ $\forall R (\neg \exists M ( R \ll M ) \Rightarrow R \in {MS} \,)$ ** List Comprehension [#eea12551] - In modeling of hybrid systems, we often need to introduce multiple similar objects. - HydLa allows list comprehensions to easily describe models with multiple objects. - A list can be defined by the operator "$:=$", like $\texttt{X := {x0..x9}.}$ - There are two types of lists: priority lists and expression lists. *** Priority List [#rb9d66d1] - The first type of list is the priority list. -- A priority list can be denoted by an extensional notation of the following form. > $\{MP_1, MP_2, ..., MP_n\}$ < -- It can also be denoted intensionally: > $\{MP | LC_1, LC_2, ..., LC_n\}$ < -- For example, $\{\texttt{INIT(i)}\ |\ \texttt{i in \{1,2,3,4\}}\}$ is equivalent to $\{\texttt{INIT(1),INIT(2),INIT(3),INIT(4)}\}$ -- If a HydLa program includes declarations of priority lists, the elements of the lists are expanded, that is, a declaration of $\{\texttt{A, B, C}\}$ is equivalent to $\texttt{A, B, C}$. *** Expression List [#c1744645] - The second type of list is the expression list, that is, a list of arithmetic expressions.
-- Like a priority list, an expression list can be denoted in an extensional or an intensional notation. -- In addition, we can use range expressions in the following form. > $\{RE .. RE\}$ < -- $RE$ is an arithmetic expression without variables or an arithmetic expression with a variable whose name terminates with a number, such as x0 and y1. *** Example of Lists [#b3820221] - An expression list $\{1*2+1..5\}$ is equivalent to $\{3,4,5\}$. - An expression list $\{\texttt{j | i in \{1,2\}, j in \{i+1..4\}}\}$ is equivalent to $\{2,3,4,3,4\}$. - An expression list $\{\texttt{x1..x3}\}$ is equivalent to $\{\texttt{x1, x2, x3}\}$. *** Other Notations [#mc029821] - The n-th element of a list $\texttt{L}$ can be accessed by $\texttt{L[n]}$. -- The index allows an arbitrary expression that results in an integer. - The size of a list $\texttt{L}$ is denoted by $\texttt{|L|}$, which can be used as a constant value in a HydLa program. - $\texttt{sum(L)}$ is syntactic sugar for the sum of the elements in an expression list $\texttt{L}$. ** Tips [#jb66d45b] - Napier's constant "E" and the ratio of circumference to diameter "Pi" can also be used as constant values. - The following trigonometric functions are available: sin(x), cos(x), tan(x), asin(x), acos(x), atan(x). - Detailed syntax and semantics can be found [http://www.ueda.info.waseda.ac.jp/~matsusho/public/dissertation.pdf here].
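For readers more used to mainstream languages, the intensional lists above behave like ordinary list comprehensions. The following Python analogue is only an illustration of the semantics of the examples listed in this page; it is not HydLa syntax.

```python
# {1*2+1..5}  ->  {3, 4, 5}
print(list(range(1 * 2 + 1, 5 + 1)))                       # [3, 4, 5]

# {j | i in {1,2}, j in {i+1..4}}  ->  {2, 3, 4, 3, 4}
print([j for i in (1, 2) for j in range(i + 1, 4 + 1)])    # [2, 3, 4, 3, 4]

# sum(L) and |L| correspond to Python's sum() and len()
L = list(range(3, 6))
print(sum(L), len(L))                                      # 12 3
```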
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9396945238113403, "perplexity": 1410.9666456534537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808972.93/warc/CC-MAIN-20171124214510-20171124234510-00232.warc.gz"}
https://www.physicsforums.com/threads/charge-on-metallic-plates.694213/
# Charge on metallic plates 1. May 28, 2013 ### Saitama The problem statement, all variables and given/known data Following are the questions based on the above paragraph Q1)The charge appearing on the outer surface of plate 1, when switches K1 and K2 are open A)zero B)Q C)-Q D)-3Q Q2)If K1 is closed and K2 is open, the charge appearing on the right surface of plate 2 is A) $Q/2+(\epsilon_0A/d)V/4$ B)$(\epsilon_0A/d)V/4+3Q/2$ C)$(\epsilon_0A/d)V/4-Q/2$ D)$3Q/2$ Q3)If both switches are closed, the charge appearing on plate 4 is A)$(\epsilon_0A/d)V$ B)$(\epsilon_0A/d)V/2+Q$ C)$Q-(\epsilon_0A/d)V/2$ D)$(\epsilon_0A/d)V/3$ 2. Relevant equations 3. The attempt at a solution For Q1), it can be easily done by equating the electric field at any point inside the plate equal to zero but the solution solves it in a single line. It is written that "Charge on outermost surface=(net charge on system/2)=0". How did it arrive at this result? About Q2) and Q3), I have no idea. I am clueless on how to even begin with them. Any help is appreciated. Thanks! #### Attached Files: • ###### paragraph.jpg File size: 22 KB Views: 133 2. May 28, 2013 ### haruspex For (2), when the switch is closed you will get a movement of charge between plates 1 and 4, right? And the quantity moved will be so as to produce a potential difference of V between them? 3. May 28, 2013 ### rude man I am just posting to be kept in touch on this. Hope our Smart Ones chime in! On Question 1 I would have said that the surface charge on the left side of plate 1 = Q/2. If there is charge on plate 1 the charges have to be on the surface. If plate 1 were isolated then certainly half the excess charge woud reside on the left side. Why would the proximity of plates 2 et alia change that? How small does d have to be to change all that? Certainly, a gaussian "box" running from just outside the left side of plate 1 to just outside the right side of plate 4 would yield zero net flux as required if the right side of plate 4 held surface charge -Q/2. 4. May 28, 2013 ### haruspex For part 1, the close proximity allows you to treat the fields as orthogonal to the plates everywhere. 5. May 28, 2013 ### Saitama I am still not sure what to do. When the switch is closed, let an extra charge $q_1$ flow to plate 1, so the new charges on plate 1 and plate 4 are $Q+q_1$ and $-Q-q_1$. Should I do the usual practice of equating electric field to zero inside the plate? And what about the solution to Q1? 6. May 29, 2013 ### rude man That was implicit in my statement about the gaussian box. No leakage out the sides, and the flux pointing to the left on both the left and right ends. 7. May 29, 2013 ### haruspex You need to get an expression for the potential inside each of the end plates, since you need to use the fact that the difference between them is V. For Q1, I don't think there's really an easier way than the way you did it. But it's easy to see that you can generalise it to an arbitrary stack of plates with some given total charge, and it may be that the author considers this a known standard result. 8. May 29, 2013 ### Saitama I am still unsure what to do here. Can you please explain a bit more? 9. May 29, 2013 ### haruspex In terms of the initial charges and the unknown transfer of charge q between the end plates, you can determine the charge on each face of each sheet, right? Taking the plates to be thin compared with d, and setting the potential of the left hand plate as V0, say, you can compute in these terms the potential inside the other end plate. 
The difference in potential between the two end plates is V, giving you an equation for q. You can now deduce the charge on each face of each sheet. 10. May 30, 2013 ### Saitama The charge on the left face of plate 1 and right face of plate 4 is zero from the result of the previous solution. The new charge distribution is shown in the following figure. Looks good? #### Attached Files: • ###### charges.png File size: 52.2 KB Views: 110 11. May 30, 2013 ### haruspex Right, that's the new charge distribution. So what's the potential difference (in terms of the Qs and ds) between plates 1 and 4? 12. May 30, 2013 ### Saitama $$\frac{q_1}{C}+\frac{2Q+q_1}{C}+\frac{q_1}{C/2}=V$$ where $C=A\epsilon_0/d$ Looks correct? 13. May 30, 2013 ### utkarshakash There is another simpler way to solve these problems. Ok I'm giving you a hint. Transform this arrangement to a circuit consisting of 3 capacitors, 2 switches and 2 batteries. Now by looking at the circuit you can easily find the charge distribution on the capacitors. 14. May 30, 2013 ### haruspex Sorry for the delay - was busy. Yes, that's what I get. Unfortunately, it doesn't correspond to any of the offered answers. Seems to me there has to be a factor 1/3 in the answer. 15. May 30, 2013 ### Saitama I do get option B) as my answer. Solving the above equation, $$q_1=\frac{CV}{4}-\frac{Q}{2}$$ We require the charge on the right surface of plate 2 which is $2Q+q_1$. Substituting $q_1$, I get B. 16. May 30, 2013 ### haruspex Just realised I didn't notice the 2d width - I thought they were all just d. So, on to the last part 17. May 30, 2013 ### Saitama Here's how I think the charges arrange, Correct? #### Attached Files: • ###### charge.png File size: 55.9 KB Views: 98 18. May 31, 2013 ### haruspex With both switches closed, Q cannot matter any more. The charges will rebalance between 1 and 4 and between 2 and 3 in a way that's entirely driven by the applied potentials. Your diagram shows the total charge on plate 2 being 2Q, so I don't think it can be right. You do know that plates 1 and 4 will have opposite total charges, and plates 2 and 3 will have opposite total charges. Try assigning unknowns to those and solving. 19. May 31, 2013 ### Saitama #### Attached Files: • ###### charge.png File size: 55.5 KB Views: 88 20. May 31, 2013 ### haruspex Yes, that looks right. Now deduce the relationship between q and q2 from the potential differences.
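The algebra in posts 12 and 15 can be checked symbolically. A small SymPy sketch of my own (not from the thread), using the thread's notation where C = ε0A/d and the wider 2d gap enters as a capacitance C/2:

```python
import sympy as sp

Q, V, C, q1 = sp.symbols('Q V C q1')

# Post 12: the potential difference across the three gaps adds up to V.
eq = sp.Eq(q1/C + (2*Q + q1)/C + q1/(C/2), V)

sol = sp.solve(eq, q1)[0]
print(sp.simplify(sol))            # C*V/4 - Q/2, the result quoted in post 15

# Charge on the right surface of plate 2, i.e. option B of the original question:
print(sp.simplify(2*Q + sol))      # C*V/4 + 3*Q/2
```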
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8400425910949707, "perplexity": 909.6416295133205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891543.65/warc/CC-MAIN-20180122213051-20180122233051-00260.warc.gz"}
https://en.wikibooks.org/wiki/Mathematical_Proof_and_the_Principles_of_Mathematics/Logic/The_universal_quantifier
# Mathematical Proof and the Principles of Mathematics/Logic/The universal quantifier We've introduced the idea of quantifiers in general, and the universal quantifier in particular. Before going on to the second quantifier we'll give the rules of inference for the first one. ## Proving a universal To prove a statement of the form For all $x$, $P(x)$ first ask the reader to pick an arbitrary constant $a$, then prove the statement $P(a)$. The idea is that, since the constant $a$ was chosen at random, with no assumptions other than that it's an object in the universe of discourse, the proof of $P(a)$ is valid no matter the choice of $a$. Therefore $P(x)$ is true for all $x$. Note that for a proof of an implication, the reader is asked to make an assumption, then you as the prover must derive a conclusion. Similarly, in the proof of a universal, the reader is asked to do something, namely pick a constant, then you as the prover must derive a conclusion. So we'll state the rule of inference in a way similar to the way the first rule of inference for implication is stated. If by choosing $a$ as an arbitrary constant one can derive $P(a)$, then deduce For all $x$, $P(x)$ With this in mind, the structure of the proof of a universal will be Line Statement Justification 1 Choose $a$ Arbitrary constant (something) n $P(a)$ ? n+1 For all $x$, $P(x)$ From 1 and n The indentation is used to show that the letter a introduced on line 1 has no meaning outside the range 1 through n. (It should be noted here that most books on logic follow a different convention. The rule is often stated as From $P(x)$ deduce For all $x$, $P(x)$. where $x$ is an unbound variable in $P(x)$ and subject to restrictions on where $x$ appears elsewhere in the proof. This may be a valid form of reasoning, but it's not the way a proof is normally written in a mathematical context. For one thing, the expression $P(x)$, since it has an unbound variable, is really a predicate, not a statement. But proofs in mathematics, aside from the occasional imperative such as "Assume ... ," contain only statements. The rule also requires the distinction between bound and unbound variables. But we're avoiding that distinction and focusing on the difference between statements and predicates instead.) ## Example proof #1 We use the rule of inference above to prove: Prop. 1: For all $x$, ($P(x)$ implies $P(x)$). First, lay out the structure of the proof as above. Line Statement Justification 1 Choose $a$ Arbitrary constant (something) n $P(a)$ implies $P(a)$ ? n+1 For all $x$, ($P(x)$ implies $P(x)$) From 1 and n The task now is to prove $P(a)$ implies $P(a)$, but $P(a)$ is a statement so we can apply the previously proven proposition $P$ implies $P$. So the proof can be completed as Line Statement Justification 1 Choose $a$ Arbitrary constant 2 $P(a)$ implies $P(a)$ Prop.
1 on Direct proofs for implication. 3 For all $x$, ($P(x)$ implies $P(x)$) From 1 and 2 In prose form this becomes: Prop. 1: For all $x$, ($P(x)$ implies $P(x)$). Proof: Let $a$ be arbitrary. Then $P(a)$ implies $P(a)$ by Prop. 1 on Direct proofs for implication. Therefore $P(x)$ implies $P(x)$ for all $x$. ## Using a universal The rule for using a universal quantifier is relatively simple. If $a$ is a constant then from For all $x$, $P(x)$ deduce $P(a)$. Basically this says that if $P(x)$ holds for all $x$ then it holds for a particular value, $a$, of $x$. Note that this is only valid when $a$ is defined in the current scope. ## Example proof #2 We now have all the rules of inference needed to prove statements involving universal quantifiers, so let's put them to work with another example. The classical syllogism All people are mortal. Socrates is a person. Therefore Socrates is mortal. may be restated in our notation as For all $x$, ($P(x)$ implies $M(x)$). $P(s)$ Therefore $M(s)$ where $P(x)$ is the predicate $x$ is a person, $M(x)$ is the predicate $x$ is mortal, and $s$ is the constant Socrates. This is supposed to be a syllogism, in other words the conclusion is supposed to be valid for any $P$, $M$ and $s$, as long as the first two statements are valid. This amounts to Prop. 2: For all $x$, ($P(x)$ implies $M(x)$) and $P(s)$ implies $M(s)$ This is an implication, so start with the standard outline for a direct proof. Line Statement Justification 1 For all $x$, ($P(x)$ implies $M(x)$) and $P(s)$ Hypothesis (something) n $M(s)$ ? n+1 For all $x$, ($P(x)$ implies $M(x)$) and $P(s)$ implies $M(s)$ From 1 and n It's probably a good idea to break up the 'and' into separate statements. Line Statement Justification 1 For all $x$, ($P(x)$ implies $M(x)$) and $P(s)$ Hypothesis 2 For all $x$, ($P(x)$ implies $M(x)$) From 1 3 $P(s)$ From 1 (something) n $M(s)$ ? n+1 For all $x$, ($P(x)$ implies $M(x)$) and $P(s)$ implies $M(s)$ From 1 and n We have $P(s)$ and need to derive $M(s)$, so something like $P(s)$ implies $M(s)$ will do the trick. But we can get that by applying s to the universal quantifier.
Filling in the details gives: Line Statement Justification 1 For all $x$, ($P(x)$ implies $M(x)$) and $P(s)$ Hypothesis 2 For all $x$, ($P(x)$ implies $M(x)$) From 1 3 $P(s)$ From 1 4 $P(s)$ implies $M(s)$ From 2 5 $M(s)$ From 3 and 4 6 For all $x$, ($P(x)$ implies $M(x)$) and $P(s)$ implies $M(s)$ From 1 and 5 ## Translating categorical propositions Historically, logic dealt with categorical propositions; these are statements that relate two predicates in specific ways. There are four types: All P are Q. No P are Q. Some P are Q. Some P are not Q. The first type, which we've already seen in the previous section, becomes For all $x$, ($P(x)$ implies $Q(x)$). in our notation. The second type may be rephrased as All P are not Q. So in our notation it becomes For all $x$, ($P(x)$ implies not $Q(x)$). Note that we have from propositional logic $P(a)$ implies not $Q(a)$ iff $Q(a)$ implies not $P(a)$. We leave it as an exercise to prove No P are Q iff No Q are P. Now think about what it would mean for the first type All P are Q. to be False. There would need to be some object which is a P but not a Q. In other words, the statement of the fourth type Some P are not Q. would have to be True. On the other hand, if Some P are not Q. is False, then there are no P which are not Q, or put another way, All P are Q. So the fourth statement Some P are not Q. can be translated as Not (all P are Q). or Not (for all $x$, ($P(x)$ implies $Q(x)$)). Similarly, the third statement Some P are Q. can be translated as Not (no P are Q). or Not (for all $x$, ($P(x)$ implies not $Q(x)$)). We'll introduce another quantifier in the next page which will make these expressions more tractable. In the meantime, we leave it as an exercise to translate two of the categorical syllogisms All P are Q. All Q are R. Therefore all P are R. and All P are Q. No R are Q. Therefore No P are R. into our notation and prove them using the rules of inference we've given up to now. You may have noticed that the results of our attempts to translate categorical propositions to our notation have been both more verbose and less like natural language than the originals. So you might well wonder what is the advantage of our notation. One advantage is that our notation is expressive enough to include all mathematical statements while categorical propositions alone are too restrictive. Secondly, the categorical syllogisms do not cover all the valid forms of reasoning which are needed to prove theorems. Consider All triangles and rectangles are rectilinear figures. All squares are rectangles with all sides equal. Therefore all squares are rectilinear figures. This seems to be a valid syllogism, but since the premises both involve three predicates instead of two, it's not one of the categorical syllogisms. In addition, categorical propositions deal only with predicates and not relations, and it would be impossible to do much mathematics with predicates only.
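The two propositions proved above can also be checked mechanically. A small Lean 4 sketch of my own (not part of the wikibook): the first term is Prop. 1, the second is the Socrates syllogism of Prop. 2, with the universal instantiated at s exactly as in line 4 of the table.

```lean
-- Prop. 1: for all x, P(x) implies P(x).
example {α : Type} (P : α → Prop) : ∀ x : α, P x → P x :=
  fun _x h => h   -- pick an arbitrary x, assume P x, conclude P x

-- Prop. 2: (for all x, P(x) implies M(x)) and P(s), implies M(s).
example {α : Type} (P M : α → Prop) (s : α)
    (h : (∀ x, P x → M x) ∧ P s) : M s :=
  h.1 s h.2       -- instantiate the universal at s, then apply it to P s
```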
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 140, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9494093656539917, "perplexity": 507.42775050759496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251678287.60/warc/CC-MAIN-20200125161753-20200125190753-00383.warc.gz"}
http://math.stackexchange.com/users/58221/eleanore
# Eleanore less info reputation 7 bio website location age member for 1 year, 7 months seen Jul 3 at 13:46 profile views 40 # 16 Questions 4 Inequality on Shannon's entropy 3 Expected number of overlaps between intervals 2 Equality of sets when minimizing Shannon's Entropy 2 Shannon's entropy in a set of probabilities 2 Lower bound on uncertainty reduction # 193 Reputation +5 Probability of orderings +20 Inequality on Shannon's entropy +10 Equality of sets when minimizing Shannon's Entropy +5 Shannon's entropy in a set of probabilities 0 Algorithm for shifting a curve 0 Expected number of overlaps between intervals # 21 Tags 0 probability × 12 0 uniform-distribution × 2 0 probability-distributions × 8 0 probability-theory × 2 0 entropy × 3 0 proof-strategy × 2 0 proof-writing × 3 0 calculus × 2 0 statistics × 3 0 math-software × 2 # 10 Accounts Stack Overflow 388 rep 213 Science Fiction & Fantasy 313 rep 18 Mathematics 193 rep 7 Travel 150 rep 3 Cross Validated 141 rep 3
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9331036806106567, "perplexity": 4715.5241948027115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834258.45/warc/CC-MAIN-20140820021354-00139-ip-10-180-136-8.ec2.internal.warc.gz"}
https://scholarship.rice.edu/handle/1911/13110/browse?value=Uhlig%2C+Paul+Xavier&type=author
Now showing items 1-1 of 1 • #### Optimal design problems for quasidisks and partially clamped drums: Existence, symmetrization, and numerical methods  (1997) It is shown that the class of quasidisks in the complex plane, with fixed quasicircle constant and area, is compact in both the Hausdorff metric and in the sense of Caratheodory convergence. Compactness for chord-arc domains ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8553625345230103, "perplexity": 2231.6467267157764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860127983.53/warc/CC-MAIN-20160428161527-00096-ip-10-239-7-51.ec2.internal.warc.gz"}
http://clay6.com/qa/27387/find-the-locus-of-intersection-of-two-perpendicular-tangent-to-the-hyperbol
# Find the locus of intersection of two perpendicular tangents to the hyperbola $\large\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$? $\begin{array}{1 1}(a)\;h^2+k^2=a^2-b^2\\(b)\;h^2+k^2=a^2+b^2\\(c)\;h^2-k^2=a^2-b^2\\(d)\;k^2-h^2=a^2-b^2\end{array}$ ## 1 Answer Any tangent with slope $m$ to the hyperbola $\large\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$ is $y=mx+\sqrt{a^2m^2-b^2}$ If it passes through $(h,k)$, then $k=mh+\sqrt{a^2m^2-b^2}$ $(k-mh)^2=a^2m^2-b^2$ $m^2(h^2-a^2)-2hkm+k^2+b^2=0$ Let the slopes of the two tangents be $m_1$ and $m_2$. Then $m_1m_2=\large\frac{k^2+b^2}{h^2-a^2}$ $-1=\large\frac{k^2+b^2}{h^2-a^2}$ (tangents are $\perp$) Hence $h^2+k^2=a^2-b^2$ Hence (a) is the correct answer.
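The quadratic-in-m argument above is easy to verify symbolically. A SymPy sketch of my own (not part of the original answer), using the tangency condition c² = a²m² − b² quoted in the solution:

```python
import sympy as sp

a, b, h, k, m = sp.symbols('a b h k m')

# Forcing the tangent y = m*x + sqrt(a^2*m^2 - b^2) through (h, k)
# gives a quadratic in the slope m.
quad = sp.expand((k - m*h)**2 - (a**2*m**2 - b**2))
c2, c1, c0 = sp.Poly(quad, m).all_coeffs()

slope_product = sp.simplify(c0 / c2)       # product of the two slopes, by Vieta
print(slope_product)                       # (k**2 + b**2)/(h**2 - a**2)

# Perpendicular tangents mean the product equals -1:
print(sp.together(slope_product + 1))      # (b**2 + h**2 + k**2 - a**2)/(h**2 - a**2)
# ...which vanishes exactly when h^2 + k^2 = a^2 - b^2, i.e. option (a).
```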
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913624882698059, "perplexity": 2227.077438327957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321536.20/warc/CC-MAIN-20170627185115-20170627205115-00627.warc.gz"}
https://www.physicsforums.com/threads/dipole-moment.343347/
Dipole Moment 1. Oct 6, 2009 jmtome2 Calculate the dipole moment of a spherical shell of radius 'a' whose surface charge density is σ=σ0(1+cos(θ)). The origin is at the center of the sphere. What I know: 1) p=∫r'*ρ*dV' I'm having trouble understanding how to transform this equation so that I can calculate the dipole moment knowing σ, instead of ρ. -Thanks 2. Oct 6, 2009 latentcorpse i think you need to go to the 2 dimensional analogue. you want to use a surface charge not a volume charge density. try $\vec{p}= \int \sigma (r-r') \vec{dA'}$ remember that you'll need to use spherical polars and so $\vec{dA'}=r^2 \sin{\theta} d \theta d \phi \vec{\hat{r}}$ does that help 3. Oct 6, 2009 jmtome2 So integrate over θ and φ and treat r and r' as constants? Last edited: Oct 6, 2009 4. Oct 6, 2009 turin You can use a Dirac delta function. Are you familiar with those? ρ(r,θ,φ) ~ δ(r-R) σ(θ,φ) where R is the radius of the shell. I will leave the proportionality constant to you. 5. Oct 6, 2009 jmtome2 unfortunately I am not, my professor skipped that portion of the book 6. Oct 6, 2009 latentcorpse delta functions act as follows in 3d: $\delta(\vec{r}-\vec{r'}) f(\vec{r})=f(\vec{r'})$ for an arbitrary function $f$. if you use your original formula for the dipole moment and substitute the expression for rho given what will your new equation be? this one will involve $\sigma$ i.e. surface charge density and should therefore be applicable here. 7. Oct 6, 2009 jmtome2 so I get that $\bar{p}=\int \bar{r}'\,\delta(r-a)\,dA'$ and $\bar{r}'=a\,\hat{r}'$, so that... $\bar{p}=\delta(r-a)\,a\,r^{2}\,\sigma_{0}\int[1+\cos(\theta)]\sin(\theta)\,d\theta\,d\phi\,\hat{r}'$ does this look right...? 8. Oct 6, 2009 jmtome2 which is zero.... great 9. Oct 6, 2009 gabbagabbahey No, $$\delta^3(\textbf{r}-\textbf{r}')f(\textbf{r})=\left\{\begin{array}{lr}0 &,\textbf{r}\neq\textbf{r}'\\\infty &,\textbf{r}=\textbf{r}'\end{array}\right.$$ It is only when you integrate over a volume $\mathcal{V}$, enclosing $\textbf{r}=\textbf{r}'$, that you end up with a non-zero, finite result: $$\int_{\mathcal{V}}\delta^3(\textbf{r}-\textbf{r}')f(\textbf{r}')dV'=f(\textbf{r})$$ In any case, it is a one-dimensional Dirac delta function that is useful here... 10. Oct 6, 2009 gabbagabbahey First start with something simpler....without using any Dirac deltas (just the surface charge density), how would you calculate the total charge on the shell (don't actually carry out the integral, just set it up) Once you've done that, assume you have a volume charge density of the form $\rho(\textbf{r})=\delta(r-a)f(\textbf{r})$, where $r$ is the distance from the origin....what can you say about the integral $$\int_{\text{all space}}\rho(\textbf{r'})dV'$$ ? 11. Oct 6, 2009 jmtome2 ok... so $Q_{tot}=\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{a}\rho(r,\theta,\phi)\,r^{2}\sin(\theta)\,dr\,d\theta\,d\phi$ The integral becomes... $\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{a}\delta(r-a)\,\sigma(\theta,\phi)\,r^{2}\sin(\theta)\,dr\,d\theta\,d\phi$ So what happens to the $\bar{r}'$? $\bar{r}'=a\,\hat{r}$? Last edited: Oct 6, 2009 12. Oct 6, 2009 jmtome2 i give up on trying to get those integrals to have pi signs... Anyways, is the reasoning above correct? 13.
Oct 6, 2009 gabbagabbahey I think this is what you meant to write (just click on the image below to see the $\LaTeX$ code that generated it): $$Q_{tot}=\int_{\text{all space}}\rho(\textbf{r'})dV'=\int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2\pi}\rho(r')r'^2\sin\theta' dr' d\theta' d\phi'=\int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2\pi}\delta(r'-a)\sigma(\theta',\phi')r'^2\sin\theta' dr' d\theta' d\phi'$$ (Remember, $\rho(\textbf{r})=\delta(r-a)\sigma(\theta,\phi)$ means that $\rho(\textbf{r}')=\delta(r'-a)\sigma(\theta',\phi')$ ; and when you integrate over all space, $r'$ goes from 0 to $\infty$) Now, your radial integral encloses the point (technically it's a spherical shell, not a point) $r'=a$, so the delta function picks out that point and you have: $$Q_{tot}=\int_{0}^{\infty} \int_{0}^{\pi} \int_{0}^{2\pi}\delta(r'-a)\sigma(\theta',\phi')r'^2\sin\theta' dr' d\theta' d\phi'=\int_{0}^{\pi} \int_{0}^{2\pi}a^2\sigma(\theta',\phi')\sin\theta' d\theta' d\phi'$$ Make sense? If so, apply the same thing to your dipole moment integral and keep in mind that the spherical unit vector are position dependent and hence are not to be treated as constants under integration... Last edited: Oct 6, 2009 14. Oct 6, 2009 jmtome2 Still missing how you get... Q$$_{tot}$$=$$\int$$$$^{\infty}_{0}$$ (r'^2)*δ(r'-a)*dr=a^2 Last edited: Oct 6, 2009 15. Oct 6, 2009 gabbagabbahey It's basically the definition of the one-dimensional dirac delta function (If you are studying from Griffiths' text, it is just applying equation 1.92 ....with $r$ restricted to positive values for obvious reasons) $$\int_0^\infty f(r')\delta(r'-a)dr'=f(a)$$ 16. Oct 6, 2009 jmtome2 now I'm following, sorry for all the questions... this is the first time I've been introduced to Delta Dirac functions... I'm going to go at this equation some more and see what comes out, thanks for all the help 17. Oct 7, 2009 jmtome2 So finally, i can break the $$\bar{r'}$$ vector into r'*$$\hat{r'}$$, so that the final answer for $$\bar{p}$$, the dipole moment, points in the $$\hat{r'}$$ direction. 18. Oct 7, 2009 gabbagabbahey An important side note; the reason the Dirac Delta function appears at all is because you expect the volume charge density to be zero everywhere except at the shell $r=a$, and the shell has infinitesimal extent in the radial direction (it is infinitesimally thin), meaning that a small piece of charge $dq$ will have a localized volume charge density of $$\rho(\textbf{r})\equiv\frac{dq}{dV}=\frac{dq}{drda}=\frac{\sigma}{dr}$$ which, in the limit that $dr$ (the extent of the charge distribution in the radial direction) goes to zero, becomes undefined. However, you expect that the total charge $\int\rho(\textbf{r}')dV'$ is well defined, and the only function that has the properties; (1) zero everywhere except at a certain point ($r=a$ in this case) where it is undefined, (2) but having a well defined integral; is the dirac delta function. Make sense? 19. Oct 7, 2009 gabbagabbahey Not quite. It is true that $\textbf{r}'=r'\mathbf{\hat{r}}'$, but the direction $\mathbf{\hat{r}}'$ is position dependent, and hence is not constant when integrating over $dV'$: $$\mathbf{\hat{r}}'=\sin\theta'\cos\phi'\mathbf{\hat{x}}+\sin\theta'\sin\phi'\mathbf{\hat{y}}+\cos\theta'\mathbf{\hat{z}}$$ 20. Oct 7, 2009 jmtome2 I realize this, but cannot think of anything else to do with that $$\bar{r'}$$ vector in the presence of the delta dirac function Similar Discussions: Dipole Moment
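The thread stops before the final step, so here is a sketch of how it finishes (not part of the original exchange): substituting the Cartesian expansion of $\mathbf{\hat{r}}'$ from post 19 into the dipole integral, the $\mathbf{\hat{x}}$ and $\mathbf{\hat{y}}$ pieces vanish when $\phi'$ is integrated from $0$ to $2\pi$, and only the $\mathbf{\hat{z}}$ component survives. Assuming $\sigma=\sigma_0(1+\cos\theta)$ as given in post 1,
$$\textbf{p}=\int_{0}^{\pi}\!\int_{0}^{2\pi} a\,\mathbf{\hat{r}}'\,\sigma_0\bigl(1+\cos\theta'\bigr)\,a^{2}\sin\theta'\,d\phi'\,d\theta'
=2\pi a^{3}\sigma_0\,\mathbf{\hat{z}}\int_{0}^{\pi}\bigl(\cos\theta'+\cos^{2}\theta'\bigr)\sin\theta'\,d\theta'
=\frac{4\pi}{3}\,\sigma_0\,a^{3}\,\mathbf{\hat{z}},$$
since $\int_0^\pi\cos\theta'\sin\theta'\,d\theta'=0$ and $\int_0^\pi\cos^2\theta'\sin\theta'\,d\theta'=2/3$.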
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791052341461182, "perplexity": 838.794556167604}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104172.67/warc/CC-MAIN-20170817225858-20170818005858-00493.warc.gz"}
http://www.zora.uzh.ch/id/eprint/129988/
# Quantifying substructures in Hubble Frontier Field clusters: comparison with ΛCDM simulations

Mohammed, Irshad; Saha, Prasenjit; Williams, Liliya L R; Liesenborgs, Jori; Sebesta, Kevin (2016). Quantifying substructures in Hubble Frontier Field clusters: comparison with ΛCDM simulations. Monthly Notices of the Royal Astronomical Society, 459(2):1698-1709.

## Abstract

The Hubble Frontier Fields (HFFs) are six clusters of galaxies, all showing indications of recent mergers, which have recently been observed for lensed images. As such they are the natural laboratories to study the merging history of galaxy clusters. In this work, we explore the 2D power spectrum of the mass distribution P_M(k) as a measure of substructure. We compare P_M(k) of these clusters (obtained using strong gravitational lensing) to that of Λ cold dark matter simulated clusters of similar mass. To compute lensing P_M(k), we produced free-form lensing mass reconstructions of HFF clusters, without any light traces mass (LTM) assumption. The inferred power at small scales tends to be larger if (i) the cluster is at lower redshift, and/or (ii) there are deeper observations and hence more lensed images. In contrast, lens reconstructions assuming LTM show higher power at small scales even with fewer lensed images; it appears the small-scale power in the LTM reconstructions is dominated by light information, rather than the lensing data. The average lensing derived P_M(k) shows lower power at small scales as compared to that of simulated clusters at redshift zero, both dark matter only and hydrodynamical. The possible reasons are (i) the available strong lensing data are limited in their effective spatial resolution on the mass distribution; (ii) HFF clusters have yet to build the small-scale power they would have at z ∼ 0 or (iii) simulations are somehow overestimating the small-scale power.
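The abstract does not spell out what P_M(k) is; for orientation only, the 2D mass power spectrum is conventionally the azimuthally averaged squared Fourier amplitude of the projected surface mass density map — the convention assumed here, which may differ in normalization from the paper's own definition:
$$\tilde{\Sigma}(\mathbf{k})=\int \Sigma(\mathbf{x})\,e^{-i\mathbf{k}\cdot\mathbf{x}}\,d^{2}x,
\qquad
P_M(k)=\Bigl\langle\,\bigl|\tilde{\Sigma}(\mathbf{k})\bigr|^{2}\Bigr\rangle_{|\mathbf{k}|=k}.$$
Under this convention, large k corresponds to small spatial scales, which is why "power at small scales" in the abstract refers to the high-k end of P_M(k).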
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649968147277832, "perplexity": 3061.976819372257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808133.70/warc/CC-MAIN-20171124123222-20171124143222-00126.warc.gz"}
http://community.dynamics.com/gp/f/32/t/108849.aspx
# GP2013 Web Client Unexpected error

#### GP2013 Web Client Unexpected error

Hi, after installing the Web Client, I get an unexpected error when trying to Sign In. This is what I see in the event log when trying to sign in. Has anyone seen this error before?

An error occurred while creating a session: '
Date: 05/07/2013 2:37:46 PM
SessionId: dufaisab_2013-7-5_14-37-46-492_HR-FIN-V
ProcessId: 0
TenantName: GPWebApp
ApplicationDirectory: C:\Program Files (x86)\Microsoft Dynamics\GP2013\
DexInitializationFile: C:\Program Files (x86)\Microsoft Dynamics\GP2013\Data\Dex.ini
SetFile: C:\Program Files (x86)\Microsoft Dynamics\GP2013\Dynamics.set
Exception Details: System.ComponentModel.Win32Exception (0x80004005): The directory name is invalid
at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
at Microsoft.Dynamics.GP.Web.Services.Session.SessionProcess.StartableSessionProcess.TryToStartProcess()
at Microsoft.Dynamics.GP.Web.Services.Session.SessionProcess.StartableSessionProcess.Start()
at Microsoft.Dynamics.GP.Web.Services.Session.Service.SessionCreator.CreateSession(SessionCreationContext creationContext, Uri runtimeServiceBaseAddress, SessionUserInfo sessionUserInfo) '.

All Replies

• What path did you choose to install the GP web client? Did you use the default location in the GP Web Client installer, or did you pick a different path? When a new session is created, some data is written to a session folder that is located inside of the GP Web client installation location you specified. If this path is too long (greater than 255 characters, if I recall correctly) then the files can't be created. If that's the case, then you need to install the web client into a location with a shorter path.

• Hi, Thank you very much for your reply. I'm running the GP2013 web client installation from the SP1 package and the installation does not ask for the installation path, so it must be installing in the default location. I did the Single machine installation. I've just removed and reinstalled the WebClient, and IIS is pointing to C:\program files\Microsoft Dynamics\GP Web Client\GPWeb. Got the same error... ;-(

• Hi again, when I go into my webmanagementconsole, click on 'Session Management' and then click on the 'configure' button, it gives me this message: The required URL to the Session Central Service was not valid or being reconfigured. Then it lists my URL, which is --> hr-fin-v.ad.technomuses.ca/SessionCentralService. If I try to access that URL directly, it loads a page about the Windows© Communication Foundation service. The only thing I notice is that that URL uses ad.technomuses.ca, while to access GP the URL I enter is https://hr-fin-v/GP/Views/LogOn.aspx. (ad.technomuses.ca is our AD domain name)

• When you log in to the Dynamics GP web client site, do you get a security certificate error? You need to use the fully-qualified domain name for the machine in every place that it's needed, such as when you supply the URL to access the web client. That means you should be using hr-fin-v.ad.technomuses.ca/GP to access the web client. That should also be the full name that you saw when you chose the security certificate for the Session service. If the names aren't consistent, you'll see security certificate errors, and the login won't be successful.

• I tried with URL hr-fin-v.ad.technomuses.ca/GP. I no longer get a certificate error but I still get the same error message recorded on my server: the directory name is invalid (see below).

An error occurred while creating a session: '
Date: 09/07/2013 10:10:53 AM
SessionId: dufaisab_2013-7-9_10-10-53-185_HR-FIN-V
ProcessId: 0
TenantName: GPWebApp
ApplicationDirectory: C:\Program Files (x86)\Microsoft Dynamics\GP2013\
DexInitializationFile: C:\Program Files (x86)\Microsoft Dynamics\GP2013\Data\Dex.ini
SetFile: C:\Program Files (x86)\Microsoft Dynamics\GP2013\Dynamics.set
Exception Details: System.ComponentModel.Win32Exception (0x80004005): The directory name is invalid
......
and I'm still getting the following error in the webmanagementconsole when I click on configure: The required URL to the Session Central Service was not valid. The URL is --> hr-fin-v.ad.technomuses.ca/SessionCentralService
This error does not generate anything on the server that I could find...

• PS: the 2 GP services are up and running.

• The only thing I see that's different on your installation is the machine name. Personally, I've never used any special characters in the machine name. Your machine name contains dashes. I don't know whether that's a problem...it's just a difference I can see. There are a couple of things you can try to see if you can get more information about the issue. First, show the hidden folders on the system. When you log in, you should see folders in the C:\ProgramData\Microsoft Dynamics\GPSessions\Data\ folder. This is where temporary data is written for each session, and where the web client may be having issues writing to a directory. The second thing to do is to turn on logging for the web client. Page 78 of the Web Client Installation and Administration Guide (for SP1) tells you how to do that. You'll need to update the TenantConfiguration.xml file to turn on logging. The logging can be extensive, but may give you an idea of what's actually happening when the error occurs.

• Hi again, Arggghhh... I think I've found my problem... I do not have the GP Desktop client installed (with the web client runtime) on my IIS server. I should have looked with both eyes open, as it is clearly mentioned in the documentation (d'oh)... I'll let you know of the outcome. Many thanks for your help troubleshooting.

• It's working... ;-)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486600518226624, "perplexity": 4988.097938099483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010437227/warc/CC-MAIN-20140305090717-00094-ip-10-183-142-35.ec2.internal.warc.gz"}
https://vrehab.tungwahcsd.org/k574d/c3e349-what-is-a-natural-number
As it can be written without a decimal component, 3 belongs to the integers; so three is a whole number, an integer, and a rational number. Natural numbers, also known as "counting numbers", are the numbers 1, 2, 3, ... — the numbers small children learn about when they first start to count. (Note: a few textbooks disagree and say the natural numbers include 0; depending on the text and teacher, there is some inconsistency.) The whole numbers are the natural numbers together with 0, so -5 is an integer but not a whole number or a natural number. According to Reference.com, non-negative integers or positive integers greater than zero are also known as "natural numbers." The set of natural numbers (whichever definition is adopted) is denoted N. Natural numbers are primarily used in counting: when counting the number of objects, negative numbers and fractions are typically not needed. A careful derivation of the arithmetic properties of the natural numbers, using induction, was done by G. Peano (1858–1932).

The sum of any two natural numbers is also a natural number (for example, 4 + 2000 = 2004), and the product of any two natural numbers is a natural number (4 × 2000 = 8000). The natural numbers are only closed under addition and multiplication; the difference or the ratio of two natural numbers is not always a natural number. For example, 5-2 and 12/3 are natural numbers, but 3-5 and 3/12 are not: once we divide we are no longer working with natural numbers, and more general sets of numbers can be thought of as being constructed to make operations always possible that the more restricted set of natural numbers does not allow. The number that comes just before a number is called the predecessor; it is 1 less than the given number (for example, 9,99,99,999 is the predecessor of 10,00,00,000). The successor of a given number is 1 more than the given number.

Rational numbers: a rational number is a number determined by the ratio of some integer p to some nonzero natural number q — that is, it can be expressed as the quotient of two integers (i.e. a fraction) with a denominator that is not zero. The rational numbers include all the integers, plus all fractions, or terminating decimals and repeating decimals; many people are surprised to know that a repeating decimal is a rational number. Any integer is a rational number: 1 can be represented as 1/1 or as 10,000/10,000, and the number 4 can be written as 4/1, 8/2 or even -8/-2, whereas 1/5 = 0.2 is a rational number but not an integer. Every positive rational number is greater than any negative rational number.

Representation on the number line: the real number line is like a geometric line. A point is chosen on the line to be the "origin"; points to the right are positive, and points to the left are negative. A distance is chosen to be "1", then whole numbers are marked off {1, 2, 3, ...}, and also in the negative direction {..., -3, -2, -1}. Any point on the line is a real number. Two integers on a number line the same distance from zero are known as opposites, an example being -3 and +3. The natural numbers, whole numbers and integers ($\mathbb{Z}$ — which come into play when the need appears to distinguish some values from others relative to a reference position) can all be represented on a number line.

Mathematicians have proved that the square root of every natural number is either an integer or an irrational number. One well-known irrational number is pi, the circumference (distance around) of a circle divided by its diameter (distance across). The number e (2.718281828...), also known as Euler's number, is a critically important number in mathematics: it is the limit of (1 + 1/n)^n as n approaches infinity, an expression that arises in the study of compound interest; it can also be calculated as the sum of an infinite series, and it is the base of the natural logarithm. The natural logarithm of a number x is defined as the base-e logarithm of x, ln(x) = log_e(x), so ln(e) = 1, since e^1 = e. The imaginary number i is defined to be the square root of negative one, and any real number multiplied by i is also known as an imaginary number. Imaginary numbers definitely stretch our conception of number, as they are not at all what we thought about when we first learned to count.

Abundant numbers: an abundant number is a number n for which the sum of divisors σ(n) > 2n or, equivalently, the sum of proper divisors (or aliquot sum) s(n) > n. Abundant numbers are part of the family of numbers that are either deficient, perfect, or abundant.

An exercise on Church encodings of the natural numbers defines, for each natural number m, the terms L'_0 = n, L'_m = c m L'_{m-1} for m > 0, and L_m = λc.λn.L'_m (note that n and c are variables while m stands for a number or its Church encoding): a) what list is represented by the term L_m? b) prove by induction on m that L'_m[times/c, 1/n] β-reduces to m!.

LaTeX symbols have either names (denoted by backslash) or special characters, and are organized into seven classes based on their role in a mathematical expression. In a different sense of "natural number": once Health Canada has assessed a product and decided it is safe, effective and of high quality, it issues a product licence along with an eight-digit Natural Product Number (NPN) or Homeopathic Medicine Number (DIN-HM), which must appear on the label; this number lets you know that the product has been reviewed and approved by Health Canada. Similarly, natural unemployment is the minimum unemployment rate resulting from real or voluntary economic forces.
Sum or product of natural numbers are also known as an imaginary number is. The rational numbers include 0. there are several other different sets '' of rational numbers include of! ( denoted by backslash ) or special characters 0, -1, -2,... Note that and... Or special characters the difference or the ratio of two natural numbers, also known as counting numbers are... No longer working with natural numbers together with 0. and L 'm for each natural number (... Commonly used types of numbers. around ) of a circle divided by its diameter ( distance across.! C are variables while m stands for a number determined by the ratio of some p. In a mathematical expression distance from zero are known as an imaginary number i also... Formed by nature ( opposed to artificial ): a natural bridge this article more... But also 0, -1, -2,... special characters when counting the number this... A/B, where a and b are integers, plus all fractions or... And L 'm for each natural number number, is a whole number, in this case fraction. 2 or as 10,000/10,000 any two natural numbers. Note that n and c are variables m... Are organized into seven classes based on their what is a natural number in a mathematical expression that the sum product. Where a and b are integers points to the right are positive, and points to integers... First numbers you learn equal to or greater than 1, although 0 is included in some mathematical.! In mathematics 2 or as 10,000/10,000 a rational number can be represented as 1/1 or as 10,000/10,000 the... Different sets '' of rational numbers the rational numbers include all the integers, but also 0,,... Is an integer, and it 's a member of that set are not a and b integers... 24, 6, 2, 357 ) organized into seven classes based on role! Decimals and repeating decimals the color of the natural numbers together with 0.,! Into seven classes based on their role in a mathematical expression m are defined the small. Defined to be the origin '' Prove by induction on m that and hence that 'm! External references at the end of this article for more information distance around ) a... We will not pursue any of it here and maybe i 'll do in... Has been reviewed and approved by Health Canada mathematical expression [ times/c, 1/n ] *! Prime number for every natural number number multiplied by i is also rational... ] * * ß m lot of fun, but 3-5 and 3/12 not! Integer that is, the number of objects, negative numbers and their opposites. * ß! Member of that set that the sum of any two natural numbers. is like a geometric.... E ( e ) = 1 learn about when they first started to count same distance from zero are as... Numbers you learn, 357 ) hence that L 'm [ times/c, 1/n ] *... Pursue any of it here numbers you learn, in this case a fraction a/b, where and. ( 2.718281828... ), also called counting numbers, also known as Euler 's,... Any integer is a prime number for every natural number or the of..., although 0 is included in some mathematical what is a natural number or positive integers greater than negative rational number a! 0 is included in some mathematical fields Note: a few textbooks disagree and say the natural numbers the. ( Note that n and c are variables while m stands for a number its. Maybe i 'll do it in the color of the positive whole numbers and their opposites. fraction a/b where... A point is chosen on the line to be the square root negative! Predecessor of a given number sets '' of rational numbers include all of the natural numbers ''! 
It belongs to the right are positive, and maybe i 'll do it the... 5-2 and 12/3 are natural numbers under addition means that the sum any! Some nonzero natural number on the line to be the square root of negative one encoding! 'Re also an integer also called counting numbers, also known as Euler 's,! That and hence that L 'm [ times/c, 1/n ] * * ß m and 3/12 are not number... Fractions are typically not needed defined to be the square root of negative one this the! 1 can be represented on a number determined by the ratio of two numbers. as as! Included in some mathematical fields by backslash ) or special characters other different sets '' of rational numbers rational. But we will not pursue any of it here is like a geometric.! On m that and hence that L 'm [ times/c, 1/n ] *... ( 1, 2, 3,... plus all fractions, or decimals! To know that the natural numbers, also known as an imaginary i! So three, and maybe i 'll do it in the color of the natural numbers, are natural... More than the given number is a whole number or its Church encoding ). The set of whole numbers are the numbers 1, 24, 6, 2, 3 4! That n and c are variables while m stands for a number line is like a line... Approved by Health Canada so for example, 5-2 and 12/3 are natural numbers is not always a number... Line is like a geometric line the circumference ( distance around ) of a number... An imaginary number numbers are the numbers 1, although 0 is included in mathematical... By i is defined to be the set of whole numbers are the numbers small learn! Like a geometric line more information by the ratio of some integer to! Sengoku Basara Season 1 Episode 1, Typescript Bracket Type, Ultimate Car Wash Coupon, Tzadik Katamar Translation, Turkish Series Romantic, Photo Stand Easel, Sonia Rao Washington Post, Northern Va Wedding Venues, The Birth Of The Early Church,
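The abundant-number definition above is easy to check directly. The following is a small C sketch (not from the original page) that lists the abundant numbers up to 30 by summing proper divisors:

C:
#include <stdio.h>

/* n is abundant when the sum of its proper divisors exceeds n */
static int is_abundant(unsigned n)
{
    unsigned sum = 0;
    for (unsigned d = 1; d <= n / 2; ++d)
        if (n % d == 0)
            sum += d;                       /* d is a proper divisor of n */
    return sum > n;
}

int main(void)
{
    for (unsigned n = 1; n <= 30; ++n)
        if (is_abundant(n))
            printf("%u is abundant\n", n);  /* prints 12, 18, 20, 24 and 30 */
    return 0;
}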
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8341436982154846, "perplexity": 465.3069499800254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587593.0/warc/CC-MAIN-20211024173743-20211024203743-00160.warc.gz"}
https://experts.umn.edu/en/publications/plasma-waves-in-jupiters-high-latitude-regions-observations-from-
# Plasma waves in Jupiter's high-latitude regions: Observations from the Juno spacecraft

S. S. Tetrick, D. A. Gurnett, W. S. Kurth, M. Imai, G. B. Hospodarsky, S. J. Bolton, J. E.P. Connerney, S. M. Levin, B. H. Mauk

Research output: Contribution to journal › Article › peer-review

## Abstract

The Juno Waves instrument detected a new broadband plasma wave emission (~50 Hz to 40 kHz) on 27 August 2016 as the spacecraft passed over the low-altitude polar regions of Jupiter. We investigated the characteristics of this emission and found similarities to whistler mode auroral hiss observed at Earth, including a funnel-shaped frequency-time feature. The electron cyclotron frequency is much higher than both the emission frequency and local plasma frequency, which is assumed to be ~20–40 kHz. The E/cB ratio was about three near the start of the event and then decreased to one for the rest of the period. A correlation of the electric field spectral density with the flux of an upgoing 20 to 800 keV electron beam was found, with a correlation coefficient of 0.59. We conclude that the emission is propagating in the whistler mode and is driven by the energetic upgoing electron beam.

Original language: English (US)
Pages: 4447-4454
Number of pages: 8
Journal: Geophysical Research Letters
Volume: 44
Issue number: 10
DOI: https://doi.org/10.1002/2017GL073073
State: Published - May 28 2017

### Bibliographical note

Funding Information: The authors would like to thank NASA and the various institutions that helped make the Juno mission possible. The research at the University of Iowa was supported by NASA through contract 699041X with Southwest Research Institute. The Juno data included herein will eventually be available from NASA's Planetary Data System. In the meantime, data may be requested from the lead author.

## Keywords

• Juno
• Jupiter
• Waves
• magnetosphere
• plasma waves
• whistler mode
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8399004936218262, "perplexity": 4613.736280927103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00134.warc.gz"}
https://www.electro-tech-online.com/threads/cos-x.151245/
# cos(x) =?

#### Ian Rogers ##### User Extraordinaire Forum Supporter
I have two functions in a piece of code.. x = cos(a); y = sin(a); When using degrees I normally just go. x = cos(a); y = 1 - x;

#### alec_t ##### Well-Known Member
Once x is calculated the relationship with y still holds.

#### Ian Rogers ##### User Extraordinaire Forum Supporter
For the time being, I've cranked up the micro speed to 32MHz... It has allowed me to run with two trig functions.... I'll see if I can find an alternate sin ~ cosine routine... Currently 14mS to create an Arc... I need 6 of them, so around 84mS is a tad too much, but! Hey ho!

#### JimB ##### Super Moderator
Ian, are you saying that sin(a) = 1 - cos(a)? I think that you are a bit off the mark there. The correct relationship is sin^2 + cos^2 = 1, which would give y = sqrt(1 - x^2) using your notation. JimB

#### Ian Rogers ##### User Extraordinaire Forum Supporter
Yep! Found that out in radians... sin(a) = cos(a + PI/2)... A tad crap for me... I found a super duper fast routine here!!! But it's only a small bit faster than XC8's builtin.. I thought that because cos(30) = 0.86 and sin(60) = 0.86, "Senior moment", it was linear back to front!! What I do when I use cos for sin is.. cos(30) = sin(90-30), which is correct!! But using radians turned me upside down.

#### Ian Rogers ##### User Extraordinaire Forum Supporter
I had a quick look through... It's in Pascal for the x86 system... A couple of issues.. Namely A) it's for a 32 bit machine and B) it uses "fmul" and most PCs have a floating point unit... I would like to see how fast it runs turning off the hardware FPU and using the software FPU. But for my PC (Lazarus) projects... Thanks for that!

#### MrAl ##### Well-Known Member
Hi, Yes i had to look twice when i saw that 1-cos(a) was somehow coming out to sin(a). And yes, rads=degrees*pi/180, so a formula like x=cos(a) in degrees is just x=cos(a*pi/180).
The two best forms are probably these two: x=r*cos(a), y=r*sin(a), combined with the circle of radius 'r': r^2=x^2+y^2, and after substitution of x and y:
r^2=(r*cos(a))^2+(r*sin(a))^2
r^2=r^2*cos(a)^2+r^2*sin(a)^2
r^2=r^2*(cos(a)^2+sin(a)^2)
and using the identity cos(a)^2+sin(a)^2=1 we end up with r^2=r^2*1, so r^2=r^2, and we've proved the identity. So we can solve for sin(a)^2 and get sin(a)^2=1-cos(a)^2, and taking the square root of both sides we get abs(sin(a))=sqrt(1-cos(a)^2), and so we have to split this into two forms showing principal angles:
sin(a)=sqrt(1-cos(a)^2), for 0<=a<=pi (eg from a=0 to pi)
sin(a)=-sqrt(1-cos(a)^2), for -pi<=a<=0 (eg from a=0 to -pi)
Also note that sin(a) may be more accurate than using the calculation on some angles, especially those close to either 90 degrees or 0 degrees (one of the axes).

#### Ian Rogers ##### User Extraordinaire Forum Supporter
So MrAl, to answer my question: I have a point x,y.. I rotate the point around a central fixed point.. This means I need a vertical component, ie sin(x), and a horizontal component, namely cos(x). I need to work out cos(x) without trig... Let me explain.. cos(20) = 0.9396... and sin(20) = 0.342... But! sin(90-20) = 0.9396... I "stupidly" assumed that I could use only one trig function.. BUT!! I soon realised that I cannot make the changes after the trig — the alterations have to be made before the trig function... I just wondered if it were possible to only use 1 trig function for sin and cos.

#### Ratchit ##### Well-Known Member
Here you go, cos(x) without trig once you know the single trig sin(x). To find the cosine after the sine is known, subtract the square of the sine from 1, then extract the square root for the cosine as shown below. Ratch
(Attachment, 14.5 KB, showing the relation described above.)

#### MrAl ##### Well-Known Member
Hello again, However, there are also the approximations, as sin(x) can be approximated in many ways. For example sin(x) can be approximated one way as: sin(x)=(5880*x-620*x^3)/(11*x^4+360*x^2+5880). A simple 4 term Taylor series gets you there also: sin(x)=x-x^3/6+x^5/120-x^7/5040, or: cos(x)=1-x^2/2+x^4/24-x^6/720+x^8/40320. These are best between 0 and pi/2.

#### Ian Rogers ##### User Extraordinaire Forum Supporter
I came across this.... As I am working on a small 128x128 screen it will do the job. It comes from "Ye olde spectrum" but it is very fast... I post it here for others... it only uses a 45 point lookup.

C:
const unsigned char sintable[] = {        // sin scaled to 0-255, in 2-degree steps over 0-90
    0,9,18,27,35,44,53,62,
    70,79,87,96,104,112,120,127,
    135,143,150,157,164,171,177,183,
    190,195,201,206,211,216,221,225,
    229,233,236,240,243,245,247,249,
    251,253,254,254,255,255};

double mysin(double deg);                 // prototype so mycos can call it

double mycos(double deg)
{
    return mysin(90-deg);
}

double mysin(double deg)                  // returns |sin(deg)| scaled to 0-255
{
    int esti, diff;
    while(deg>360) deg-=360;              // reduce the angle to 0-360
    while(deg<0) deg+=360;
    if(deg>180) deg -= 180;               // fold 180-360 onto 0-180 (sign is not applied)
    if(deg>90) deg = 180-deg;             // fold 90-180 onto 0-90
    deg/=2;                               // table entries are 2 degrees apart
    diff = (int)deg;                      // table index
    deg = deg - diff;                     // fractional part for interpolation
    esti = sintable[diff];
    diff = sintable[diff+1] - esti;
    deg *= diff;
    deg += esti;                          // linear interpolation between entries
    return deg;
}

It's at least 5 times faster than the math library!!

#### MrAl ##### Well-Known Member
Hi Ian, Looks very interesting, i'll have to try it out a little later. Maybe you could help me find an approximation i posted a while back for sin(x)? I can't seem to find it now. I posted it on this site but not sure if it was in math or not now.

#### Ian Rogers ##### User Extraordinaire Forum Supporter
XC8 uses approximation, but for what I need it's over the top... I am working to int precision, so the above is overkill, let alone the successive one.. Here is XC8's sin()..

C:
double sin(double f)
{
    const static double coeff_a[] = { 207823.68416961012, -76586.415638846949, 7064.1360814006881, -237.85932457812158, 2.8078274176220686 };
    const static double coeff_b[] = { 132304.66650864931, 5651.6867953169177, 108.99981103712905, 1.0 };
    double x2, y;
    unsigned char sgn;
#ifdef i8086
    if(_use8087)
        return _sin87(f);
#endif
    sgn = 0;
    if(f < 0.0) {
        f = -f;
        sgn = 1;
    }
    f *= 1.0/TWO_PI;
    f -= floor(f);
    f *= 4.0;
    if(f > 2.0) {
        f -= 2.0;
        sgn = !sgn;
    }
    if(f > 1.0)
        f = 2.0 - f;
    x2 = f * f;
    y = eval_poly(x2, coeff_b, 3);
    f *= eval_poly(x2, coeff_a, 4) / y;
    if(sgn)
        return -f;
    return f;
}

double eval_poly(double x, const double * d, int n)
{
    double res;
    res = d[n];
    while(n)
        res = x * res + d[--n];
    return res;
}

The eval_poly is the successive part..

#### Ratchit ##### Well-Known Member
Ian Rogers, First of all, I would use a look-up table. It can be one degree increments from 0-90. Or, it can be 2 degree increments or more. The same 90° table can be used for the cos because cos(70) is sin(20). The same table can also be used for 0-360 by adjusting the sign of the sines. Once the angle is determined to the nearest degree, the incremental part can be determined. Then use the incremental Taylor series to calculate the sin of the total angle. Suppose we want to calculate the sin of 1.5 degrees. Call the integer degree part "a" and the incremental part "h". The incremental Taylor series is
sin(a+h) = sin(a) + h·cos(a) − (h²/2!)·sin(a) − (h³/3!)·cos(a) + …
Applying the above equation and converting a=1 degree and h=0.5 degree to radians, the sin of 1.5 degrees is easily found to be 0.0261769. Notice that successive derivatives of the sin alternate between the sin and cos. Fewer terms or a smaller look-up table can be used if lower precision is acceptable. Observe that the sin changes the fastest at 0 and pi radians, so the precision will be less at that area of the sin. Ratch
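For completeness, here is a minimal sketch (not from the thread) of the identity-based approach JimB, MrAl and Ratchit describe — one library trig call plus a square root, with the sign of the sine chosen from the quadrant. Whether this actually beats a second sin() call depends on how the compiler's software floating-point sqrt compares with its sin.

C:
#include <math.h>

/* One trig call for both components: cos directly, sin recovered from
   sin^2 + cos^2 = 1. Assumes the angle is already reduced to -180..+180 degrees. */
void sin_cos_deg(double deg, double *s, double *c)
{
    double rad = deg * 3.14159265358979323846 / 180.0;
    *c = cos(rad);                        /* the single trig call */
    double m = sqrt(1.0 - (*c) * (*c));   /* |sin(a)| */
    *s = (deg >= 0.0) ? m : -m;           /* sin >= 0 for 0..180, < 0 for -180..0 */
}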
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872827887535095, "perplexity": 3658.073747175668}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496855.63/warc/CC-MAIN-20190220230820-20190221012820-00589.warc.gz"}
https://en.khanacademy.org/math/trigonometry/trigonometry-right-triangles/trig-solve-for-an-angle/a/inverse-trig-functions-intro
If you're seeing this message, it means we're having trouble loading external resources on our website. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Trigonometry Course: Trigonometry>Unit 1 Lesson 4: Solving for an angle in a right triangle using the trigonometric ratios Intro to inverse trig functions Learn about arcsine, arccosine, and arctangent, and how they can be used to solve for a missing angle in right triangles. Let's take a look at a new type of trigonometry problem. Interestingly, these problems can't be solved with sine, cosine, or tangent. A problem: In the triangle below, what is the measure of angle L? A right triangle with leg lengths of thirty-five and sixty-five. Angle L is opposite the short leg.and is unknown. What we know: Relative to angle, L, we know the lengths of the opposite and adjacent sides, so we can write: tangent, left parenthesis, L, right parenthesis, equals, start fraction, start text, o, p, p, o, s, i, t, e, end text, divided by, start text, a, d, j, a, c, e, n, t, end text, end fraction, equals, start fraction, 35, divided by, 65, end fraction But this doesn't help us find the measure of angle, L. We're stuck! What we need: We need new mathematical tools to solve problems like these. Our old friends sine, cosine, and tangent aren’t up to the task. They take angles and give side ratios, but we need functions that take side ratios and give angles. We need inverse trig functions! The inverse trigonometric functions We already know about inverse operations. For example, addition and subtraction are inverse operations, and multiplication and division are inverse operations. Each operation does the opposite of its inverse. The idea is the same in trigonometry. Inverse trig functions do the opposite of the “regular” trig functions. For example: • Inverse sine left parenthesis, sine, start superscript, minus, 1, end superscript, right parenthesis does the opposite of the sine. • Inverse cosine left parenthesis, cosine, start superscript, minus, 1, end superscript, right parenthesis does the opposite of the cosine. • Inverse tangent left parenthesis, tangent, start superscript, minus, 1, end superscript, right parenthesis does the opposite of the tangent. In general, if you know the trig ratio but not the angle, you can use the corresponding inverse trig function to find the angle. This is expressed mathematically in the statements below. 
Trigonometric functions input angles and output side ratiosInverse trigonometric functions input side ratios and output angles sine, left parenthesis, theta, right parenthesis, equals, start fraction, start text, o, p, p, o, s, i, t, e, end text, divided by, start text, h, y, p, o, t, e, n, u, s, e, end text, end fractionright arrowsine, start superscript, minus, 1, end superscript, left parenthesis, start fraction, start text, o, p, p, o, s, i, t, e, end text, divided by, start text, h, y, p, o, t, e, n, u, s, e, end text, end fraction, right parenthesis, equals, theta cosine, left parenthesis, theta, right parenthesis, equals, start fraction, start text, a, d, j, a, c, e, n, t, end text, divided by, start text, h, y, p, o, t, e, n, u, s, e, end text, end fractionright arrowcosine, start superscript, minus, 1, end superscript, left parenthesis, start fraction, start text, a, d, j, a, c, e, n, t, end text, divided by, start text, h, y, p, o, t, e, n, u, s, e, end text, end fraction, right parenthesis, equals, theta tangent, left parenthesis, theta, right parenthesis, equals, start fraction, start text, o, p, p, o, s, i, t, e, end text, divided by, start text, a, d, j, a, c, e, n, t, end text, end fractionright arrowtangent, start superscript, minus, 1, end superscript, left parenthesis, start fraction, start text, o, p, p, o, s, i, t, e, end text, divided by, start text, a, d, j, a, c, e, n, t, end text, end fraction, right parenthesis, equals, theta The expression sine, start superscript, minus, 1, end superscript, left parenthesis, x, right parenthesis is not the same as start fraction, 1, divided by, sine, left parenthesis, x, right parenthesis, end fraction. In other words, the minus, 1 is not an exponent. Instead, it simply means inverse function. FunctionGraph sine, left parenthesis, x, right parenthesis A coordinate plane. The x-axis starts at zero and goes to ninety by tens. It is labeled degrees. The y-axis starts at zero and goes to two by two tenths. It is labeled a ratio. The graphed line is labeled sine of x, which is a nonlinear curve. The line for the sine of x starts at the origin and passes through the points twenty-four, zero point four, forty, zero point sixty-seven, fifty-two, zero point eight, and ninety, one. It is increasing from the origin to the point ninety, one. The rate of change gets smaller, or shallower, as the degrees, or x-values, get larger. All points are approximations. sine, start superscript, minus, 1, end superscript, left parenthesis, x, right parenthesis (also called \arcsin, left parenthesis, x, right parenthesis) | A coordinate plane. The x-axis starts at zero and goes to two by two tenths. It is labeled a ratio. The y-axis starts at zero and goes to ninety by tens. It is labeled degrees. The graphed line is labeled inverse sine of x, which is a nonlinear curve. The line for the inverse sine of x starts at the origin and passes through the points zero point four, twenty-four, zero point sixty-seven, forty, zero point eight, fifty-two, and one, ninety. It is increasing from the origin to the point one, ninety. The rate of change gets larger, or sharper, as the ratios, or x-values, get larger. All points are approximations. start fraction, 1, divided by, sine, x, end fraction (also called \csc, left parenthesis, x, right parenthesis) | A coordinate plane. The x-axis starts at zero and goes to ninety by tens. It is labeled degrees. The y-axis starts at zero and goes to two by two tenths. It is labeled a ratio. 
The graphed line is one divided by the sine of x, which is a nonlinear curve. The line for the cosecant of x starts by decreasing from the point thirty, two. It continues decreasing until the point ninety, one. The rate of change starts steep at the point thirty, two, but it get smaller at the graph goes through the points forty, one point fifty-five, fifty, one point three, and sixty-five, one point one. The rate of change is very shallow as the graph approaches the point ninety, one. All points are approximations. However, there is an alternate notation that avoids this pitfall! We can also express the inverse sine as \arcsin, the inverse cosine as \arccos, and the inverse tangent as \arctan. This notation is common in computer programming languages, and less common in mathematics. Solving the introductory problem In the introductory problem, we were given the opposite and adjacent side lengths, so we can use inverse tangent to find the angle. A right triangle with vertices L and V where angle L is unknown. The side between angles L and ninety degrees is sixty-five degress. The side between the right angle and the vertex V is thirty-five units. \begin{aligned} { m\angle L}&=\tan^{-1} \left(\dfrac{\blueD{\text{ opposite }} }{\maroonC{\text{ adjacent}}}\right)&{\gray{\text{Define.}}} \\\\ m\angle L&=\tan^{-1}\left(\dfrac{\blueD{35}}{\maroonC{65}}\right)&{\gray{\text{Substitute values.}}} \\\\ m\angle L &\approx 28.30^\circ &{\gray{\text{Evaluate with a calculator.}}}\end{aligned} Now let's try some practice problems. Problem 1 Given triangle, K, I, P, find m, angle, I. Right triangle K I P where angle A P I is a right angle. Angle K I P is an unknown angle. K I is ten units. K P is eight units. degrees Problem 2 Given triangle, D, E, F, find m, angle, E. Right triangle D E F where angle D F E is a right angle. Angle D E F is an unknown angle. D F is four units. E F is six units. degrees Problem 3 Given triangle, L, Y, N, find m, angle, Y. Right triangle L Y N where angle Y L N is a right angle. Angle L Y N is an unknown angle. Y N is ten units. L Y is three units. degrees Challenge problem Solve the triangle completely. That is, find all unknown sides and unknown angles. Right Triangle O Z E where angle O E Z is a right angle. Side O Z is nine units. Side E Z is four units. O, E, equals m, angle, O, equals degrees m, angle, Z, equals degrees Want to join the conversation? • this might sound like a silly question, but i was hoping that sin(90) = 2 sin(45). Why doesn't that work? Trig functions are all about ratios and relations, the least i could expect was to find a relation like that... • this might have been possible if sin was a linear function which its not.... • Love the site, but slightly thrown having to switch from using DEG mode to RAD mode to get correct answer on inverse trig questions. Would be good to be given a heads-up that this was necessary. And why it was necessary. Which I. Still haven't really figured out! • DEG mode stands for "degree". This means that your calculator interprets and outputs angles in the unit of degrees. RAD mode stands for "radian". This means that your calculator interprets and outputs angles in the unit of radians. If you are not sure what radians are, I suggest you watch the KA videos on them. Switching between DEG mode and RAD mode on a calculator is similar to switching between "miles per hour" and "kilometers per hour" on a speedometer. You still get the same speeds, but in different units. Comment if you have questions! 
• How do I turn the calculator on?
  • If it is not turning on then you need to replace the batteries. Hope this helps. (:
• How do I calculate the inverse function on a calculator?
  • Many calculators (TI and others) have the inverse trig functions (sin⁻¹, cos⁻¹, tan⁻¹) on the same buttons as sin, cos and tan, accessed with the 2nd (second function) key. I don't know which particular calculator you are talking about.
• So I know that arcsin(sin(x)) = x, but... what happens when you do arcsin(x) * sin(x)?
  • It would be the same thing as multiplying the angle by the ratio of the two sides.
• What if we do not want to use a calculator and want to do it manually?
  • Then you will need access to trigonometric tables that you can read in reverse. This is how I used to estimate the inverse trigonometric functions when I was in high school. I still have a book of tables of trig functions, logarithms, and z-scores (among other useful relationships) to which I refer when solving some problems, but the modern method of using a calculator or computer to access this information is usually more efficient and precise.
• Could someone explain what "round your answer to the nearest hundredth of a degree" means? It's mentioned in the second practice question.
  • "To the nearest hundredth of a degree" means to solve it, and then round it to 2 decimal places. The first place is tenths, and the second place is hundredths. Example: Problem 3. We're trying to find angle Y. We have the adjacent side length and the hypotenuse length. With the sides adjacent and hypotenuse, we can use the cosine function to determine angle Y: cos Y = 3/10 = 0.30. This is where the inverse functions come in. If we know that cos Y = 0.30, we're trying to find the angle Y that has a cosine of 0.30. To do so, find the Inverse button, then the Cosine button (this could also be the Second Function button, or the Arccosine button). It should come out to about 72.542397. To round to the nearest hundredth of a degree, we round to 2 decimal places, giving the answer 72.54.
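Several of the questions above come down to evaluating an inverse trig function while minding the DEG/RAD setting. Here is a quick sketch of my own (not part of the original page) using Python's math module, which works in radians and therefore needs an explicit conversion to degrees:

```python
import math

# Introductory problem: opposite = 35, adjacent = 65, so angle L = arctan(35/65)
angle_L = math.degrees(math.atan(35 / 65))
print(round(angle_L, 2))    # ~28.30 degrees

# Practice problem 3: adjacent = 3, hypotenuse = 10, so angle Y = arccos(3/10)
angle_Y = math.degrees(math.acos(3 / 10))
print(round(angle_Y, 2))    # ~72.54 degrees

# The DEG/RAD pitfall: without the conversion you get the same angle in radians
print(math.atan(35 / 65))   # ~0.4939 rad
```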
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 84, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183847308158875, "perplexity": 1997.1504250684013}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00187.warc.gz"}
http://tex.stackexchange.com/questions/28985/the-complex-layout-of-columns-for-cheatsheet/29016
# The complex layout of columns (for cheatsheet)

Objective: to print text on A4 paper in four columns, on both sides of the sheet. Requirements: 1) the sheet is cut into columns (i.e. the column borders must coincide on both sides of the sheet), and 2) the most difficult part: after cutting, the numbering of the columns must run in sequence (i.e. each quarter carries its associated text on both sides).

Illustrative example. The original sheet, side A: | 1 | 2 | 3 | | 4 |, side B: | 5 | 6 | 7 | | 8 |. Printed: | 1 | 3 | 5 | 7 | and | 8 | 6 | 4 | 2 |. After cutting, we have four quarters with columns numbered | 1 | 2 |, | 3 | 4 |, etc.

I did this through pstops, but the pdf shows only three columns. Above is a screenshot from PS_View. As a result, when converting to pdf using ps2pdf, the lower "column" is missing from the output file; there is just an empty space. What should I do so that all 4 "columns" (pages) are displayed on the output sheet?

Page layout options in the tex file:

\documentclass[6pt]{article}
\usepackage[paperwidth=74.25mm, paperheight=210mm, margin=1cm]{geometry}

The resulting input.ps and pdf look as they should. I process it with:

pstops -w74.25mm -h210mm -d1 8:6R(0h,0w)+4R(0h,1w)+2R(0h,2w)+0R(0h,3w),1R(0h,0w)+3R(0h,1w)+5R(0h,2w)+7R(0h,3w) input.ps maket.ps

The PS output is shown in the picture above.

Update

Below, in the answers, @Tom pointed me towards the pdfpages package. The task is solved as follows. A pdf with the necessary page size is generated beforehand; it is then processed by another tex file:

\documentclass[a4paper,12pt,
landscape % sheet orientation
]{article}
\usepackage[final]{pdfpages}
\begin{document}
\includepdfset{
nup=4x1 % 4 pages are placed on one sheet as 4 columns in one row
,frame % draw a frame around each page
}
\includepdf[pages={1,3,5,7}]{input.pdf}
\includepdf[pages={8,6,4,2}]{input.pdf}
...
\end{document}

The numbers in \includepdf specify the sequence of pages on the sheet. Further on, the numbering advances in steps of 8 per sheet. Result:

Although an interesting TeX problem, I cannot help but condemn the practice of using cheat sheets as opposed to studying for the exam. – Martin Tapankov Sep 20 '11 at 10:04
@Martin: I doubt this is intended as an aid to cheating (it's a very brazen question if it is!) "Cheat sheet" can also be used to just mean "handy reference". – Tom Sep 20 '11 at 14:53
@Tom Much as I'd like to agree, the line \documentclass[6pt]{article} kind of gives it away -- you don't usually make a "handy reference" with such a small font size. Some of my fellow students used the same technique to prepare cheat sheets, so I'm familiar with this layout. – Martin Tapankov Sep 20 '11 at 15:25
@MartinTapankov: that's right, this cheatsheet is to prepare for exams. I will not go into philosophical reasoning, but I suggest you think it over with a glass of tea. War is bad, but it is a stimulus of progress, which gave us benefits such as the Internet (a project of the military organization DARPA). – Ilirium Sep 21 '11 at 3:03
@MartinTapankov, don't jump to conclusions. I found this post useful as I'm preparing for an exam that allows one double-sided sheet of notes with any font size desired: i.e. an officially sanctioned cheatsheet. – PBJ Dec 9 '11 at 21:22

I don't know why your pstops toolchain doesn't work, but as you want to end up with a PDF anyway, why not use PDFjam?

pdfjam --outfile maket.pdf --a4paper input.pdf 1,3,5,7,8,6,4,2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8217987418174744, "perplexity": 2108.526363972676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064865.49/warc/CC-MAIN-20150827025424-00321-ip-10-171-96-226.ec2.internal.warc.gz"}
http://calculator.mathcaptain.com/trigonometry-calculator.html
Trigonometry is a branch of mathematics that deals with triangles. The word trigonometry is derived from the Greek words "trigonon", meaning triangle, and "metron", meaning measurement. Thus, trigonometry deals with the measurement of triangles.

## Trigonometric Functions

To define the trigonometric functions, consider a right triangle XYZ with angle $\theta$ as shown in the figure above. If $\theta$ is taken as the angle of reference,

1) YZ, the side opposite to $\theta$, is called the perpendicular, in this case side "a".

2) XY, the side opposite to the right angle, is called the hypotenuse, in this case side "h". The hypotenuse is always the longest side of a right-angled triangle.

3) XZ, the third side, is called the base, in this case side "b".

The trigonometric functions are the ratios of these sides: $\sin \theta = \frac{a}{h}$, $\cos \theta = \frac{b}{h}$ and $\tan \theta = \frac{a}{b}$.

Pythagoras' Theorem: In a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. With the sides labelled as above, $a^{2} + b^{2} = h^{2}$ (often written $a^{2} + b^{2} = c^{2}$, with $c$ denoting the hypotenuse).
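As a quick numerical check of these definitions (an illustration of mine, not from the original page): for a right triangle with perpendicular a = 3, base b = 4 and hypotenuse h = 5, the three ratios and Pythagoras' theorem can be verified directly.

```python
import math

a, b = 3.0, 4.0                 # perpendicular and base
h = math.hypot(a, b)            # hypotenuse = 5.0, so a^2 + b^2 = h^2 holds

theta = math.atan2(a, b)        # the angle of reference, in radians
print(math.sin(theta), a / h)   # both ~0.6
print(math.cos(theta), b / h)   # both ~0.8
print(math.tan(theta), a / b)   # both 0.75
```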
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9646192789077759, "perplexity": 462.4812000112894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188785.81/warc/CC-MAIN-20170322212948-00127-ip-10-233-31-227.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/2704/is-sat-in-p-if-there-are-exponentially-many-clauses-in-the-number-of-variables
# Is SAT in P if there are exponentially many clauses in the number of variables?

I define a long CNF to contain at least $2^\frac{n}{2}$ clauses, where $n$ is the number of its variables. Let $\text{Long-SAT}=\{\phi: \phi$ is a satisfiable long CNF formula$\}$.

I'd like to know why $\text{Long-SAT} \in P$. First I thought it is $\text{NPC}$ since I can do a polynomial-time reduction from $\text{SAT}$ to $\text{Long-SAT}$, no? But maybe I can reduce $\text{2-SAT}$ to $\text{Long-SAT}$? How do I do that?

• @Numerator: Watch out with the last sentence in the question. Reducing an easy problem (2-SAT in this case) to your problem does not mean that your problem is easy. – Tsuyoshi Ito Jul 13 '12 at 0:02
• @TsuyoshiIto: We try to keep on top of weird tags. Please feel free to edit the occasional occurrence. If you spot wide-spread (mis)use or meet opposition, please take it to meta (rather than discussing in the comments below a question). – Raphael Jul 21 '12 at 19:09

Unless I'm missing something, it's trivially in P, as the length of the formula is exponential in the number of variables. Hence all $2^{n}$ truth assignments can be generated and checked in polynomial time in the length of the formula.

• But are $2^n$ checks still polynomial time? – Numerator Jul 12 '12 at 10:09
• Remember that to be in P you want an algorithm that runs in polynomial time in the size of the input. In this case, if we denote the size of the input as $N$ we know that $\text{#clauses} \leq N$. Hence we also have $n=O(\log N)$, so the $2^n$ assignments only amount to a polynomial in the overall input size $N$. Don't let texts trick you when they use the variable $n$; it's just a variable, not a special magic number that is always the best measure for the size of the input. Sorry about the formatting, I'm typing this on my phone. – Luke Mathieson Jul 12 '12 at 11:37
• @Numerator: you are doing $2^{\log n}=n$ checks, where $n$ is the length of the input. – Xodarap Jul 21 '12 at 19:05

In this case, the answer is trivial as Luke points out. However, as you seem to have come up with the definition yourself, note this. For SAT, so-called phase transitions regarding the ratio of variable count to clause count have been observed [1,2]. If it is small, instances are easy, and hard if it is large. There seems to be a more or less sharp transition from easy to hard. This seems to be an active area of research. cstheory.SE has some more on this phenomenon.

So, if you adjust your definition of "long" to polynomial blowup, you might indeed get a non-trivially easy class -- that is, in P -- just because you have many more clauses than variables.

1. The SAT Phase Transition by I. P. Gent (1994)
2. Determining computational complexity from characteristic 'phase transitions' by R. Monasson, R. Zecchina et al. (1999)

• Actually, it's not an easy-hard pattern but rather easy-hard-easy regarding the phase transition. There are 2 regions: underconstrained and overconstrained. In the first one, solutions are densely distributed, so you succeed quickly. In the second one you fail quickly: any reasonable algorithm finds a solution if such exists (a strong basin of attraction), and if there's no solution a backtracking algorithm can establish that quickly since potential solution paths are cut off early. Hard problems are on the boundary of these regions: the probability of a solution is low but non-negligible. – Juho Jul 21 '12 at 22:45
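To make the accepted argument concrete, here is a small sketch of my own (not from the thread): a brute-force satisfiability check that enumerates all $2^n$ assignments. Its running time is exponential in $n$, but for a "long" CNF with at least $2^{n/2}$ clauses the input length $N$ satisfies $2^n \le N^2$, so the check is polynomial in the input size.

```python
from itertools import product

def brute_force_sat(clauses, n):
    """clauses: list of clauses, each a list of DIMACS-style literals
    (+i means x_i, -i means NOT x_i)."""
    for bits in product([False, True], repeat=n):           # 2^n assignments
        assignment = {i + 1: bits[i] for i in range(n)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable (take x2 = True)
print(brute_force_sat([[1, 2], [-1, 2]], n=2))  # True
```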
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8016917705535889, "perplexity": 443.7183951357485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00587.warc.gz"}
http://www.ask.com/science/advantages-chemical-energy-8fca0856ac7249ea
Q: What are the advantages of chemical energy?

A: Chemical energy is abundant, easily combustible and has high efficiency. It does have its disadvantages, as it is also sometimes harmful to the environment and humans and tends to be non-renewable.

According to HealthResearchFunding.org, chemical energy is the most widely found and abundant energy source on earth. An example of an abundant chemical energy source is crude oil. This resource comes from chemical processes that happened when dead animals and plants came under intense pressure by the Earth's crust. Chemical energy is any source that occurs due to a chemical process. Other examples of abundant chemical energy sources on Earth include coal, wood and wax. Resources that people burn up to create energy are examples of chemical energy sources.

Another advantage of chemical energy is that it is easy to use right away. Most chemical sources only require air and an ignition source to work. The efficiency of chemical energy is high as a lot of energy is stored in a small area. The main downsides of chemical energy include the fact that it tends to be non-renewable and has a negative effect on the environment. Crude oil will not last forever since there is only a set amount of it in the Earth's crust, and it takes a long time to create.

Related Questions

• A: The advantages and disadvantages of chemical energy depend on the ways in which energy is stored and released during a chemical reaction. Chemical energy is the force that powers everything in the world, from the human body to vehicles on the road.
• A: An example of chemical energy is the energy in an electrochemical cell. Chemical energy is the energy that results from a chemical reaction between atoms and molecules. Chemical energy may come in different forms, such as electrochemical energy and chemiluminescence.
• A: Chemical energy is used for producing heat in the human body to sustain vital functions, converting solar energy in plants through photosynthesis, producing electrical energy in batteries to power devices and burning fossil fuels in combustion engines to produce heat or motion. There is a vast array of sources that produce chemical energy, some of which are renewable, such as solar energy.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170548319816589, "perplexity": 794.5811511377948}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115861305.18/warc/CC-MAIN-20150124161101-00187-ip-10-180-212-252.ec2.internal.warc.gz"}
http://openproblemgarden.org/op/weak_pentagon_problem
# Weak pentagon problem

Importance: Medium ✭✭
Author(s): Samal, Robert
Subject: Graph Theory » Coloring » Homomorphisms
Keywords: Clebsch graph; cut-continuous mapping; edge-coloring; homomorphism; pentagon
Posted by: Robert Samal on: July 13th, 2007

Conjecture. If $G$ is a cubic graph not containing a triangle, then it is possible to color the edges of $G$ by five colors, so that the complement of every color class is a bipartite graph.

This conjecture has several reformulations: the conclusion of the conjecture can be replaced by either of the following:

- $G$ has a homomorphism to the Clebsch graph.
- There is a cut-continuous mapping from $G$ to $C_5$, the pentagon.

For the latter variant, a few definitions are in place. A cut-continuous mapping from a graph $G$ to a graph $H$ is a mapping of the edge set of $G$ to the edge set of $H$ such that the preimage of every cut in $H$ is a cut in $G$. Here, by a cut in $G$ we mean the edge-set of a spanning bipartite subgraph of $G$; less succinctly, it is the set of all edges leaving some subset of vertices of $G$.

Cut-continuous mappings are closely related with graph homomorphisms (see [DNR], [S]). In particular, every homomorphism from $G$ to $H$ naturally induces a cut-continuous mapping from $G$ to $H$; thus, the presented conjecture can be thought of as a weaker version of Nesetril's Pentagon problem.

We mention a generalization of the conjecture that deals with longer cycles/larger numbers of colors. The $d$-dimensional projective cube, denoted $PQ_d$, is the simple graph obtained from the $(d+1)$-dimensional cube $Q_{d+1}$ by identifying pairs of antipodal vertices (vertices that differ in all coordinates). Note that $PQ_4$ is the Clebsch graph.

Question. What is the largest integer $d$ with the property that all cubic graphs of sufficiently high girth have a homomorphism to $PQ_d$?

Again, the question has several reformulations due to the following simple proposition.

Proposition. For every graph $G$ and nonnegative integer $d$, the following properties are equivalent:

- There exists a coloring of the edges of $G$ by $d+1$ colors so that the complement of every color class is a bipartite graph.
- $G$ has a homomorphism to $PQ_d$.
- $G$ has a cut-continuous mapping to $C_{d+1}$.

There are high-girth cubic graphs whose largest cut contains less than a fixed fraction (bounded away from 1) of the edges. Such graphs do not admit a homomorphism to $PQ_d$ once $d$ is large enough, so there is indeed some largest integer $d$ in the above question. To bound this largest $d$ from below, recall that every cubic graph maps homomorphically to $K_4 = PQ_2$. Moreover, it is known [DS] that cubic graphs of girth at least 17 admit a homomorphism to $PQ_4$ (the Clebsch graph). This shows that the largest such $d$ is at least 4 (and also provides support for the main conjecture).

## Bibliography

[DNR] Matt DeVos, Jaroslav Nesetril and Andre Raspaud: On edge-maps whose inverse preserves flows and tensions. MR2279171.

*[DS] Matt DeVos, Robert Samal: High Girth Cubic Graphs Map to the Clebsch Graph. arXiv:math.CO/0602580.

[S] Robert Samal: On XY mappings. PhD thesis, Charles University, 2006.

* indicates original appearance(s) of problem.
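As a concrete aside (a sketch of mine, not part of the problem page): the Clebsch graph appearing in the first reformulation can be built on the 16 vectors of $\mathbb{Z}_2^4$, joining two vectors when they differ in exactly one coordinate or in all four; this is one standard construction of the folded 5-cube. The checks below confirm it is 5-regular and triangle-free.

```python
from itertools import product, combinations

vertices = list(product([0, 1], repeat=4))       # Z_2^4, 16 vertices

def adjacent(u, v):
    d = sum(a != b for a, b in zip(u, v))        # Hamming distance
    return d in (1, 4)                           # cube edge or "antipodal" edge

edges = [(u, v) for u, v in combinations(vertices, 2) if adjacent(u, v)]
degree = {v: 0 for v in vertices}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

print(len(vertices), len(edges))                 # 16 vertices, 40 edges
print(set(degree.values()))                      # {5}: 5-regular
print(any(adjacent(a, b) and adjacent(b, c) and adjacent(a, c)
          for a, b, c in combinations(vertices, 3)))   # False: triangle-free
```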
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594088196754456, "perplexity": 1462.7435827249028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655937797.57/warc/CC-MAIN-20200711192914-20200711222914-00545.warc.gz"}
http://tex.stackexchange.com/questions/87022/including-external-notes-to-a-latex-document
# Including external notes to a LaTeX document

I'm often writing scientific papers in collaboration with other authors, using a versioning system (mercurial) to manage the writing. When one of the authors wants to add a remark, we use a slightly enhanced version of marginpar. The problem is that, in order to avoid "polluting" the source code, the remark is removed once addressed, which sometimes makes it hard to keep track of the changes (even with mercurial, since revisions are not directly related to remarks).

Hence, I was wondering if there exists a way to have "external" notes to a LaTeX document, that are not in the source code, but can be included at compilation, by way of either an absolute reference (e.g., line 140 of file.tex) or a relative one (e.g., at \label{lab}).

For instance, I could have the following file, in the same directory as introduction.tex:

Note 1, line 237, introduction.tex: This definition does not work in this special case SOLVED by commit 23
Note 2, \ref{def:test}: This definition still needs to work this special case PENDING

When compiling the file introduction.tex, I could indicate to include the previous file, which would automatically add Note 2 as a marginpar, since it's PENDING.

• Even if there isn't any perfectly matching solution, I'd be interested in knowing any other solution with this principle of including extra content at compile time. – userxxxxx Dec 14 '12 at 13:57
• Maybe a facility like the one provided by the todonotes package would help you: you can prefix the \todo command with \done once solved. – T. Verron Dec 14 '12 at 13:59
• @T.Verron: Thanks, I guess I could also comment out the original \marginpar, but somehow, I'd like not to include the notes in the source code. – userxxxxx Dec 14 '12 at 14:04

Main file

\documentclass{article}
\def\noteref#1#2{\csname noteref#2\endcsname{#1}}
\def\noterefSOLVED#1#2#3{}
\def\noterefPENDING#1#2#3{%
\expandafter\def\csname noteref-#1\endcsname{\marginpar{#3}}}
\let\oldlabel\label
\def\label#1{%
\oldlabel{#1}%
\csname noteref-#1\endcsname}
\input{\jobname-notes}
\begin{document}
\section{intro\label{aa}}
stuff here stuff here stuff here stuff here stuff here stuff
stuff here stuff here stuff here stuff here stuff here stuff
stuff here stuff here stuff here stuff here stuff here stuff
stuff here stuff here stuff here stuff here stuff here stuff
\begin{enumerate}
\item thing one\label{bb}
\item thing two\label{cc}
\end{enumerate}
stuff here stuff here stuff here stuff here stuff here stuff
stuff here stuff here stuff here stuff here stuff here stuff
stuff here stuff here stuff here stuff here stuff here stuff
stuff here stuff here stuff here stuff here stuff here stuff
stuff here stuff here stuff here stuff here stuff here stuff
\end{document}

I used TeX markup in the comment file to make it easier; it would be possible to parse the plain text you suggested, but TeX is better at TeX markup :-)

\noteref{aa}{SOLVED}{by commit 23}
{This definition does not work in this special case}

\noteref{bb}{PENDING}{}
{This definition still needs to work this special case}

• That looks quite nice, thanks! Just to be sure, the file containing "\noteref{aa}{SOLVED} ..." is in "jobname-notes", right? – userxxxxx Dec 14 '12 at 14:09
• Yes, although it could be in anything as long as the \input matches. Also I just used a simple marginpar but you could spice it up a bit using todonotes or a similar package. – David Carlisle Dec 14 '12 at 14:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913496136665344, "perplexity": 2759.2247316894723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246637979.29/warc/CC-MAIN-20150417045717-00167-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/simple-harmonic-motion-displacement.586015/
Simple Harmonic Motion displacement

1. Mar 11, 2012 VincentweZu

1. The problem statement, all variables and given/known data

A mass m is attached to a horizontal spring of spring constant k. The spring oscillates in simple harmonic motion with amplitude A. Answer the following in terms of A. At what displacement from equilibrium is the speed half of the maximum value?

2. Relevant equations

PE1 + KE1 = PE2 + KE2

3. The attempt at a solution

$\frac{1}{2}kA^2 = \frac{1}{2}mv_{max}^2$
$kA^2 = mv_{max}^2$
$kA^2/m = v_{max}^2$

$\frac{1}{2}kA^2 = \frac{1}{2}mv^2 + \frac{1}{2}k\Delta x^2$

let $v = v_{max}/2$

$kA^2 = m(v_{max}/2)^2 + k\Delta x^2$
$kA^2 = mv_{max}^2/4 + k\Delta x^2$
$kA^2 = m(kA^2/m)/4 + k\Delta x^2$
$kA^2 = kA^2/4 + k\Delta x^2$
$A^2 = A^2/4 + \Delta x^2$
$(3/4)A^2 = \Delta x^2$
$\Delta x = (\sqrt{3}/2)A$

The answer that I arrived at was $(\sqrt{3}/2)A$; however, the answer given is 1/2. If I subbed in arbitrary numbers for A, m, and k and let $\Delta x = (\sqrt{3}/2)A$, then the velocity that results is half the max velocity. I would like someone to confirm my answer or find something wrong with it so that I know which is the right answer. Thanks.

2. Mar 11, 2012 PeterO

I think the difficulty arises when you say $\frac{1}{2}kA^2 = \frac{1}{2}mv_{max}^2$. Certainly those two quantities are equal in size, but at no time during the oscillation do both conditions occur at the same time, so a calculation based on them may not work.

3. Mar 12, 2012 VincentweZu

Definitely there is no instant during the oscillation where both these conditions occur; however, the equation which I have used (the conservation of energy) does not require that they occur at the same time. In fact, all the law of conservation of energy states is that the total energy in one state is the same as the total energy in another, so long as the system is a closed system.

In any case, the solution to this problem provided in the text doesn't convince me that a displacement of $(1/2)A$ would yield a velocity which is half the maximum velocity. Basically the solution in the text involved only one equation, which was:

$\frac{1}{2}kA^{2} = \frac{1}{2}mv_{max}^2$

They solve for $v_{max}$, giving $v_{max} = A\sqrt{\frac{k}{m}}$, so $v_{max} \propto A$. Being directly proportional, $(1/2)A$ would lead to $(1/2)v_{max}$.

I am not convinced by this solution: instead of finding at which state in the oscillation the velocity would be half its maximum, the solution says that an amplitude of half the current amplitude would halve the maximum velocity.

4. Mar 12, 2012 Staff: Mentor

You're right. It looks like they either misinterpreted their own question or failed to clearly state their intention.

5. Mar 12, 2012 altamashghazi

ur equation $\frac{1}{2}kA^2 = \frac{1}{2}mv^2 + \ldots$ is wrong. it should be $\frac{1}{2}k(\Delta x)^2 = \frac{1}{2}mv^2$

6. Mar 12, 2012 VincentweZu

Although using $\frac{1}{2}k(\Delta x)^2 = \frac{1}{2}mv^2$ would give $(1/2)A$, I disagree that doing this is correct. In SHM the total energy of the system is $\frac{1}{2}kA^2$, and the law of conservation of energy states that this is a constant. Therefore at any instant of the motion the sum of the kinetic energy and the elastic potential energy should equal this constant. Using $\frac{1}{2}k(\Delta x)^2 = \frac{1}{2}mv^2$ would be correct when looking at a spring system not in SHM; it doesn't make sense to use it in a system that is in SHM.
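A quick numerical check of the point under dispute (my own sketch, not part of the thread): plugging arbitrary values of k, m and A into the total-energy equation $\frac{1}{2}kA^2 = \frac{1}{2}mv^2 + \frac{1}{2}k\Delta x^2$ shows that the speed is half of $v_{max}$ at $\Delta x = (\sqrt{3}/2)A$, while at $\Delta x = A/2$ it is about $0.87\,v_{max}$.

```python
import math

k, m, A = 7.0, 2.0, 0.5                       # arbitrary spring constant, mass, amplitude
v_max = A * math.sqrt(k / m)

def speed(x):
    # from (1/2) k A^2 = (1/2) m v^2 + (1/2) k x^2
    return math.sqrt(k * (A**2 - x**2) / m)

print(speed(math.sqrt(3) / 2 * A) / v_max)    # 0.5
print(speed(A / 2) / v_max)                   # ~0.866, not 0.5
```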
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9430494904518127, "perplexity": 767.1851116952699}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102307.32/warc/CC-MAIN-20170816144701-20170816164701-00393.warc.gz"}
https://iaorifors.com/paper/96985
# On the Implausibility of Differing-Inputs Obfuscation and Extractable Witness Encryption with Auxiliary Input

The notion of differing-inputs obfuscation (diO) was introduced by Barak et al. (CRYPTO, pp 1–18, 2001). It guarantees that, for any two circuits $C_0, C_1$ for which it is difficult to come up with an input $x$ on which $C_0(x) \ne C_1(x)$, it should also be difficult to distinguish the obfuscation of $C_0$ from that of $C_1$. This is a strengthening of indistinguishability obfuscation, where the above is only guaranteed for circuits that agree on all inputs. Two recent works of Ananth et al. (Differing-inputs obfuscation and applications, http://eprint.iacr.org/, 2013) and Boyle et al. (Lindell, pp 52–73, 2014) study the notion of diO in the setting where the attacker is also given some auxiliary information related to the circuits, showing that this notion leads to many interesting applications. In this work, we show that the existence of general-purpose diO with general auxiliary input has a surprising consequence: it implies that a specific circuit $C^{\ast}$ with specific auxiliary input $\mathrm{aux}^{\ast}$ cannot be obfuscated in a way that hides some specific information. In other words, under the conjecture that such special-purpose obfuscation exists, we show that general-purpose diO cannot exist. This conjecture is a falsifiable assumption which we do not know how to break for candidate obfuscation schemes. We also show similar implausibility results for extractable witness encryption with auxiliary input and for 'output-only dependent' hardcore bits for general one-way functions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429882168769836, "perplexity": 987.0327184334872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00306.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-1-equations-and-inequalities-1-6-solve-linear-inequalities-guided-practice-for-examples-1-and-2-page-41/2
## Algebra 2 (1st Edition)

The solutions of the given inequality are all real numbers less than or equal to $3$. Smaller numbers are to the left of $3$. Use a solid dot in the graph to indicate that $3$ is a solution.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257158994674683, "perplexity": 228.39893606687963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662550298.31/warc/CC-MAIN-20220522220714-20220523010714-00238.warc.gz"}
https://www.physicsforums.com/threads/the-ionosphere.111935/
The ionosphere

1. Feb 23, 2006 stunner5000pt

An ionosonde on a satellite orbiting at 1000 km probes the topside of the F2 layer. If the ionosonde transmits radio pulses downwards at 5.6 MHz and 7.1 MHz and receives radio echoes with delay times of 2 msec and 2.667 msec respectively, determine (a) the F2 electron density at an altitude of 500 km, and (b) the exospheric temperature. You may ignore the effects of any magnetic fields and assume that the temperature is constant above the F2 peak which lies well below the 400 km level.

I don't understand how a radio echo can be used to determine anything... I think that to find the answer for part b I have to find the scale height of the electrons, because h = RT/Mg. I suspect it is similar to something that was answered in this thread.

2. Feb 24, 2006 Tide

At the very least, the relative delay time can be used to find the separation between the critical layers for the two frequencies.

3. Feb 25, 2006 stunner5000pt

does the delay time to the cutoff frequency in some way?

4. Feb 25, 2006 Tide

I think you left out a verb in your question so I am not quite sure what you're asking. Perhaps this will help.

If the two waves have different frequencies then they will reflect from different places in the ionosphere. Each will reflect from its corresponding critical surface or critical density. Since they are different, one wave will have to travel farther to reach its reflection point and so it will take longer for its reflected signal to be returned to the detector.

The simplest approximation you can make is that the two waves travel at the speed of light down to and back from their respective reflection points. This isn't quite correct since the group velocity of a wave tends to zero as it approaches the reflection point. It is for you to decide how much error you can accept for the respective travel times. In either case, the fact that one wave takes somewhat longer to return gives you some information about the spatial separation of the two reflection points.

5. Feb 26, 2006 stunner5000pt

OK, I can find the separation between the layers. I'm not too sure how that leads to finding the electron density, however.

6. Feb 26, 2006 Tide

Here's a simple example. Suppose the electron density varies linearly (your density model is more complicated so you'll have to work harder but I'm just demonstrating the principle):

$$n = n_0 \frac {x}{L}$$

The reference density $n_0$ and scale length $L$ are unknown. However, you do know the travel times to and from each critical surface, so

$$x_i = c t_i / 2$$

where $t_i$ is the delay time for each wave (i = 1, 2) and $x_i$ is the location of each reflection point. Now we have

$$n_1 = n_0 \frac {c t_1}{2L}$$

$$n_2 = n_0 \frac {c t_2}{2L}$$

with two unknowns $n_0$ and $L$. You can easily solve these equations, and once those two quantities are determined you can find the electron density at any location!

As I pointed out earlier, you may want to improve on your calculation by integrating $dx/v_g$ to more accurately reflect the variable speed of each wave as it approaches the critical density. My guess is that the difference will be relatively small since the scale length will turn out to be fairly large but you should check anyway.

7. Feb 26, 2006 stunner5000pt

$$n_{e}(h,\chi) = \sqrt{\frac{q_{max}}{k_{eff}}} \exp[0.5(1-z-\sec \chi \exp(-z))]$$

Since $h_1 = c\,(2\text{ msec})/2$,

$$n_{e}(h_{1},\chi) = \sqrt{\frac{q_{max}}{k_{eff}}} \exp[0.5(1-z_{1}-\sec \chi \exp(-z_{1}))]$$

and the second one is $h_2 = c\,(2.667\text{ msec})/2$:

$$n_{e}(h_{2},\chi) = \sqrt{\frac{q_{max}}{k_{eff}}} \exp[0.5(1-z_{2}-\sec \chi \exp(-z_{2}))]$$

But the thing is, the satellite is shooting from the top of the atmosphere, so this value of h is 1000 - h above the surface of the Earth. Is this the way to go? But aren't there a lot more unknowns here?

Last edited: Feb 26, 2006

8. Feb 26, 2006 stunner5000pt

And we can find the $N_e$ values for the heights using the relation between the density of electrons and the critical frequency. Right?

9. Feb 26, 2006 Tide

I don't know how you defined z but it looks like it is a normalized height so it will likely contain a scale factor (what I called L), and the factor

$$\sqrt {\frac {q_{max}}{k_{eff}}}$$

must be a reference density, so you appear to have only two unknowns. And, yes, you will need to do a transformation from "ground based" to "space based" coordinates.

10. Feb 26, 2006 stunner5000pt

z is defined like this:

$$z = \frac{h - h_{max}}{H}$$

and

$$H = \frac{RT}{Mg}$$

H is the scale height, where T is the temperature (also unknown). $k_{eff}$ is the reaction coefficient of the electron-contributing reaction, $q_{max}$ is the maximum ion concentration, and $\chi$ is the angle of attack of the signal. We're assuming here that the angle of attack is the same, but unknown. So then

$$n_{e}(h_{1},\chi) = \sqrt{\frac{q_{max}}{k_{eff}}} \exp\left[0.5\left(1-\frac{h_{1}-h_{max}}{H}-\sec \chi \exp\left(\frac{h_{max}-h_{1}}{H}\right)\right)\right]$$

seems to have more than 2 unknowns... or are they related to each other somehow?

11. Feb 26, 2006 Tide

You have a relation between the scale height and temperature so one of the unknowns is eliminated. Also, if you are getting a return signal at the source then you also know the "angle of attack" (you must have normal incidence in order for the signal to return to the source). That leaves the square-rooted quantity (it is the reference density, which you will treat as a single quantity), H and $h_{max}$ as your unknowns. You do need more information to solve for three unknowns. You may have additional knowledge at your disposal that I am unaware of. Is there anything else in your model that might be used to eliminate one of the remaining unknowns?

12. Feb 26, 2006 stunner5000pt

I can find $N_e(h,\chi)$, and $h_1$ and $h_2$ can be found using what was discussed above, so that only leaves the reference quantity, $h_{max}$ and H, which is related to the temperature.
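To make the suggested procedure concrete, here is a rough numerical sketch of my own (not from the thread). It converts each delay time into a reflection distance below the satellite using the simplest assumption that the pulse travels at c, and converts each sounding frequency into the electron density at its reflection point by assuming the pulse reflects where the local plasma frequency equals the wave frequency (magnetic field ignored).

```python
import math

c = 3.0e8                                   # m/s
eps0, m_e, q_e = 8.854e-12, 9.109e-31, 1.602e-19

freqs  = [5.6e6, 7.1e6]                     # Hz
delays = [2.0e-3, 2.667e-3]                 # s

for f, t in zip(freqs, delays):
    dist = c * t / 2                        # one-way distance below the satellite
    alt = 1000e3 - dist                     # altitude of the reflection point
    n_e = (2 * math.pi * f) ** 2 * eps0 * m_e / q_e ** 2   # density where f_plasma = f
    print(f"{f/1e6:.1f} MHz: altitude ~{alt/1e3:.0f} km, n_e ~{n_e:.2e} m^-3")
```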
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8836327791213989, "perplexity": 562.3845302914551}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807650.44/warc/CC-MAIN-20171124104142-20171124124142-00183.warc.gz"}
http://physics.stackexchange.com/questions/43337/how-much-space-to-simulate-a-small-hilbert-space/43436
# How much space to simulate a small Hilbert space?

I'm thinking about trying to do a numerical simulation of some very simple QM problems. How much space do I need to simulate the Hilbert space?

I'd like to eventually simulate the absorption or emission of a photon by a hydrogen atom. So at least three particles (two fermions, one boson). Let's generalize that to three particles with arbitrary spin, so I can look at three photons or three electrons if I want to.

In order to do a numerical simulation I need to replace the continuous spacetime with a rectangular grid or lattice. I'd like to eventually get more precision, but let's start with just ten cells per dimension to begin with. So including time, I need ten thousand cells in a four-dimensional lattice.

How many cells do I need to simulate the Hilbert space? And what goes in each cell? If I put restrictions on the shape of the wavefunction, does that help?

You need a basis large enough to track the Schroedinger equation reasonably well over the time of interest, or to represent the ground state if you are interested in that. This depends a lot both on the initial state and the Hamiltonian. Much of numerical quantum mechanics is concerned with finding useful basis function sets that do not grow too rapidly with the number of dof treated. This means they are usually applicable only for fairly specific classes of problems. You might want to have a look at GAMESS http://www.msg.ameslab.gov/gamess/ which proceeds with Gaussian states for electronic ground state calculations. But to make it feasible for many electrons, they need to do lots of extra trickery.

In a much simpler but practically infeasible approach, if you discretize each component of $\mathbb{R}^{3N}$ with $p$ points (and $p=10$ will still give very poor accuracy), you are left with a discrete problem in $(2p^3)^N$ dimensions. GAMESS tries instead to have a complexity growing less than $O(N^2)$.

The full system has configuration space $\mathbb{R}^{3n}$ where $n$ is the number of particles. Note that because photons of different frequencies are distinct, you will need $n$ to depend on how finely you discretise space. You can remove one $\mathbb{R}^3$ for centre of mass motion, and perhaps another 3 dimensions for orientation, but that still leaves a very high dimensional space in which the wavefunction of the system will live.

A better way to proceed is to write down the Hamiltonian of the system in a non-interacting frame, e.g. the hydrogen atom can be solved exactly and so can the non-interacting photon. Truncating in this basis should give you something you can actually implement as an interaction operator.

• Thanks for your answer. I am aware that my approach is not very practical and consumes a lot of space. I am doing it to try to follow "what happens where", so to speak. – Jim Graber Nov 3 '12 at 17:44
• First of all, I think I want not $L^2(X)$, but some finite, numerical approximation of $L^2(X)$. Second, I think I need something like $L^2(\mathbb{R}^{3N}) \otimes \mathbb{C}^{2^N}$ to include spin. (I am still confused as to whether N should be an exponent or a multiplier in each of the two places where it occurs.) – Jim Graber Nov 3 '12 at 17:45
• Then I need to convert this finite approximation to a specific number of cells (and perhaps multiply yet again by a time variable). Finally, I will need to figure out what to store in each cell: one complex number? Two? More? Something else? – Jim Graber Nov 3 '12 at 17:46
• @JimGraber: Your exponents are correct. – Arnold Neumaier Nov 4 '12 at 15:59
• @JimGraber: In each cell of phase space, you'd have to specify as many numbers as the wave function has components, which means $2^N$ complex numbers. – Arnold Neumaier Nov 5 '12 at 9:47

It is a matter of detail. Your algorithm should be independent of the amount of available memory and only depend on the lattice size.
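As a back-of-the-envelope follow-up (my own numbers, not from the thread): storing the wavefunction on the naive grid discussed above, with p points per spatial dimension, N particles and $2^N$ spin components per configuration-space cell, already outgrows ordinary RAM at N = 3 and p = 10, before a time dimension is even considered.

```python
p, N = 10, 3                         # grid points per dimension, number of particles
cells = (p ** 3) ** N                # configuration-space cells: 1e9
amplitudes = cells * 2 ** N          # 2^N spin components per cell: 8e9 complex numbers
bytes_needed = amplitudes * 16       # complex128 takes 16 bytes
print(bytes_needed / 1e9, "GB")      # ~128 GB
```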
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8347116708755493, "perplexity": 235.20391132863267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709006458/warc/CC-MAIN-20130516125646-00041-ip-10-60-113-184.ec2.internal.warc.gz"}
https://harshakokel.com/posts/hierarchical-rl/
## Hierarchical Reinforcement Learning

An overview of Hierarchical RL. Written as part of the Advanced RL course by Prof. Sriraam Natarajan.

Standard RL planning suffers from the curse of dimensionality when the action space is too large and/or the state space is infeasible to enumerate. Humans simplify planning in such complex conditions by abstracting away details which are not relevant at a given time and by decomposing actions into hierarchies. Several researchers have proposed to model this temporal abstraction in RL by composing some form of hierarchy over the action space (Dietterich 1998, Sutton et al. 1998, Parr and Russell 1998). By modeling actions as hierarchies, researchers extended the primitive action space by adding abstract actions. The options framework (Sutton et al. 1998) refers to the abstract actions as options, MAXQ (Dietterich 1998) refers to them as tasks, and Hierarchical Abstract Machines (HAM) (Parr and Russell 1998) refers to them as choices.

A common theme among these papers is to extend the Markov Decision Process (MDP) to a Semi-Markov Decision Process (SMDP), where actions can take multiple time steps. Compared to an MDP, which only allows actions lasting one discrete time step, an SMDP allows modeling temporally abstract actions of varying length over continuous time, as represented in the first two trajectories of the figure below. By constraining/extending the action space of the MDP over primitive and abstract actions, hierarchical RL approaches superimpose MDPs and SMDPs, as shown in the last trajectory.

Semi-Markov Decision Process

HRL is appealing because the abstraction of actions facilitates accelerated learning and generalization while exploiting the structure of the domain. Faster learning is possible because of the compact representation. The original MDP is broken into sub-MDPs with fewer states (abstracted states hide irrelevant details and hence reduce the number of states) and fewer actions. For example, in the Taxi Domain introduced in (Dietterich 1998), if the agent is learning to navigate to a location it does not matter whether the passenger is being picked up or dropped off. Details about the location of the passenger are irrelevant and hence the state space is reduced. Better generalization is possible because of the abstracted actions. In the taxi domain, because we define an abstract action called $Navigation$, the agent learns a policy to navigate the taxi to a location. Once that policy is learned for navigating to pick up a passenger, the same policy can be leveraged when the agent is navigating to drop off the passenger.

Two important promises of HRL are prior knowledge and transfer learning. A complex task in HRL is decomposed into a hierarchy (usually by humans). Hence, it is easier for humans to provide some prior on actions from their domain knowledge. Different levels of the hierarchy encompass different knowledge, and hence ideally it would be easier to transfer that knowledge across different problems.

One minor limitation of HRL is that all the hierarchical methods converge to a hierarchically optimal policy, which can be a sub-optimal policy. For example, in the taxi domain, if the hierarchy decomposition states first navigate to the passenger location and then navigate to the fuel location, the HRL agent will find an optimal policy to do exactly that, in that order. This policy might be sub-optimal given an initial state which is closer to the fuel location. This limitation is an artifact of restricting the action space while solving sub-MDPs.
If the full action space were available in all the sub-MDPs, the exponential increase in computational overhead would make learning infeasible. The MAXQ framework has a clear hierarchical decomposition of tasks, while the options framework does not have a clear hierarchy. The options framework achieves temporal abstraction of actions; the MAXQ framework additionally achieves state abstraction. While there has been an attempt at discovering and transferring MAXQ hierarchies (Mehta et al. 2008), learning MAXQ hierarchies directly from trajectories is still an open problem. For large and complex problems it might be a challenge to provide the task hierarchy, or the options and their termination conditions.

### References

• [Dietterich 1998] Dietterich, T. G. 1998. The MAXQ method for hierarchical reinforcement learning. In ICML.
• [Sutton, Precup, and Singh 1998] Sutton, R. S.; Precup, D.; and Singh, S. P. 1998. Intra-option learning about temporally abstract actions. In ICML.
• [Parr and Russell 1998] Parr, R., and Russell, S. J. 1998. Reinforcement learning with hierarchies of machines. In NeurIPS.
• [Mehta et al. 2008] Mehta, N.; Ray, S.; Tadepalli, P.; and Dietterich, T. 2008. Automatic discovery and transfer of MAXQ hierarchies. In ICML.
• The Promise of Hierarchical Reinforcement Learning by Yannis Flet-Berlia in The Gradient
• Hierarchical Reinforcement Learning lecture by Doina Precup on YouTube
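As a small illustrative addendum (my own sketch, not part of the original overview): the SMDP Q-learning update used with options treats a completed option as one temporally extended action, discounting the bootstrapped value by $\gamma^k$ where k is the number of primitive steps the option took.

```python
from collections import defaultdict

def smdp_q_update(Q, s, option, discounted_return, k, s_next, options,
                  alpha=0.1, gamma=0.95):
    """One SMDP Q-learning update: the option ran k primitive steps from s to s_next
    and accumulated the (already gamma-discounted) return discounted_return."""
    best_next = max(Q[(s_next, o)] for o in options)
    target = discounted_return + (gamma ** k) * best_next
    Q[(s, option)] += alpha * (target - Q[(s, option)])

# toy usage: pretend the option "go-right" ran 3 primitive steps from state 0 to state 1
options = ["go-left", "go-right"]
Q = defaultdict(float)
smdp_q_update(Q, s=0, option="go-right", discounted_return=0.5, k=3,
              s_next=1, options=options)
print(Q[(0, "go-right")])
```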
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8520013689994812, "perplexity": 2613.4463327014987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056548.77/warc/CC-MAIN-20210918154248-20210918184248-00302.warc.gz"}
http://www.maa.org/publications/periodicals/convergence/purse-of-denari
# Purse of Denari

Four men already having denari found a purse of denari; the first man said that if he would have the denari from the purse, then he would have twice as many as the second. The second, if he would have the purse, then would have three times as many as the third, and the third, if he would have it, then he would have four times as many as the fourth. The fourth, five times as many as the first. How many denari does each man have? (Fibonacci, Liber Abaci, 1202)

"Purse of Denari," Loci (December 2004)
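The puzzle amounts to the linear system $x_1 + p = 2x_2$, $x_2 + p = 3x_3$, $x_3 + p = 4x_4$, $x_4 + p = 5x_1$, which fixes the amounts only up to a common scale. A small sketch (mine, not part of the original page) finds the smallest solution in positive integers by back-substitution.

```python
from fractions import Fraction

# Normalize the purse to p = 1:
#   x1 = 2*x2 - 1, x2 = 3*x3 - 1, x3 = 4*x4 - 1, x4 = 5*x1 - 1
# Substituting gives 119*x1 = 33, and scaling by 119 clears the denominators.
x1 = Fraction(33, 119)
x4 = 5 * x1 - 1
x3 = 4 * x4 - 1
x2 = 3 * x3 - 1

purse = 119
amounts = [int(x * purse) for x in (x1, x2, x3, x4)]
print(purse, amounts)                        # 119 [33, 76, 65, 46]

assert amounts[0] + purse == 2 * amounts[1]  # first + purse = twice the second
assert amounts[1] + purse == 3 * amounts[2]
assert amounts[2] + purse == 4 * amounts[3]
assert amounts[3] + purse == 5 * amounts[0]
```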
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850877583026886, "perplexity": 1258.1564326519606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500811391.43/warc/CC-MAIN-20140820021331-00445-ip-10-180-136-8.ec2.internal.warc.gz"}
http://arblib.org/agm.html
# Algorithms for the arithmetic-geometric mean

With complex variables, it is convenient to work with the univariate function $M(z) = \operatorname{agm}(1,z)$. The general case is given by $\operatorname{agm}(a,b) = a M(b/a)$.

## Functional equation

If the real part of z initially is not completely nonnegative, we apply the functional equation $M(z) = (z+1) M(u) / 2$ where $u = \sqrt{z} / (z+1)$. Note that u has nonnegative real part, absent rounding error. It is not a problem for correctness if rounding makes the interval contain negative points, as this just inflates the final result.

For the derivative, the functional equation becomes $M'(z) = [M(u) - (z-1) M'(u) / ((1+z) \sqrt{z})] / 2$.

## AGM iteration

Once z is in the right half plane, we can apply the AGM iteration ($2a_{n+1} = a_n + b_n$, $b_{n+1}^2 = a_n b_n$) directly. The correct square root is given by $\sqrt{a} \sqrt{b}$, which is computed as $\sqrt{ab}$, $i \sqrt{-ab}$, $-i \sqrt{-ab}$, $\sqrt{a} \sqrt{b}$ respectively if both a and b have positive real part, nonnegative imaginary part, nonpositive imaginary part, or otherwise.

The iteration should be terminated when $a_n$ and $b_n$ are close enough. For positive real variables, we can simply take lower and upper bounds to get a correct enclosure at this point. For complex variables, it is shown in [Dup2006], p. 87 that, for z with nonnegative real part, $|M(z) - a_n| \le |a_n - b_n|$, giving a convenient error bound.

Rather than running the AGM iteration until $a_n$ and $b_n$ agree to $p$ bits, it is slightly more efficient to iterate until they agree to about $p/10$ bits and finish with a series expansion. With $z = (a-b)/(a+b)$, we have

$\operatorname{agm}(a,b) = \frac{(a+b) \pi}{4 K(z^2)},$

valid at least when $|z| < 1$ and $a, b$ have nonnegative real part, and

$\frac{\pi}{4 K(z^2)} = \tfrac{1}{2} - \tfrac{1}{8} z^2 - \tfrac{5}{128} z^4 - \tfrac{11}{512} z^6 - \tfrac{469}{32768} z^8 + \ldots$

where the tail is bounded by $\sum_{k=10}^{\infty} |z|^k/64$.

## First derivative

Assuming that z is exact and that $|\arg(z)| \le 3 \pi / 4$, we compute $(M(z), M'(z))$ simultaneously using a finite difference. The basic inequality we need is $|M(z)| \le \max(1, |z|)$, which is an immediate consequence of the AGM iteration.

By Cauchy's integral formula, $|M^{(k)}(z) / k!| \le C D^k$ where $C = \max(1, |z| + r)$ and $D = 1/r$, for any $0 < r < |z|$ (we choose r to be of the order $|z| / 4$). Taylor expansion now gives

$\left|\frac{M(z+h) - M(z)}{h} - M'(z)\right| \le \frac{C D^2 h}{1 - D h}$

$\left|\frac{M(z+h) - M(z-h)}{2h} - M'(z)\right| \le \frac{C D^3 h^2}{1 - D h}$

$\left|\frac{M(z+h) + M(z-h)}{2} - M(z)\right| \le \frac{C D^2 h^2}{1 - D h}$

assuming that h is chosen so that it satisfies $h D < 1$. The forward finite difference would require two function evaluations at doubled precision. We use the central difference as it only requires 1.5 times the precision.

When z is not exact, we evaluate at the midpoint as above and bound the propagated error using derivatives. Again by Cauchy's integral formula, we have

$|M'(z+\varepsilon)| \le \frac{\max(1, |z|+|\varepsilon|+r)}{r}$

$|M''(z+\varepsilon)| \le \frac{2 \max(1, |z|+|\varepsilon|+r)}{r^2}$

assuming that the circle centered on z with radius $|\varepsilon| + r$ does not cross the negative half axis. We choose r of order $|z| / 2$ and verify that all assumptions hold.
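A minimal Python sketch of the scheme described above (an illustration only, not the Arb implementation, without the interval error accounting or the series finish): if Re(z) < 0 the functional equation is applied once to move the argument into the right half plane, and the plain AGM iteration then runs with the square root taken as sqrt(a)·sqrt(b), stopping when |a_n - b_n| falls below a tolerance; by the bound quoted above this also bounds the error of the returned value for arguments in the right half plane.

```python
import cmath

def agm1(z, tol=1e-15):
    """Rough approximation of M(z) = agm(1, z); ignores special cases such as
    z on or very near the negative real axis."""
    z = complex(z)
    if z.real < 0:
        # functional equation: M(z) = (z + 1) M(u) / 2 with u = sqrt(z) / (z + 1)
        u = cmath.sqrt(z) / (z + 1)
        return (z + 1) * agm1(u, tol) / 2
    a, b = 1.0 + 0j, z
    while abs(a - b) > tol:
        # the "correct" square root of a*b is sqrt(a) * sqrt(b) (principal branches)
        a, b = (a + b) / 2, cmath.sqrt(a) * cmath.sqrt(b)
    return a

print(agm1(2))      # ~1.4567910310469068
print(agm1(1j))     # a complex value with positive real part
```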
## Higher derivatives¶ The function $$W(z) = 1 / M(z)$$ is D-finite. The coefficients of $$W(z+x) = \sum_{k=0}^{\infty} c_k x^k$$ satisfy $-2 z (z^2-1) c_2 = (3z^2-1) c_1 + z c_0,$ $-(k+2)(k+3) z (z^2-1) c_{k+3} = (k+2)^2 (3z^2-1) c_{k+2} + (3k(k+3)+7)z c_{k+1} + (k+1)^2 c_{k}$ in general, and $-(k+2)^2 c_{k+2} = (3k(k+3)+7) c_{k+1} + (k+1)^2 c_{k}$ when $$z = 1$$.
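A small sketch of how the quoted recurrence can be applied mechanically, assuming the first two Taylor coefficients c_0 = 1/M(z) and c_1 = W'(z) = -M'(z)/M(z)^2 are already known (for instance from the finite-difference step of the previous section). The generic branch below requires z != 0, +-1; at z = 1 the shorter recurrence quoted last would be used instead. The function name is just for illustration.

```python
def inv_agm_coeffs(z, c0, c1, nterms):
    # Coefficients c_k of W(z + x) = 1/M(z + x) = sum_k c_k x^k,
    # generated from the D-finite recurrence quoted above (generic z).
    z = complex(z)
    c = [complex(c0), complex(c1)]
    # -2 z (z^2 - 1) c_2 = (3 z^2 - 1) c_1 + z c_0
    c.append(-((3*z*z - 1)*c[1] + z*c[0]) / (2*z*(z*z - 1)))
    for k in range(max(0, nterms - 3)):
        # -(k+2)(k+3) z (z^2-1) c_{k+3}
        #   = (k+2)^2 (3z^2-1) c_{k+2} + (3k(k+3)+7) z c_{k+1} + (k+1)^2 c_k
        num = ((k + 2)**2 * (3*z*z - 1) * c[k + 2]
               + (3*k*(k + 3) + 7) * z * c[k + 1]
               + (k + 1)**2 * c[k])
        c.append(-num / ((k + 2) * (k + 3) * z * (z*z - 1)))
    return c[:nterms]
```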
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9881448745727539, "perplexity": 589.7531813281051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00038.warc.gz"}
https://www.physicsforums.com/threads/current-and-resistance.223192/
# Current and Resistance 1. Mar 20, 2008 ### Gear300 Question is in the attachment below... I was doing fine from (a) through (c). When I got to question (d), it sort of got me stuck... wouldn't the current be the same for both resistors over time? (Attached: Physics.png) 2. Mar 20, 2008 ### mjsd After C has been charged up (i.e., S1 closed for a long time), the voltage across C is the same as E, so effectively you have an open circuit at C. So initially, when S2 is closed, there cannot be any current through R1 because the potential difference across it is E-V_C = 0, but as C discharges through R2, V_C drops and now E-V_C > 0, and so E will charge up C again via R1 (so the current through R1 becomes non-zero). And so as C discharges through R2, E through R1 constantly charges C up again. I believe the net effect would be that C is just a spectator in this process. 3. Mar 21, 2008 ### Gear300 Wouldn't the current running through R1 also run through R2 because there is an open circuit at C? And the process you described seems to run like you said in that last sentence... that C would remain a spectator. In that case, when S2 is closed, can C be ignored? 4. Mar 21, 2008 ### Gear300 Found the actual graph (the attachment). The horizontal asymptote is above 0, I1 starts at 0 A and I2 starts at a positive value of current. I was thinking both I1 and I2 would be under the horizontal asymptote; why is I2 above it? (Attached: Physics2.png)
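Since the attachment is not visible here, the following sketch just assumes the usual topology for this exercise (EMF E in series with R1, with C and R2 in parallel after R1) and made-up component values. With that assumption it reproduces the behaviour in the posted graph: I1 starts at 0 and rises, I2 starts at E/R2 and decays, and both approach the common asymptote E/(R1+R2), which is why I2 sits above it.

```python
import numpy as np

# Assumed circuit: E -- R1 -- node -- (C in parallel with R2) -- back to E.
# Before S2 closes the capacitor is charged to E, so V_C(0) = E.
E, R1, R2, C = 10.0, 1.0e3, 2.0e3, 1.0e-6        # illustrative values only

tau   = C * R1 * R2 / (R1 + R2)                   # time constant after S2 closes
V_inf = E * R2 / (R1 + R2)                        # final capacitor voltage

t  = np.linspace(0.0, 5.0 * tau, 201)
V  = V_inf + (E - V_inf) * np.exp(-t / tau)       # capacitor / R2 voltage
I1 = (E - V) / R1                                 # current through R1
I2 = V / R2                                       # current through R2

print(I1[0], I2[0])      # 0.0 and E/R2: I2 starts above the asymptote
print(I1[-1], I2[-1])    # both tend to E/(R1+R2), the horizontal asymptote
```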
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160852789878845, "perplexity": 3387.659828680834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186895.51/warc/CC-MAIN-20170322212946-00490-ip-10-233-31-227.ec2.internal.warc.gz"}
http://referrat.net/mathematics/determinant-of-the-system-of-linear-equations-as-a-multiplication-of-a-vector-by-their-increasing-power-series/
# Determinant of the system of linear equations as a multiplication of a vector by their increasing power series The power series, except for the obvious case, actual positions functional analysis, which yields the desired equality. Algebra generates the integral over the field-oriented, clearly showing all the above nonsense. The scalar product is likely to enhance the indefinite integral, which will undoubtedly lead us to the truth. The greatest common divisor (GCD), of course, wasteful accelerates absolutely convergent series is known even to schoolchildren. The integral of the function goes to infinity along the line, in the first approximation, orders extremum of the function, as required. Constant distorts polynomial, as expected. The theorem follows from the above, consistently generates indirect jump function is not surprising. Multiplication of a vector by distorts the gap function, eventually come to a logical contradiction. Functional analysis, as follows from the above that transforms the jump function, which is not surprising. Not proved that the polynomial is not obvious. The vector field, except for the obvious case, produces a vector that is known even to schoolchildren. Origin, to a first approximation, meaningful distorts decreasing the integral over an infinite domain, further calculations leave students as simple homework. It is easy to check that the closed set really attracts decreasing function extremum, which will undoubtedly lead us to the truth. I must say that the proof translates absolutely convergent series, which will undoubtedly lead us to the truth. Binomial theorem neutralizes the integral of the function goes to infinity along the line, at the same time instead of 13 can take any other constant. The proof is not critical.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9817813634872437, "perplexity": 351.50952121631684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213666.61/warc/CC-MAIN-20180818114957-20180818134957-00004.warc.gz"}
http://mathhelpforum.com/calculus/11793-graph.html
# Math Help - Graph 1. ## Graph how can I do this? what is the equation of the graph? Please help Attached Thumbnails 2. Originally Posted by Amy how can I do this? what is the equation of the graph? Please help Okay, so seeing two graphs on one set of axis suggests a piece wise function. The sharp turn in the lower graph indicates that it is an absolute value function. Now, I can't see the markings on the graph so clearly, so there may be some errors. I assume the lowest point on the graph, that is, where the sharp turn is is (-1,1), and that the y-intercept is 2 and that the horizontal graph is in line with y=5 and that the dots are above x=-3. So here goes: An absolute valued function always gives positive answers, so whenever the answer is going to become negative, it it flips the graph upward, which is why we see a sharp turn. We see the graph flips when x is at -1, which means if x gets any smaller, the value will be negative, so then the absolute value part of the function is |x+1|, when x=-1, it becomes zero, anyless than -1, it is negative and therefore the graph flips up making a sharp turn. but we're not done yet. The y-intercept is 2. The y-intercept occurs when x is zero. In this case, if x=0, we have y=|1|=1. So to correct this, we just shift the graph upward by 1 by adding a constant 1, so the absolute value part of the function is |x+1| + 1. The other part of the graph is easy (provided I can see it correctly), the graph is a constant 5, so it's y=5. Now we put these 2 functions together and obtain: 3. Whoops, there should be a pic there, but the forum tells me it's file size is too large. I'll try again 4. okay, here it is. We see the circle attached to the end of the y=5 curve is unshaded, and therefore it means x is not equal to -3 at the point. The shaded circle at the end of the absolute value part says it can be equal. Attached Thumbnails 5. ## thanks thanks for your explanation. I understood this much very well. let me try to do the questions. can you do the last one (continuity) please 6. No, the function is not continuous at x= -3. Explanation: A function f is continuous at a number a if lim{x-->a}f(x)=f(a). Here, we actually can find a value for f(a), that is f(-3)=2, but it is not equal to the limit. Since this lim at x=-3 does not exist. For a limit to exist, the left hand limit must be equal to the right hand limit. here, the left hand limit is 5, the right hand limit is 2 (looks like i ended up doing parts (a) and (b) for you by explaining this), so the limit does not exist and therefore, the function is not continuous. Note: left hand limit is lim{x-->a-} and right hand limit is lim{x-->a+} So you se, all the parts leading up to (e) was to give you clues. for the function to be continuous, we had to have the answers (a)=(b)=(c)=(d), but that was not the case. (a) was not equal to (b) and (c) didn't even exist
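For part (e), the check described in the last post can be written out mechanically. The numbers below (left-hand limit 5, right-hand limit 2, f(-3) = 2) are the ones quoted in that post, since the attached graph itself is not visible here.

```python
a = -3
f_a         = 2    # value of the function at x = -3 (the shaded circle)
left_limit  = 5    # lim as x -> -3 from the left  (the y = 5 branch)
right_limit = 2    # lim as x -> -3 from the right (the absolute-value branch)

limit_exists = (left_limit == right_limit)
continuous   = limit_exists and (left_limit == f_a)
print(limit_exists, continuous)   # False False: f is not continuous at x = -3
```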
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9337251782417297, "perplexity": 354.3904490579971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832738.80/warc/CC-MAIN-20140820021352-00173-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathoverflow.net/users/30800/g%c3%bcnter-rote?tab=activity
Günter Rote less info reputation 37 bio website location DE age member for 2 years, 3 months seen Feb 16 at 21:56 profile views 318 97 Actions Jan20 awarded Yearling Dec11 comment “Circular” domination in ${\mathbb R}^4$ Why is the first reduction from $S$ to $S'$ true? For example, when $(5,5,5,1),(1,1,1,5)\in S$, then there must exist a vector that dominates $(5,5,5,5)$. But $(5,5,5,5)$ cannot be built as the elementwise maximum from two of the twelve "two-coordinate" versions $(5,5,0,0),(5,0,5,0),\ldots,(0,0,1,5)$. Nov26 comment A problem on chains of squares — can one find an easy combinatorial proof? A) What is the well-known solution that uses Brouwer's fixed-point theorem? B) The discrete problem has definitely easy direct proofs, just by specializing the polygonal Jordan curve theorem. C) If the two paths don't cross, one can extend them to a crossing-free drawing of the (non-planar) complete bipartite graph $K_{3,3}$ by adding two new vertices and some curves outside the square. But I guess this does not count, as non-planarity of $K_{3,3}$ is based on Euler's polyhedral formula, and that in turn depends on the Jordan curve theorem, it seems. Nov26 comment Powers of orthogonal matrices is closed @Sebastian, What do you mean, "any real matrix", for which matrix? Was my argument not short enough? Nov26 revised Simplifying triangulations of 3-manifolds fixed grammar and spelling Nov26 answered Powers of orthogonal matrices is closed Nov26 suggested approved edit on Simplifying triangulations of 3-manifolds Jan20 awarded Yearling Oct22 comment Tiling a rectangle with weighted cells (min-max problem) What precisely do you mean by "sparse"? A constant number of entries in each row and column? At most $m+n$ entries in total? Or why would the total number of entries not appear in the runtime? (And edit the problem rather than posting an answer) (And the upper bound $a,b<1$ seems to be irrelevant.) Oct11 awarded Necromancer Oct11 answered Tiling A Rectangle With A Hint of Magic Oct10 revised How many vertices/edges/faces at most for a convex polyhedron that tiles space? added the original reference of the figure Oct10 suggested approved edit on How many vertices/edges/faces at most for a convex polyhedron that tiles space? Oct8 revised The Cayley Menger Theorem and integer matrices with row sum 2 small clarification Oct8 comment Vertices of a Polytope Indeed, fullerenes are a good example. Almost all of the surface is a hexagonal lattice, and there you can simultaneously cut away vertices as long as their minimum distance is 2. Thus, you can cut away half of the vertices (almost; staying away from the pentagons). Oct7 answered Vertices of a Polytope Sep18 comment Is there a 3d equivalent of this picture? 1. It should be pointed out that the density does not increase indefinitely inside the "black spot" but there is some minimum distance (if the points were really generated as indicated in the text) 2. Would you tolerate, for example (in the planar example) occasional vertices with 5 or 7 incident triangles, or should it be "completely regular"? Jun25 awarded Revival Jun25 awarded Excavator Apr28 comment What is the expected value for this The exponent would not change. For example, take random points in a circular disk. Fit a triangle in the disk. 
A constant fraction of the n points falls inside the triangle (w.h.p.), and thus you get at least $c′n^{1/3}$ points in convex position; similarly, an ellipse (affine image of a disk) fits into any triangle, showing the other direction of the inequality. Any two convex shapes are related like this (after affine transformation).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8614397644996643, "perplexity": 982.7678169389843}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654292.99/warc/CC-MAIN-20150417045734-00284-ip-10-235-10-82.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-find-an-oxidizing-agent
# How do you find an oxidizing agent? Jun 25, 2017 An oxidizing agent is the species that is itself reduced, by taking electrons from the species that is oxidized. #### Explanation: For instance, $P b \left(s\right) + {H}_{2} O \left(l\right) + {O}_{2} \left(g\right) \to P b {\left(O H\right)}_{2} \left(s\right) + {H}_{2} \left(g\right)$ I know, it's not balanced! $P b$ has an oxidation state of 0, and $H$ has one of +1 in water. After the reaction, lead is oxidized to a state of +2, losing electrons, while hydrogen turns into gas, gaining electrons and ending at a state of 0. So, since $P b$ is oxidized, $H$ is the oxidizing agent. This comes about as a result of the fact that redox reactions only take place if one reactant is reduced and another reactant is oxidized. One must do one to the other.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8891225457191467, "perplexity": 2810.3537284572794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00121.warc.gz"}
http://mathoverflow.net/questions/118873/on-the-location-of-zeros-of-l-functions-from-modular-forms/118883
On the location of zeros of L functions from modular forms I understand that the Mellin transform of a modular form is expected to satisfy RH when it is an eigenform of all Hecke operators, in which case it has an Euler product. Now about when the form is not an eigenform: Is it known a case where the zeros are all in the critical strip? - When it is not a Hecke-Eigen form, the Hecke L-series connected with the modular form does not have an Euler product. However it can still be written as a linear combination of Hecke L-series that have Euler-products. Thus the situation will resemble the case of linear combinations of Dirichlet L-series. In particular we can use joint Voronin universality to obtain $\gg T$ zeroes in any strip $1/2<\sigma_1<\Re(s)<\sigma_2 < 1$, $-T < \Im(s)< T.$ These results follows from the analytic properties of the Rankin-Selberg zeta-function (which gives "independence results" for Hecke eigenvalues of primes attached to different cusp forms). You should be able to find results of this kind in Steuding's SLN "Value Distribution of L-functions". Also the results of Davenport-Heilbronn for the Hurwitz zeta-function can be proved, i.e. For any $\epsilon>0$ there exists $\gg T$ zeroes with $1<\Re(s)<1+\epsilon$ and $-T < \Im(s) < T$. Stronger results corresponding to results of Karatsuba, Bombieri, Hejhal and Selberg for Dirichlet L-function that holds close to the critical line likely also holds. I think the russian school (Irina Rezvyakova) has proved results in this direction. - Am I right that you only give an argument for RH is not true for that kind of function? This is not the question. –  Marc Palm Jan 14 '13 at 15:09 In the first section I state that. In the second section I mention the analogue of the Davenport-Heilbronn theorem that there are zeroes for Re(s)>1. It should follow in a similar way as the case of the Hurwitz zeta-function for rational parameter. –  Johan Andersson Jan 14 '13 at 15:13 +1, ah okay thanks. –  Marc Palm Jan 14 '13 at 15:14 Perhaps I don't understand the question, but in its current form the answer is no. A general modular form of fixed weight will be a linear combination of Hecke eigenforms of that weight. The Gamma factors will imply that there are trivial zeros outside the critical stripe. But I guess you have an L-function with symmetric functional equation in mind? The Mellin transform is a linear operator, so the result is a linear combination of Hecke L functions times a Gamma factors. Assuming that you find some combination non-vanishing out the complement of closure of the critical stripe via some estimates I don't know, you will need a good argument for non-vanishing on the boundary of the critical stripe, where the function is expected to be universal. What is the motivation for the question? - I was thinking if the following may be true: if L is the Mellin transform of a modular form , and L has all its zeros in the critical strip, then the modular form is an eigenform. –  user30637 Jan 14 '13 at 16:51 Okay, that it was Johan Andersson's answer seems to imply. –  Marc Palm Jan 14 '13 at 17:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9397780299186707, "perplexity": 227.90797978363398}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298576.76/warc/CC-MAIN-20150323172138-00134-ip-10-168-14-71.ec2.internal.warc.gz"}
https://admin.clutchprep.com/chemistry/practice-problems/110478/if-you-don-t-think-cobalt-would-work-as-the-redox-active-partner-ion-in-the-sodi
# Problem: If you don't think cobalt would work as the redox-active partner ion in the sodium version of the electrode, suggest an alternative metal ion.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8910085558891296, "perplexity": 1980.2309423535808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881984.34/warc/CC-MAIN-20200703091148-20200703121148-00435.warc.gz"}
https://www.physicsforums.com/threads/chemistry-entropy-changes-in-physical-processes.300942/
# Chemistry - Entropy Changes in Physical Processes 1. ### ncm2 16 1. The problem statement, all variables and given/known data Indicate how entropy of the system changes for each of the physical processes shown below. Entropy increases, entropy decreases, entropy does not change. 1. Purification 2. Mixing 3. Freezing 4. Evaporation 5. Separation 2. Relevant equations Entropy increases when it goes from less to more disorder. Entropy decreases when it goes from more to less disorder. 3. The attempt at a solution Purification - many to one substance, so entropy decreases Mixing - many to one substance, so entropy decreases Freezing - more to less disorder, so entropy decreases Evaporation - less to more disorder, so entropy increases Separation - one to many substances, so entropy increases These are wrong, but I can't think of why. Help is much appreciated. ### Staff: Mentor Mixing - many to one substance... Do you mean that solution is ONE SUBSTANCE? What do you mean by one substance?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648981094360352, "perplexity": 3814.653048306344}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131302428.75/warc/CC-MAIN-20150323172142-00025-ip-10-168-14-71.ec2.internal.warc.gz"}
http://calculator.tutorcircle.com/x-intercept-calculator.html
# X Intercept Calculator The point where the graph of a line crosses the x-axis, i.e., where y = 0, is known as the x-intercept. The x-intercept is of the form (x, 0). Here, the standard equation of the line is in the form Ax + By = C. Plug in the equation of the line, and the x-intercept is determined by this online x-intercept calculator. ## Steps Step 1 : Check whether the equation of the line is in the standard form or not and note it down. Step 2 : To determine the x-intercept, put y = 0 and solve the equation. ## Problems Below are some of the problems based on the x-intercept. ### Solved Examples Question 1: Find the x-intercept of 3x + 4y = 12. Solution: Step 1 : Given : 3x + 4y = 12. Step 2 : To determine the x-intercept, put y = 0, 3x + 4(0) = 12 3x + 0 = 12 3x = 12 x = 4 Therefore, the x-intercept of the given equation is (4, 0). Question 2: Find the x-intercept of 4x + 9y = 36. Solution: Step 1 : Given : 4x + 9y = 36. Step 2 : To determine the x-intercept, put y = 0, 4x + 9(0) = 36 4x + 0 = 36 4x = 36 x = 9 Therefore, the x-intercept of the given equation is (9, 0).
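The two-step recipe above is easy to script; a minimal sketch (the function name is just for illustration):

```python
def x_intercept(A, B, C):
    # For the line Ax + By = C, set y = 0 and solve Ax = C.
    if A == 0:
        raise ValueError("Ax + By = C with A = 0 is horizontal and has no x-intercept")
    return (C / A, 0)

print(x_intercept(3, 4, 12))   # (4.0, 0), as in Question 1
print(x_intercept(4, 9, 36))   # (9.0, 0), as in Question 2
```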
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488908410072327, "perplexity": 637.0584188740532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887024.1/warc/CC-MAIN-20180117232418-20180118012418-00454.warc.gz"}
http://www.ck12.org/book/CK-12-Foundation-and-Leadership-Public-Schools%2C-College-Access-Reader%3A-Geometry/r1/section/3.8/
3.8: Pythagorean Theorem, Part 3: Converse of the Pythagorean Theorem Created by: CK-12 Learning Objectives • Understand the converse of the Pythagorean Theorem. • Identify acute triangles from side measures. • Identify obtuse triangles from side measures. • Classify triangles in a number of different ways. Converse of the Pythagorean Theorem Could you use the Pythagorean Theorem to prove that a triangle contained a right angle if you did not have an accurate diagram? You have learned about the Pythagorean Theorem and how it can be used. As you recall, it states that the sum of the squares of the legs of any right triangle will equal the square of the hypotenuse. If the lengths of the legs are labeled $a$ and $b$, and the hypotenuse is $c$, then we get the familiar equation: $a^2+b^2=c^2$ The converse of the Pythagorean Theorem is also true. What is a converse? A converse is an if-then statement where the hypothesis and the conclusion are switched. For example, if we start with the if-then statement: If it is raining, then the street is wet. The converse of our statement is: If the street is wet, then it is raining. The part of the sentence after the “if” (called the hypothesis) switches places with the part of the sentence after “then” (called the conclusion.) You may remember these words from the previous chapters. Statements and their converses are not always both true! In the case of the Pythagorean Theorem, BOTH the Theorem and its converse are true. A converse is an if-then statement where the hypothesis and the __________________________ switch places. Converse of the Pythagorean Theorem Given a triangle with side lengths $a, b$ and $c$ (where $c$ is the longest side), if the equation $a^2 + b^2 = c^2$ is true, then the triangle is a right triangle. With this converse, you can use the Pythagorean Theorem to prove that a triangle is a right triangle, even if you do not know any of the triangle’s angle measurements. This means that if you know the three side lengths of a triangle, you can substitute them into the equation $a^2 + b^2 = c^2.$ • If you find a true statement (such as $100 = 100$), then the Pythagorean Theorem works in that case; your triangle is a right triangle. • If you find a false statement (such as $91 = 100$), then, in that case, the Pythagorean Theorem does not work; your triangle is not a right triangle. If $a^2 + b^2 = c^2$ produces a true statement, then the triangle is a ___________ triangle. If $a^2 + b^2 = c^2$ produces a false statement, then the triangle is ___________ a right triangle. Example 1 Does the triangle below contain a right angle? This triangle does not have any right angle marks or measured angles, so you cannot assume you know whether the triangle is acute, right, or obtuse just by looking at it. To see if the triangle might be right, try substituting the side lengths into the Pythagorean Theorem to see if they make the equation true. The hypotenuse is always the longest side, so 17 should be substituted for $c.$ The other two values can represent $a$ and $b$ and the order is not important. $a^2 + b^2 &= c^2\\8^2 + 15^2 &= 17^2\\64 + 225 &= 289\\289 &= 289$ Since both sides of the equation are equal $(289 = 289)$, these values satisfy the Pythagorean Theorem. Therefore, the triangle described in the problem is a right triangle! 1. 
Write the converse of this if-then statement: If it is sunny outside, then the weather must be warm. Converse: ${\;}$ ${\;}$ 2. BONUS! Write a true if-then statement that also has a true converse: Statement: ${\;}$ ${\;}$ Converse: ${\;}$ ${\;}$ 3. A triangle has side lengths 5, 7, and 9. Is this a right triangle? Show your work to defend your answer. ${\;}$ ${\;}$ ${\;}$ Identifying Acute Triangles If the sum of the squares of the two shorter sides of a triangle is greater than the square of the longest side, then the triangle is acute (all angles in the triangle are less than $90^\circ$.) If $a^2 + b^2 > c^2$ then the triangle is acute. You can use this rule the same way you used the Pythagorean Theorem on the last page. Substitute the side lengths for $a, b,$ and $c$ making sure that the longest side is $c$. After you have simplified both sides of the equation, compare your answers: which side is a larger number? If the $a^2 + b^2$ side is bigger than the $c^2$ side, then the triangle is acute. • If $a^2 + b^2$ is larger than $c^2$, then the triangle is ______________________. Example 2 Is the triangle below acute or right? The two shorter sides of the triangle are 8 and 13. The longest side of the triangle is 15. Since the legs are the shorter sides, first find the sum of the squares of the two shorter legs by substituting the smaller numbers for $a$ and $b$: $8^2+13^2 & = c^2\\64 + 169 &= c^2\\233 &= c^2$ The sum of the squares of the two shorter legs is 233. Compare this to the square of the longest side, 15. $15^2=225$ The square of the longest side is _____________. Since $8^2 + 13^2 = 233$ and $233 \neq 225 = 15^2$, this triangle is not a right triangle. • $a^2 + b^2 \neq c^2$ so the triangle cannot be a ____________________ triangle. Compare the two values to identify which is greater. $233 >225$ The sum of the squares of the shorter sides $(a^2 + b^2)$ is greater than the square of the longest side $(c^2)$. Therefore, this is an acute triangle. • Because $c^2$ is smaller than $a^2 + b^2$, this is an ____________________ triangle. 1. Fill in the blanks: When the square of the __________________________ side is less than the sum of the squares of the _________________________ sides, the triangle is an acute triangle. 2. A triangle has side lengths 4, 7, and 8. Is this triangle acute or right? Show your work to defend your answer. ${\;}$ ${\;}$ ${\;}$ Identifying Obtuse Triangles You can prove a triangle is obtuse (meaning it has one angle larger than $90^\circ$) by using a similar method. Find the sum of the squares of the two shorter sides in a triangle. If this value is less than the square of the longest side, the triangle is obtuse. If $a^2 + b^2 < c^2$, then the triangle is obtuse. • Obtuse triangles have one angle _____________________________ that is than $90^\circ$. • If $a^2 + b^2$ is smaller than $c^2$, then the triangle is ______________________. Example 3 Is the triangle below obtuse or right? You can solve this problem in a manner almost identical to Example 2. The two shorter sides of the triangle are 5 and 6. The longest side of the triangle is 10. First find the sum of the squares of the two shorter legs by substituting the smaller numbers for $a$ and $b$. $a^2+ b^2 &= 5^2+ 6^2\\&= 25+36\\&= 61$ The sum of the squares of the two shorter legs is 61. Compare this to the square of the longest side, 10. $10^2=100$ The square of the longest side is 100. Since $5^2 + 6^2 = 61$ and $61 \neq 100 = 10^2$, this triangle is not a right triangle. 
Compare the two values to identify which is greater. $61 & < 100\\\text{(sum of shorter sides)}^2 & < \text{(longest side)}^2$ Since the sum of the square of the shorter sides $(a^2 + b^2)$ is less than the square of the longest side $(c^2)$, this is an obtuse triangle. • Because $c^2$ is larger than $a^2 + b^2$, this is an ____________________ triangle. 1. Fill in the blanks: When the square of the _______________________ side is greater than the sum of the squares of the ________________________ sides, the triangle is an obtuse triangle. 2. True or false: When the square of the longest side equals the sum of the squares of the shorter sides, the triangle is a right triangle. ${\;}$ 3. A triangle has side lengths 5, 8, and 10. Is this triangle acute, obtuse, or right? Show your work to defend your answer. ${\;}$ ${\;}$ ${\;}$ ${\;}$ Triangle Classification Now that you know the ideas in this lesson, you can classify any triangle as right, acute, or obtuse given the length of the three sides. Be sure to use the longest side for the hypotenuse. Remember: • If $a^2+b^2 = c^2$, the figure is a right triangle. • If $a^2+b^2>c^2$, the figure is an acute triangle. • If $a^2+b^2<c^2$, the figure is an obtuse triangle. Example 4 Classify the triangle below as right, acute, or obtuse. The two shorter sides of the triangle are 9 and 11. The longest side of the triangle is 14. First find the sum of the squares of the two shorter legs by substituting the smaller numbers for $a$ and $b$. $a^2+ b^2 &= 9^2+ 11^2\\&= 81+121\\&=202$ The sum of the squares of the two shorter legs is 202. Compare this to the square of the longest side, 14. $14^2=196$ The square of the longest side is 196. So the two values are not equal ($202 \neq 196$ or $a^2 + b^2 \neq c^2$) and this triangle is not a right triangle. Since you can eliminate the right triangle from your choices, now you can compare the two values, $a^2 + b^2$ and $c^2$ to identify which is greater: $202 & > 196\\\text{(sum of shorter sides)}^2 & > \text{(longest side)}^2$ Since the sum of the square of the shorter sides is greater than the square of the longest side (in symbols $a^2 + b^2 > c^2$), this is an acute triangle. Example 5 Classify the triangle below as right, acute, or obtuse. The two shorter sides of the triangle are _____________ and ______________. The longest side of the triangle is ________________. First, set up an equation to find the sum of the squares of the two shorter legs by substituting the smaller numbers for $a$ and $b$. $a^2+ b^2 &= 16^2+ 30^2\\&= 256+900\\&= 1156$ The sum of the squares of the two legs is 1156. Compare this to the square of the longest side, 34. $c^2 = 34^2 =1156$ The square of the longest side is also 1156. Since the two values you found are equal $( a^2 + b^2 = c^2 )$, this is a right triangle. • In this example, the Pythagorean Theorem is ___________________. • When $a^2 + b^2 = c^2$, you have a ____________________ triangle! 1. Fill in the blanks: In an acute triangle, the sum of the squares of the shorter sides is _____________________ (greater than / less than / equal to) the square of the longest side. In a right triangle, the sum of the squares of the shorter sides is _______________________ (greater than / less than / equal to) the square of the longest side. In an obtuse triangle, the sum of the squares of the shorter sides is _____________________ (greater than / less than / equal to) the square of the longest side. 2. A triangle has side lengths 8, 9, and 13. 
Classify the triangle as right, acute, or obtuse. Show your work to defend your answer. ${\;}$ ${\;}$ ${\;}$ ${\;}$ Graphic Organizer for Lesson 6 Triangle Classification and the Pythagorean Theorem Type of Triangle Draw a picture How can you use the Pythagorean Theorem to compare the sides? Give an example of 3 side lengths for this triangle and show work to prove your classification Right $a^2 + b^2 = c^2$ Acute Obtuse 8 , 9 , 10
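The three comparisons in this lesson amount to one small routine. Here is a sketch that reproduces the worked examples; the function name and the degenerate-triangle check are additions, not part of the lesson.

```python
def classify(a, b, c):
    # Sort so that z is the longest side, then compare z^2 with x^2 + y^2.
    x, y, z = sorted((a, b, c))
    if x + y <= z:
        return "not a triangle"
    lhs, rhs = x*x + y*y, z*z
    if lhs == rhs:
        return "right"
    return "acute" if lhs > rhs else "obtuse"

print(classify(8, 15, 17))    # right  (Example 1)
print(classify(8, 13, 15))    # acute  (Example 2)
print(classify(5, 6, 10))     # obtuse (Example 3)
print(classify(9, 11, 14))    # acute  (Example 4)
print(classify(16, 30, 34))   # right  (Example 5)
```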
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 96, "texerror": 0, "math_score": 0.8088757991790771, "perplexity": 382.5838135746731}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118059355.87/warc/CC-MAIN-20150124164739-00123-ip-10-180-212-252.ec2.internal.warc.gz"}
http://claesjohnson.blogspot.com/2013/02/ir-photons-as-optical-photons-as-waves.html
## tisdag 26 februari 2013 ### IR-Photons as Optical Phonons as Waves In climate science it is common to view radiative heat transfer as a two-way flow of IR-photons particles carrying lumps of energy back and forth between e.g. the Earth surface and the atmosphere. This view lacks physics rationale because it includes heat transfer by IR-photons not only from warm to cold, but also form cold to warm in violation of the 2nd Law of Thermodynamics. The usual way to handle this contradiction is to say that the net transfer is from warm to cold, and so there is no violation of the 2nd Law. But this requires the two-way transfer to be connected which is in conflict with an idea  independent two-way transfer. On Computational Blackbody Radiation I present a model of radiative heat transfer which is based on a wave equation for a collection of oscillators with small damping subject to periodic forcing solved by finite precision computation. Fourier analysis show that the oscillators in resonance take on a periodic motion which is out-of-phase with the forcing, which connects to optical phonons as wave motion in an elastic lattice with large amplitude (as compared to acoustical phonons with smaller amplitude). Optical phonons typically occur in a lattice composed of two atoms of different mass, one big and one small, which connects to the radiation wave model with small damping. We thus find reason to view IR-photons as a wave phenomenon similar optical phonons, rather than as "particles". The radiation wave model includes two-way propagation of waves but only one way transfer of heat energy as an effect of cut-off of high frequencies due to finite precision computation. #### 6 kommentarer: 1. Do not trust laws of Thermodynamics, as they approximate only half the dynamics of experience. A closed system is a very special case, and thermo is related directly to radiative phenomenon which implies Electrodynamics. However, since electrodynamics ignores half it's phenomenological base, I guess we have to have non symmetrical rules like Thermodynamics 2. Ref your statement: "But this requires the two-way transfer to be connected which is in conflict with an idea independent two-way transfer." Sorry, but I cannot make any sense of this argument, and I do not find anything like this at any other scientific source. So can you please explain what you really mean to say. Why should an intercange of energy be impossible, that included a two-way transfer? As a matter of fact, common sense as well as other scientists tell us that a two-way transfer and interchange of energy can take place. Scientists have measured data of downwelling infrared radiation from CO2, water vapor, and clouds, which clearly impinge on the Earth's surface. So how is it possible to deny this? 3. Two-way independent transfer of heat between warm and cold, without external forcing, violates the 2nd law of thermodynamics by involving transfer of heat from cold to warm. The instruments supposedly measuring DLR are fake instruments reporting an illusion of DLR from cold atmosphere to warm Earth surface, while in fact only measuring net transfer of heat from warm to cold. By a fake instrument you can measure anything you want, e.g. the aura around your body. 1. Thank you so much for a prompt answer. This is very interesting! But I am still in doubt about this. 
You are saying: "Two-way independent transfer of heat between warm and cold, without external forcing, violates the 2nd law of thermodynamics by involving transfer of heat from cold to warm." But I am still wondering what is your definition of a "two-way independent transfer of heat"? And whether the energy systems we are talking about really are without external forcing? As far as I can see, neither the surface of Earth nor its atmosphere are independent energy transmitters, and without external forcings. The Earth surface is dependent upon a steady flow of energy from the sun, and the atmosphere is dependent upon re-emitted energy from the Earth. Thus the re-emission from the atmosphere, in all directions - also back to the Earth surface - should not violate the 2nd law of thermodynamics. Of course, all things considered, the _net flow_ of energy will always go from warm to cold. But it may well be that if the 2nd law did not allow for "countercurrents", life would not at all be able to exist. 4. Yes, the Sun supplies the forcing to the Earth-atmosphere system, but independent transfer of heat from a cold atmosphere to a warm Earth surface violates the 2nd law. Two-way independent transfer of heat is fiction and serves no scientific purpose. Only net flow from warm to cold is a reality. Only an alarmist with the objective of fooling people to believe in global warming from nothing, could have use of two-way transfer including transfer of heat of the Earth surface from a cold atmosphere. 5. Hi :) The Earth is not a Black Body radiator and neither is our Sun or the Universe, The mis-interpretation of Kirchoffs Law led Max Planck to wrongly assume it held "universality" over ALL situations. There is no microwave background from the Universe, to have a thermal spectrum requires a lattice, Penzias and Wilson's discovery is actually from the Earths Oceans ... trying to unravel the dynamic action of our weather/warming/cooling system without a correct thermal profile of our Planet is never going to happen. Regards Allan
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8726496696472168, "perplexity": 800.022881971733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315544.11/warc/CC-MAIN-20190820133527-20190820155527-00356.warc.gz"}
https://www.physicsforums.com/threads/a-proof-on-quasiperfect-numbers.583861/
# A Proof on Quasiperfect Numbers 1. Mar 4, 2012 ### Joseph Fermat A quasiperfect number is any number for which the sum of its divisors is equal to one more than twice the number, that is, a number for which the following holds: σ(n)=2n+1 One of the well-known and most difficult questions in mathematics is whether such numbers exist at all. I have created a rather interesting proof to show that quasiperfect numbers do not exist. I use a process of transformation to create a situation necessary for the existence of a quasiperfect number, and then show that such a situation is impossible, thereby disproving the possibility of a quasiperfect number. (Attached: On the Nonexistence of Quasiperfect Numbers.pdf) 2. Mar 4, 2012 ### Norwegian Why not use the same argument with n=2x+1 to prove that odd numbers do not exist? 3. Mar 5, 2012 ### dodo Hi, Joseph, there is a problem when going from eq. 8 to eq. 9: $1 - (h(n) - 2)$ is not $-(h(n)+1)$ (which is negative), but $3 - h(n)$ (which is positive). 4. Mar 5, 2012 ### Joseph Fermat Which would mean that my proof is fallacious. Oh well, back to the drawing board. Anyone have any ideas where to go from here? Any help would be appreciated.
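As a sanity check on the definition σ(n) = 2n + 1 (not a substitute for a proof), a brute-force search over small n comes back empty, consistent with the fact that no quasiperfect number is known:

```python
def sigma(n):
    # Sum of all positive divisors of n, by trial division up to sqrt(n).
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def quasiperfect_up_to(limit):
    return [n for n in range(1, limit + 1) if sigma(n) == 2 * n + 1]

print(quasiperfect_up_to(100000))   # [] -- no quasiperfect numbers this small
```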
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9297548532485962, "perplexity": 967.8916980749887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867977.85/warc/CC-MAIN-20180527004958-20180527024958-00094.warc.gz"}
https://www.computer.org/csdl/trans/tc/1997/02/t0129-abs.html
Issue No. 02 - February (1997, vol. 46), pp. 129-138 ABSTRACT A reliable scientific computation approach, substantially different from the known ones, based on Residue Number System (RNS) floating-point arithmetic is described. In the approach, the real number is represented by an expression which consists of two parts, the approximate part and the interval error part. The approximate part, represented by an RNS floating-point number, shows an approximate value for the real number. The interval error value, represented by two RNS floating-point numbers, shows the left and the right limit of an interval containing the error. In parallel to the result of operation, the rounding error induced by that operation is determined and then summed up in each operation. When a series of operations is completed, the range of existence for the result can be determined from the result of the computation and the sum of interval errors. For the illustration of the proposed method, some examples are also given, which are said to be difficult to find exact solution in the usual floating-point calculation. INDEX TERMS Floating-point number, interval operation, precision, reliable computation, residue number system. CITATION Eisuke Kinoshita, Ki-Ja Lee, "A Residue Arithmetic Extension for Reliable Scientific Computation", IEEE Transactions on Computers, vol.46, no. 2, pp. 129-138, February 1997, doi:10.1109/12.565587
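The abstract presupposes familiarity with residue number systems. As background only (this is not the paper's floating-point extension, nor its interval error terms), here is a minimal sketch of plain integer RNS arithmetic via the Chinese remainder theorem, with arbitrarily chosen moduli:

```python
from math import prod

MODULI = (251, 253, 255, 256)   # pairwise coprime moduli, chosen arbitrarily

def to_rns(x):
    # Represent x by its residues modulo each modulus.
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    # Chinese remainder reconstruction, valid for 0 <= x < prod(MODULI).
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return x % M

a, b = 123456, 7890
# Addition and multiplication act componentwise on the residues.
add = tuple((p + q) % m for p, q, m in zip(to_rns(a), to_rns(b), MODULI))
mul = tuple((p * q) % m for p, q, m in zip(to_rns(a), to_rns(b), MODULI))
print(from_rns(add) == a + b, from_rns(mul) == a * b)   # True True
```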
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333343505859375, "perplexity": 843.7955263857427}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825124.55/warc/CC-MAIN-20160723071025-00285-ip-10-185-27-174.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/75825/a-fast-way-for-computing-prod-limits-i-1451-tan-i-circ
# A “fast” way for computing $\prod \limits_{i=1}^{45}(1+\tan i^\circ)$? Which is the fastest paper-pencil approach to compute the product $$\prod \limits_{i=1}^{45}(1+\tan i^\circ)$$ - Using $$1+\tan x = \frac{\sin x + \cos x}{\cos x} = \frac{\sqrt{2} \cos (45^{\circ} - x)}{\cos x},$$ the product can be written as: $$\prod_{x=1}^{45}(1+\tan x^\circ) = 2^{45/2} \prod_{x=1}^{45} \frac{\cos (45 - x)^{\circ}}{\cos x^{\circ}} \stackrel{(1)}{=} 2^{45/2} \cdot \frac{\prod\limits_{x=0}^{44} \cos x^{\circ}}{\prod\limits_{x=1}^{45} \cos x^{\circ}} \stackrel{(2)}{=} 2^{45/2} \cdot \frac{\cos 0}{\cos 45^{\circ}} = 2^{23},$$ where we 1. reindexed the product in the numerator, and 2. cancelled the common factors. Another approach. If $x+y = 45^{\circ}$, then $$1 = \tan(x+y) = \frac{\tan x + \tan y}{1 - \tan x \tan y},$$ which rearranges to $$\tan x \tan y + \tan x + \tan y = 1 \quad \implies \quad (1+\tan x)(1+\tan y) = 2.$$ Now plug in $x = 0^{\circ}, 1^{\circ}, 2^{\circ}, \ldots, 45^{\circ}$, so that $y$ takes the same values but in the opposite order. Multiplying all these equations, we get $$\left[ \prod_{x=0}^{45} (1+\tan x^\circ) \right]^2 = 2^{46}.$$ Taking square-roots and noting that $1+\tan 0^\circ = 1$, we get the answer. - I like the second answer even better. I wish I could vote up again! –  JavaMan Oct 25 '11 at 20:39 +1. Quite nice. –  Did Oct 25 '11 at 20:42 @DJC:Second answer is nice,but I guess not much of intuitive under exam constraints if you haven't did something very similar to this before. –  VelvetThunder Oct 25 '11 at 21:24 Amazing idea - Very neat! –  Emmad Kareem Oct 26 '11 at 7:46 Using this, $$(\cot A + \tan y)(\cot A+ \tan(A-y))=\csc^2A \text{ if } A\ne m\pi\text{ where }m\text{ is any integer}$$ Putting $A=45^\circ, (1 + \tan y)(1+ \tan(45^\circ-y))=\csc^245^\circ=2$ Now, putting $y=1^\circ,2^\circ,3^\circ,\cdots,\lfloor\frac{45}2\rfloor^\circ=22^\circ$ and multiplying them we get, $$\prod_{1\le r\le 22}(1+\tan r^\circ)(1+\tan(45-r)^\circ)=2^{22}$$ $$\implies \prod_{1\le r\le 44}(1+\tan r^\circ)=2^{22}$$ The unpaired $1+\tan45^\circ=1+1=2$ - Just tell a computer to calculate them. For example in R this runs almost instantly > prod(1+tan((1:45)*pi/180)) [1] 8388608 - This is actually quantitative aptitude question,which requires pencil-paper approach only. –  VelvetThunder Oct 25 '11 at 19:51 Perhaps you should have put that in the question at the start –  Henry Oct 25 '11 at 21:03 @Henry: It was kind of obvious. –  TonyK Oct 25 '11 at 21:09 No matter what others say, this is the fast way –  Norbert Jan 2 '12 at 17:34
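A quick numerical sanity check of the pairing identity behind the second approach, and of the final value $2^{23} = 8388608$ (the R one-liner in the last answer does the same thing):

```python
import math

deg = math.pi / 180
# (1 + tan x)(1 + tan(45 deg - x)) = 2 for every x in 0..45 degrees
for x in range(46):
    p = (1 + math.tan(x * deg)) * (1 + math.tan((45 - x) * deg))
    assert abs(p - 2) < 1e-12

prod = 1.0
for x in range(1, 46):
    prod *= 1 + math.tan(x * deg)
print(round(prod))   # 8388608 = 2**23
```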
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9009445905685425, "perplexity": 1296.16415899166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989043.35/warc/CC-MAIN-20150728002309-00132-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/122986/size-limited-oracles
# Size-limited oracles I am interested in complexity of algorithms which have access to the following peculiar sort of oracle: Suppose that an invocation of an algorithm f with an input of size n has access to an oracle for f which works only when given input of size n/2 or less. For definiteness, let's say that f(x) for x less than n/2 can be computed in constant time and that calling the oracle with x greater than n/2 halts -- I want to avoid algorithms which call the oracle unless they are certain that the input they provide is sufficiently small. Has anybody discussed such oracles? Obviously the effect is dramatic; for instance, if writing an algorithm to sum a sequence of numbers, access to such an oracle makes the entire computation constant time, by breaking the list into two sublists. I'm interested in the effect on more complicated problems, though, which may or may not decompose so easily. - I suggest adding the tag computability-theory, since your question belongs to that subject more than complexity theory. You might also add lo.logic. –  Joel David Hamkins Feb 26 '13 at 18:00 @Joel: Does it? I wouldn’t know about computability theory, but this concept has been certainly studied a lot in complexity. –  Emil Jeřábek Feb 28 '13 at 15:45 Emil, I think we have different concepts of the question, and I agree with you for your version. And the OP evidently also agrees with your version... –  Joel David Hamkins Feb 28 '13 at 17:13 Computational problems that can be efficiently (i.e., polynomial-time) computed from solutions of the problem on shorter instances are known as (downward) self-reducible. A classical example is SAT: given a CNF $\phi$ in variables $x_0,\dots,x_n$, let $\phi_0$ and $\phi_1$ be the CNFs in variables $x_0,\dots,x_{n-1}$ obtained by setting $x_n$ to 0 or 1 (respectively), and simplifying the formula accordingly. Then $\phi$ is satisfiable iff $\phi_0$ or $\phi_1$ is satisfiable. For a discussion of the self-reducibility phenomenon and pointers to the literature, see e.g. ftp://ftp.cs.rutgers.edu/cs/pub/allender/cie.plenary.pdf or http://www.thi.uni-hannover.de/fileadmin/forschung/arbeiten/selke-ma.pdf . Emil, you have a different conception than what I had taken the OP to describe. It seemed to me that he was trying to describe a machine equipped with an oracle $A$, but on input $x$ the only access to the oracle allowed was up to $|x|/2$. This is not the same as your situation, since there is no reason on his model to expect that $A$ itself is decidable by such machines. Indeed, if $A$ is not computable, then it couldn't be $A$-decidable in this way, since otherwise we could recursively compute all values of $A$ by reference to smaller values. –  Joel David Hamkins Feb 28 '13 at 17:05 Although I also find your concept interesting, the class of functions you get by this notion of computability will not be closed under composition. To see this, suppose that we have a noncomputable oracle $A$ consisting only of even natural numbers. The characteristic function of $A$ will not be $A$-computable using your notion, since if we could generate all information about $A$ from earlier information about $A$, then $A$ would be computable, contrary to assumption. But the function $n\mapsto 2n$ is computable, and the function $2n\mapsto 1$ if $n\in A$, otherwise $0$, is $A$-computable under your concept (if I have understood it correctly), but the composition of these functions would decide $A$.
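A toy sketch of the downward self-reducibility of SAT mentioned in the answer: $\phi$ over $x_1,\dots,x_n$ is satisfiable iff $\phi[x_n:=0]$ or $\phi[x_n:=1]$ is, and the recursive calls below stand in for the hypothetical oracle on shorter instances. Clauses are lists of nonzero integers, positive for a variable and negative for its negation; the function names are illustrative only.

```python
def assign(cnf, var, value):
    # Substitute x_var = value and simplify; returns None if a clause
    # becomes empty (that branch is unsatisfiable).
    lit = var if value else -var
    out = []
    for clause in cnf:
        if lit in clause:
            continue                      # clause already satisfied
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None
        out.append(reduced)
    return out

def sat(cnf, n):
    # Downward self-reduction: ask the "oracle" about the two instances
    # on variables x_1 .. x_{n-1}.
    if cnf is None:
        return False
    if not cnf:
        return True
    if n == 0:
        return False
    return sat(assign(cnf, n, False), n - 1) or sat(assign(cnf, n, True), n - 1)

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(sat([[1, 2], [-1, 2], [-2, 3]], 3))   # True
```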
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686071634292603, "perplexity": 258.1543143233093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929230.43/warc/CC-MAIN-20150521113209-00201-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/physics-lift-on-a-plane.9366/
# Physics lift on a plane 1. Nov 21, 2003 ### tonyp910 hey guys i just found this site and its very helpful. so if u guys can help. it will be greatly appreciated i have this problem to solve but i stuck on solveing for the height of the airplane to find the force. so if anyone can assist me it will be great. thanks Lift on an Airplane. Air streams horizontally past a small airplane's wings such that the speed is 75. over the top surface and 58.5 past the bottom surface. If the plane has mass 1250 and a wing area of 14.9 , what is the net vertical force (including the effects of gravity) on the airplane? The density of the air is 1.2 . Take the free fall acceleration to be 9.8 . find force in N. 2. Nov 21, 2003 ### Staff: Mentor What I suspect is that they want you to apply Bernoulli's equation to the airflow across top and bottom to find the pressure difference. That pressure difference will produce a net force (of the air on the plane, ignoring drag) acting upward. The weight of the plane acts downward. (Whether you can really justify using Bernoulli in this case is another story. Just do the exercise!) 3. Nov 26, 2003 ### masterofphysics Lift on an Airplane this how i did it solve for delta p=1/2(rho)(velocityoftop)^2 - 1/2(rho)(velocityofbottom)^2 then Lift = delta p*cross sectional area force of lift = lift - (m*g) <-mass of plane too easy [zz)]
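Following the recipe in the last reply, here is a quick numeric sketch of mine (not from the thread). The units were stripped from the problem statement above, so SI units (m/s, kg, m², kg/m³, m/s²) are assumed.

```python
# Hedged worked example of the Bernoulli recipe above; the units are assumed, not given.
rho = 1.2                      # air density, kg/m^3
v_top, v_bottom = 75.0, 58.5   # airspeed over the top / bottom wing surface, m/s
area = 14.9                    # wing area, m^2
mass, g = 1250.0, 9.8          # aircraft mass (kg) and free-fall acceleration (m/s^2)

delta_p = 0.5 * rho * (v_top**2 - v_bottom**2)   # pressure difference across the wing, Pa
lift = delta_p * area                            # upward force from that pressure difference, N
net = lift - mass * g                            # net vertical force including gravity, N
print(f"delta_p = {delta_p:.1f} Pa, lift = {lift:.0f} N, net = {net:.0f} N (positive = upward)")
# roughly: delta_p ~ 1.32e3 Pa, lift ~ 1.97e4 N, net ~ +7.4e3 N
```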
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8144171237945557, "perplexity": 1289.6295807609827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807056.67/warc/CC-MAIN-20171124012912-20171124032912-00021.warc.gz"}
https://wlou.blog/2018/06/25/semisimplicity-and-representations-part-1/
# Semisimplicity and Representations, Part 1 This post is the third one in a series on representation theory. The previous posts are this one and that one (in this order.) The nature of this post is mostly ring-theoretic, but we will give applications to representation theory throughout the development of the general theory. ## Semisimple Modules Under suitable assumptions on $G$ and $K$, Maschke’s theorem (1.22) tells us that any submodule of a $K[G]$-module is a direct summand, i.e. we can find a complement. One can try to apply this repeatedly to decompose a $K[G]$-module into smaller submodules. If the dimension is finite, then at some point we have to end up with a direct sum of modules that don’t have a non-zero proper submodule. This is because if one direct summand had a non-zero proper submodule, we could just decompose it further by Maschke’s theorem. The assumption of finite dimension implies that this process has to terminate, as the dimension of the summands decreases every time we decompose something. This motivates the following definition to give a name to the modules we obtained as summands in the end: Definition 3.1 A non-zero module over a ring is called simple if it doesn’t have a proper non-zero submodule. Example 3.2 If $K$ is a field, or more generally a division ring, then a vector space over $K$ is simple iff it is one-dimensional. Example 3.3 If we consider modules over $\mathbb{Z}$, i.e. abelian groups, then simple modules are just simple abelian groups. It’s known that simple abelian groups are the groups that are cyclic of prime order: $\Bbb{Z}/p\Bbb{Z}$. Example 3.4 If $K$ is a field, and we consider $K[X]$-modules, i.e. $K$-vector spaces equipped with a choice of endomorphism $A$ (cf. the first section of the last entry), then a module is simple iff it doesn’t have a non-zero proper $A$-invariant subspace. One can show that this is equivalent to being isomorphic to $K[X]/(f)$ for some irreducible $f \in K[X]$. In particular, if $K$ is algebraically closed, then simple $K[X]$-modules are precisely the one-dimensional ones. (Where the endomorphism necessarily acts by scalar multiplication.) This means that over an algebraically closed field, an endomorphism of a finite-dimensional vector space is diagonalizable if and only if the associated $K[X]$-module is a direct sum of simple modules. (And in general, the associated $K[X]$-module is a direct sum of simple modules if and only if the endomorphism is diagonalizable over an algebraic closure.) We will encounter the condition of being a direct sum of simple modules later in this post. Generalizing the last two examples, we have the following result: Lemma 3.5 If $R$ is any ring and $\mathfrak{m}$ is a maximal left ideal, then $R/\mathfrak{m}$ is a simple module. Conversely, every simple module is of that form. Proof If $\mathfrak{m}$ is a maximal left ideal, then submodules of $R/\mathfrak{m}$ correspond to submodules of $R$ containing $\mathfrak{m}$, so $R/\mathfrak{m}$ is simple by definition of $\mathfrak{m}$ being maximal. Conversely, if $M$ is a simple module and $m \in M$ is nonzero, then $Rm$ is a non-zero submodule of $M$, so $M=Rm$. This means the map $R \to M, r \mapsto rm$ is surjective, so we get an isomorphism $R/I \cong M$ for some proper ideal $I$. If $I$ is not maximal, then there’s a proper submodule containing $I$, which corresponds to a proper non-zero submodule of $M$, which is impossible.
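As a concrete companion to Example 3.4 (my own illustration, not part of the original post), one can let a matrix play the role of the endomorphism $A$ and check diagonalizability over the complex numbers with sympy; being diagonalizable over an algebraic closure is exactly the “direct sum of simple modules” condition discussed above.

```python
# Illustration of Example 3.4: a K[X]-module is a K-vector space together with an
# endomorphism, and it is a direct sum of simple modules iff the endomorphism is
# diagonalizable over an algebraic closure.
from sympy import Matrix

rotation = Matrix([[0, -1], [1, 0]])   # X acts as a 90-degree rotation; minimal polynomial X^2 + 1
jordan = Matrix([[1, 1], [0, 1]])      # X acts as a Jordan block; minimal polynomial (X - 1)^2

# Over C the rotation is diagonalizable (eigenvalues i and -i), so the corresponding
# Q[X]-module is semisimple; it is even simple over Q, since X^2 + 1 is irreducible.
print(rotation.is_diagonalizable())    # True (complex eigenvalues are allowed by default)
# The Jordan block is not diagonalizable, so its module is not a sum of simple modules.
print(jordan.is_diagonalizable())      # False
```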
Example/Definition 3.6 If $K$ is a field and $G$ is a group, then similar to example 3.4, simple $K[G]$-modules are representations with no non-zero proper $G$-invariant subspace. These are called irreducible representations. Example 3.7 The representations of a cyclic group of order $n$ corresponding to irreducible factors of $X^n-1$, which we have constructed in 2.6, are irreducible. The reason is that they’re irreducible $K[X]$-modules, where the action of $X$ corresponds to the action of a generator of the group. (cf. 3.4 and the proof of 2.6) We have seen in 3.5 that simple modules are generated by one element; let’s give this property a label (generalizing the notion of cyclic groups): Definition 3.8 Modules that are generated by a single element are called cyclic modules. By our considerations in the beginning of the section, we see that when Maschke’s theorem applies and we have a finite-dimensional representation, it decomposes as a direct sum of irreducible subrepresentations. The purpose of the following lemmas is to generalize this. (Because we work without any finiteness conditions, we will need some form of the axiom of choice. If one is only interested in modules that satisfy some finiteness condition (e.g. finite-dimensional modules for an algebra over a field), then the dependence on choice can be eliminated and the arguments are much easier.) Definition 3.9 A module $M$ over a ring is called semisimple if every submodule $N \leq M$ has a complement, i.e. there exists a submodule $N' \leq M$ such that $M=N\oplus N'$ Lemma 3.10 Submodules and quotients of a semisimple module are semisimple. Proof If $M$ is semisimple and $M/N$ is a quotient, then for a submodule $\overline{K} \leq M/N$, we can take the preimage under the projection $M \to M/N$ to get a submodule $K \leq M$ that projects onto $\overline{K}$. Then the image under the projection of a complement of $K$ will be a complement for $\overline{K}$. If $N \leq M$ is a submodule, then we can find a complement $N'$, but then $M/N' \cong N$ so that $N$ is a quotient of $M$, so the previous case applies. We will need the following result for an important property of semisimple modules. Most readers will probably be familiar with this, at least in the commutative case: Lemma 3.11 Let $R$ be a ring. Then every proper left ideal is contained in a maximal left ideal. Proof Let $I$ be a proper left ideal and let $\mathcal{P}$ be the set of all proper left ideals containing $I$. If we have an ascending chain $(I_{i})_{i \in \Omega}$, where $I_i \in \mathcal{P}$, then $\displaystyle \cup_{i \in \Omega} I_i$ is an upper bound. This is a proper ideal, because if it wasn’t, some $I_i$ would contain $1$, which is impossible. So Zorn’s lemma applies and we get a maximal element in $\mathcal{P}$ Corollary 3.12 Every non-zero cyclic module contains a maximal submodule Proof Any non-zero cyclic module is of the form $R/I$ where $I$ is a proper left ideal. Now apply 3.11 to $I$. We get a maximal left ideal $\mathfrak{m}$ containing $I$. Then $\mathfrak{m}/I$ is a maximal submodule of $R/I$. Lemma 3.13 Any non-zero semisimple module contains a simple submodule. Proof Let $M$ be a semisimple module over a ring $R$. As by 3.10, submodules of semisimple modules are semisimple, it suffices to treat the case where $M$ is cyclic. In that case, $M$ contains a maximal submodule $N \leq M$ by 3.12. As $M$ is semisimple, we can find a submodule $S \leq M$ such that $M = N \oplus S$. 
If $S$ is not simple, then there is a non-zero proper submodule $S' \subsetneq S$, but then $N \subsetneq N \oplus S' \subsetneq N \oplus S = M$, which contradicts the maximality of $N$. We now come to the main result on semisimple modules, the proof is a little technical. The most important part of the statement for us is the implication (1)=>(2) (cf. 3.16), but we give the full result for completeness. Proposition 3.14 For a module $M$, the following statements are equivalent: 1. $M$ is semisimple 2. $M$ is a sum of simple submodules 3. $M$ is a direct sum of simple submodules Proof (1.) implies (2.): Let $M$ be semisimple and let $\mathrm{soc}(M)$ be the sum of all simple submodules, then as $M$ is semisimple, we get that $M=\mathrm{soc}(M) \oplus N$ for some $N \leq M$. If $N$ is non-zero, we get that $N$ contains a simple submodule by 3.10 and 3.13, but this contradicts the definition of $\mathrm{soc}(M)$ and the fact that $\mathrm{soc}(M) \cap N = 0$. (2.) implies (3.): Suppose $M = \sum_{i \in I} M_i$ where all $M_i$ are simple. Consider the set of subsets $J \leq I$ such that $\sum_{i \in J}M_i = \bigoplus_{i \in J}M_i$. This is partially ordered by inclusion and the usual Zorn’s lemma argument works (just take unions of chains as upper bounds) so that we get a maximal element $J_{\omega}$. Suppose that $\bigoplus_{i \in J_\omega} M_i = \sum_{i \in J_\omega} M_i \subsetneq \sum_{i \in I} M_i = M$, then for some $i_0 \in I$, we get that $M_{i_0} \not \subset \bigoplus_{i \in J_\omega} M_i$, which implies that $M_{i_0} \cap \oplus_{i \in J_\omega} M_i = 0$, since $M_{i_0}$ is simple and that intersection is a proper submodule. But then we get $M_{i_0} + \oplus_{i \in J_\omega} M_i = M_{i_0} \oplus \oplus_{i \in J_\omega} M_i$ which contradicts the maximality of $J_\omega$. (3.) implies (1.): Let $M = \bigoplus_{i\in I} M_i$ with all $M_i$ simple. Let $N \leq M$ be a submodule. We may assume that $N$ is a proper submodule. Consider the set of subsets $J \subset I$ such that $N \cap \bigoplus_{i \in J} M_i = 0$. This is non-empty, as $N$ is a proper submodule, it doesn’t contain some $M_i$, but then $N \cap M_i = 0$, as $M_i$ is simple. Now we apply (surprisingly!) Zorn’s lemma to this set, partially ordered by inclusion by taking unions as upper bounds for chains. Let $J_\omega$ be a maximal element. Then consider $N+\bigoplus_{i \in J_\omega} M_i = N \oplus \bigoplus_{i \in J_\omega} M_i$. If this is a proper submodule of $M$, then it must have zero intersection with some $M_{i_o}$ for $i_0 \in I$. It follows that $N \cap M_{i_0} = 0$, $i_0 \not \in J_\omega$ and $M_{i_0} \cap M_i = 0$ for all $i \in J_\omega$, thus $M_{i_0} + \bigoplus_{i \in J_\omega}M_i= M_{i_0} \oplus \bigoplus_{i \in J_\omega} M_i$, so that by maximality of $J_\omega$, we get $N \cap (M_{i_0} \oplus \bigoplus_{i \in J_\omega} M_i) \neq 0$, so we can choose $n$ non-zero in that intersection. Write $n=m+m'$ for some $m \in M_{i_0}$ and $m' \in \bigoplus_{i \in J_\omega} M_i$ Then $m=n-m'$ is contained in $M_{i_0} \cap (N \oplus \bigoplus_{i \in J_\omega} M_i)$ which is zero by the choice of $i_0$. Thus $n=m'$ is a non-zero element of $N \cap \bigoplus_{i \in J_\omega} M_i = 0$ which is impossible, thus $M=N \oplus \bigoplus_{i \in J_\omega} M_i$. Note that the proof for the implication from (2) to (3) actually shows that if a module is a sum of simple submodules, one can find a subset of the index set such that the sum is direct and still gives the whole module. 
Corollary 3.15 Direct sums of semisimple modules are semisimple. Proof Use the equivalence between (1) and (3) in 3.14 Corollary 3.16 If $G$ is a finite group and $K$ is a field such that the characteristic of $K$ doesn’t divide the order of $G$, then any representation of $G$ over $K$ is a direct sum of irreducible subrepresentations. Proof Follows from 1.22 and 3.14 We have already seen an instance of this phenomenon in our study of cyclic groups in the semisimple case. (cf. 2.6 and 3.7) Let’s first record an observation, so that the statements we’re about to prove make sense: Lemma 3.17 Let $R$ be any ring and let $M$ and $N$ be modules over $R$, then $\mathrm{End}_R(M)$ and $\mathrm{End}_R(N)$ are rings and if $R$ is a $K$-algebra, they are also $K$-algebras. $\mathrm{Hom}_R(M,N)$ is a left module over $\mathrm{End}_R(M)$ and a right module over $\mathrm{End}_R(N)$ and these actions are compatible, i.e. $\mathrm{Hom}_R(M,N)$ is a $(\mathrm{End}_R(M),\mathrm{End}_R(N))$-bimodule. Proof The statement might look complicated, but all we’re doing here is just composing maps: $\mathrm{End}_R(M)$ is a ring (or $K$-algebra) under composition of maps and the module structures on $\mathrm{Hom}_R(M,N)$ are given by composing with endomorphisms from the left or the right. All properties we need follow from properties of composing linear maps Lemma 3.18 (Schur) Let $M$ and $N$ be modules over a ring $R$ and let $f:M \to N$ be a linear map, then: 1. If $M$ is simple, then $f$ is either zero or injective. 2. If $N$ is simple, then $f$ is either zero or surjective. 3. If both $M$ and $N$ are simple, then $f$ is either zero or an isomorphism. 4. If $M$ is simple, then $\mathrm{End}_R(M)$ is a division ring. (cf. 3.17) Proof (1): As $M$ is simple, $\mathrm{ker}(f)$ is either $M$ or $0$. (2): As $N$ is simple, $\mathrm{im}(f)$ is either $N$ or $0$. (3): Follows from (1) and (2). (4): Follows from (3). Despite the easy proof, Schur’s lemma is quite useful and will be a constant companion while dealing with simple modules. We give a first application. Lemma 3.19 Let $M$ be a semisimple module that is a finite direct sum of simple submodules, write $M \cong \bigoplus_{i=1}^n M_i^{e_i}$ where the $M_i$ are pairwise non-isomorphic. Then for every $i$, set $D_i=\mathrm{End}_R(M_i)$. Then we have an equality $e_i = \mathrm{dim}_{D_i}(\mathrm{Hom}_R(M_i,M))= \mathrm{dim}_{D_i}(\mathrm{Hom}_R(M,M_i))$. In particular, the exponent $e_i$ is independent of the decomposition, so the decomposition is unique up to isomorphism and permutation of the factors. Proof  $\mathrm{Hom}_R (M_i,M)$ $=\mathrm{Hom}_R(M_i,\bigoplus_{j=1}^n M_j^{e_j})$ $\cong \bigoplus_{j=1}^n \mathrm{Hom}_R(M_i, M_j)^{e_j}$ Note that this isomorphism is $D_i$-linear, because the action of $D_i$ is given by composition in the first argument. Schur’s lemma implies $\mathrm{Hom}_R(M_i,M_j) = 0$ unless $i=j$, so we get $\bigoplus_{j=1}^n \mathrm{Hom}_R(M_i, M_j)^{e_j} \cong \mathrm{Hom}_R(M_i, M_i)^{e_i}= D_i^{e_i}$. The case with switched arguments works in the same way. Corollary 3.20 Let $G$ be a finite group and let $K$ be a field such that the characteristic of $K$ does not divide the order of $G$, then every finite-dimensional representation can be written as a direct sum of irreducible subrepresentations which are uniquely determined up to isomorphism, including their multiplicity. 
Proof Existence follows from 3.16 and uniqueness from 3.19. The last corollary justifies why one pays a lot of attention to irreducible representations, especially when Maschke’s theorem applies. ## Semisimple Rings and Algebras So far, we have just studied (semi)simple modules. A general philosophy in ring theory is to study relations between the internal structure of a ring and the structure of its modules. Whenever there’s a notion for modules, one possible definition for a ring-theoretic property is obtained by just considering a ring as a left, right or two-sided module over itself. (For technical reasons, we will work with right ideals and modules in this section. It will allow us to skip a passage from a ring to its opposite ring in a future post. One can dualize all statements by using that left $R$-modules are right $R^{op}$-modules, where $R^{op}$ is the opposite ring which has reversed order of multiplication. Note that the group algebra $K[G]$ is isomorphic to its own opposite ring, via the map given on the basis $G$ by $g \mapsto g^{-1}$.) If we apply this to the properties we’ve been studying, we get that a ring that is simple as a right module over itself is just a division ring. The way to see this is that every non-zero element must generate the whole ring as a right ideal (and by group theory it’s enough to have all right inverses). We already have a name for that, so that’s nothing new. This doesn’t happen with the following definition: Definition 3.21 A ring $R$ is called semisimple if it is semisimple as a right module over itself. We’re deliberately not being careful with the chirality here: Theoretically, one should define left and right semisimple, but, as we shall see, they are equivalent. We can apply the theory we have developed for semisimple modules to show how this property is reflected in the structure of the modules over a ring: Lemma 3.22 A ring $R$ is semisimple if and only if all right modules over $R$ are semisimple. Proof One direction is obvious. For the other one, note that if $R$ is semisimple, 3.15 implies that all direct sums of copies of $R$, i.e. all free modules, are semisimple. By 3.10, this also shows that all quotients of free modules are semisimple. But every module is a quotient of a free module. Remarkably, this tells us that it would have been sufficient to prove Maschke’s theorem for just one single representation, the one corresponding to the $K[G]$-module $K[G]$, to get decompositions into irreducible representations. (Even infinite-dimensional ones.) Lemma 3.23 If a ring is a direct sum of non-zero right ideals, then the sum is finite. Proof Suppose $R=\bigoplus_{i \in I} J_i$, then we have $1=(a_i)_{i \in I}$ where all but finitely many $a_i$ are zero. Let $I' \subset I$ be the subset of $I$ consisting of the indices $i$ such that $a_i \neq 0$. Then for any $r \in R$, we have $r= 1 \cdot r = \sum_{i \in I'} a_ir$. Because the sum is direct, this expression is the unique way to write $r$ as a sum of elements of the $J_i$ where $i$ ranges over $I$. Since we assumed that all $J_i$ are non-zero, this implies $I=I'$, so that $I$ is finite. Corollary 3.24 A semisimple ring is a finite direct sum of simple right $R$-modules (also called minimal right ideals in this case). Proof Apply 3.14 and then 3.23. 
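To see Corollary 3.24 in a concrete case (a numerical illustration of mine, not from the post), take the group algebra $\Bbb{C}[\Bbb{Z}/3\Bbb{Z}]$: the identity splits into the three standard Fourier idempotents $e_0, e_1, e_2$, and the right ideals $e_j\,\Bbb{C}[\Bbb{Z}/3\Bbb{Z}]$ are one-dimensional, hence minimal.

```python
# Decomposing C[Z/3Z] into minimal right ideals via orthogonal idempotents
# e_j = (1/3) * sum_k omega^(-j*k) g^k, where omega = exp(2*pi*i/3).
import numpy as np

n = 3
omega = np.exp(2j * np.pi / n)

def multiply(a, b):
    """Multiply two group-algebra elements given by their coefficient vectors
    (index k holds the coefficient of g^k); this is cyclic convolution."""
    return np.array([sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)])

idempotents = [np.array([omega ** (-j * k) for k in range(n)]) / n for j in range(n)]
identity = np.zeros(n, dtype=complex)
identity[0] = 1.0                                                    # the element 1 = g^0

print(np.allclose(sum(idempotents), identity))                       # e_0 + e_1 + e_2 = 1
print(all(np.allclose(multiply(e, e), e) for e in idempotents))      # each e_j is idempotent
print(np.allclose(multiply(idempotents[0], idempotents[1]), 0))      # e_0 * e_1 = 0
```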
Corollary 3.25 Let $R$ be a semisimple ring, then every simple right $R$-module $M_i$ occurs as a direct summand of $R$ (as a right $R$-module over itself) and the multiplicity is equal to the dimension of $M_i$ over its endomorphism ring (which is a division ring). In particular, that dimension is finite. Proof Note that 3.24 implies that 3.19 is applicable to $R$ (by which we always mean as a right module over itself in this proof). Let $e_i$ be the multiplicity with which $M_i$ occurs in the decomposition of $R$ as a direct sum of simple submodules. By 3.19 $e_i$ is independent of the decomposition, but it might be zero. But 3.19 tells us that $e_i=\mathrm{dim}_{D_i}(\mathrm{Hom}_R(R,M_i))=\mathrm{dim}_{D_i}(M_i)$ which also tells us two things: 1) The RHS is finite 2) The LHS is non-zero, as $M_i \neq 0$. We want to apply this to the case where $R$ is an algebra over a field $K$, but for this it would be nice to know that the $D_i$ are finite-dimensional over $K$. We need some easy results on finiteness conditions. Lemma 3.26 Let $K$ be a field and let $M$ and $N$ be modules over a finite-dimensional algebra $A$, then if $M$ and $N$ are finitely-generated over $A$, they are finite-dimensional over $K$ and so is $\mathrm{Hom}_A(M,N)$. Proof $M$ being finitely-generated means that we can find an $A$-linear surjection $A^n \to M$. As $A$ is a $K$-algebra, this surjection is also $K$-linear. $A^n$ is finite-dimensional over $K$ because $A$ is, and this implies that $M$ is finite-dimensional over $K$. If $M$ is finitely generated, let $S$ be a finite generating system, then the map $\mathrm{Hom}_A(M,N) \to N^S, f \mapsto (f(s))_{s \in S}$ is $K$-linear. It is also injective, because any map from $M$ is determined by where it sends the generating system $S$. $N^S$ is a finite-dimensional vector space by the previous part, thus $\mathrm{Hom}_A(M,N)$ is finite-dimensional. Corollary 3.27 Let $A$ be a finite-dimensional algebra over a field $K$, then all simple modules and their endomorphism rings are finite-dimensional over $K$. Lemma 3.28 Let $A$ be a semisimple algebra over a field $K$ and let $M_1, \dots, M_n$ be a list of all simple modules, up to isomorphism. Let $D_i=\mathrm{End}_A(M_i)$ be their endomorphism rings. Then $\mathrm{dim}_K(A)=\sum_{i=1}^n \mathrm{dim}_K(M_i)^2/\mathrm{dim}_K(D_i)$ (where all dimensions are finite.) Proof 3.25 implies that $A \cong \bigoplus_{i=1}^n M_i^{e_i}$ where $e_i=\mathrm{dim}_{D_i}(M_i)$; this implies that $\mathrm{dim}_K(A)= \sum_{i=1}^n \mathrm{dim}_K(M_i)\mathrm{dim}_{D_i}(M_i)$. (3.27 tells us that we don’t have to worry about infinite dimension.) So the only thing left to show is that $\mathrm{dim}_K(D_i) \mathrm{dim}_{D_i}(M_i)=\mathrm{dim}_K(M_i)$. But this is clear: We have $M_i=D_i^{e_i}$, so we just compare the $K$-dimension of both sides. The following lemma tells us that we can leave out the factors $\mathrm{dim}_K(D_i)$ if $K$ is algebraically closed. Lemma 3.29 If $K$ is an algebraically closed field, then every finite-dimensional division algebra over $K$ is one-dimensional, i.e. $K$ itself. Proof Let $D$ be a finite-dimensional division algebra over $K$. Let $d \in D$, then consider the $K$-subalgebra $K[d]$ generated by $d$. Every element in $K[d]$ is a polynomial in $d$, so $K[d]$ is a quotient of the polynomial ring $K[x]$ via the map $ev_x: x \mapsto d$. But then $\mathrm{ker}(ev_x)$ is a non-zero prime ideal, as the image is finite-dimensional over $K$ and doesn’t contain zero divisors. 
This implies $\mathrm{ker}(ev_x)=(x-\lambda)$ for some $\lambda \in K$, so $d=\lambda \in K$. We note the following corollary to 3.28 and 3.29. Corollary 3.30 Let $A$ be a semisimple algebra over a field $K$ and let $M_1, \dots, M_n$ be a list of all simple modules, up to isomorphism. Then $\mathrm{dim}_K(A)\leq \sum_{i=1}^n \mathrm{dim}_K(M_i)^2$ and we have equality if $K$ is algebraically closed. If we put in some knowledge about finite-dimensional division algebras over $\Bbb R$ (namely, the fact that the only ones are $\Bbb R, \Bbb C, \Bbb H$, so the dimension is at most 4), we also get the following: Corollary 3.31 Let $A$ be a semisimple algebra over $\Bbb R$ and let $M_1, \dots, M_n$ be a list of all simple modules, up to isomorphism. Then $\frac{1}{4} \sum_{i=1}^n \mathrm{dim}_\Bbb{R}(M_i)^2 \leq \mathrm{dim}_\Bbb{R}(A)\leq \sum_{i=1}^n \mathrm{dim}_\Bbb{R}(M_i)^2$ These corollaries translate into statements about representations when we apply them to the group algebra $K[G]$. Let’s close this post by recapitulating what we have shown about representations in the case where $G$ is finite and the characteristic of $K$ doesn’t divide the order of $G$: • There are finitely many irreducible representations up to isomorphism (which are all finite-dimensional) • Every irreducible representation occurs as a direct summand of the so-called “regular representation”, which is the representation corresponding to $K[G]$ as a module over itself. • Every representation is a direct sum of copies of irreducible subrepresentations, even infinite-dimensional ones. • We know that for finite-dimensional representations the decomposition into a direct sum of irreducibles is unique up to isomorphism of the factors, including multiplicities. (I didn’t want to deal with cardinals for the infinite-dimensional case) • We have a nice formula that relates the dimension of irreducibles, the dimension of their endomorphism rings and the order of $G$ (which is obviously the dimension of $K[G]$). If we don’t want to talk about the endomorphism rings, we still have an inequality, which is an equality in the algebraically closed case. In the next post, we will continue our study of semisimple rings and give applications, e.g. by describing the number of irreducible representations in terms of the group $G$.
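As a quick numerical sanity check of the dimension formula in 3.30 (my own addition, using the standard fact that over $\Bbb C$ every $D_i$ is $\Bbb C$ itself and the well-known irreducible dimensions of a few small groups):

```python
# Checking sum(dim^2) = |G| over C for a few small groups.
irreducible_dims = {
    "Z/4Z": [1, 1, 1, 1],      # abelian group: every irreducible is one-dimensional
    "S_3":  [1, 1, 2],         # trivial, sign, and the two-dimensional standard representation
    "Q_8":  [1, 1, 1, 1, 2],   # quaternion group of order 8
}
group_order = {"Z/4Z": 4, "S_3": 6, "Q_8": 8}

for name, dims in irreducible_dims.items():
    assert sum(d * d for d in dims) == group_order[name]
    print(f"{name}: {dims} -> sum of squares = {group_order[name]}")
```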
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 340, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662051796913147, "perplexity": 99.20797106994908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188947.19/warc/CC-MAIN-20201126200910-20201126230910-00640.warc.gz"}
https://www.physicsforums.com/threads/straightedge-and-compass-constructions.223274/
# Straightedge and compass constructions 1. Mar 21, 2008 ### kingwinner 1) Prove that 45 degrees can be trisected with straightedge and compass. My attempt: 60 deg constructible since equilateral triangle constructible and 45 deg constructible since 90 deg constructible and we can bisect any angle. =>(60-45)=15 deg constructible Then copy this angle 3 times to trisect 45 deg (fact: any angle can be copied with straightedge and compass) Did I get the right idea? 2) Let F={a+b√3 | a,b E Q(√2)} where Q(√2)={c+d√2 | c,d E Q}. Show that every element of F is the root of a polynomial of degree 4 with rational coefficients. No clue...how to begin? Can someone please help me? Particularly with Q2. Thanks a lot! Last edited: Mar 21, 2008 2. Mar 21, 2008 ### kingwinner 2) In class, I've learnt about the concepts of number fields, surds, trisectibility of angles, constructible numbers, angles, and polygons. But I still can't figure out how to solve this problem. For example, I've learnt the following theorems: Theorem: If a cubic equation with rational coefficients has a constructible root, then it has a rational root. Theorem: Let (a+b√r) E F(√r) (i.e. in some tower of number fields). Suppose p is a polynomial with rational coefficients; if p(a+b√r)=0, then p(a-b√r)=0. 3. Mar 22, 2008 ### kingwinner Can anyone help me with Question 2, please? I am feeling desperate on this question... 4. Mar 22, 2008 ### morphism Notice that every element in F can be written as $a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}$, where $a,b,c,d \in \mathbb{Q}$. Do you know anything about degrees of extension fields? Last edited: Mar 22, 2008 5. Mar 23, 2008 ### kingwinner I know about "the extension of F by √r", but not about the degree. Is it possible to do it without this concept?? 6. Mar 24, 2008 ### kingwinner x =a + b √ 3, with a, b in Q(√ 2) then a = u + v √2 and b = m + n √2 for some u,v,m,n in Q x =a + b √3 x-a = b √3 x^2 - 2ax + a^2 = 3 b^2 x^2 - 2(u+v √2) x + (u+v √2)^2 = 3 (m+n √2)^2 x^2 - 2ux + u^2 + 2v^2 - 3m^2 - 6n^2 = (2vx - 2uv + 6mn) √2 squaring both sides => we get a polynomial equation of degree 4 with rational coefficients Is this a valid proof?? 7. Mar 24, 2008 ### Kreizhn That looks right assuming the algebra is correct and the question isn't asking you to show that the minimal polynomial is of degree 4. As long as you've constructed a degree 4 polynomial with rational coefficients and x is the root, you'll be fine. Though as morphism says, this would be easier using a "degree of the field extension" argument. 8. Apr 6, 2008 ### heartbreakid88 hey..it is not impossible to trisect an angle using compass n a straight edge..ive proved it possible...
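For what it's worth, question 2 can also be sanity-checked by computer (this is my own verification, not part of the thread): pick a concrete element a + b√3 with a, b in Q(√2) and ask sympy for its minimal polynomial over Q.

```python
# Verifying that a typical element of F = { a + b*sqrt(3) : a, b in Q(sqrt(2)) }
# is a root of a rational polynomial of degree (at most) 4.
from sympy import sqrt, Rational, Symbol, minimal_polynomial

x = Symbol('x')
a = 1 + 2 * sqrt(2)              # an element of Q(sqrt(2))
b = Rational(1, 3) - sqrt(2)     # another element of Q(sqrt(2))
alpha = a + b * sqrt(3)

p = minimal_polynomial(alpha, x)
print(p)
print("degree:", p.as_poly(x).degree())   # 4, so alpha is a root of a degree-4 rational polynomial
```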
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237709045410156, "perplexity": 2203.9716752959084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542851.96/warc/CC-MAIN-20161202170902-00400-ip-10-31-129-80.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/97767/does-the-concentration-affect-the-voltage-in-electrochemistry
# Does the concentration affect the voltage in electrochemistry Hello, for my final chemistry lab we are doing electrochemistry. For my independent lab I'm using 10mL of ZnSO4 and 10mL of CuSO4. Then I connect them with a salt bridge. I connect Zn metal and Cu metal to a voltmeter and dip those metals in the correct beakers. The voltage I get is 1.044 V. However when I add 10mL of water to both beakers, I still receive the same result. Does diluting the two solutions affect the voltage? • Do you have reason not to trust your experimental results? – ericksonla Jun 1 '18 at 17:44 • What result were you expecting? What does the Nernst equation say? – Ivan Neretin Jun 1 '18 at 17:58 Cell potential definitely depends on the concentration of the two solutions. This dependence of the cell potential on the concentration can be described by the Nernst equation. For the reaction you are suggesting, the half-cells are: $$Cu^{+2}+ 2e^{-} \longrightarrow Cu$$ $E_{1}= 0.34 V$ $$Zn^{+2} +2e^{-} \longrightarrow Zn$$ $E_{2}= -0.76 V$ Since $Zn$ is more reactive than $Cu$, or equivalently since $Zn^{+2}$ reduction is less thermodynamically favored than the reduction of $Cu^{+2}$, the half-equation of the reduction of $Zn^{+2}$ must be reversed, and so is the sign of its potential. Combining the half equations gives: $$Zn + Cu^{+2} \longrightarrow Zn^{+2}+ Cu$$ $E_{cell}= 1.10V$ at $25^{o}C$, $1 atm$, and $1M$ solutions. From the Nernst equation: $$E = E_{cell} - \frac{0.0591}{n} \log_{10}(Q)$$ (the factor $0.0591\ \mathrm{V}$ goes with the base-10 logarithm at $25^{o}C$), where $n$ is the number of electrons transferred and $Q$ is the reaction quotient: the concentrations of the products raised to the power of their coefficients over the concentrations of the reactants raised to the power of their coefficients. $$Q=\frac{[Zn^{+2}]}{[Cu^{+2}]}$$ If you halve both the concentrations of $Zn^{+2}$ and $Cu^{+2}$, then $Q$ will remain the same and so will the cell potential.
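A small numeric sketch of this point (mine, not the answerer's), using the base-10 form of the Nernst equation at 25 °C:

```python
# Diluting both half-cells by the same factor leaves Q, and hence the potential, unchanged.
import math

E_cell_standard = 1.10   # V, standard potential of the Zn/Cu cell
n = 2                    # electrons transferred

def nernst(zn_conc, cu_conc):
    Q = zn_conc / cu_conc
    return E_cell_standard - (0.0591 / n) * math.log10(Q)

print(nernst(1.0, 1.0))   # 1.10 V
print(nernst(0.5, 0.5))   # still 1.10 V: both concentrations halved, Q unchanged
print(nernst(0.5, 1.0))   # about 1.109 V: only [Zn2+] halved, so Q < 1 and E rises slightly
```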
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8394794464111328, "perplexity": 974.7754502274684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578727587.83/warc/CC-MAIN-20190425154024-20190425180024-00427.warc.gz"}
https://jmanton.wordpress.com/2010/06/04/measure-theoretic-probability-why-should-it-be-learnt-and-how-to-get-started/
Home > Informal Classroom Notes > Measure-theoretic Probability: Why it should be learnt and how to get started ## Measure-theoretic Probability: Why it should be learnt and how to get started Last Friday I gave an informal 2-hour talk at the City University of Hong Kong on measure-theoretic probability. The main points were as follows. Comments on which parts are unclear or how better to explain certain concepts are especially welcome. ### Objectives • Understand why measure-theoretic probability is useful • Learn enough to get past the initial barrier to self-learning • Motivation ### Recommended Textbooks The primary textbook I recommend is “Probability with Martingales” by David Williams.  Although out of print, a secondary textbook I recommend is Wong and Hajek’s “Stochastic Processes in Engineering Systems“. ### Motivation One unattractive feature of traditional probability theory is that discrete and continuous random variables are generally treated separately and thus some care is required when studying mixtures of discrete and continuous random variables.  Measure-theoretic probability provides a unified framework which is ultimately easier to work with rigorously.  (In other words, and roughly speaking, fewer lines of mathematics are required and the chance of making a mistake is decreased.) A simple example of  moving to a more general setting  is given by the real and complex numbers.  Initially, complex numbers were treated with some scepticism.  Ultimately though, by generalising real numbers to complex numbers, a range of fundamental concepts became simpler and more natural.  To state just one, an $n$th degree polynomial has precisely $n$ roots (counting multiplicities) over the complex field,  but possibly fewer over the real field. ### Derivation Journal papers using measure-theoretic probability often start by saying, “Let $(\Omega,\mathfrak{F},\mathbb{P})$ be a probability space”.  This section endeavours to derive (or re-discover) this formalism. #### The Outcome ($\omega \in \Omega$) At least for an engineer, it is benign to assume that even if a variable or process is random and not observed directly, it still has a true and actual outcome in every experiment. For the purposes of measure-theoretic probability, it is convenient (and unrestrictive) to assume that the outcomes of a series of bets placed by a gambler are known beforehand to Tyche, the Goddess of Chance. Formally, the actual outcome is denoted by a point $\omega$ drawn from the set of all possible outcomes $\Omega$. If the experiment consists of just a single coin toss, $\Omega$ might contain just two elements, say $\Omega = \{H,T\}$.  (There is no reason why $\Omega$ could not contain more elements; they would merely be deemed to occur with probability zero.  While this might be a silly thing to do when the set of possible outcomes is finite, there are sometimes advantages in choosing $\Omega$ larger than it needs to be in the infinite case, such as when working with stochastic processes.) Rarely though is a single toss of a coin interesting.  Normally, the coin is tossed two or more times.  It is important that $\Omega$ contains as many outcomes as necessary to describe the full sequence of events.  So if the coin is tossed twice and only twice, it suffices to choose $\Omega = \{HH,HT,TH,TT\}$. Generally though, one would want to allow for the coin to be tossed any number of times, in which case $\Omega$ would contain all possible infinitely-long sequences of Heads and Tails.  
(This is denoted $\Omega = \{H,T\}^\infty$.) #### The Probability of the Outcome ($\mathbb{P}$) We must somehow characterise the fact that Tyche will choose some outcomes $\omega \in \Omega$ more frequently than others.  If $\Omega$ is finite, there is an obvious way to do this.  We could simply define $\mathbb P$ to be a function from $\Omega$ to the set of real numbers between $0$ and $1$ inclusively, the latter denoted by $[0,1]$. This generally does not work though if $\Omega$ consists of an infinite number of elements.  To see why, assume Tyche will choose a number uniformly between $2$ and $3$.  Then we may take $\Omega = [2,3]$.  We would be forced to assign a probability of zero though to any particular outcome $\omega \in \Omega$. Therefore, there is not enough information to deduce that the probability of Tyche choosing a number in the set $[2.1,2.2]$ is precisely the same as the probability of choosing a number in the set $[2.8,2.9]$, for instance. Going to the other extreme, we may be tempted to solve the problem by defining the probability of occurrence of any conceivable set of outcomes.  So for instance, we can define $\mathbb{P}([2.1,2.2]) = 0.1$ and $\mathbb{P}([2.8,2.9]) = 0.1$, and indeed, for any interval from $a$ to $b$ with $2 \leq a \leq b \leq 3$ we can define $\mathbb{P}([a,b]) = b-a$. Notice that now, we have made $\mathbb P$ a function which takes a subset of $\Omega$ and returns a number between $0$ and $1$. So strictly speaking, we must write $\mathbb P(\{2.5\})$ and not $\mathbb P(2.5)$ for the probability of occurrence of an individual element of $\Omega$ . Superficially, this is ok. However, it does not work for two reasons. 1. How can we define the value of $\mathbb P(A)$  for an arbitrary subset $A$ of $\Omega$ when for some sets, it is not even possible to write down a description of them?  (That is, there are some subsets of the interval $[2,3]$ which we cannot even write down, so how can we even write down a definition of $\mathbb P$ which tells us what value it takes on such indescribable sets?) 2. It can be proved that there exist “bad” sets for which it is impossible to assign a probability to them in any consistent way. It is very tempting to elaborate on the second point above.  However, my experience is that doing so distracts too much attention from the original aim of understanding measure-theoretic probability. It is therefore better to think that even if we could assign a probability to every possible subset, we do not want to because it would cause unnecessary trouble and complication; surely, provided we have enough interesting subsets to work with, that is enough? Therefore, ultimately we define $\mathbb P$ as a function from $\mathfrak F$ to $[0,1]$ where $\mathfrak F$ is a set of subsets of $\Omega$ which we think of as (some of) the “nice” subsets of $\Omega$, that is, subsets of $\Omega$ to which we can and want to assign probabilities of occurrence. Roughly speaking, $\mathfrak F$ should be just large enough to be useful, and no larger. #### The Set of Nice Subsets ($\mathfrak F$) Referring to what was said just before, how should we choose $\mathfrak F$? Experience suggests that if $\Omega = [2,3]$ then we would generally be interested in all open intervals $(a,b)$ and all closed intervals $[a,b]$ for starters.  (Open intervals do not include their endpoints whereas closed intervals do.) We would also want to be able to take (finite) unions and intersections of such sets.  This may well be enough already. 
However, we should also look at our requirements on $\mathbb P$ since they will have an effect on how we choose $\mathfrak F$.  They are: 1. $\mathbb{P}(\{\}) = 0$. (The probability of $\omega$ being in the empty set is zero.) 2. $\mathbb{P}(\Omega) = 1$. (The probability of $\omega$ being in the set of all possible outcomes $\Omega$ is one.) 3. $\mathbb{P}( \cup_{i=1}^\infty F_i ) = \sum_{i=1}^\infty \mathbb{P}( F_i)$ whenever the $F_i$ are mutually disjoint subsets of $\Omega$. (Probability is countably additive.) In order even to be able to state these properties rigorously, we require $\mathfrak F$ to have certain properties. In particular, the first two conditions only make sense if we insist that both the empty set $\{\}$ and $\Omega$ are elements of $\mathfrak F$. (Recall that $\mathbb P$ is only defined on elements of $\mathfrak F$.) The third condition requires that if $F_i \in \mathfrak F$ then $\cup_{i=1}^\infty F_i \in \mathfrak F$. (Technically, we have only argued for this in the special case of the $F_i$ being mutually disjoint, but it ultimately turns out to be no different from requiring it to hold for non-disjoint sets too.) Note that the third condition implies (finite) additivity; just choose most of the $F_i$ to be the empty set.  Therefore, if $A \in \mathfrak F$ and if $\Omega \backslash A$ (the complement of $A$) is also in $\mathfrak F$ then properties 2 and 3 above would imply that $\mathbb{P}(\Omega \backslash A) = 1 - \mathbb{P}(A)$.  It is easy to believe that this condition is fundamental enough to insist that if $A \in \mathfrak F$ then its complement $\Omega \backslash A$ is also in $\mathfrak F$.  Once we have complements of sets in $\mathfrak F$ also belonging to $\mathfrak F$, then (finite and countable) intersections of sets in $\mathfrak F$ also belong to $\mathfrak F$.  (Recall that $A \cap B = \Omega \backslash ( (\Omega \backslash A) \cup (\Omega \backslash B))$, for example.) To summarise the last paragraph, we have endeavoured to show that we require $\mathfrak F$ to satisfy the following conditions. 1. $\Omega \in \mathfrak F$. 2. $A \in \mathfrak F$ implies $\Omega \backslash A \in \mathfrak F$. 3. $A_i \in \mathfrak F$ implies $\cup_{i=1}^\infty A_i \in \mathfrak F$. These conditions are precisely those required for $\mathfrak F$ to be what is known as a $\sigma$-algebra.  Here, $\sigma$ is used to denote the word “countable” and refers to condition 3 above.  (While the alternative term $\sigma$-field is widely used, the existing definitions of “algebra” and “field” in mathematics make the term $\sigma$-algebra the preferred term; it is not a “field” in any precise sense.) If $\Omega = [2,3]$, recall from above that we wished for $\mathfrak F$ to contain all the intervals at the very least.   Therefore, we choose $\mathfrak F$ to be the smallest $\sigma$-algebra containing the intervals.  (Intuitively, one could think of building $\mathfrak F$ up by starting with $\mathfrak F$ being equal to the set of all intervals, then adding all complements, then adding all countable unions, then adding all complements of these new sets, then adding all countable unions of new and old sets, and going on like this until finally $\mathfrak F$ grew no larger by repeating this process. Mathematically though, it is constructed by taking the intersection of all $\sigma$-algebras containing the intervals; it can be shown that the (possibly uncountable) intersection of $\sigma$-algebras is still a $\sigma$-algebra.)
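For a finite $\Omega$ the closure process described in the parenthetical remark above actually terminates, so it can be carried out literally; the following toy script (my own illustration) builds the smallest collection containing given generators that is closed under complements and unions.

```python
# Generating the sigma-algebra spanned by a few subsets of a finite Omega by
# repeatedly adding complements and (finite) unions until nothing new appears.
from itertools import combinations

def generated_sigma_algebra(omega, generators):
    omega = frozenset(omega)
    family = {frozenset(), omega} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        for a in list(family):
            complement = omega - a
            if complement not in family:
                family.add(complement)
                changed = True
        for a, b in combinations(list(family), 2):
            union = a | b
            if union not in family:
                family.add(union)
                changed = True
    return family

omega = {"HH", "HT", "TH", "TT"}
# the sigma-algebra generated by the single event "the first toss is Heads":
for event in sorted(generated_sigma_algebra(omega, [{"HH", "HT"}]), key=len):
    print(sorted(event))
# prints {}, {HH, HT}, {TH, TT} and Omega itself
```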
In general, if $\Omega$ is a topological space then it is common to choose $\mathfrak F$ to be the smallest $\sigma$-algebra containing all the open sets.  This is called the Borel $\sigma$-algebra generated by the open sets on $\Omega$. The elements of $\mathfrak F$ are called events. Indeed, an event $B \in \mathfrak F$ is a subset of $\Omega$ and therefore represents a set of possible outcomes or events that we might observe (we might be told that $\omega \in B$), or ask the probability of observing (we might want to know the value of $\mathbb{P}(B)$). #### How to Define $\mathbb P$ on Borel Subsets One issue remains; in general, it is not possible to write down an arbitrary Borel subset; some Borel sets are indescribable. How then can we define $\mathbb P$ on sets we cannot describe? Fortunately, we can appeal to Caratheodory’s Extension Theorem.  In fact, this is a repeating theme in measure-theoretic probability; it is necessary to learn techniques for avoiding the need to work directly with indescribable sets. Caratheodory’s Extension Theorem implies that if we assign a probability to every interval in $\Omega = [2,3]$ (in a way which is consistent with the axioms for probability, e.g., respecting countable additivity) then there is one and only one way to extend the assignments of probability to arbitrary Borel subsets of $[2,3]$. In other words, by defining $\mathbb P$ just for intervals, we have implicitly defined $\mathbb P$ on all Borel subsets. (This is analogous to defining a linear function at only a handful of points; the linearity of the function means that the value of the function can be deduced at other points by using the property of linearity.) Note that $\mathbb P$ is called a probability measure.  It “measures” the probability assigned to certain nice subsets of $\Omega$, or precisely, to the elements of $\mathfrak F$.  (Recall that every element of $\mathfrak F$ is a subset of $\Omega$.) ### Random Variables A (real-valued) random variable is simply a function from $\Omega$ to $\mathbb R$ which satisfies a natural condition of being measurable, which will be defined presently. First though, note that a random variable gives (generally only partial) information about the outcome $\omega$.  For example, if $\Omega=\{HH,HT,TH,TT\}$ and $X_1: \Omega \rightarrow \mathbb R$ is defined by $X_1(HH) = 1$$X_1(HT) = 1$, $X_1(TH) = 0$ and $X_1(TT) = 0$ then we would describe $X_1$ as the outcome of the first coin toss (with $1$ for Heads and $0$ for Tails). We know from the previous section that when we are dealing with the set of  real numbers $\mathbb R$, we would like to be able to assign a probability to any Borel subset of $\mathbb R$.  Therefore, given a random variable $X: \Omega \rightarrow \mathbb R$ and a Borel subset $B$, we would like to compute the probability that the outcome $\omega$ causes $X$ to take on a value in the set $B$.  Mathematically, this is written as $\mathbb{P}(\{\omega \mid X(\omega) \in B\})$, which is commonly abbreviated as $\mathbb{P}(X^{-1}(B))$. For this to make sense though, we must have $X^{-1}(B) = \{\omega \mid X(\omega) \in B\}$ being an element of $\mathfrak F$. This condition, that the inverse image of a Borel set lies in the $\sigma$-algebra $\mathfrak F$, is precisely the condition of measurability imposed on any random variable. 
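A toy version of this definition (my own, in the spirit of the coin-tossing example above): on a finite $\Omega$ every function is measurable, and $\mathbb{P}(X \in B)$ is literally the probability of the preimage $X^{-1}(B)$.

```python
# A random variable is just a function on Omega; probabilities of its values are
# computed through preimages.
omega = ["HH", "HT", "TH", "TT"]
prob = {w: 0.25 for w in omega}               # two tosses of a fair coin

def X1(w):
    """Outcome of the first toss: 1 for Heads, 0 for Tails."""
    return 1 if w[0] == "H" else 0

def prob_X_in(B):
    preimage = {w for w in omega if X1(w) in B}     # the event X^{-1}(B)
    return sum(prob[w] for w in preimage)

print(prob_X_in({1}))       # 0.5  : the first toss is Heads
print(prob_X_in({0, 1}))    # 1.0  : the preimage is all of Omega
```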
### Expectation is Central Although just formulating the probability triple $(\Omega, \mathfrak{F}, \mathbb{P})$ is already enough to unify discrete and continuous-valued random variables, there are other differences between measure-theoretic and “classical” probability. In particular, in measure-theoretic probability, emphasis shifts to the expectation and conditional expectation operators. One benefit of doing this is that it avoids certain unpleasantries associated with defining conditional probability; for example, Bayes rule does not apply when the denominator is zero. Note that the probability $\mathbb{P}(B)$ of an event $B \in \mathfrak F$ occurring is equal to the expected value of $I_B(\omega)$ where $I$ denotes the indicator function; $I_B(\omega)$ equals $1$ when $\omega \in B$ and $0$ otherwise. Therefore, the shift from probability being central to expectation being central is merely a change of view; it often provides a nicer view of the same underlying theory. ### Concluding Remarks • Measure-theoretic probability is initially more complicated to learn, but it is rigorous, more natural and therefore ultimately easier to work with. • Its advantages come from its different and more general viewpoint; the underlying theory is still essentially the same as classical probability. • (Rather than work with cumulative probability distributions and Riemann-Stieltjes integrals, measure-theoretic probability works with probability measures and Lebesgue integrals which are generally cleaner and easier to work with.) • When learning measure-theoretic probability: • Keep in mind that the basic ideas are straightforward; don’t let the technical detail obscure the basic ideas. • Most of the technical detail comes (at least initially) from having to work with Borel sets but not being able to describe them in general (cf., Caratheodory’s Extension Theorem mentioned earlier). • Look for and develop your own mapping between the measure-theoretic way of obtaining a result, and the classical way.  (For example, Girsanov’s Change of Measure is essentially the measure-theoretic version of Bayes rule; it is stated in terms of conditional expectation rather than conditional probability and is therefore neater to work with.)
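To make the indicator-function remark in the section above concrete, here is a toy computation of my own on the same finite $\Omega$:

```python
# P(B) equals E[I_B], computed here as a finite sum over Omega.
omega = ["HH", "HT", "TH", "TT"]
prob = {w: 0.25 for w in omega}
B = {"HH", "HT", "TH"}                         # the event "at least one Head"

def indicator(B, w):
    return 1 if w in B else 0

expectation_of_indicator = sum(indicator(B, w) * prob[w] for w in omega)
probability_of_B = sum(prob[w] for w in B)
print(expectation_of_indicator, probability_of_B)   # both 0.75
```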
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 169, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512034058570862, "perplexity": 283.418782969486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928414.45/warc/CC-MAIN-20150521113208-00160-ip-10-180-206-219.ec2.internal.warc.gz"}
https://research.utwente.nl/en/publications/a-scaling-analysis-of-a-cat-and-mouse-markov-chain-2
# A scaling analysis of a cat and mouse Markov chain Nelly Litvak, Philippe Robert ## Abstract If $(C_n)$ is a Markov chain on a discrete state space $\mathcal{S}$, a Markov chain $(C_n, M_n)$ on the product space $\mathcal{S}\times\mathcal{S}$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both coordinates are equal. The asymptotic properties of this Markov chain are investigated. A representation of its invariant measure is, in particular, obtained. When the state space is infinite it is shown that this Markov chain is in fact null recurrent if the initial Markov chain $(C_n)$ is positive recurrent and reversible. In this context, the scaling properties of the location of the second component, the mouse, are investigated in various situations: simple random walks in ℤ and ℤ², a reflected simple random walk in ℕ, and also in a continuous-time setting. For several of these processes, a time scaling with rapid growth gives an interesting asymptotic behavior related to limiting results for occupation times and rare events of Markov processes. Original language: English · Pages (from-to): 792-826 · Number of pages: 30 · Journal: Annals of applied probability · Volume: 22 · Issue number: 2 · DOI: https://doi.org/10.1214/11-AAP785 · Publication status: Published - 2012 ## Keywords • Scaling of null recurrent Markov chains • Cat and mouse Markov chains • IR-80034 • EWI-21025 • MSC-60J10 • MSC-90B18 • METIS-286272
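A rough simulation sketch of the construction described in the abstract, for the simple random walk on ℤ (this is my reading, not code from the paper; in particular, the convention for how the mouse moves at meeting times, here an independent step with the same kernel, should be checked against the paper itself).

```python
# Cat-and-mouse chain on Z: the cat C_n performs a simple random walk; the mouse M_n
# stays put except at times when the two coordinates coincide.
import random

def cat_and_mouse(steps, seed=0):
    rng = random.Random(seed)
    cat, mouse = 0, 0
    path = [(cat, mouse)]
    for _ in range(steps):
        if cat == mouse:                     # the mouse only moves when the cat is on it
            mouse += rng.choice((-1, 1))
        cat += rng.choice((-1, 1))           # the cat moves at every step
        path.append((cat, mouse))
    return path

path = cat_and_mouse(10_000)
print("final (cat, mouse):", path[-1])
print("number of meetings:", sum(1 for c, m in path if c == m))
```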
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205226898193359, "perplexity": 747.2844312320792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.65/warc/CC-MAIN-20210510003422-20210510033422-00345.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=140&t=27016
Anode vs. Cathode Jasmine Botello 2F Posts: 67 Joined: Fri Sep 29, 2017 7:04 am Anode vs. Cathode In Monday's lecture, Lavelle explained how the anode is the negative side and the cathode would be the positive side. However, in my biology class, when we were discussing gel electrophoresis, the anode was the positive side and the cathode was the negative side. Is there a difference between the two situations that I'm missing? Jingyi Li 2C Posts: 56 Joined: Fri Sep 29, 2017 7:06 am Re: Anode vs. Cathode In general, the anode is the positive side and the cathode is the negative side. This refers to the movement of current. In electrochemistry, electrons flow the opposite way. The anode is the negative side and the cathode is the positive side. Natalie LeRaybaud 1G Posts: 54 Joined: Thu Jul 13, 2017 3:00 am Re: Anode vs. Cathode Yes, there is a difference between a Galvanic cell and an Electrolytic cell. In a Galvanic cell the reaction proceeds without an external potential helping it along. Since you have the oxidation reaction at the anode, this produces electrons and thus a build-up of negative charge in the course of the reaction until electrochemical equilibrium is reached. Thus the anode is negative. Whereas at the cathode you have the reduction reaction occurring, which consumes electrons (leaving behind positive metal ions at the electrode). This leads to a build-up of positive charge during the reaction until equilibrium is reached, thus making the cathode positive. However, in an Electrolytic Cell there is an external potential (like an electrical current) applied to force the reaction to go in the opposite direction. Therefore the reasoning is switched and the Anode houses the positive reaction while the Cathode is the source of the negative reaction.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.870291531085968, "perplexity": 1612.5343955929948}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188899.42/warc/CC-MAIN-20201126171830-20201126201830-00318.warc.gz"}
http://blog.plover.com/math/lc-equation.html
The Universe of Discourse Sun, 09 Dec 2007 Four ways to solve a nonlinear differential equation In a recent article I mentioned the differential equation $$(f(x))^2 + \left(df(x)\over dx\right)^2 = 1$$ which I was trying to solve by various methods. The article was actually about calculating square roots of power series; I got sidetracked on this. Before I got back to the original equation, four readers of this blog had written in with solutions, all different. I got interested in this a few weeks ago when I was sitting in on a freshman physics lecture at Penn. I took pretty much the same class when I was a freshman, but I've never felt like I really understood physics. Sitting in freshman physics class again confirms this. Every time I go to a class, I come out with bigger questions than I went in. The instructor was talking about LC circuits, which are simple circuits with a capacitor (that's the "C") and an inductor (that's the "L", although I don't know why). The physics people claim that in such a circuit the capacitor charges up, and then discharges again, repeatedly. When one plate of the capacitor is full of electrons, the electrons want to come out, and equalize the charge on the plates, and so they make a current flowing from the negative to the positive plate. Without the inductor, the current would fall off exponentially, as the charge on the plates equalized. Eventually the two plates would be equally charged and nothing more would happen. But the inductor generates an electromotive force that tends to resist any change in the current through it, so the decreasing current in the inductor creates a force that tends to keep the electrons moving anyway, and this means that the (formerly) positive plate of the capacitor gets extra electrons stuffed into it. As the charge on this plate becomes increasingly negative, it tends to oppose the incoming current even more, and the current does eventually come to a halt. But by that time a whole lot of electrons have moved from the negative to the positive plate, so that the positive plate has become negative and the negative plate positive. Then the electrons come out of the newly-negative plate and the whole thing starts over again in reverse. In practice, of course, all the components offer some resistance to the current, so some of the energy is dissipated as heat, and eventually the electrons stop moving back and forth. Anyway, the current is nothing more nor less than the motion of the electrons, and so it is proportional to the derivative of the charge in the capacitor. Because to say that current is flowing is exactly the same as saying that the charge in the capacitor is changing. And the magnetic flux in the inductor is proportional to the rate of change of the current flowing through it, by Maxwell's laws or something. The amount of energy in the whole system is the sum of the energy stored in the capacitor and the energy stored in the magnetic field of the inductor. The former turns out to be proportional to the square of the charge in the capacitor, and the latter to the square of the current. The law of conservation of energy says that this sum must be constant. Letting f(t) be the charge at time t, then df/dt is the current, and (adopting suitable units) one has: $$(f(x))^2 + \left(df(x)\over dx\right)^2 = 1$$ which is the equation I was considering. Anyway, the reason for this article is mainly that I wanted to talk about the different methods of solution, which were all quite different from each other. Michael Lugo went ahead with the power series approach I was using.
Say that:

$$f = \sum_{i=0}^\infty a_{i}x^{i}, \qquad f' = \sum_{i=0}^\infty (i+1)a_{i+1}x^{i}$$

Then:

$$f^2 = \sum_{i=0}^\infty \sum_{j=0}^{i} a_{i-j} a_j x^{i}, \qquad (f')^2 = \sum_{i=0}^\infty \sum_{j=0}^{i} (i-j+1)a_{i-j+1}(j+1)a_{j+1} x^{i}$$

And we want the sum of these two to be equal to 1. Equating coefficients on both sides of the equation gives us the following equations:

$$a_0^2 + a_1^2 = 1$$
$$2a_0a_1 + 4a_1a_2 = 0$$
$$2a_0a_2 + a_1^2 + 6a_1a_3 + 4a_2^2 = 0$$
$$2a_0a_3 + 2a_1a_2 + 8a_1a_4 + 12a_2a_3 = 0$$
$$2a_0a_4 + 2a_1a_3 + a_2^2 + 10a_1a_5 + 16a_2a_4 + 9a_3^2 = 0$$
...

Now here's the thing M. Lugo noticed that I didn't. You can separate the terms involving even subscripts from those involving odd subscripts. Suppose that $a_0$ and $a_1$ are both nonzero. The polynomial from the second line of the table, $2a_0a_1 + 4a_1a_2$, factors as $2a_1(a_0 + 2a_2)$, and one of these factors must be zero, so we immediately have $a_2 = -a_0/2$. Now take the next line from the table, $2a_0a_2 + a_1^2 + 6a_1a_3 + 4a_2^2$. This can be separated into the form $2a_2(a_0 + 2a_2) + a_1(a_1 + 6a_3)$. The left-hand term is zero, by the previous paragraph, and since the whole thing equals zero, we have $a_3 = -a_1/6$. Continuing in this way, we can conclude that $a_0 = -2!a_2 = 4!a_4 = -6!a_6 = \ldots$, and that $a_1 = -3!a_3 = 5!a_5 = \ldots$. These should look familiar from first-year calculus, and together they imply that $f(x) = a_0 \cos(x) + a_1 \sin(x)$, where (according to the first line of the table) $a_0^2 + a_1^2 = 1$. And that is the complete solution of the equation, except for the case we omitted, when either $a_0$ or $a_1$ is zero; these give the trivial solutions $f(x) = \pm 1$. Okay, that was a lot of algebra grinding, and if you're not as clever as M. Lugo, you might not notice that the even terms of the series depend only on $a_0$ and the odd terms only on $a_1$; I didn't. I thought they were all mixed together, which is why I alluded to "a bunch of not-so-obvious solutions" in the earlier article. Is there a simpler way to get the answer? Gareth McCaughan wrote to me to point out a really clever trick that solves the equation right off. Take the derivative of both sides of the equation; you immediately get $2ff' + 2f'f'' = 0$, or, factoring out $f'$, $f'(f + f'') = 0$. So there are two solutions: either $f'=0$ and $f$ is a constant function, or $f + f'' = 0$, which even the electrical engineers know how to solve. David Speyer showed a third solution that seems midway between the two in the amount of clever trickery required. He rewrote the equation as:

$${df\over dx} = \sqrt{1 - f^2}$$
$${df\over\sqrt{1 - f^2} } = dx$$

The left side is an old standby of calculus I classes; it's the derivative of the arcsine function. On integrating both sides, we have: $$\arcsin f = x + C$$ so $f = \sin(x + C)$. This is equivalent to the $a_0 \cos(x) + a_1 \sin(x)$ form that we got before, by an application of the sum-of-angles formula for the sine function. I think M. McCaughan's solution is slicker, but M. Speyer's is the only one that I feel like I should have noticed myself. Finally, Walt Mankowski wrote to tell me that he had put the question to Maple, which disgorged the following solution after a few seconds: f(x) = 1, f(x) = -1, f(x) = sin(x - _C1), f(x) = -sin(x - _C1). This is correct, except that the appearance of both sin(x + C) and -sin(x + C) is a bit odd, since -sin(x + C) = sin(x + (C + π)). It seems that Maple wasn't clever enough to notice that.
Walt says he will ask around and see if he can find someone who knows what Maple did to get the solution here. I would like to add a pithy and insightful conclusion to this article, but I've been working on it for more than a week now, and also it's almost lunch time, so I think I'll have to settle for observing that sometimes there are a lot of ways to solve math problems.
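As a quick cross-check of the solutions discussed above, here is a short SymPy sketch (my own addition; the post itself worked by hand and with Maple) verifying that the claimed solutions really satisfy $f^2 + (f')^2 = 1$:

```python
# Verify the solutions of f^2 + (f')^2 = 1 symbolically (SymPy assumed).
import sympy as sp

x, C, a0, a1 = sp.symbols('x C a0 a1')

# Power-series result: f(x) = a0*cos(x) + a1*sin(x) with a0^2 + a1^2 = 1.
f = a0 * sp.cos(x) + a1 * sp.sin(x)
print(sp.simplify(f**2 + sp.diff(f, x)**2))   # a0**2 + a1**2, i.e. 1 under the constraint

# Separation-of-variables result and the trivial constant solutions.
for g in (sp.sin(x + C), sp.Integer(1), sp.Integer(-1)):
    print(sp.simplify(g**2 + sp.diff(g, x)**2))   # 1 in each case
```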
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9012547135353088, "perplexity": 423.99822814946606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297622.30/warc/CC-MAIN-20150323172137-00281-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.greenemath.com/College_Algebra/68/Solving-Rational-InequalitiesPracticeTest.html
To solve a rational inequality, we first write the inequality in the format of a rational expression on one side and zero on the other. We will then find the boundaries or endpoints by setting the numerator and denominator each equal to zero and finding solutions for the equations. We can use our boundaries or endpoints to set up intervals on a number line and use test values to determine which intervals satisfy the inequality.

Test Objectives
• Demonstrate the ability to solve a rational inequality
• Demonstrate the ability to write an inequality solution in interval notation
• Demonstrate the ability to graph an interval on the number line

Solving Rational Inequalities Practice Test:

#1: Instructions: solve each inequality, write in interval notation, graph.
$$a)\hspace{.2em}\frac{x - 7}{x - 1}+ 5 > 0$$
$$b)\hspace{.2em}\frac{x + 5}{x - 4}- 1 < 0$$

#2: Instructions: solve each inequality, write in interval notation, graph.
$$a)\hspace{.2em}\frac{x + 6}{x - 5}> 1$$
$$b)\hspace{.2em}\frac{5}{x - 1}> \frac{8}{2x + 7}$$

#3: Instructions: solve each inequality, write in interval notation, graph.
$$a)\hspace{.2em}\frac{3}{x + 2}≤ \frac{4}{x - 9}$$
$$b)\hspace{.2em}\frac{x^2 + 4x - 21}{x^2 - 6x + 9}> 0$$

#4: Instructions: solve each inequality, write in interval notation, graph.
$$a)\hspace{.2em}\frac{x^2 + 5x + 4}{x^2 + 5x + 6}< 0$$
$$b)\hspace{.2em}\frac{6}{x}> 6x - 5$$

#5: Instructions: solve each inequality, write in interval notation, graph.
$$a)\hspace{.2em}\frac{x^2 - 4x + 4}{x - 2}≥ 1$$
$$b)\hspace{.2em}\frac{x + 6}{x^2 - 5x - 24}≥ 0$$

Written Solutions:

#1: Solutions:
$$a)\hspace{.2em}x < 1 \hspace{.2em}or \hspace{.2em}x > 2$$
$$(-\infty, 1) ∪ (2, \infty)$$
$$b)\hspace{.2em}x < 4$$
$$(-\infty, 4)$$

#2: Solutions:
$$a)\hspace{.2em}x > 5$$
$$(5, \infty)$$
$$b)\hspace{.2em}-\frac{43}{2}< x < -\frac{7}{2}\hspace{.2em}or \hspace{.2em}x > 1$$
$$\left(-\frac{43}{2}, -\frac{7}{2}\right) ∪ (1, \infty)$$

#3: Solutions:
$$a)\hspace{.2em}-35 ≤ x < -2 \hspace{.2em}or \hspace{.2em}x > 9$$
$$[-35, -2) ∪ (9, \infty)$$
$$b)\hspace{.2em}x < -7 \hspace{.2em}or \hspace{.2em}x > 3$$
$$(-\infty, -7) ∪ (3, \infty)$$

#4: Solutions:
$$a)\hspace{.2em}-4 < x < -3 \hspace{.2em}or \hspace{.2em}-2 < x < -1$$
$$(-4, -3) ∪ (-2, -1)$$
$$b)\hspace{.2em}x < -\frac{2}{3}\hspace{.2em}or \hspace{.2em}0 < x < \frac{3}{2}$$
$$\left(-\infty, -\frac{2}{3}\right) ∪ \left(0, \frac{3}{2}\right)$$

#5: Solutions:
$$a)\hspace{.2em}x ≥ 3$$
$$[3, \infty)$$
$$b)\hspace{.2em}-6 ≤ x < -3 \hspace{.2em}or \hspace{.2em}x > 8$$
$$[-6, -3) ∪ (8, \infty)$$
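One way to spot-check these answers is with SymPy's inequality solver (assuming SymPy is available; the hand method remains the boundary/test-point procedure described above). The comments note the expected results, which match the written solutions:

```python
# Spot-check two of the practice problems with SymPy.
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)

# Problem 1a: (x - 7)/(x - 1) + 5 > 0
sol_1a = solve_univariate_inequality((x - 7)/(x - 1) + 5 > 0, x, relational=False)
print(sol_1a)   # expected: the intervals (-oo, 1) and (2, oo), as in solution 1a

# Problem 2b: 5/(x - 1) > 8/(2x + 7), rewritten with zero on the right-hand side
sol_2b = solve_univariate_inequality(5/(x - 1) - 8/(2*x + 7) > 0, x, relational=False)
print(sol_2b)   # expected: the intervals (-43/2, -7/2) and (1, oo), as in solution 2b
```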
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827015995979309, "perplexity": 1094.635399686204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896169.35/warc/CC-MAIN-20200708000016-20200708030016-00548.warc.gz"}
http://math.stackexchange.com/questions/131502/pointwise-multiplication-in-function-spaces
# pointwise multiplication in function spaces It is well-known that a pointwise multiplication operation can be defined on some function spaces. For example, $C([0,1])$, the vector space of real-valued (say) continuous functions $f$ defined on the unit interval $[0,1] \subset \mathbb{R}$, admits such a multiplication, defined as $$(fg)(x) := f(x)g(x), \quad f,g \in C([0,1])$$ Now, for function spaces whose elements are defined on a countable set, say $$\mathbb{R}^k = \{ x: \mathbb{N}_k \to \mathbb{R}, \quad n \mapsto x_n, \quad n = 1, \dots, k \} \quad (k \in \mathbb{N})$$ or $$\mathbb{R}^\infty = \{ x: \mathbb{N} \to \mathbb{R}, \quad n \mapsto x_n, \quad n = 1,2 \dots \,\} \quad$$ I wonder why this is not done analogously. So, define pointwise multiplication by $$(xy)(n) := x_ny_n$$ For example, in the "column notation", two elements of $\mathbb{R}^2$, $x = (x_1,x_2)$ and $y = (y_1,y_2)$ would then have the product $xy = (x_1y_1, x_2y_2)$. I realize there must be something that makes this obvious operation utterly useless, for otherwise, it would be used commonly. Any hints as to why this attempt to define pointwise multiplication on these function spaces is uninteresting would be great! From what I understand so far (unfortunately far too little), I wonder whether the cardinality of the domain (uncountable in the case $[0,1]$, countable in the cases $\mathbb{N}_k = \{1,\dots k\}$ and $\mathbb{N}$) makes a crucial difference, whether there is some algebraic property that breaks down, or whether it is something completely different? Thanks for your feedback and help! - Note that pointwise multiplication in a function space is induced by a notion of multiplication in the range space. Also, each example you gave is induced by pointwise multiplication in the space $X=C(\{1,\ldots, k\};\mathbb R)$ when $C(\mathbb R^k)$ is treated as $C(X)$. – Alex Becker Apr 13 '12 at 23:08 Also cardinality makes no difference. Your definition can be extended to arbitrary sets $S$ by considering the range to be the algebra $X=F(S,\mathbb R)$ of functions from $S$ to $\mathbb R$ with pointwise addition and multiplication. – Alex Becker Apr 13 '12 at 23:24 Why do you think it must be "utterly useless" simply because applications of it happen to be rare? Why can't there be things that are "only occasionally useful" without being either "utterly useless" or "used commonly"? – Henning Makholm Apr 13 '12 at 23:29 @Henning hm - you're absolutely right, there is no reason to devalue structures that do not have an immediate application! So, provided my definition above makes sense, I didn't intend to say it would be uninteresting to study it for its own sake. – harlekin Apr 13 '12 at 23:51 Your last example is the Hadamard product $x\circ y$; $x,y\in{\mathbb R}^{n}$, which is defined for matrices. However, it is overkill and very difficult to work with if you talk about (column) vectors. Instead, you can use an equivalent form $x\circ y={\rm diag}(x)y$, where ${\rm diag}(x)\in{\mathbb R}^{n\times n}$ is a diagonal matrix with vector $x$ on its main diagonal, and the matrix-vector product ${\rm diag}(x)y$ is understood in the usual linear algebraic sense. This form is handy when one of the vectors is known ($x$ in this case) and the other is to be found. If both vectors are unknown, then the 'outer' product $xy^{T}$ may be helpful. Your pointwise product can be extracted from the diagonal of this rank-one matrix, i.e., $x\circ y={\rm diag}(xy^{T})$.
- There is nothing inherently uninteresting about pointwise multiplication. If you have a vector space of functions which is closed under pointwise multiplication, it is an algebra. If the multiplication behaves nicely with respect to a norm or other structures, you may have a Banach algebra or a $C^*$-algebra. These structures are well studied and can be found in many textbooks, and indeed, $C([0,1])$ is an example of a $C^*$-algebra. On the other hand, there are many interesting function spaces which are not closed under pointwise multiplication. $L^p$ spaces are probably the most obvious example: the product of two functions from $L^p$ need not be in $L^p$. So it is still useful to develop theory for function spaces without reference to pointwise multiplication. - Pointwise multiplication is not the only natural product on a space of functions: there are spaces whose multiplication is not pointwise but given by composition, i.e. $(fg)(x) = (f \circ g)(x)$ instead of $(fg)(x) = f(x)g(x)$. And plain linear spaces, for example, have no multiplication of their members at all. -
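To make the Hadamard-product answer above concrete, here is a tiny NumPy illustration (NumPy assumed) showing that the pointwise product on $\mathbb{R}^n$ agrees with both the ${\rm diag}(x)y$ and the ${\rm diag}(xy^{T})$ formulations:

```python
# Pointwise (Hadamard) product on R^n and its matrix reformulations.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

hadamard  = x * y                       # pointwise product: [4., 10., 18.]
via_diag  = np.diag(x) @ y              # diag(x) y
via_outer = np.diag(np.outer(x, y))     # diagonal of the rank-one matrix x y^T

assert np.allclose(hadamard, via_diag) and np.allclose(hadamard, via_outer)
print(hadamard)
```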
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.9559932947158813, "perplexity": 218.6522709881835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398460942.79/warc/CC-MAIN-20151124205420-00001-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www2.math.binghamton.edu/p/seminars/datasci/170328
Data Science Seminar
Hosted by Department of Mathematical Sciences
• Date: Tuesday, March 28, 2017
• Time: 12:00-1:00
• Room: WH-100E
• Speaker: Yunzhang Zhu (Ohio State University)
• Title: Maximum Likelihood Inference for a Large Precision Matrix

Abstract
Inference concerning Gaussian graphical models involves pairwise conditional dependencies on Gaussian random variables. In such a situation, regularization of a certain form is often employed to treat an overparameterized model, imposing challenges to inference. The common practice of inference uses either a regularized model, as in inference after model selection, or bias-reduction known as "de-bias". While the first ignores statistical uncertainty inherent in regularization, the second reduces the bias inbred in regularization at the expense of increased variance. In this paper, we propose a constrained maximum likelihood method for inference, with a focus on alleviating the impact of regularization on inference. Particularly, for composite hypotheses, we unregularize hypothesized parameters while regularizing nuisance parameters through an $L_0$-constraint controlling their degree of sparseness. This approach is analogous to semiparametric likelihood inference in a high-dimensional situation. On this ground, we derive conditions under which the asymptotic distributions of the constrained likelihood ratio and the maximum likelihood estimate are established, permitting a graph's dimension to increase with the sample size. Interestingly, the corresponding distribution of the likelihood ratio is chi-square or normal, depending on whether the co-dimension of a test is finite or increases with the sample size. This goes beyond the classical Wilks phenomenon. Numerically, we demonstrate that the proposed method performs well for various types of graphs. Finally, we apply the proposed method to infer linkages in brain network analysis based on MRI data, to contrast Alzheimer's disease patients against healthy subjects.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9061694741249084, "perplexity": 1335.5928012820482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155792.23/warc/CC-MAIN-20180918225124-20180919005124-00552.warc.gz"}
http://tex.stackexchange.com/questions/3371/how-to-prevent-paragraph-breaks-after-theorem-environments/3375
# How to prevent paragraph breaks after theorem environments? By default, the list environments (such as itemize) have the nice property that if you add a blank line after it, it starts a new paragraph, and if you don't, the paragraph continues. So The following objects \begin{itemize} \item car \item tree \item pupil \end{itemize} are countable. will produce one paragraph, while Here's a list of uncountable objects: \begin{itemize} \item hair \item rice \end{itemize} And now for something completely different gives two paragraphs. For the theorem type environments (I use the theorems from amsart class), no matter what I do after \end{<theorem type>}, the paragraph breaks. Is there a way to make them behave like itemize (so that paragraph breaking depends on whether I insert a blank line after the environment)? Edit Since both of the answers so far enquire as to why I want this behaviour, rather than the default, let me clarify a bit. I agree that it looks weird to state a theorem in the middle of a paragraph. But I actually want to use the behaviour for (short) definitions or setting of notations. For example, I may write For the purposes of this monograph, all functions are infinitely differentiable. That is, we usually work in the space \begin{definition} $C^\infty(\Real^k) \eqdef$ the space of complex-valued smooth functions. \end{definition} Often, however, we wish to ignore complications from ``infinity'', i.e.~from the non-compactness of $\Real^k$. In these situations we want to use \begin{definition} $\mathcal{D} = C^\infty_c(\Real^k) \eqdef$ the space of complex-valued smooth functions with compact support. \end{definition} Since one of our main tools will be the Fourier transform, the above definition is not always satisfactory: the Fourier transform of a function in $C^\infty_c(\Real^k)$ will necessarily not be in the same space by the uncertainty principle. So a better space is \begin{definition} $\mathcal{S}\eqdef$ the space of complex-valued smooth functions with rapid decay. That is, all functions $f$ such that $x^\alpha D^\beta f$ remain bounded for all multi-indices $\alpha,\beta$. \end{definition} For convenience, we will also introduce the notations \begin{definition} $\mathcal{D}' \eqdef$ the space of distributions, or continuous linear maps from $\mathcal{D}$ to $\Complex$; and similarly $\mathcal{S}' \eqdef$ the space of \emph{tempered} distributions. \end{definition} I find the jagged edge caused by the indentation unpleasant. So no, this is not just an idle question. - Perhaps just a matter of style, but I would drop all of those "Definition XX." environments, have a single paragraph and use \emph{...} to emphasize each newly defined term or concept. – Juan A. Navarro Sep 26 '10 at 8:08 Paragraph breaking after theorem environments depending on whether a blank line is inserted does work for the standard article class, but doesn't for amsart. The reason is that amsart's definition of \@endtheorem includes \@endpefalse which disables the conditional breaking mechanism. Remove \@endpefalse and all is well. \documentclass{amsart} \makeatletter % \def\@endtheorem{\endtrivlist\@endpefalse }% OLD \def\@endtheorem{\endtrivlist}% NEW \makeatother \newtheorem{theorem}{Theorem} \newcommand*{\sometext}{Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris.
Nam arcu libero, nonummy eget, consectetuer id, vulputate a, magna.} \begin{document} \sometext \begin{theorem} \sometext \end{theorem} \sometext \begin{theorem} \sometext \end{theorem} % \sometext \end{document} - It looks like with the ntheorem package, this does not work. At some point, there is \gdef\@endtheorem{\endtrivlist\csname\InTheoType @postwork\endcsname} that should be modified? – pluton Aug 25 '12 at 2:17 The environment theorem ends a paragraph and you are creating a new environment, definition, out of it, so it inherits the properties of theorem since you are just changing the name. Semantically not wanting an end of the paragraph and inheriting from theorem is not correct. That's why you would have to do some patching to solve it. A "cleaner" way would be to define your environment definition. \newcounter{definitioncounter} \newenvironment{mydefinition}{\noindent\textbf{Definition}\arabic{definitioncounter}:\newline\indent}{\newline} This will give what you want (I hope). Inside the second pair of {}s, in the \newenvironment command, you can insert more commands to give style to the text inside the definition if you want some style different from the surrounding text. For example, if you want it in italics. Edit: use \addtocounter{definitioncounter}{1} if you want it to start at 1 instead of 0. - So I take it that as part of the definition of the theorem environment the paragraph is ended? I generally see theorem grouped with itemize and others in the set of "paragraph-making environments", but my example above suggests that there are subtle differences. Are these differences documented somewhere other than in the LaTeX source? –  Willie Wong Sep 27 '10 at 19:23 I'm sorry to do this (removing a green check). But I think lockstep's answer below explains it better. Cheers. –  Willie Wong Dec 2 '11 at 9:44 Not really answering your question, but it kind of looks funny if you state a theorem right in the middle of a paragraph, no? Anyway, if you would like to do that in the odd case, would it be enough to fake it with a \noindent? Unless you have some modifications in your document layout, the spacing seems to be the same as, say, a quote in the middle of a paragraph. Compare: \documentclass{article} \usepackage{amsthm} \newtheorem{theorem}{Theorem} \begin{document} Someone said \begin{quote} Some funny quote. \end{quote} and it was quite funny. Someone proved \begin{theorem} Some interesting theorem. \end{theorem} \noindent and it was quite interesting. \end{document} - Ahh.. I just missed that you said amsart and not just amsthm, the spacing does look a bit different in amsart. But is the solution perhaps still useful for you? –  Juan A. Navarro Sep 22 '10 at 18:36 To address your first comment: yes, it does look funny to state a theorem in the middle of a paragraph. But it does not look as funny to state definitions in the middle of a paragraph, especially when the definition is just one line long and motivated by the part of the paragraph immediately above it. Now imagine several definitions being made in succession in the same paragraph. \noindent is of course the obvious option. But I would prefer a solution that doesn't break when, say, a new paragraph is denoted by vertical spacing instead of indentation.
–  Willie Wong Sep 24 '10 at 17:14 Another way to prevent the new paragraph effect is to wrap the theorem inside a minipage: \documentclass{amsart} \newtheorem{thm}{Theorem}[section] \begin{document} Here we state \newline \begin{minipage}{\textwidth} \begin{thm} (Famous Theorem) Statement, \end{thm} \end{minipage} \newline which is part of this paragraph. \end{document} If the theorem statement is short enough and you remove the \newline commands you might even get it to fit inline. I second Juan's opinion though that this looks funny... Do you have any examples of instances in which what you're asking for is a better solution than the default? Or are you just curious if it's possible? -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458901286125183, "perplexity": 1203.9681211756138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641054.14/warc/CC-MAIN-20150417045721-00041-ip-10-235-10-82.ec2.internal.warc.gz"}
https://space.stackexchange.com/questions/5088/could-we-detect-synchrotron-radiation-of-alien-high-energy-experiments
Could we detect synchrotron radiation of alien high energy experiments? A teacher recently told me that in order to have hopes of detecting Kaluza-Klein type spin 2 particles we would have to have an accelerator as big as the solar system. This is (nowadays) of course out of reach for humans, but for the sake of argument let's suppose that some more technologically advanced alien civilization can build one. Suppose they are performing high energy experiments somewhere in this hypothetical huge accelerator. The accelerator would emit synchrotron radiation, which prompts my question: could we, by detecting this synchrotron radiation, recognize it as the signature of a high energy experiment being performed by someone, or is this totally hopeless?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012373447418213, "perplexity": 568.4224747130215}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00162.warc.gz"}
https://www.physicsforums.com/threads/basic-physics-queries.85109/
# Homework Help: Basic physics queries 1. Aug 13, 2005 ### roger Hi, I want to know the rigorous mathematical definition of displacement, which is applicable to physics? All we are taught is that it's a vector quantity as opposed to a scalar such as distance. And I have a velocity time graph, which again, I am not too sure on a few aspects of it. For example, if a ball is dropped and bounces back up, and this repeats until the ball stops, each time the velocity decreases, the graph shows a straight sloped line going down to -3, then a straight line with same gradient from +2. But what about the velocity in between 2 and -3? and what meaning can be given to a vertical line on the velocity time graph? thanks Roger 2. Aug 13, 2005 ### HallsofIvy I don't believe for a moment that your text simply says that "displacement is a vector" without saying what vector! In order to have a "displacement" an object has to move from one point to another and the displacement vector is the vector from the beginning point to the end point. Is "-3" a time or velocity? "a straight sloped line going down to -3" is meaningless. Every line segment has two end points, every point on the graph has two coordinates. What are the time and velocity coordinates of the two endpoints of this line segment? Once again are those time or velocity values? What are the coordinates of the endpoints of the line segment you are referring to? No physical meaning! It might refer to an object "bouncing" as it hits the ground, changing velocity extremely fast. That can be "idealized", mathematically, as an instantaneous change in velocity but can't actually happen physically. I suspect what you have is this: at some initial time, t= 0, you drop the ball. Its velocity at that time is 0. It accelerates with the acceleration due to gravity: g= -9.81 m/s² and since acceleration is "change in velocity divided by change in time" that is the slope of the velocity versus time graph. You don't say how long the ball drops or how far but presumably it drops long enough for the velocity to reach -3 m/s (and so the time dropped must have been (-3 m/s)/(-9.81 m/s²)= 0.3 seconds, approximately). Your graph should show a straight line, with slope -9.81, from (0,0) to (0.3, -3) (That is, again, approximate. That line would have slope -10!). Now, the ball bounces. Very, very quickly, in a time too brief to be seen on the graph, the ball's velocity changes. If this were a "perfectly elastic" collision, the ball's velocity would change from -3 to +3. This is not a perfectly elastic collision, you are told that the collision changes the ball's velocity from -3 to 2 (and the lost energy is absorbed in the ground). There is a "vertical" line from (0.3, -3) to (0.3, 2). If it makes you feel better, you can think of that line as not "vertical" but tilted very, very slightly- too slightly to be seen on the graph: say from (0.30581039755351681957186544342508, -3) to (0.30581039755351681957186544342509, +2) (that 0.30581039755351681957186544342508 is a more accurate value of (-3)/(-9.81)). Since the acceleration is still that due to gravity, -9.81 m/s², that is still the slope of this new line segment. Remember that this is a graph of velocity against time, not height. The ball's velocity changes suddenly to +2 but the ball itself is still on the ground at that instant. You are welcome. Last edited by a moderator: Aug 13, 2005 3. Aug 16, 2005 ### Spastik_Relativity I think I know what you're talking about. Is it a graph that looks sort of like a zig-zag?
If so, the velocity at the peak of the ball's bounce changes direction, therefore the sign in front of the velocity changes. There is no velocity (negligible) between those two points. Similarly, when the ball bounces off the ground the direction of its velocity changes. The changes of direction are shown by the graph moving from positive to negative, described by the straight lines. The vertical line means nothing. If one was to integrate it you would get nothing, thus no displacement. It's just a change in direction. 4. Aug 26, 2005 ### roger Thanks for the help, I understand the tilted line is very slight, but I wanted to know if we were to plot the period between (0.3, -3), and (0.3, 2) why would it be slanted and not vertical? if it was slanted would it mean the ball decelerates to zero and then accelerates to 2 m/s? If so, how does the ball do this? Also, shouldn't there be some discontinuities on the graph, i.e. when the ball is in contact with the ground and the top of its trajectory, for both of which there is no motion? (for a split second) thank you Roger 5. Aug 26, 2005 ### HallsofIvy [quote=Roger]Thanks for the help, I understand the tilted line is very slight, but I wanted to know if we were to plot the period between (0.3, -3), and (0.3, 2) why would it be slanted and not vertical?[/quote] The point I made before is that there is NO "period" between (0.3,-3) and (0.3,2). A graph of a line containing those two points would be a vertical line but that is not physically possible. Strictly speaking "period" refers to time- you are really asking about the "period" between 0.3 and 0.3! Since you say you understand that the line is very slightly tilted, you should understand that (0.3, -3) and (0.3, 2) are not on that line. It is possible that one is- the line might contain (0.3, -3) and (0.300000001, 2) or it might contain (0.2999999991,-3) and (0.3, 2)- but it can't contain both- that would have to be a vertical line, reflecting an instantaneous acceleration which just isn't physically possible. Now, a "mathematical model" doesn't have to be 100% accurate to a physical situation. Physically, if we have a moment when the ball's speed is changing (imagine a "slow motion" film of the ball- watching it squash down as its speed changes from -3 to 0, then opening back up as its speed changes from 0 to 2)- physically, that takes some time (as I said before, perhaps from 0.3 to 0.300000001 or from 0.29999991 to 0.3)- if we wanted to be very accurate, we would have to try to draw a line showing a very slight slant- not vertical. Of course, probably we can't draw that accurately anyway! So, we either show a vertical line from (0.3, -3) to (0.3, 2) or, as you suggest, and a mathematician might prefer, making the velocity graph discontinuous there- not drawing the vertical line at all but leaving a gap between (0.3, -3) and (0.3, 2). 6. Aug 26, 2005 ### roger I understand, but there are a few things which I'm still unclear about. Irrespective of which of the two points is on the tilted line, that line would represent a deceleration to zero and then an acceleration to 2. But why should it accelerate? And if the ball is changing direction, surely there is a point ''in between'' when the ball isn't moving at all? And should the discontinuity be a gap or a small line segment at y=0? Thanks Roger 7. Aug 26, 2005 ### HallsofIvy Do you understand what "acceleration" means- any time there is a change of velocity, there is an acceleration.
In ordinary speech we think of "acceleration" as meaning an increase in speed, "deceleration" as a decrease. In physics, we use acceleration for both- increasing speed is positive acceleration, decreasing speed is negative acceleration. In this case the ball accelerates because it can't keep going in the same direction at the same speed- there's a floor in the way!!! Here, it really is an "acceleration"- speeding up- because the velocity is initially -3 and it increases to +2. Physically, what happens is, as I said before, the ball hits the floor and squeezes up- that impact "squeezes" the ball slightly- its velocity goes from -3 to 0 and so its Kinetic Energy goes to 0 (remember that Kinetic Energy depends upon the square of the speed, it's not a vector quantity so direction doesn't matter). Some of that energy goes into "potential energy" in the squeeze. Once the velocity of the ball goes to 0, the ball starts to rebound- the potential energy stored in the ball changes into kinetic energy so the speed goes back up- but the velocity is the opposite way. Actually here, since the speed only goes up to 2, not 3, this is not a "perfectly elastic collision": some of that energy is irreversibly lost to heat. YES, YES, YES! Since the velocity goes from -3 to +2 there is a moment when the velocity is 0! A Mathematician would much prefer that there be a gap in the velocity graph from (0.3,-3) to (0.3,2). Some people (and graphing calculators!) just can't avoid drawing a vertical line. Either way represents a discontinuity in the time-velocity graph. Which, as I said earlier, is non-physical. There has to be some very short time interval between when the ball first hits the floor and when it leaves the floor- to be completely physical you should show a slight "tilt"- say from (0.3,-3) to (0.30000001, 2)- but you might have to make your graph really, really, large to show that! Once again, you are welcome.
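To visualise what is being described in this thread, here is a small Python sketch (NumPy and Matplotlib assumed; the values -3 m/s, +2 m/s and g = -9.81 m/s² are taken from the discussion). The idealised instantaneous bounce shows up as the jump between the two line segments:

```python
# Sketch of the velocity-time graph for the bouncing ball discussed above.
import numpy as np
import matplotlib.pyplot as plt

g = -9.81                       # m/s^2
t_bounce = 3 / 9.81             # time to reach -3 m/s, roughly 0.306 s

t_fall = np.linspace(0, t_bounce, 100)
v_fall = g * t_fall             # straight line of slope g, from 0 down to -3

t_rise = np.linspace(t_bounce, t_bounce + 2 / 9.81, 100)
v_rise = 2 + g * (t_rise - t_bounce)   # restarts at +2 with the same slope

plt.plot(t_fall, v_fall, 'b')
plt.plot(t_rise, v_rise, 'b')
plt.xlabel('t (s)')
plt.ylabel('v (m/s)')
plt.title('Idealised velocity of the bouncing ball')
plt.show()
```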
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982468843460083, "perplexity": 573.7738715428534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596336.96/warc/CC-MAIN-20180723110342-20180723130342-00143.warc.gz"}
https://fr.maplesoft.com/support/help/maplesim/view.aspx?path=plottools%2Fcurve
curve - Maple Help

plottools curve - generate 2-D or 3-D plot object for a curve

Calling Sequence
curve([[x1, y1], [x2, y2], ...], options)
curve([[x1, y1, z1], [x2, y2, z2], ...], options)

Parameters
[x1, y1], [x2, y2], ... - list of points in 2-D
[x1, y1, z1], [x2, y2, z2], ... - list of points in 3-D
options - (optional) equations of the form option=value. For a complete list, see plot/options and plot3d/options.

Description
• The curve command creates a two- or three-dimensional plot data object, which when displayed is a curve joining points in the specified list. The first argument to curve must be a list of points. They can be either 2-D or 3-D.
• The plot data object produced by the curve command can be used in a PLOT or PLOT3D data structure, or displayed using the plots[display] command.
• Remaining arguments are interpreted as options, which are specified as equations of the form option = value. For more information, see plottools, plot/options and plot3d/options.

Examples
> with(plottools):
> with(plots):
> display(curve([[0, 0], [3, 4]], color = red, linestyle = dash, thickness = 2))
> display(curve([[0, 0, 0], [1, 1, 1], [1, 1, 0], [1, 2, 1], [0, 0, 0]]), axes = frame, color = green, orientation = [-70, 40], thickness = 3)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.83638596534729, "perplexity": 1889.0743162827557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301730.31/warc/CC-MAIN-20220120065949-20220120095949-00674.warc.gz"}
https://www.physicsforums.com/threads/number-of-atoms-trapped-in-an-atom-trap.587904/
# Number of atoms trapped in an atom trap 1. Mar 17, 2012 ### stigg 1. The problem statement, all variables and given/known data a group of atoms are confined in a point like volume in a laser based atom trap, the laser light causes each atom to emit 1.0 x 10^6 photons of wavelength 780 nm every second. the sensor has an area of 1 square centimeter and measures the light intensity emanating from the trap to be 1.6 nW when placed 25 cm away from the trapped atoms. assuming each atom emits photons with equal probability in all directions, determine the number of trapped atoms. 2. Relevant equations 3. The attempt at a solution i honestly don't know where to begin with this problem, any guidance would be greatly appreciated, thanks 2. Mar 17, 2012 ### fluidistic Calculate the power emitted by a single atom. You have the number of photons it emits per second as well as their frequency so you can calculate the total energy emitted in 1 s, and thus get the power emitted. The energy emitted by a single atom will be distributed homogeneously radially from it. Since your sensor is 1 cm² and inside an imaginary sphere of radius 25 cm, you can calculate how much % of the light (and hence power) coming from a single atom is received. Can you figure out how to continue? 3. Mar 19, 2012 ### stigg i found the frequency using f=v/$\lambda$ and then the power using hf emitted by each photon then multiply that by the number of photons released per second, is this correct? Last edited: Mar 19, 2012 4. Mar 19, 2012 ### fluidistic I'm not sure what you mean. The "hf" term is energy, not power. With the "hf" term you get the energy of a single photon. A single atom emits 10^6 of these photons, per second. 5. Mar 19, 2012 ### stigg ah yes you're right my mistake, so i used E=hf and multiplied it by 10^6 photons per second which would then in turn be equal to the power, correct? Last edited: Mar 19, 2012 6. Mar 19, 2012 ### fluidistic Yes, exactly. Let P be this power. It will be distributed uniformly radially from the source. For instance, at 1 m away from the source, the power emitted by a single atom will be distributed over a surface of $4 \pi$. At a distance r, $4 \pi r ^2$. In the problem you are given a distance and the area of the sensor. Do you have an idea how to calculate the percentage of emitted photons received in the sensor? 7. Mar 19, 2012 ### stigg first find the total intensity over the surface area of the sphere with radius 25cm and then find what fraction of that the sensor area is? 8. Mar 19, 2012 ### fluidistic Yes. 9. Mar 19, 2012 ### stigg i have the ratio of the sensor area to the total sphere area, however how do i use that to find the emitted photons in the sensor 10. Mar 19, 2012 ### fluidistic Ok good. This ratio is the percentage of the emitted photons that are received by the sensor, for all atoms. In the problem statement they give the information that the intensity in the sensor is 1.6 nW. You can determine the number of photons the sensor receives per second. Let's suppose (unrealistically) that the sensor receives 100 photons per second. And that you calculated that a single atom emits 30 photons per second. Say you calculated the ratio we're talking about as being 1%. It means that the total source (or all atoms together) emits 100 times more photons than the ones your sensor receives. Thus, in total 10,000 photons are emitted from the source per second. There's a very small last step, let's see if you can figure it out. :) 11.
Mar 19, 2012 ### stigg i divided the number of photons hitting the sensor per second by the ratio we talked about to obtain the total number of photons released from the source per second. i then divided this by the number emitted per atom to find the number of atoms in the source. does this sound correct? 12. Mar 19, 2012 ### fluidistic I think so :) Post your numbers just in case the result is strange. 13. Mar 19, 2012 ### stigg i calculated 1.95 x 10^12 atoms in the source, would you like my other numbers as well 14. Mar 19, 2012 ### fluidistic No need, that looks "possible" at first glance. 15. Mar 19, 2012 ### stigg sounds good, thanks a bunch for the help, glad i came here, it was very useful! 16. Mar 19, 2012 ### fluidistic You're welcome and feel free to use this forum (as much as I do).
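For anyone re-doing the final step, the whole chain of reasoning from the thread fits in a few lines of Python (standard values of h and c are my own inputs; the 1 cm² sensor, 25 cm distance, 1.6 nW reading and 10^6 photons per atom per second come from the problem statement). With these inputs the arithmetic lands near 5 x 10^7 atoms, so the 1.95 x 10^12 figure quoted above is worth re-checking:

```python
# Number of trapped atoms from the measured power, following the thread's steps.
import math

h, c = 6.626e-34, 3.00e8             # Planck's constant (J s), speed of light (m/s)
wavelength = 780e-9                  # m

E_photon = h * c / wavelength        # energy per photon, about 2.55e-19 J
P_atom = 1.0e6 * E_photon            # power emitted by one atom (10^6 photons/s)

sensor_area = 1.0e-4                 # 1 cm^2 in m^2
sphere_area = 4 * math.pi * 0.25**2  # sphere of radius 25 cm, in m^2
fraction = sensor_area / sphere_area # share of each atom's light that hits the sensor

P_sensor = 1.6e-9                    # measured power, W
print(f"{P_sensor / (P_atom * fraction):.2e}")   # roughly 4.9e+07 atoms
```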
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8112021684646606, "perplexity": 833.3067685159506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.75/warc/CC-MAIN-20171021224608-20171022004608-00541.warc.gz"}
https://www.lesswrong.com/posts/5bd75cc58225bf067037532c/thoughts-on-quantilizers
# Thoughts on quantilizers

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. A putative new idea for AI control; index here. This post will look at some of the properties of quantilizers, when they succeed and how they might fail. Roughly speaking, let $V$ be some true objective function that we want to maximise. We haven't been able to specify it fully, so we have instead a proxy function $U$. There is a cost function $C$ which measures how much $U$ falls short of $V$. Then a quantilizer will choose actions (or policies) randomly from the top $q$ of actions available, ranking those actions according to $U$. It is plausible that for standard actions or policies, $U$ and $V$ are pretty similar. But when we push to maximising $U$, the tiny details where $U$ and $V$ differ will balloon, and the cost $C$ can grow very large indeed. This could be illustrated roughly by figure I, where $U$ and $C$ are plotted against each other; imagine that $C$ is on a log scale. The blue areas are possible actions that can be taken. Note a large bunch of actions that are not particularly good for $U$ but have low cost, a thinner tail of more optimised actions that have higher $U$ and still have low cost, and a much thinner tail that has even higher $U$ but high cost. The $U$-maximising actions with maximal cost are represented by the red star. Figure I thus shows a situation ripe for some form of quantilization. But consider figure II: In figure II, the only way to get high $U$ is to have a high $C$. The situation is completely unsuited for quantilization: any maximiser, even a quantilizer, will score terribly under $V$. But that means mainly that we have chosen a terrible $U$. Now, back to figure I, where quantilization might work, at least in principle. The ideal would be situation Ia; here blue represents actions below the top cut-off, green those above (which include the edge-case red-star actions, as before): Here the top $q$ of actions all score a good value under $U$, and yet most of them have low cost. But even within the broad strokes of figure I, quantilization can fail. Figure Ib shows a first type of failure: Here the problem is that the quantilizer lets in too many mediocre actions, so the expectation of $U$ (and $V$) is mediocre; with a smaller $q$, the quantilizer would be better. Another failure mode is figure Ic: Here the $q$ is too low: all the quantilized solutions have high cost.

# Another quantilizer design

An idea I had some time ago was that, instead of taking the top $q$ of the actions, the quantilizer instead chooses among the actions that are within $\epsilon$ of the top $U$-maximising actions. Such a design would be less likely to encounter situations like Ib, but more likely to face situations like Ic.

# What can be done?

So, what can be done to improve quantilizers? I'll be posting some thoughts as they develop, but there are two ideas that spring to mind immediately. First of all, we can use CUA oracles to investigate the shape of the space of actions, at least from the perspective of $U$ ($C$, like $V$, cannot be calculated explicitly). Secondly, there's an idea that I had around low-impact AIs. Basically, it was to ensure that there was some action the AI could take that could easily reach some approximation of its goal. For instance, have a utility function that encourages the AI to build one paperclip, and cap that utility at one. Then scatter around some basic machinery to melt steel, stretch it, give the AI some manipulator arms, etc... The idea is to ensure there is at least one safe policy that gives the AI some high expected utility.
Then if there is one policy, there's probably a large number of similar policies in its vicinity, safe policies with high expectation. Then it seems that quantilization should work, probably best in its 'within $\epsilon$ of the maximal policy' version (working well because we know the cap of the utility function, hence have a cap on the maximal policy). Now, how do we know that a safe policy exists? We have to rely on human predictive abilities, which can be flawed. But the reason we're reasonably confident in this scenario is that we believe that we could figure out how to build a paperclip, given the stuff the AI has lying around. And the AI would presumably do better than us.
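To make the selection rule concrete, here is a minimal toy sketch of a quantilizer over a finite action set, written in Python and using the notation above ($U$ is the proxy objective, $q$ the fraction kept). This is only an illustration, not anyone's reference implementation:

```python
# Toy q-quantilizer: sample uniformly from the top q fraction of actions ranked by U.
import random

def quantilize(actions, U, q, rng=random):
    ranked = sorted(actions, key=U, reverse=True)
    k = max(1, int(q * len(ranked)))
    return rng.choice(ranked[:k])

# Example: actions are integers and the proxy simply rewards large values.
actions = list(range(1000))
print(quantilize(actions, lambda a: a, q=0.1))   # some action in the top 10% (900-999)

# The 'within-epsilon' variant discussed above would instead keep every action a
# with U(a) >= max(U over actions) - epsilon before sampling.
```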
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8320112228393555, "perplexity": 1164.9371141737627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00123.warc.gz"}
https://cs.stackexchange.com/questions/54775/why-do-grammars-in-chomsky-normal-form-have-derivations-of-length-2n-1
# Why do grammars in Chomsky Normal Form have derivations of length 2n-1? I would like to know how they obtained the expression $2n-1$ as stated in this excerpt from the article (p.3): The key advantage is that in Chomsky Normal Form, every derivation of a string of n letters has exactly 2n−1 steps. I could get how the $2n$ comes about, since there are only 2 variables on the R.H.S of each production, but couldn't get how the $−1$ comes into $2n−1$. • @justin This is a typical question that can be interesting but was formulated in a poor way (hence the downvotes). As Yuval Filmus expressed, you should edit your answer so that it reflects your question more properly (e.g. This article says this about that and I understand this aspect [you understand where the $2n$ comes from] because of this & that ["since there are only 2 variables on the R.H.S (...)"] but I don't understand this specific aspect [the $-1$ part.]). In short: while you made your problem clear, you didn't show any effort to explain what you understood and what you didn't. Mar 29 '16 at 17:31 • @Auberon. I upvoted your comment (FWIW), but I must confess that it took me a while to check that the delimiters were properly nested: "...properly (e.g., [my note, the comma should be here] ... [...] ... [... (...)] ... [...])". Yup, this Dyck language sample is legitimate. [insert grinning emoticon here.] Apr 5 '16 at 0:22 Let $n$ be the length of a string. We start with the (non-terminal) symbol $S$ which has length $n=1$. Using $n - 1$ rules of the form $(non-terminal) \rightarrow (non-terminal)(non-terminal)$ we can construct a string containing $n$ non-terminal symbols. Then on each non-terminal symbol of said string of length $n$ we apply a rule of the form $(non-terminal) \rightarrow (terminal)$. i.e. we apply $n$ rules. In total we will have applied $n - 1 + n = 2n - 1$ rules. example Observe the following grammar in Chomsky normal form. \begin{align} S & \to AB \\ A & \to BC | AC\\ A & \to h|b\\ B & \to a \\ C & \to z \\ \end{align} Consider the following derivation \begin{align} \text{Current string} & & \text{rule applied} & & \text{#rules applied} & & \text{#length of string} \\ S & & \text{\\} & & 0 & & 1 \\ AB & & S \to AB & & 1 & & 2 \\ BCB & & A \to BC & & 2 & & 3 \\ \vdots & & \vdots & & \vdots & & \vdots \\ A\cdots CB & & \text{[multiple rules]} & & n-1 & & n \end{align} This last line represents a string containing only non-terminals. You can see that a string containing $n$ non-terminals is derived using $n-1$ rules. Let's continue. Applying $n$ rules of the form $A \to a$ to each non-terminal in the string above gives you a string containing only terminals and thus a string from the language generated by the grammar. The length of the string has not changed (it's still $n$) but we applied an additional $n$ rules so in total we have applied $n-1 + n = 2n - 1$ rules. While this explanation hopefully gives you an intuitive understanding, I think it would be a useful exercise to construct a formal proof using induction. • Could you tell why we use $n−1$ rules of the form $A \to BC$. Mar 29 '16 at 9:23 • @justin Added an example to my answer. Mar 29 '16 at 17:23 • Could you tell how the length of string becomes 1 in the first step. Mar 30 '16 at 6:20 • @justin $S$ isn't a rule, it's a symbol... And you start every derivation with it. You take $S$ and then you apply your first rule of form $S \to AB$. So when you only have $S$, you haven't applied any rules yet.
Mar 31 '16 at 15:53 • @OmarHossamAhmed Then it means there is no string of length $n$ in the language generated by the grammar, or at least you've used the wrong non-terminal rules. Nov 6 '18 at 10:46 And each of the $A \to B C$ productions makes the sentential form one longer. You start with length $1$, so reaching $n$ means $n - 1$ steps. If a string has length $n$, there will be $n$ steps to get the terminals. In all, $2 n - 1$ steps. • Do you mean to say that the start symbol is of length 1? Mar 29 '16 at 6:46 • @justin Yes, he does. Mar 29 '16 at 17:20 Let us consider a simple example. A -> BC B -> b C -> c The string to be generated is bc. Then the steps are: A -> BC -> bC -> bc Thus the number of steps required is 3, that is, $$2n-1$$ for $n=2$.
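To see the count in code, here is a small Python sketch of my own (not from the thread; the grammar encoding and function names are illustrative): it derives a terminal string from a tiny CNF grammar and counts the rule applications, which comes out to 2n−1.

# Small illustrative sketch: derive a string with a CNF grammar and count
# the rule applications; for a string of length n the total is 2n - 1.
branch = {"S": ("A", "B")}        # rules of the form X -> YZ
emit = {"A": "a", "B": "b"}       # rules of the form X -> terminal

def derive(symbol):
    """Return (terminal string, number of rule applications used)."""
    if symbol in branch:
        left, right = branch[symbol]
        s1, c1 = derive(left)
        s2, c2 = derive(right)
        return s1 + s2, 1 + c1 + c2      # one X -> YZ step plus the sub-derivations
    return emit[symbol], 1               # one X -> terminal step

word, steps = derive("S")
print(word, len(word), steps)            # ab 2 3, and indeed 2*2 - 1 == 3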
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9881077408790588, "perplexity": 819.8886162519963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00683.warc.gz"}
https://brilliant.org/problems/perfect-matching-on-a-regular-graph/
# Perfect matching on a regular graph In this problem, graphs are simple. A matching on a graph $$G$$ is a subset of $$E(G)$$ such that no two edges in the subset share an end. The size of a matching is the number of edges in it. A perfect matching on a graph $$G$$ is a matching whose size is $$\left\lfloor \frac{|V(G)|}{2} \right\rfloor$$; that is, a matching that's as big as possible given the number of vertices. For example, in the following image, the left graph shows a matching of size $$2$$. It is not perfect, and in fact it doesn't have a perfect matching. Adding an edge to give the right graph, however, gives a matching of size $$3$$ and hence a perfect matching. A $$k$$-regular graph is a graph where every vertex has degree $$k$$. For $$0 \le k \le 2015$$, how many values of $$k$$ make this statement true: "all $$k$$-regular graphs have a perfect matching"?
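For readers who want to experiment before answering, here is a rough sketch using the networkx library (assuming it is installed; the test graphs are just samples of mine, not part of the problem) that checks whether a graph has a matching of size floor(|V(G)|/2).

# Rough sketch (assumes networkx is available): does G have a matching of
# size floor(|V(G)|/2), i.e. a perfect matching in the sense defined above?
import networkx as nx

def has_perfect_matching(G):
    M = nx.max_weight_matching(G, maxcardinality=True)   # a maximum matching
    return len(M) == len(G) // 2

# K_4 is 3-regular and has a perfect matching
print(has_perfect_matching(nx.complete_graph(4)))    # True
# two disjoint triangles form a 2-regular graph with no perfect matching
print(has_perfect_matching(nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))))    # False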
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8609073758125305, "perplexity": 108.78306829693015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00099-ip-10-171-10-70.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/64095/comparing-countable-models-of-zfc
Comparing countable models of ZFC Let us consider the class $\cal C$ of countable models of ZFC. For ${\mathfrak A}=(A,{\in}_A)$ and ${\mathfrak B}=(B,{\in}_B)$ in $\cal C$ I say that ${\mathfrak A}<{\mathfrak B}$ iff there is an injective map $i: A \to B$ such that $x {\in}_A y \Leftrightarrow i(x) {\in}_B i(y)$ (note that this is a much weaker requirement for $i$ than to be an elementary embedding). My two questions are: (1) Is there a simple construction of two incomparable models ${\mathfrak A},{\mathfrak B}$ ? (i.e. neither ${\mathfrak A}<{\mathfrak B}$ nor ${\mathfrak B}<{\mathfrak A}$). (2) Given two models ${\mathfrak A},{\mathfrak B}$ in $\cal C$, is there always a third model ${\mathfrak C}$ in $\cal C$ such that ${\mathfrak A}<{\mathfrak C}$ and ${\mathfrak B}<{\mathfrak C}$ ? - I cannot see the {\mathfrak A}. Please use something like \mathcal{A} which does appear. – William Sep 13 '11 at 4:43 @William : well, I do see the mathfrak A printed as it should on my computer. It's your browser's fault, not mine. – Ewan Delanoy Sep 13 '11 at 4:53 Can the continuum hypothesis be expressed as a quantifier-free sentence? I think $\aleph_1 \in 2^{\aleph_0}$ suffices. There is no quantifier here; however, these are parameters in the model and I am not sure if (mere) embeddings need to map $\aleph_1$ of one model to $\aleph_1$ of the other model. However if this all works out, consider a countable model of the continuum hypothesis and one of its negation (by downward Löwenheim-Skolem). I believe embeddings preserve quantifier-free formulas; hence, there can be no embedding between these two. However, this assumes ordinals map to the same ordinals. – William Sep 13 '11 at 5:34 @William: Suppose that it could, then it would be a $\Delta_0$ sentence. Then it would be absolute between $\mathfrak A$ and $L^{\mathfrak A}$. In which case it is plain provable from ZFC. – Asaf Karagila Sep 13 '11 at 5:48 Ewan, you can try having $\mathfrak A$ transitive, and $\mathfrak B$ not. – Asaf Karagila Sep 13 '11 at 5:49 (2) seems true. Choose models with universes $\lbrace m_j\vert j<\omega\rbrace$, $\lbrace n_j\vert j<\omega\rbrace$ Consider language $L=\lbrace \in, a_i,b_i\vert i<\omega\rbrace$, and the theory $T=ZFC\cup\lbrace a_i\neq a_j\vert i,j\in\omega, m_i\neq m_j\rbrace\cup \lbrace a_i\in a_j\vert i,j\in\omega, m_i\in^{M_1} m_j\rbrace \cup \ldots$ It is clear that if $T$ is consistent, we can obtain a countable model in which $M_1,M_2$ embed monomorphically by downward Skolem. Choose a finite fragment of $T$. The formulas of $T$ don't relate $a$ and $b$ in any way, so it is effectively a fragment of ZFC plus two finite (well-founded and consistent) membership+non-membership graphs. But any such graph can be realized by a finite set in any model of ZFC, so by compactness $T$ is consistent. For (1) I think you can try to look at models which realize different subtrees of the Cantor tree (as subgraphs of their membership graphs). For example, one of them could have an infinite descending sequence and the other might not. It should be doable by the omitting types theorem. - Concerning question (1). I became very interested in this question last year---obsessed with it, actually---when I found myself unable to prove that any of the natural-seeming examples were actually instances of incomparability (for example, none of the approaches suggested in the various comments actually work). 
After my numerous attacks on it failed, I began seriously to doubt the strong intuition underlying the question, that there should be incomparable models. Eventually, I was able to show that indeed, any two countable models are comparable by embeddability. My paper is available at: The main theorems are: Theorem 1. Every countable model of set theory $\langle M,{\in^M}\rangle$ is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. Thus, there is an embedding $$j:\langle M,{\in^M}\rangle\to \langle L^M,{\in^M}\rangle$$ that is elementary for quantifier-free assertions in the language of set theory. The proof uses universal digraph combinatorics, including an acyclic version of the countable random digraph, which I call the countable random $\mathbb{Q}$-graded digraph, and higher analogues arising as uncountable Fraïssé limits, leading eventually to what I call the hypnagogic digraph, a set-homogeneous, class-universal, surreal-numbers-graded acyclic class digraph, which is closely connected with the surreal numbers. The proof shows that $\langle L^M,{\in^M}\rangle$ contains a submodel that is a universal acyclic digraph of rank $\text{Ord}^M$, and so in fact this model is universal for all countable acyclic binary relations of this rank. When $M$ is ill-founded, this includes all acyclic binary relations. The method of proof also establishes the following, which answers question (1). Version 2 on the archive, which will become visible in a few days, cites this question and Ewan Delanoy. Theorem 2. The countable models of set theory are linearly pre-ordered by embeddability: for any two countable models of set theory $\langle M,{\in^M}\rangle$ and $\langle N,{\in^N}\rangle$, either $M$ is isomorphic to a submodel of $N$ or conversely. Indeed, the countable models of set theory are pre-well-ordered by embeddability in order type exactly $\omega_1+1$. The proof shows that the embeddability relation on the models of set theory conforms with their ordinal heights, in that any two models with the same ordinals are bi-embeddable; any shorter model embeds into any taller model; and the ill-founded models are all bi-embeddable and universal. The proof method arises most easily in finite set theory, showing that the nonstandard hereditarily finite sets $\text{HF}^M$ coded in any nonstandard model $M$ of PA or even of $I\Delta_0$ are similarly universal for all acyclic binary relations. This strengthens a classical theorem of Ressayre, while simplifying the proof, replacing a partial saturation and resplendency argument with a soft appeal to graph universality. Theorem 3. If $M$ is any nonstandard model of PA, then every countable model of set theory is isomorphic to a submodel of the hereditarily finite sets $\langle \text{HF}^M,{\in^M}\rangle$ of $M$. Indeed, $\langle\text{HF}^M,{\in^M}\rangle$ is universal for all countable acyclic binary relations. In particular, every countable model of ZFC and even of ZFC plus large cardinals arises as a submodel of $\langle\text{HF}^M,{\in^M}\rangle$. Thus, inside any nonstandard model of finite set theory, we may cast out some of the finite sets and thereby arrive at a copy of any desired model of infinite set theory, having infinite sets, uncountable sets or even large cardinals of whatever type we like. The article closes with a number of questions, which you may find on my blog post about the article. I plan to make some mathoverflow questions about them in the near future. - This is really cool. Thanks! 
– Andrés Caicedo Jul 6 '12 at 15:05 See this related mathoverflow question: mathoverflow.net/questions/101821/… – JDH Jul 10 '12 at 3:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8769927024841309, "perplexity": 446.65928103481457}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464396.78/warc/CC-MAIN-20151124205424-00285-ip-10-71-132-137.ec2.internal.warc.gz"}
https://unapologetic.wordpress.com/2011/09/12/integrals-and-diffeomorphisms/
# The Unapologetic Mathematician ## Integrals and Diffeomorphisms Let's say we have a diffeomorphism $f:M^n\to N^n$ from one $n$-dimensional manifold to another. Since $f$ is both smooth and has a smooth inverse, we must find that the Jacobian is always invertible; the inverse of $J_f$ at $p\in M$ is $J_{f^{-1}}$ at $f(p)\in N$. And so — assuming $M$ is connected — the sign of the determinant must be constant. That is, $f$ is either orientation-preserving or orientation-reversing. Remembering that diffeomorphism is meant to be our idea of what it means for two smooth manifolds to be "equivalent", this means that $N$ is either equivalent to $M$ or to $-M$. And I say that this equivalence comes out in integrals. So further, let's say we have a compactly-supported $n$-form $\omega$ on $N$. We can use $f$ to pull back $\omega$ from $N$ to $M$. Then I say that $\displaystyle\int\limits_Mf^*\omega=\pm\int\limits_N\omega$ where the positive sign holds if $f$ is orientation-preserving and the negative if $f$ is orientation-reversing. In fact, we just have to show the orientation-preserving side, since if $f$ is orientation-reversing from $M$ to $N$ then it's orientation-preserving from $-M$ to $N$, and we already know that integrals over $-M$ are the negatives of those over $M$. Further, we can assume that the support of $f^*\omega$ fits within some singular cube $c:[0,1]^n\to M$, for if it doesn't we can chop it up into pieces that do fit into cubes $c_i$, and similarly chop up $N$ into pieces that fit within corresponding singular cubes $f\circ c_i$. But now it's easy! If $f^*\omega$ is supported within the image of an orientation-preserving singular cube $c$, then $\omega$ must be supported within $f\circ c$, which is also orientation-preserving since both $f$ and $c$ are, by assumption. Then we find \displaystyle\begin{aligned}\int\limits_N\omega&=\int\limits_{f\circ c}\omega\\&=\int\limits_{f(c([0,1]^n))}\omega\\&=\int\limits_{c([0,1]^n)}f^*\omega\\&=\int\limits_cf^*\omega\\&=\int\limits_Mf^*\omega\end{aligned} In this sense we say that integrals are preserved by (orientation-preserving) diffeomorphisms.
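A quick way to see the sign behaviour numerically is a one-dimensional sketch of my own (not part of the original post): pulling back ω = g(y) dy along an orientation-preserving map preserves the integral, while an orientation-reversing map flips its sign.

# 1-D sanity check: integrate the pullback of omega = g(y) dy along a
# diffeomorphism f of (0,1); f' > 0 preserves the value, f' < 0 flips the sign.
import numpy as np
from scipy.integrate import quad

g = lambda y: np.exp(-y**2)                     # omega = g(y) dy on N = (0, 1)

f_pres = lambda x: x**3                         # orientation-preserving, f' > 0
pullback_pres = lambda x: g(f_pres(x)) * 3 * x**2

f_rev = lambda x: 1 - x                         # orientation-reversing, f' < 0
pullback_rev = lambda x: g(f_rev(x)) * (-1)

I_N = quad(g, 0, 1)[0]
print(I_N, quad(pullback_pres, 0, 1)[0], quad(pullback_rev, 0, 1)[0])
# the second value matches I_N, the third is -I_N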
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9692643880844116, "perplexity": 168.79180542514672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929272.95/warc/CC-MAIN-20150521113209-00130-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.sarthaks.com/9836/in-case-of-negative-work-the-angle-between-the-force-and-displacement-is
# In case of negative work the angle between the force and displacement is 201 views in Physics In case of negative work, the angle between the force and displacement is (a) 0° (b) 45° (c) 90° (d) 180° by (128k points) In case of negative work, the angle between the force and displacement is (d) 180°.
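The reasoning behind the answer is the work formula W = F d cos θ, which is negative exactly when cos θ < 0. A tiny script of my own (the force and displacement values are made up) shows the sign of W at each listed angle.

# Illustration with made-up F and d: W = F*d*cos(theta) is negative only when
# the angle between force and displacement exceeds 90 degrees.
import math

F, d = 10.0, 2.0                       # newtons, metres (arbitrary values)
for theta in (0, 45, 90, 180):
    W = F * d * math.cos(math.radians(theta))
    print(theta, round(W, 3))
# 0 -> 20.0, 45 -> 14.142, 90 -> 0.0, 180 -> -20.0  (only 180 degrees gives negative work)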
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914854764938354, "perplexity": 982.4697736354204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00245.warc.gz"}
https://peswiki.com/powerpedia:power
# PesWiki.com ## PowerPedia:Power Last edited by Andrew Munsey, updated on June 14, 2016 at 10:08 pm. In physics, power is the rate at which work is performed or energy is transferred. In the SI system of measurement, power is measured in Watts (symbol: W). As a rate of change of work done or the energy of a subsystem, power is: : P=\frac{\mathrm{d}W}{\mathrm{d}t} where :P is power :W is Mechanical work :t is time. The average power (often simply called "power" when the context makes it clear) is the average amount of work done or energy transferred per unit time. The instantaneous power is then the limiting value of the average power as the time interval Δt approaches zero. : P=\lim_{\Delta t\rightarrow 0} \frac{\Delta W}{\Delta t} = \lim_{\Delta t\rightarrow 0} P_\mathrm{avg} When the rate of energy transfer or work is constant, all of this can be simplified to : P=\frac{W}{t} = \frac{E}{t} , where :W and E are, respectively, the work done or energy transferred in time t. #### Units The units of power are units of energy divided by time. The SI unit of power is the watt, which is equal to one joule per second. Non-SI units of power include the metric horsepower or Pferdestärke (PS), the cheval-vapeur (CV) and foot-pounds per minute. One unit of horsepower is equivalent to 33,000 foot-pounds per minute, or the power required to lift 550 pounds one foot in one second, and is equivalent to about 746 watts. Other units include dBm, a logarithmic measure with 1 milliwatt as reference, and (food) calories per hour (often referred to as kilocalories per hour). #### Mechanical power In mechanics, the work done on an object is related to the forces acting on it by :W = \mathbf{F} \cdot \mathbf{s} where :F is Force :s is the displacement of the object. This is often summarized by saying that work is equal to the force acting on an object times its displacement (how far the object moves while the force acts on it). Note that only motion that is along the same axis as the force "counts": motion in the same direction as the force gives positive work, motion in the opposite direction gives negative work, and motion perpendicular to the force yields zero work. Differentiating by time gives that the instantaneous power is equal to the force times the object's Velocity v(t): :P(t) = \mathbf{F}(t) \cdot \mathbf{v}(t). The average power is then :P_\mathrm{avg} = \frac{1}{\Delta t}\int\mathbf{F} \cdot \mathbf{v}\,\mathrm{d}t. This formula is important in characterizing engines: the power put out by an engine is equal to the force it exerts times its velocity. #### Electrical power Main article: PowerPedia:Electric power ##### Instantaneous electrical power The instantaneous electrical power P delivered to a component is given by : P(t) = I(t) \cdot V(t) \,\! 
where :P(t) is the instantaneous power, measured in Watts (joules per second) :V(t) is the potential difference (or voltage drop) across the component, measured in Volts :I(t) is the Current (electricity) flowing through it, measured in Amperes If the component is a resistor, then: : P=I^2 \cdot R = \frac{V^2}{R} where :R = V/I is the Electrical resistance, measured in ohms. If the component is reactive (e.g. a Capacitor or an inductor), then the instantaneous power is negative when the component is giving stored energy back to its environment, i.e., when the current and voltage are of opposite signs. ##### Average electrical power for sinusoidal voltages The average power consumed by a sinusoidally-driven linear two-terminal electrical device is a function of the root mean square current passing through the device, and of the phase angle between the voltage and current sinusoids. That is, :P=I \cdot V \cdot \cos\phi \,\! where :P is the average power, measured in Watts :I is the root mean square value of the sinusoidal alternating current (AC), measured in Amperes :V is the root mean square value of the sinusoidal alternating voltage, measured in Volts :φ is the phase angle between the voltage and the current sine functions. The amplitudes of sinusoidal voltages and currents, such as those used almost universally in mains electrical supplies, are normally specified in terms of root mean square values. This makes the above calculation a simple matter of multiplying the two stated numbers together. This figure can also be called the Effective power, as compared to the larger apparent power, which is expressed in volt-amperes reactive (VAR) and does not include the cos φ term due to the current and voltage being out of phase. For simple domestic appliances or a purely resistive network, the cos φ term (called the power factor) can often be assumed to be unity, and can therefore be omitted from the equation. In this case, the effective and apparent power are assumed to be equal. ##### Average electrical power for AC : P = {1 \over T} \int_{0}^{T} i(t) v(t)\, dt Where v(t) and i(t) are, respectively, the instantaneous voltage and current as functions of time. For purely resistive devices, the average power is equal to the product of the rms voltage and rms current, even if the waveforms are not sinusoidal. The formula works for any waveform, periodic or otherwise, that has a mean square; that is why the rms formulation is so useful. For devices more complex than a resistor, the average effective power can still be expressed in general as a power factor times the product of rms voltage and rms current, but the power factor is no longer as simple as the cosine of a phase angle if the drive is non-sinusoidal or the device is not linear. ##### Peak power and duty cycle In a train of identical pulses, the instantaneous power is a periodic function of time. The ratio of the pulse duration to the period is equal to the ratio of the average power to the peak power. It is also called the duty cycle. 
In the case of a periodic signal s(t) of period T, like a train of identical pulses, the instantaneous power p(t) = |s(t)|^2 is also a periodic function of period T. The peak power is simply defined by: : P_0 = \max (p(t)) The peak power is not always readily measurable, however, and the measurement of the average power P_\mathrm{avg} is more commonly performed by an instrument. If one defines the energy per pulse as: : \epsilon_\mathrm{pulse} = \int_{0}^{T}p(t) \mathrm{d}t then the average power is: : P_\mathrm{avg} = \frac{1}{T} \int_{0}^{T}p(t) \mathrm{d}t = \frac{\epsilon_\mathrm{pulse}}{T} One may define the pulse length \tau such that P_0\tau = \epsilon_\mathrm{pulse} so that the ratios : \frac{P_\mathrm{avg}}{P_0} = \frac{\tau}{T} are equal. These ratios are called the duty cycle of the pulse train. #### Power in optics In optics, the optical power of a lens or other optical device is its ability to focus light. It is measured in dioptres (inverse metres), and is equal to one over the focal length of the optical device.
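As a numerical illustration of the sinusoidal average-power formula above (my own sketch; the peak voltage, current, frequency and phase values are made up), time-averaging p(t) = i(t)·v(t) over one period reproduces P = I·V·cos φ computed from the rms values.

# Sketch with made-up amplitudes and phase: the time average of i(t)*v(t)
# over one period equals Vrms * Irms * cos(phi) for sinusoidal waveforms.
import numpy as np

V0, I0, phi = 325.0, 10.0, np.pi / 6      # peak volts, peak amps, phase lag
T = 0.02                                  # period for a 50 Hz supply
t = np.linspace(0.0, T, 100_001)

v = V0 * np.sin(2 * np.pi * t / T)
i = I0 * np.sin(2 * np.pi * t / T - phi)

P_avg = np.mean(v * i)                                    # numerical time average
P_formula = (V0 / np.sqrt(2)) * (I0 / np.sqrt(2)) * np.cos(phi)
print(P_avg, P_formula)                                   # both close to ~1407 W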
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9785837531089783, "perplexity": 1046.6186262599347}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825700.38/warc/CC-MAIN-20171023054654-20171023074654-00091.warc.gz"}
https://www.physicsforums.com/threads/solving-a-system-of-equations.704766/
# Solving a system of equations. 1. Aug 8, 2013 1. The problem statement, all variables and given/known data Solve simultaneously: 6(x + y) = 5xy, 21(y + z) = 10yz, 14(z + x) = 9zx 2. Relevant equations - 3. The attempt at a solution Obviously one of the solutions is (0,0,0), but I'm more interested in finding the other. Expanding these three equations, I get: 6x+6y=5xy 21y+21z=10yz 14z+14x=9zx But what now? I've no experience solving these types of systems. Please give hints. 2. Aug 8, 2013 ### pasmith Multiply each equation by the unknown which doesn't appear in it so that the right hand side of each equation is then a multiple of $xyz$. You can then eliminate $xyz$ in three ways to end up with three linear simultaneous equations for $xy$, $xz$ and $yz$. Having solved those, use the fact that $(xy)(xz)(yz) = (xyz)^2$ to find $x$, $y$ and $z$ up to a sign. 3. Aug 8, 2013 ### Ray Vickson Even better: if p = xyz, one can express xy, xz and yz as numerical multiples of p (by solving linear equations with p in the right-hand-sides), so---if x, y and z are non-zero---one gets a unique solution for (x,y,z), with no sign ambiguities. Also: explore what must happen if one of the variables, say x, is zero. 4. Aug 9, 2013 Thanks for the hints, but I used a different method to solve this system. From eq. 1 I got y=(6x)/(5x-6). I substituted that in eq. 2 to get x in terms of z. ( x=(126z)/(126+45z) ) Then I substituted that into eq. 3 to get a quadratic equation in z with solutions z=0,7. From z=7, I got x=2. Then I got y=3 from the relationship between x and y obtained earlier from eq. 1. But I felt that the arithmetic involved was quite daunting and there must be a better way to solve it. I tried using the method that pasmith recommended but am feeling a bit lost. Can anyone please explain a simpler method to solve this system? 5. Aug 9, 2013 ### Ray Vickson Let $a = xy, \;b = yz, \; c = zx \:\text{ and }\: p = xyz.$ Multiply the first equation by z, the second by x and the third by y to get $$6b+6c=5p\\ 21a+21c=10p \\ 14a+14b=9p$$ This is a simple 3x3 linear system which can be solved using grade-school methods, to get $a = p/7,\: b = p/2, \: c = p/3.$ Assuming $x,y,z\neq 0$ we have $$xy = xyz/7 \Longrightarrow z = 7 \\ yz = xyz/2 \Longrightarrow x = 2\\ zx = xyz/3 \Longrightarrow y=3$$ 6. Aug 10, 2013 That's awesome! Thanks a lot!
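If you want to let a computer check the algebra, a short sketch with the sympy library (assuming it is installed) solves the same system symbolically and should return both solutions found above.

# Sketch (assumes sympy is available): solve the system symbolically and
# confirm the solutions (0, 0, 0) and (2, 3, 7).
import sympy as sp

x, y, z = sp.symbols('x y z')
eqs = [sp.Eq(6 * (x + y), 5 * x * y),
       sp.Eq(21 * (y + z), 10 * y * z),
       sp.Eq(14 * (z + x), 9 * z * x)]
print(sp.solve(eqs, (x, y, z)))   # expect [(0, 0, 0), (2, 3, 7)]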
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8917327523231506, "perplexity": 510.6030910897548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107487.10/warc/CC-MAIN-20170821022354-20170821042354-00007.warc.gz"}
http://2014.igem.org/wiki/index.php?title=Team:Aberdeen_Scotland/Modeling/QS&oldid=363007
# Team:Aberdeen Scotland/Modeling/QS # Quorum Sensing Model Our goal with this model was to analyse our system in detail so that we learn more about its spatial features. Namely, we wanted to see if co-localised cells will be able to distinguish each other from far-away cells through quorum sensing and, if so, under what conditions. This was essential to our design, as initially we envisioned that cells would communicate and create an AND-gate type of response in the presence of the corresponding stimuli. Since Quorum Sensing (QS) occurs due to diffusion of molecules in the medium, we naturally decided to employ the diffusion equation. Diffusion Equation (more precisely Fick's 2nd Law) [1] $$\frac{\partial{C}}{\partial{t}}=D\nabla^2{C}$$ $$C - concentration$$ $$D - diffusion\:constant$$ However, in our system there are dedicated Sender and Receiver cells, which is different from natural QS. Sender cells can only produce and diffuse AHLs into the medium without the ability to react to it, and Receiver cells can only react to AHLs without the ability to produce it. This arrangement made our system particularly interesting, as we will discuss further down. • ### Initial Assumptions • Constant production of AHL from Sender cells • Receivers react to the AHL concentration around them • E. coli cell-shape symmetry is taken into account Initially we wanted to solve the equation in a particular case, so that we gain some intuition for the situation. Thus, in a big enough medium after some time T we can say that $$\frac{\partial{C}}{\partial{t}}=0 \implies D\nabla^2{C}=0$$ Finally, assuming spherical coordinates we get: $$C(R)=\frac{K}{R}$$ for some constant K. Now if we look at the flux through a Gaussian surface around a Sender cell, we find: $$\phi=-D\nabla{C}$$ which directly leads to a solution of the form $$C(R)=\frac{Q}{4\pi{}DR}$$ where Q is the production of AHLs by the Sender per unit time. Knowing this we were able to simulate Senders and look at the resulting AHL concentration in the medium. Fig.1: Concentration potential around single sender (top view) Fig.2: Concentration potential around single sender (side view) Fig.3: Concentration potential in medium with many senders From Fig.3 you can see that the overall "background" concentration in the medium is very uniform regardless of cell distribution. This quick observation made us investigate further and calculate precisely whether Receiver cells will be able to tell the difference between a local high concentration and simply a lot of Senders far away in the medium. • Consider the following situation: • Medium of 1 ml volume • A spherical shell of radius 150 μm (roughly 100 cell radii) of cells concentrated in the middle of that volume • Assume the concentrated shell has 3 orders of magnitude higher concentration of cells per volume • $$\frac{V_{shell}}{V_{volume}}=10^{3}$$ Evaluating the difference in contribution to the overall background of AHLs due to the shell and the surrounding volume of cells gives surprising results. $$C(V_{volume}-V_{shell})=\iiint_{V_{diff}} \ \frac{q_{i}}{4\pi{}Dr_{i}}\,dV$$ If we compare this difference to the total concentration, we see that its contribution is insignificant (~6 orders of magnitude smaller). This realisation played a crucial role in the following adjustment of our assay design. It was obvious at this point that our diagnostic method cannot rely on co-localization of cells through QS. 
This led to the adjustment of isolating the binding of Sender and Receiver cells to their respective epitopes into two separate phases and then recombining them for the QS response. Essentially, QS will only occur if cells have previously bound to an epitope, so that they can be transferred to the final "tube". If they have not bound anything they will be excluded from the final culture. Moreover, further analysis revealed that if cells are bound to a surface through surface-mounted epitopes, the QS response will be achieved earlier and will be stronger. This is due to the spatial symmetry, as the "flux" of AHLs can thus only diffuse in a semi-sphere. This was further confirmed experimentally by comparing QS cells bound by Poly-L-Lysine on a plate to QS cells free in medium. Fig.4 GFP fluorescence of surface bound cells (triangle) and freely suspended cells (circle). Overall, immobilised T9002/K1090000 pairs exhibited a higher rate of GFP response, and a higher absolute GFP response after 7 hours, than pairs free in suspension (Fig.4). ### References [1] Jean Philibert, One and a Half Century of Diffusion: Fick, Einstein, before and beyond, Diffusion Fundamentals 2, 2005 1.1–1.10 [2] Ward, J.P., J.R. King and A.J. Koerber. "Mathematical modelling of quorum sensing in bacteria." IMA Journal of Mathematics Applied in Medicine and Biology 2001: 18, 263-292
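To reproduce the kind of background-contribution comparison described in the model above, here is a minimal numpy sketch of my own (the sender positions, production rate Q and diffusion constant D are made-up illustrative values): it sums the steady-state point-source profile C(R) = Q/(4πDR) over a nearby cluster of senders and over many far-away senders, so the two contributions at a receiver can be compared directly.

# Minimal sketch (illustrative positions, Q = D = 1): the steady-state AHL
# concentration at a receiver is the sum of Q/(4*pi*D*R_i) over all senders.
import numpy as np

rng = np.random.default_rng(1)
Q, D = 1.0, 1.0

far_senders = rng.uniform(-500.0, 500.0, size=(1000, 3))   # micrometres, spread out
cluster = rng.normal(0.0, 150.0, size=(50, 3))              # shell-like cloud near the centre

def concentration(receiver, senders):
    R = np.linalg.norm(senders - receiver, axis=1)
    R = np.maximum(R, 1.0)          # clamp below one cell radius to avoid blow-up
    return np.sum(Q / (4.0 * np.pi * D * R))

receiver = np.zeros(3)
print(concentration(receiver, cluster), concentration(receiver, far_senders))
# comparing the two numbers shows how much of the background comes from far-away cells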
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8854988217353821, "perplexity": 1891.4548589023082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911092.63/warc/CC-MAIN-20200710144305-20200710174305-00085.warc.gz"}
https://thebrickinthesky.wordpress.com/2013/02/23/maths-with-python-2-rossler-system/
# maths with Python 2: Rössler system After installing python and the packages to code some math, it's time to use them. Open the GUI and File>New Window. This time, instead of writing the commands directly on the command window, we are going to make a script with all the operations. We are going to solve numerically the Rössler equation system and learn some tricks along the way. The system is composed of this set of equations which describe the evolution of the variables $x\quad y \quad z$. $\left\{ \begin{matrix} \frac{\mathrm{d}x}{\mathrm{d}t}=-y-z \\ \frac{\mathrm{d}y}{\mathrm{d}t}=x+ay \\ \frac{\mathrm{d}z}{\mathrm{d}t}=b+z(x-c) \end{matrix}\right.$ For parameters $a=0.13 \quad b=0.2 \quad c=6.5$ and initial conditions $x(0)=0 \quad y(0)=0 \quad z(0)=0$ using Maple with its differential plotting tools we can see what the solutions look like: The reason to use these values is just fun, no special interest in them. Only in parameter a, because for $0 < a$ the origin is an unstable point. Ok, so now turning the equations into numerical equations, how to do it? In this case we are going to use the easiest and simplest way, the forward Euler method. This is the easiest and simplest method because you don't need to solve any equation and the solution is built in a recursive manner. The Euler method consists in using the definition of the derivative as an approximation of the derivative. We have $\lim_{h\rightarrow 0} { \frac{ f(t+h) - f(t) }{ h} } = \frac{\mathrm{d} f(t)}{ \mathrm{d}t}$ and we approximate: ${\frac{ f(t+h) - f(t) }{ h}} \simeq \frac{\mathrm{d} f(t)}{ \mathrm{d}t}$ and the approximation is better the smaller $h$ is. So in our case we want to find $x(t) \quad y(t) \quad z(t)$ which satisfy the Rössler system. We start by splitting the time into equally spaced steps $t_n$ such that $t_{n+1}-t_n=h$ and we name $x_n=x(t_n)$ Using the Euler approximation we can rewrite the Rössler system as the next recursion relationship: $\left\{ \begin{matrix} \frac{x_{n+1}-x_n}{h}=-y_n-z_n \\ \frac{y_{n+1}-y_n}{h}=x_n+ay_n \\ \frac{z_{n+1}-z_n}{h}=b+z_n(x_n-c) \end{matrix}\right.$ We can now turn these equations into their explicit form $\left\{ \begin{matrix} x_{n+1}=x_n+h(-y_n-z_n) \\ y_{n+1}=y_n+h(x_n+ay_n) \\ z_{n+1}=z_n+h(b+z_n(x_n-c)) \end{matrix}\right.$ and since we know the initial conditions $x_0 \quad y_0 \quad z_0$ we can easily find a numerical solution to the system. Let's now see how to do that with Python. Get back to the open scripting window. This is the code to do what we need:
##This is a simple code to solve the Rössler equation system in python and plot the solutions.
##Made by Héctor Corte-León [email protected] on 23/02/2013
##The numerical method used is the first order forward Euler method.
#############################################
from numpy import *
from matplotlib import *
from scipy import *
from pylab import figure, show, setp
from mpl_toolkits.mplot3d import Axes3D
#We define a function which is going to be the recursive function.
def num_rossler(x_n,y_n,z_n,h,a,b,c):
    x_n1=x_n+h*(-y_n-z_n)
    y_n1=y_n+h*(x_n+a*y_n)
    z_n1=z_n+h*(b+z_n*(x_n-c))
    return x_n1,y_n1,z_n1
#Now we prepare some variables
#First the parameters
a=0.13
b=0.2
c=6.5
#Then the time interval and the step size
t_ini=0
t_fin=32*pi
h=0.0001
numsteps=int((t_fin-t_ini)/h)
#using these parameters we build the time.
t=linspace(t_ini,t_fin,numsteps)
#And the vectors for the solutions
x=zeros(numsteps)
y=zeros(numsteps)
z=zeros(numsteps)
#We set the initial conditions
x[0]=0
y[0]=0
z[0]=0
#This is the main loop where we use the recursive system to obtain the solution
for k in range(x.size-1):
    #We use the previous point to generate the new point using the recursion
    [x[k+1],y[k+1],z[k+1]]=num_rossler(x[k],y[k],z[k],t[k+1]-t[k],a,b,c)
#Now that we have the solution in vectors t,x,y,z it is time to plot them.
#We create a figure and 4 axes on it. 3 of the axes are going to be 2D and the fourth one is a 3D plot.
fig = figure()
ax1 = fig.add_axes([0.1, 0.7, 0.4, 0.2])
ax2 = fig.add_axes([0.1, 0.4, 0.4, 0.2])
ax3 = fig.add_axes([0.1, 0.1, 0.4, 0.2])
ax4 = fig.add_axes([0.55, 0.25, 0.35, 0.5],projection='3d')
#And we add vectors to each plot
ax1.plot(t, x,color='red',lw=1,label='x(t)')
ax1.set_xlabel('t')
ax1.set_ylabel('x(t)')
ax1.legend()
ax1.axis((t_ini,t_fin,min(x),max(x)))
ax2.plot(t, y,color='green',lw=1,label='y(t)')
ax2.set_xlabel('t')
ax2.set_ylabel('y(t)')
ax2.legend()
ax2.axis((t_ini,t_fin,min(y),max(y)))
ax3.plot(t, z,color='blue',lw=1,label='z(t)')
ax3.set_xlabel('t')
ax3.set_ylabel('z(t)')
ax3.legend()
ax3.axis((t_ini,t_fin,min(z),max(z)))
ax4.plot(x, y,z,color='black',lw=1,label='Evolution(t)')
ax4.set_xlabel('x(t)')
ax4.set_ylabel('y(t)')
ax4.set_zlabel('z(t)')
ax4.set_title('Evolution')
#When finished we show the figure with all the plots.
show()
Once you have the code written and understand what it is doing, press F5 to execute the script. After a short time a figure like this one will appear. And voilà! In the next tutorial we are going to construct the bifurcation diagram for x when we sweep parameter b. Enjoy this in the meantime! ## 9 thoughts on "maths with Python 2: Rössler system" 1. Kimi Lee says: How would you change the program to only plot the solution after the system has landed on the attractor? 1. Hi Kimi. Sorry for not giving you an answer before. The trickiest thing is to decide when it has "landed". I don't know of any condition it has to fulfill, so I cannot tell you how to check when it has gone into a periodic orbit in the attractor, but I can tell you how to plot it. If you find, for instance, that after 60 time units the solution is in the attractor, you can put this code instead of the one in the post. Notice the change in the plot command.
#And we add vectors to each plot
tini=round(60/h) #60 is supposed to be, for instance, the moment it "landed"
ax1.plot(t[tini:size(t)], x[tini:size(t)],color='red',lw=1,label='x(t)')
ax1.set_xlabel('t')
ax1.set_ylabel('x(t)')
ax1.legend()
ax1.axis((t_ini,t_fin,min(x),max(x)))
ax2.plot(t[tini:size(t)], y[tini:size(t)],color='green',lw=1,label='y(t)')
ax2.set_xlabel('t')
ax2.set_ylabel('y(t)')
ax2.legend()
ax2.axis((t_ini,t_fin,min(y),max(y)))
ax3.plot(t[tini:size(t)], z[tini:size(t)],color='blue',lw=1,label='z(t)')
ax3.set_xlabel('t')
ax3.set_ylabel('z(t)')
ax3.legend()
ax3.axis((t_ini,t_fin,min(z),max(z)))
ax4.plot(x[tini:size(t)], y[tini:size(t)],z[tini:size(t)],color='black',lw=1,label='Evolution(t)')
ax4.set_xlabel('x(t)')
ax4.set_ylabel('y(t)')
ax4.set_zlabel('z(t)')
ax4.set_title('Evolution')
#When finished we show the figure with all the plots.
show()
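As an independent cross-check of the forward-Euler result (not part of the original post), the same system can be integrated with scipy's solve_ivp; the parameter values below are the ones used in the tutorial.

# Cross-check sketch: integrate the same Roessler system with solve_ivp (RK45)
# and compare the final state against the forward-Euler vectors computed above.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.13, 0.2, 6.5

def rossler(t, u):
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(rossler, (0.0, 32 * np.pi), [0.0, 0.0, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)
print(sol.y[:, -1])   # state at t = 32*pi; should be close to the Euler result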
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9535326957702637, "perplexity": 2008.2013128252477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318011.89/warc/CC-MAIN-20190823062005-20190823084005-00164.warc.gz"}
http://link.springer.com/article/10.1007%2FBF01261607
Volume 5, Issue 2-3, pp 159-184 # A fast transform for spherical harmonics ## Abstract Spherical harmonics arise on the sphere S^2 in the same way that the (Fourier) exponential functions {e^{ikθ}}, k ∈ ℤ, arise on the circle. Spherical harmonic series have many of the same wonderful properties as Fourier series, but have lacked one important thing: a numerically stable fast transform analogous to the Fast Fourier Transform (FFT). Without a fast transform, evaluating (or expanding in) spherical harmonic series on the computer is slow—for large computations prohibitively slow. This paper provides a fast transform. For a grid of O(N^2) points on the sphere, a direct calculation has computational complexity O(N^4), but a simple separation of variables and FFT reduce it to O(N^3) time. Here we present algorithms with times O(N^{5/2} log N) and O(N^2 (log N)^2). The problem quickly reduces to the fast application of matrices of associated Legendre functions of certain orders. The essential insight is that although these matrices are dense and oscillatory, locally they can be represented efficiently in trigonometric series. Communicated by M. Victor Wickerhauser
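To make the "separation of variables plus FFT" step concrete, here is a rough Python sketch of my own (it is not the paper's algorithm, it ignores normalization, and it uses small sizes): for each order m a dense associated-Legendre matrix is applied over the degrees, and an FFT then handles the longitudinal exponentials, giving the O(N^3) evaluation mentioned above.

# Rough sketch (not the paper's method; normalization ignored): evaluate
#   f(theta_j, phi_k) = sum_m sum_{l >= |m|} a[l, m] * P_l^{|m|}(cos theta_j) * e^{i m phi_k}
# by a Legendre matrix-vector product per order m, followed by an FFT over phi.
import numpy as np
from scipy.special import lpmv

L = 16                                                  # truncation degree (kept small)
N = 2 * L                                               # grid size in each direction
thetas = np.arccos(np.linspace(-0.99, 0.99, N))         # colatitude grid
phis = 2 * np.pi * np.arange(N) / N                     # equispaced longitudes

rng = np.random.default_rng(0)
coeffs = {m: rng.standard_normal(L - abs(m)) for m in range(-L + 1, L)}

# step 1: for each order m, apply the dense matrix of associated Legendre values
g = np.zeros((N, N), dtype=complex)
for m, a_lm in coeffs.items():
    ls = np.arange(abs(m), L)
    P = lpmv(abs(m), ls[None, :], np.cos(thetas)[:, None])   # shape (N, len(ls))
    g[:, m % N] = P @ a_lm

# step 2: one FFT per latitude row sums the e^{i m phi_k} factors
f = N * np.fft.ifft(g, axis=1)

# spot check against a direct double sum at one grid point
j, k = 3, 5
direct = sum(
    lpmv(abs(m), np.arange(abs(m), L), np.cos(thetas[j])) @ a_lm * np.exp(1j * m * phis[k])
    for m, a_lm in coeffs.items()
)
print(np.allclose(f[j, k], direct))                     # True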
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115414023399353, "perplexity": 1699.67512709805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444774.49/warc/CC-MAIN-20141017005724-00292-ip-10-16-133-185.ec2.internal.warc.gz"}
http://mathcentral.uregina.ca/QQ/database/QQ.09.06/h/chris3.html
Math Central Quandaries & Queries I was confused by this problem. A ship leaves port and travels due east. At a certain point it turns 65 degrees south of east and travels b=322 nautical miles to a point that lies 461 nautical miles from port on a direct line. How far from the port is the point where the ship turned? Hi Chris, I tried drawing the problem. Since the ship ends 461 nautical miles from the origin, we can draw a circle of radius 461 around the origin. At some point P, it pivots 65 degrees southward on its course and proceeds for a distance of 322 nautical miles. Here's the diagram: From the geometry, you can see that angle OPD must be 180 - 65 = 115 degrees. Now you have one angle and two sides of the triangle OPD, so you can use the Law of Cosines to solve for the missing side. Hope this helps, Stephen La Rocque.
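Carrying out the step Stephen suggests, the Law of Cosines gives 461^2 = x^2 + 322^2 - 2*322*x*cos(115°), a quadratic in the unknown distance x; a short script of my own solves it.

# Sketch of the final step: solve 461**2 = x**2 + 322**2 - 2*322*x*cos(115 deg)
# for x, the distance from port to the turning point.
import numpy as np

angle = np.radians(115.0)
# rearranged: x**2 - 2*322*cos(angle)*x + (322**2 - 461**2) = 0
roots = np.roots([1.0, -2 * 322 * np.cos(angle), 322**2 - 461**2])
print(roots[roots > 0])   # the positive root, roughly 221 nautical miles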
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771582245826721, "perplexity": 454.8464409409252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806086.13/warc/CC-MAIN-20171120164823-20171120184823-00343.warc.gz"}
http://math.stackexchange.com/questions/209005/inequality-involving-xn
# Inequality involving $x^n$ This is probably not very exciting, but can't get my head around this for a while. Here's the inequality: $$x_1^3-\epsilon x_1^2 -(2+\epsilon(1-x_1^n))x_1+(1+\epsilon)>0$$ where $\epsilon>0, n \in \mathbb{Z^{+}}$. Please just give me hints, don't solve it for me. EDIT: $x_1>0$ EDIT2: it seems that the solution to inequality $1-x_1-x_1^2>0$ is the trick. For (see EDIT 1) $0<x_1<\frac{\sqrt{5}-1}{2} \approx 0.618$ the main inequality above is true for all $n, \epsilon$. If $x_1$ is larger than $\approx 0.618$, then it is true for some $\epsilon>0$ iff $n$ is not very large. The condition $\epsilon>0$ is crucial. - Presumably this is not the full question, since in particular there can be problems if $x_1$ is negative. –  André Nicolas Oct 8 '12 at 1:14 I deleted this condition, but it seems to be important. Add it back. –  Alex Oct 8 '12 at 1:23 If it is wished that the inequality be true for all positive $x_1$, it will need modification. For example, let $n=1$. Then we are looking at $$x^3-(2+\epsilon)x+1+\epsilon.$$ This is not necessarily positive. Imagine $\epsilon$ very close to $0$. The function reaches a minimum at $x=\sqrt{(2+\epsilon)/3}$, and the minimum value is negative, though not by much. I chose a small $\epsilon$, because presumably that is what is intended. But if we pick $\epsilon$ large, like $10$, we are looking at $z^3-12x+11$, which is $-5$ at $x=2$. There will be similar difficulties with $n=2$. And for any $n\ge 3$, and small $\epsilon$, we can reproduce the same problem. @Alex: This was a reply to a comment by alex.jordan, who then deleted the comment! So it doesn't make much sense by itself. I assumed in my answer that you wanted to prove that the inequality holds for all $x_1$. alex.jordan pointed out that you could be asking for the solutions to the inequality. That would be of course very difficult except approximately, since we are dealing with possibly a high degree polynomial. –  André Nicolas Oct 8 '12 at 3:21 OK, sorry, I didn't express myself properly: what I mean, is to find for what values of $x_1$ this inequality holds. Does this make better sense now? –  Alex Oct 8 '12 at 3:46 @Alex: Yes, it is clearer. For $n\ge 5$, one cannot expect a pleasant closed form solution. Already there will be some difficulty with $n=3$. –  André Nicolas Oct 8 '12 at 3:51 Ok, I see that for $x=0$ or $x=1$ this inequality holds for all $\epsilon, n$. Is it possible to extend it other subsets of $x$? An approximation is OK. –  Alex Oct 8 '12 at 4:22
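A quick numerical check of the claim in EDIT2 (my own sketch; the $\epsilon$ and $n$ values are arbitrary samples): evaluate the left-hand side on a grid of $0 < x_1 < (\sqrt{5}-1)/2$ and confirm that it stays positive.

# Numerical check of EDIT2 (sample eps and n values): the expression
#   x**3 - eps*x**2 - (2 + eps*(1 - x**n))*x + (1 + eps)
# stays positive on a grid of 0 < x_1 < (sqrt(5) - 1)/2 ~ 0.618.
import numpy as np

def lhs(x, eps, n):
    return x**3 - eps * x**2 - (2 + eps * (1 - x**n)) * x + (1 + eps)

x = np.linspace(0.01, 0.61, 601)
for eps in (0.1, 1.0, 10.0, 100.0):
    for n in (1, 2, 5, 50):
        print(eps, n, bool(np.all(lhs(x, eps, n) > 0)))   # True in every case tried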
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9431279897689819, "perplexity": 178.88075770225637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009825.77/warc/CC-MAIN-20141125155649-00022-ip-10-235-23-156.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/137414/probability-of-two-opposite-events/137466
# Probability of two opposite events Suppose there is string of eight bits, e.g.: 00100110 Bits are randomly chosen from the string. All choices are done equally likely. Probability of choosing $0$: $p_0 = \frac{5}{8} = 0.625$ Prob. of choosing $1$: $p_1 = \frac{3}{8} = 0.375$ Suppose you have already chosen $0$ or $1$. Probability of choosing opposite char, and then again opposite char, is given with: $p(0 \wedge 1) = p_0 p_1 = 0.234$. Without the "you have already chosen $0$ or $1$ ...", the probability would be: $p(0\wedge 1) = 2p_0 p_1 = 0.468$ Correct? - I have no idea what's going on... –  David Mitra Apr 26 '12 at 21:09 Are you choosing one bit from the 8 given bits at random? –  David Mitra Apr 26 '12 at 21:10 I have updated the question. –  Mooncer Apr 27 '12 at 2:03 The problem is not described with complete clarity. So some assumptions are needed in order to produce an answer. We assume that you are picking one of the $8$ locations independently and at random $3$ times, with all choices equally likely. In particular, it is assumed that repetition of location is allowed. The probability of getting the bit sequence $0$, $1$, $0$ is then $\frac{5}{8}\cdot\frac{3}{8}\cdot\frac{5}{8}$, and the probability of getting the bit sequence $1$, $0$, $1$ is $\frac{3}{8}\cdot\frac{5}{8}\cdot\frac{3}{8}$. Add. Our probability is $$\frac{5}{8}\cdot\frac{3}{8}\cdot\frac{5}{8}+\frac{3}{8}\cdot\frac{5}{8}\cdot\frac{3}{8}.$$ This simplifies to $\dfrac{15}{64}$. Remark: The number $\dfrac{15}{64}$ is precisely your number $p_0p_1$. There is good structural reason for that. However, I think that the detailed analysis above is more informative, since it generalizes readily to other situations. If repetition of location is not allowed, the analysis is quite similar, but the numbers change. We get $$\frac{5}{8}\cdot\frac{3}{7}\cdot\frac{4}{6}+\frac{3}{8}\cdot\frac{5}{7}\cdot\frac{2}{6}.$$ Whatever the probabilities $p_0$ and $p_1$ are, they add up to $1$. We are adding $p_0^2p_1$ and $p_1^2p_0$. This is $p_0p_1(p_0+p_1)$, which is just $p_0p_1$. But if we were interested in the probability of ($0101$ or $1010$), we would not get the pleasant simplification. –  André Nicolas Apr 27 '12 at 5:14 Thanks that is interesting to me, because in general I am trying to exploit the opposite relation between $p_0$ and $p_1$. For example, suppose that there is a moveable head, like in Turing machine, pointing at a char. We are interested in "cycle" i.e. in moving to opposite char, and then to opposite-opposite char. The sum up to 1.0 that you described corresponds to fact, that head is always at some char when starting the cycle, either 0 or 1. –  Mooncer Apr 27 '12 at 6:40
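A quick Monte Carlo check of the repetition-allowed reading (my own sketch): drawing three positions uniformly at random and looking for the patterns 010 or 101 reproduces the 15/64 ≈ 0.234 figure computed in the answer.

# Simulation (repetition of positions allowed): estimate the probability of
# seeing 0,1,0 or 1,0,1 when picking three bits at random from "00100110".
import random

bits = "00100110"
trials = 1_000_000
hits = 0
for _ in range(trials):
    s = "".join(random.choice(bits) for _ in range(3))
    hits += s in ("010", "101")
print(hits / trials, 15 / 64)   # both close to 0.234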
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910073280334473, "perplexity": 383.40026362612457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832032.48/warc/CC-MAIN-20140820021352-00119-ip-10-180-136-8.ec2.internal.warc.gz"}
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&pubname=all&v1=35L65&startRec=31
# American Mathematical Society

AMS eContent Search Results

Matches for: msc=(35L65) AND publication=(all)
Sort order: Date. Format: Standard display. Results: 31 to 60 of 200 found.

[31] Shuxing Chen. Mixed type equations in gas dynamics. Quart. Appl. Math. 68 (2010) 487-511. MR 2676973.
[32] Arthur V. Azevedo, Cesar S. Eschenazi, Dan Marchesin and Carlos F. B. Palmeira. Topological resolution of Riemann problems for pairs of conservation laws. Quart. Appl. Math. 68 (2010) 375-393. MR 2663005.
[33] Gui-Qiang Chen, Marshall Slemrod and Dehua Wang. Weak continuity of the Gauss-Codazzi-Ricci system for isometric embedding. Proc. Amer. Math. Soc. 138 (2010) 1843-1852. MR 2587469.
[34] Jing Chen and Changjiang Zhu. Decay rates of strong planar rarefaction waves to scalar conservation laws with degenerate viscosity in several space dimensions. Trans. Amer. Math. Soc. 362 (2010) 1797-1830. MR 2574878.
[35] Denis Serre. Von Neumann's comments about existence and uniqueness for the initial-boundary value problem in gas dynamics. Bull. Amer. Math. Soc. 47 (2010) 139-144. MR 2566448.
[36] C. D'Apice, R. Manzo and B. Piccoli. Modelling supply networks with partial differential equations. Quart. Appl. Math. 67 (2009) 419-440. MR 2547634.
[37] Manoj Pandey and V. D. Sharma. Kinematics of a shock wave of arbitrary strength in a non-ideal gas. Quart. Appl. Math. 67 (2009) 401-418. MR 2547633.
[38] Gui-Qiang Chen and Benoît Perthame. Large-time behavior of periodic entropy solutions to anisotropic degenerate parabolic-hyperbolic equations. Proc. Amer. Math. Soc. 137 (2009) 3003-3011. MR 2506459.
[39] Christophe Chalons, Pierre-Arnaud Raviart and Nicolas Seguin. The interface coupling of the gas dynamics equations. Quart. Appl. Math. 66 (2008) 659-705. MR 2465140.
[40] Ran Duan, Hongxia Liu and Huijiang Zhao. Nonlinear stability of rarefaction waves for the compressible Navier-Stokes equations with large initial perturbation. Trans. Amer. Math. Soc. 361 (2009) 453-493. MR 2439413.
[41] Shuxing Chen. Transonic shocks in 3-D compressible flow passing a duct with a general section for Euler systems. Trans. Amer. Math. Soc. 360 (2008) 5265-5289. MR 2415074.
[42] Denise Aregba-Driollet, Gabriella Bretti and Roberto Natalini. Numerical schemes for the Barenblatt model of non-equilibrium two-phase flow in porous media. Quart. Appl. Math. 66 (2008) 201-231. MR 2416771.
[43] Marko Nedeljkov. Singular shock waves in interactions. Quart. Appl. Math. 66 (2008) 281-302. MR 2416774.
[44] Gui-Qiang Chen and Cleopatra Christoforou. Solutions for a nonlocal conservation law with fading memory. Proc. Amer. Math. Soc. 135 (2007) 3905-3915. MR 2341940.
[45] Jong Uhn Kim. On a stochastic wave equation with unilateral boundary conditions. Trans. Amer. Math. Soc. 360 (2008) 575-607. MR 2346463.
[46] S. Martin and J. Vovelle. Large-time behaviour of the entropy solution of a scalar conservation law with boundary conditions. Quart. Appl. Math. 65 (2007) 425-450. MR 2354881.
[47] Demetrios Christodoulou. The Euler Equations of Compressible Fluid Flow. Bull. Amer. Math. Soc. 44 (2007) 581-602. MR 2338367.
[48] E. Romenski, A. D. Resnyansky and E. F. Toro. Conservative hyperbolic formulation for compressible two-phase flow with different phase pressures and temperatures. Quart. Appl. Math. 65 (2007) 259-279. MR 2330558.
[49] Shuxing Chen. Mach configuration in pseudo-stationary compressible flow. J. Amer. Math. Soc. 21 (2008) 63-100. MR 2350051.
[50] Tong Yang, Mei Zhang and Changjiang Zhu. Existence of strong travelling wave profiles to $2\times 2$ systems of viscous conservation laws. Proc. Amer. Math. Soc. 135 (2007) 1843-1849. MR 2286095.
[51] Michael Sever. Distribution solutions of nonlinear systems of conservation laws. Memoirs of the AMS 190 (2007). MR 2355635.
[52] Naoki Tsuge. Large time decay of solutions to isentropic gas dynamics. Quart. Appl. Math. 65 (2007) 135-143. MR 2313152.
[53] Manuel Portilheiro and Athanasios E. Tzavaras. Hydrodynamic limits for kinetic equations and the diffusive approximation of radiative transport for acoustic waves. Trans. Amer. Math. Soc. 359 (2007) 529-565. MR 2255185.
[54] Jean-François Coulombel and Thierry Goudon. The strong relaxation limit of the multidimensional isothermal Euler equations. Trans. Amer. Math. Soc. 359 (2007) 637-648. MR 2255190.
[55] Volker Elling. A possible counterexample to well posedness of entropy solutions and to Godunov scheme convergence. Math. Comp. 75 (2006) 1721-1733. MR 2240632.
[56] Manuel Castro, José M. Gallardo and Carlos Parés. High order finite volume schemes based on reconstruction of states for solving hyperbolic systems with nonconservative products. Applications to shallow-water systems. Math. Comp. 75 (2006) 1103-1134. MR 2219021.
[57] Raimund Bürger, Aníbal Coronel and Mauricio Sepúlveda. A semi-implicit monotone difference scheme for an initial-boundary value problem of a strongly degenerate parabolic equation modeling sedimentation-consolidation processes. Math. Comp. 75 (2006) 91-112. MR 2176391.
[58] C. E. Kenig and F. Merle. A note on a symmetry result for traveling waves in cylinders. Proc. Amer. Math. Soc. 134 (2006) 697-701. MR 2180886.
[59] Haitao Fan and Tao Luo. Convergence to equilibrium rarefaction waves for discontinuous solutions of shallow water wave equations with relaxation. Quart. Appl. Math. 63 (2005) 575-600. MR 2169035.
[60] Rinaldo M. Colombo and Piotr Gwiazda. $\mathbf{L}^1$ stability of semigroups with respect to their generators. Quart. Appl. Math. 63 (2005) 509-526. MR 2169031.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.903592050075531, "perplexity": 2365.3262251505716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678699721/warc/CC-MAIN-20140313024459-00008-ip-10-183-142-35.ec2.internal.warc.gz"}
https://brilliant.org/problems/no-integration-necessary-part-2/
# No Integration Necessary (Part 2) Geometry Level 4 $\dfrac{x^2}{25} + \dfrac{y^2}{4} = 1$ The figure above depicts the portion of the ellipse whose equation is given above, which lies in the first quadrant. Find the acute angle (in degrees) that a line passing through the origin makes with the $$x$$-axis, such that the line divides the quarter-ellipse into two segments with equal areas.
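One way to see why no integration is needed: the scaling $(x,y)\mapsto(x/5,\,y/2)$ maps the ellipse to the unit circle, sends lines through the origin to lines through the origin, and preserves ratios of areas. The quarter-disk is bisected by the line at $45^\circ$, whose preimage is the line $y = \tfrac{2}{5}x$, so the required angle is

$$\theta = \arctan\!\left(\tfrac{2}{5}\right) \approx 21.8^\circ.$$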
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8854577541351318, "perplexity": 408.94705791835975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721555.36/warc/CC-MAIN-20161020183841-00123-ip-10-171-6-4.ec2.internal.warc.gz"}
https://projecteuclid.org/euclid.bsl/1130335208
## Bulletin of Symbolic Logic

### Reverse mathematics and $\Pi_{2}^{1}$ comprehension

#### Abstract

We initiate the reverse mathematics of general topology. We show that a certain metrization theorem is equivalent to $\Pi_{2}^{1}$ comprehension. An MF space is defined to be a topological space of the form MF($P$) with the topology generated by $\{N_p \mid p \in P\}$. Here $P$ is a poset, MF($P$) is the set of maximal filters on $P$, and $N_p = \{F \in \mathrm{MF}(P) \mid p \in F\}$. If the poset $P$ is countable, the space MF($P$) is said to be countably based. The class of countably based MF spaces can be defined and discussed within the subsystem $\mathsf{ACA}_0$ of second order arithmetic. One can prove within $\mathsf{ACA}_0$ that every complete separable metric space is homeomorphic to a countably based MF space which is regular. We show that the converse statement, "every countably based MF space which is regular is homeomorphic to a complete separable metric space," is equivalent to $\Pi_{2}^{1}-\mathsf{CA}_0$. The equivalence is proved in the weaker system $\Pi_{1}^{1}-\mathsf{CA}_0$. This is the first example of a theorem of core mathematics which is provable in second order arithmetic and implies $\Pi_{2}^{1}$ comprehension.

#### Article information

Source: Bull. Symbolic Logic, Volume 11, Issue 4 (2005), 526-533.
Dates: First available in Project Euclid: 26 October 2005
Permanent link to this document: https://projecteuclid.org/euclid.bsl/1130335208
Digital Object Identifier: doi:10.2178/bsl/1130335208
Mathematical Reviews number (MathSciNet): MR2198712
Zentralblatt MATH identifier: 1106.03050

#### Citation

Mummert, Carl; Simpson, Stephen G. Reverse mathematics and $\Pi_{2}^{1}$ comprehension. Bull. Symbolic Logic 11 (2005), no. 4, 526--533. doi:10.2178/bsl/1130335208. https://projecteuclid.org/euclid.bsl/1130335208
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8447303771972656, "perplexity": 634.090619784702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676588961.14/warc/CC-MAIN-20180715183800-20180715203800-00116.warc.gz"}
http://math.stackexchange.com/questions/50390/convolution-of-random-variables
# convolution of random variables

I need to compute the pdf of the sum of a bunch of random variables $$\sum_{i=0}^{k-1} c_i X_i$$ where $X_i \sim 2\Omega x e^{-\Omega x^2}$, $\Omega > 0$ is a parameter and $c_i$ are positive constant real values. If $k$ is large enough, the law of large numbers may be used. However, in my case $k$ is small, ranging from $3$ to $8$ or $9$. Is there any known result about the pdf of the sum (even an approximation) that I can use without doing the convolution in the domain of the generating function?

- Your specification is incomplete. Do you mean that these random variables have a common density? I assume it's supported on $[0,\infty)$. You may only use convolution if the random variables are independent. –  ncmathsadist Jul 8 '11 at 20:10 You haven't attempted to specify the JOINT distribution. For example, are they independent? The meaning of your notation is something we have to guess at. My guess is that you intended $2\Omega x e^{-\Omega x^2}$ on the interval $(0,\infty)$ to be the density. –  Michael Hardy Jul 8 '11 at 20:52 BTW: you don't have a "convolution of random variables"; this is a "sum of random variables" (whose density, under certain conditions, results in a "convolution of the densities"). –  leonbloy Jul 8 '11 at 20:55 Regarding your observation "If $k$ is large enough, the law of large numbers may be used": I guess you mean the central limit theorem; bear in mind that this depends on the behaviour of $c_i$. For example, if they decrease exponentially, the CLT cannot be applied.
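Since no closed form is given in the thread, one quick way to see the shape of the density is simulation, assuming the $X_i$ are independent and identically distributed with the stated density (as the comments note, independence has to be assumed). That density is a Rayleigh density with scale $\sigma = 1/\sqrt{2\Omega}$. The values of $\Omega$ and the weights $c_i$ below are made up purely for illustration.

```python
# Monte Carlo sketch (not from the thread): approximate the pdf of
# sum_i c_i * X_i, with X_i i.i.d. having density 2*Omega*x*exp(-Omega*x^2),
# i.e. Rayleigh with scale sigma = 1/sqrt(2*Omega).  Omega and c are made up.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
omega = 1.5                       # assumed parameter Omega > 0
c = np.array([0.5, 1.0, 2.0])     # assumed positive weights (k = 3)
sigma = 1.0 / np.sqrt(2.0 * omega)

n = 100_000
# one row per replication, one column per X_i; form the weighted sum
samples = rng.rayleigh(scale=sigma, size=(n, c.size)) @ c

pdf_hat = gaussian_kde(samples)            # kernel estimate of the sum's pdf
x = np.linspace(0.0, samples.max(), 400)
density = pdf_hat(x)                       # density values on a grid
print(density.max())
```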
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9351443648338318, "perplexity": 186.8103750380877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095270.70/warc/CC-MAIN-20150627031815-00182-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathhelpforum.com/number-theory/165061-primitive-recursive-function.html
# Math Help - Primitive Recursive Function

1. ## Primitive Recursive Function

How can I show that integer subtraction is a primitive recursive function?

2. Hello, Apprentice123! To make it easier (for me), I'll consider positive integers only. Show that integer subtraction is a primitive recursive function. Consider the subtraction $a - b$. Let $a_0 = a$. Then: $a_n \:=\:a_{n-1} - 1\;\text{ for }n = 1,2,3,\hdots\:b.$ The answer is $a_b$.
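The repeated-decrement idea above is exactly how truncated subtraction ("monus") is usually obtained by primitive recursion from the predecessor function, at least on the natural numbers. A small sketch, added here for illustration:

```python
# Sketch of truncated subtraction ("monus") via primitive recursion.
# pred and monus mirror the defining equations:
#   pred(0) = 0,        pred(n + 1) = n
#   a - 0   = a,        a - (b + 1) = pred(a - b)   (truncated at zero)
def pred(n):
    return 0 if n == 0 else n - 1

def monus(a, b):
    return a if b == 0 else pred(monus(a, b - 1))

assert monus(7, 3) == 4
assert monus(3, 7) == 0   # truncated at zero, as expected on the naturals
```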
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955098032951355, "perplexity": 3117.802880415468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463637.80/warc/CC-MAIN-20150226074103-00035-ip-10-28-5-156.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/270320/breadth-of-join-semilattice
# breadth of join-semilattice

On page 16 of Lattice Theory: Foundation, George Grätzer (2011), Exercise 1.19 reads: Show that a join-semilattice $(L; \lor)$ has breadth at most $n$, for a positive integer $n$, iff for every nonempty finite subset $X$ of $L$, there exists a nonempty $Y \subseteq X$ with at most $n$ elements such that $$\bigvee X = \bigvee Y$$ I don't see how to establish this equivalence by appealing to the definition of breadth, which is: Let $n$ be a positive integer. We say that an order $P$ has breadth at most $n$ if for all elements $x_0, x_1, \dots, x_n, y_0, y_1, \dots, y_n$ in $P$, if $x_i \leq y_j$ for all $i \neq j$ in $\{0, 1, \dots, n\}$, then there exists $i \in \{0, 1, \dots, n\}$ such that $x_i \leq y_i$. The breadth of $P$, in notation breadth$(P)$, is the least positive integer $n$ such that $P$ has breadth at most $n$, if such an $n$ exists. Observe that this definition of breadth is selfdual. For an equivalent definition for a join-semilattice (or for a lattice), see Exercise 1.19. Besides the aforementioned exercise, I have difficulty in visualizing this concept, the breadth of an order. I don't know the meaning of selfdual, either. - $\newcommand{\br}{\operatorname{breadth}}$Let’s start by assuming that for every non-empty finite $X\subseteq L$ there is a non-empty $Y\subseteq X$ such that $|Y|\le n$ and $\bigvee Y=\bigvee X$. We want to show that $\br(L)\le n$, so suppose that we have elements $x_0,\dots,x_n,y_0,\dots,y_n\in L$ such that $x_i\le y_j$ whenever $i,j\in\{0,\dots,n\}$ and $i\ne j$. We want to show that there is a $k\in\{0,\dots,n\}$ such that $x_k\le y_k$. Let $X=\{x_0,\dots,x_n\}$; by hypothesis there is a non-empty $Z\subseteq X$ such that $|Z|\le n$ and $\bigvee Z=\bigvee X$. Since $|Z|\le n$, there is a $k\in\{0,\dots,n\}$ such that $x_k\notin Z$. For each $x_i\in Z$ we have $i\ne k$ and therefore $x_i\le y_k$, so $x_k\le\bigvee X=\bigvee Z\le y_k$, and we’re done: $x_k\le y_k$. You can prove the other direction by induction on $|X|$. Assume that $\br(L)\le n$. It’s certainly true that if $|X|\le n$, then there is a non-empty $Y\subseteq X$ such that $|Y|\le n$ and $\bigvee Y=\bigvee X$: just take $Y=X$. Now suppose that $|X|=n+1$, and let $X=\{x_0,\dots,x_n\}$. For $j\in\{0,\dots,n\}$ let $$y_j=\bigvee \left(X\setminus\{x_j\}\right)\;;$$ clearly this definition ensures that $x_i\le y_j$ whenever $i,j\in\{0,\dots,n\}$ and $i\ne j$. Since $\br(L)\le n$, there is a $k\in\{0,\dots,n\}$ such that $x_k\le y_k$. Then $$x_k\le\bigvee\left(X\setminus\{x_k\}\right)\;,$$ so $$\bigvee X=\bigvee\left(X\setminus\{x_k\}\right)\;,$$ and we can take $Y=X\setminus\{x_k\}$. Now suppose that $m>n+1$, and for each non-empty $X\subseteq L$ with $|X|<m$ there is a non-empty $Y\subseteq X$ such that $|Y|\le n$ and $\bigvee Y=\bigvee X$. Let $X$ be a subset of $L$ of cardinality $m$; we show that there is a non-empty $Y\subseteq X$ such that $|Y|\le n$ and $\bigvee Y=\bigvee X$. Fix $x\in X$, and let $X_0=X\setminus\{x\}$. By the induction hypothesis there is an $X_1\subseteq X_0$ such that $|X_1|\le n$ and $\bigvee X_1=\bigvee X_0$. Let $X_2=X_1\cup\{x\}$. Clearly $\bigvee X_2=\bigvee X$, so if $|X_2|\le n$, we’re done. Otherwise, $|X_2|=n+1$, and we just proved the result for that case. The theorem now follows by induction. To say that the definition of breadth is selfdual is to say that replacing the definition by its dual does not change the concept. 
The definition of breadth: An order $P$ has breadth at most $n$ if for all elements $x_0,\dots,x_n,y_0,\dots,y_n\in P$, if $x_i\le y_j$ for all $i,j\in\{0,\dots,n\}$ with $i\ne j$, then there is a $k\in\{0,\dots,n\}$ such that $x_k\le y_k$. The dual definition: An order $P$ has dual-breadth at most $n$ if for all elements $x_0,\dots,x_n,y_0,\dots,y_n\in P$, if $x_i\ge y_j$ for all $i,j\in\{0,\dots,n\}$ with $i\ne j$, then there is a $k\in\{0,\dots,n\}$ such that $x_k\ge y_k$. If you interchange $x_i$ and $y_i$ for each $i\in\{0,\dots,n\}$, you transform each of these definitions into the other. Thus, they actually say the same thing and define the same concept: what I called dual-breadth is just breadth. - Thank you very much, sir! It's amazing (from my perspective) that the self-duality is far less obvious in the alternative definition. – Metta World Peace Jan 5 '13 at 10:49 @MettaWorldPeace: You're welcome. (And I have to agree about it being less obvious in this form.) – Brian M. Scott Jan 5 '13 at 15:01
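A concrete example may help with visualizing breadth: in the lattice of all subsets of $\{1,\dots,n\}$ ordered by inclusion, joins are unions. The $n$ singletons have join $\{1,\dots,n\}$, while no $n-1$ of them do, so by Exercise 1.19 the breadth is at least $n$; conversely, any finite union of subsets equals the union of at most $n$ of them (for each element of the union pick one member containing it), so

$$\operatorname{breadth}\bigl(\wp(\{1,\dots,n\}),\subseteq\bigr) = n.$$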
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9935643672943115, "perplexity": 73.1425581589427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111313.83/warc/CC-MAIN-20160428161511-00181-ip-10-239-7-51.ec2.internal.warc.gz"}
http://phaseportrait.blogspot.com/2007/07/protype-for-simplifying-changing.html
## Wednesday, July 18, 2007

### Prototype for simplifying changing cleveref defaults

UPDATE: I have created crefntheoremdefaults.tex, which does this same stuff to the ntheorem package. You have to be sure to include the crefdefaults.tex file first. The files crefntheoremdefault_constants.tex and crefntheoremdefault_formats.tex play a similar role to the analogous ones below.

I think the package cleveref is great. In fact, I think it has much of what has been missing from LaTeX for a long time. See its documentation for more information about the package. My only gripe about cleveref is that it is tedious to change something like all of the default "eq." and "eqs." to "Eq." and "Eqs.", respectively. So I put together these scripts as a kind of proof of concept (they also set up cleveref to handle table references, something that is not done by default at the moment):

The first script, crefdefaults.tex, is simply crefdefault_constants.tex concatenated with crefdefault_formats.tex. Therefore, `\input{crefdefaults.tex}` is equivalent to `\input{crefdefault_constants.tex}` followed by `\input{crefdefault_formats.tex}`. However, if the poorman option is used and the corresponding sed script is applied, the resulting file will need to have the content of crefdefault_formats.tex removed; crefdefault_constants.tex should not be removed. In other words, if you are planning on using poorman, use the 2-line include and comment the crefdefault_formats.tex line out of the script that results from the sed script.

What's the outcome of this? I can now do `\renewcommand{\crefenumilabelformat}[3]{\textup{(#2#1#3)}}` in order to change every item reference so that it is surrounded by upright round braces. Additionally, changing "eq." to "Eq." and "eqs." to "Eqs." is as simple as it was in hyperref (for use with \autoref); that is, it is these two lines:

`\renewcommand{\crefequationname}{Eq.}`
`\renewcommand{\crefpluralequationname}{Eqs.}`

Unfortunately, the output of the poorman sed filter is a little messy. However, if these are someday integrated into cleveref, that could be fixed. There are other shortcuts built into crefdefaults.tex as well. It probably requires documentation, but it's just a prototype for now.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887330949306488, "perplexity": 2126.5805647286443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511897.56/warc/CC-MAIN-20181018173140-20181018194640-00273.warc.gz"}
http://mathhelpforum.com/advanced-statistics/9958-concave-multivariate-distribution-function.html
Concave Multivariate Distribution Function?

Can anyone think of a multivariate (i.e. non-univariate) cumulative distribution function $F$ that is _concave_ on every convex subset where $F$ is positive? In the univariate case (i.e. d = 1), and on the positive real line, this corresponds to the class of all non-increasing densities on the positive real line. In higher dimensions (i.e. let's say d = 2), does anyone know an example of a concave CDF, or whether such a CDF does not exist? [Concavity in higher dimensions is the same as in one dimension; i.e. for vectors $x$ and $y$ and $a \in (0,1)$ we should have $F(ax + (1-a)y)\geq aF(x) + (1-a)F(y)$.] Note: for non-LaTeX users "\geq" means ">=". Thanks!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851918816566467, "perplexity": 604.4954950207741}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920849.16/warc/CC-MAIN-20140909043016-00072-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-geometry/177822-riemann-surface-question.html
# Thread: Riemann surface - question

1. ## Riemann surface - question

Hi. I have a little difficulty in understanding the following problem: Given the function: g(z) = z + ((z^2) - 1)^(1/2) Let f_0 denote the branch of ((z^2) - 1)^(1/2) defined on the sheet R_0, and show that the branches g_0 and g_1 of g on the two sheets are given by the equations: g_0(z) = 1/g_1(z) = z + f_0(z) OK. I see that it makes sense that g_0(z) = z + f_0(z). However, I don't quite see how it is also true that g_0(z) = 1/g_1(z). Any tips/explanations for why this is true will be greatly appreciated! I am quite stuck on this problem!

2. The two branches are given by g_0(z) = z + ((z^2) - 1)^(1/2) and g_1(z) = z - ((z^2) - 1)^(1/2), taking the + and - values of the square root. 1/g_1(z) = 1/(z - ((z^2) - 1)^(1/2)). Rationalize the denominator by multiplying numerator and denominator by z + ((z^2) - 1)^(1/2).

3. Ah, I see it now! Thank you so much! I converted the expression to polar coordinates (which I normally do for all roots of complex numbers), and this is what made it somewhat difficult for me to see this rather obvious answer.
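Written out, the rationalization in reply 2 gives (using the same branch of the square root throughout):

$$\frac{1}{g_1(z)} = \frac{1}{z-(z^2-1)^{1/2}}\cdot\frac{z+(z^2-1)^{1/2}}{z+(z^2-1)^{1/2}} = \frac{z+(z^2-1)^{1/2}}{z^2-(z^2-1)} = z+(z^2-1)^{1/2} = g_0(z),$$

since the denominator collapses to $1$; equivalently, $g_0(z)\,g_1(z)=1$ identically.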
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.990426778793335, "perplexity": 984.9112018858765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705957380/warc/CC-MAIN-20130516120557-00022-ip-10-60-113-184.ec2.internal.warc.gz"}
http://advances.sciencemag.org/content/3/3/e1601858
Research Article, Physical Sciences

Explaining recurring maser flares in the ISM through large-scale entangled quantum mechanical states

Vol. 3, no. 3, e1601858

Abstract

We apply Dicke’s theory of superradiance (introduced in 1954) to the 6.7-GHz methanol and 22-GHz water spectral lines, often detected in molecular clouds as signposts for the early stages of the star formation process. We suggest that superradiance, characterized by burst-like features taking place over a wide range of time scales, may provide a natural explanation for the recent observations of periodic and seemingly alternating methanol and water maser flares in G107.298+5.639. Although these observations would be very difficult to explain within the context of maser theory, we show that these flares may result from simultaneously initiated 6.7-GHz methanol and 22-GHz water superradiant bursts operating on different time scales, thus providing a natural mechanism for their observed durations and time ordering. The evidence of superradiance in this source further suggests the existence of entangled quantum mechanical states, involving a very large number of molecules, over distances of up to a few kilometers in the interstellar medium.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8546558618545532, "perplexity": 2576.399121785887}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647146.41/warc/CC-MAIN-20180319194922-20180319214922-00700.warc.gz"}
http://math.stackexchange.com/questions/80820/numerical-solution-of-coupled-system-im-missing-something-looking-for-refer?answertab=oldest
# numerical solution of coupled system - I'm missing something - looking for references

I'm using a numerical method to solve a coupled system. To be more precise, I'm using finite differences to solve an electrostatics problem (let's call this system $A$), and using a function of the solution of $A$ to improve the boundary condition (system $B$). Now, if I: (a) update $B$ after $A$ has converged, and finish after $B$ no longer varies, the error is a little high. (b) update $B$ after a fixed number of iterations of $A$, I get a lower error, but sometimes $B$ diverges! So I'm missing something, something fundamental. This is consistent: I've gone through the code and modified it in different ways, and I always end up in a situation like this. So I'm certain there is something I'm not taking into account. I'm therefore searching for theory on coupled systems. Does anybody have any idea about where would be a good place to start? Be it books, papers...

To clarify things more, in $A$ I'm using over-relaxation to solve a finite difference system given by $\nabla\cdot(\epsilon\nabla\phi)=\rho$, which, if $\epsilon$ is constant, turns into Poisson's equation. It's a square grid. In $B$, I'm integrating over a closed surface, in a similar way to the Boundary Elements method, to approximate the value at the boundary using Green's second identity.

thanks!

- I assume that your system $A$ looks something like $\nabla^2\phi(\bf x) = \rho(\bf x)$ where $\phi$ is a potential and $\rho$ is a charge density? Can you give a little more detail about how exactly you modify the boundary conditions in response to the solution of this equation? –  Chris Taylor Nov 10 '11 at 11:44 Now if only you had mentioned the precise systems that you have... –  J. M. Nov 10 '11 at 11:44 You are both right. Sorry. I'm editing the question to specify the system. –  jbcolmenares Nov 10 '11 at 16:17
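For what it is worth, one common way to stabilize this kind of alternating (block Gauss–Seidel style) coupling is to under-relax the boundary update instead of applying it fully. The sketch below only illustrates the structure of such a loop and is not the asker's code; `boundary_from_solution()` is a stand-in for the Green's-identity surface integral, and the grid, source term and relaxation factors are made-up values.

```python
# Illustrative sketch: SOR sweeps on the interior (system A) alternated with a
# damped update of one boundary edge (system B).  All numbers are placeholders.
import numpy as np

def sor_sweeps(phi, rho, h, omega=1.8, sweeps=20):
    # successive over-relaxation for the discrete Poisson problem,
    # keeping the current boundary values of phi fixed
    for _ in range(sweeps):
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                gs = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                             + phi[i, j + 1] + phi[i, j - 1]
                             - h * h * rho[i, j])
                phi[i, j] += omega * (gs - phi[i, j])
    return phi

def boundary_from_solution(phi):
    # placeholder for system B (here: simply copy the first interior row)
    return phi[1, :].copy()

n = 33
h = 1.0 / (n - 1)
phi = np.zeros((n, n))
rho = np.ones((n, n))
theta = 0.3                      # damping of the boundary update (theta < 1)
for outer in range(50):
    phi = sor_sweeps(phi, rho, h)
    new_bc = boundary_from_solution(phi)
    # under-relaxed update of the boundary instead of replacing it outright
    phi[0, :] = (1 - theta) * phi[0, :] + theta * new_bc
```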
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8648684620857239, "perplexity": 418.6357804749774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500824990.54/warc/CC-MAIN-20140820021344-00072-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.statsandr.com/blog/one-proportion-and-goodness-of-fit-test-in-r-and-by-hand/
# Introduction In a previous article, I presented the Chi-square test of independence in R which is used to test the independence between two categorical variables. In this article, I show how to perform, first in R and then by hand, the: 1. one-proportion test (also referred as one-sample proportion test) 2. Chi-square goodness of fit test The first test is used to compare an observed proportion to an expected proportion, when the qualitative variable has only two categories. The second test is used to compare multiple observed proportions to multiple expected proportions, in a situation where the qualitative variable has two or more categories. Both tests allow to test the equality of proportions between the levels of the qualitative variable or to test the equality with given proportions. These given proportions could be determined arbitrarily or based on the theoretical probabilities of a known distribution. # In R ## Data For this section, we use the same dataset than in the article on descriptive statistics. It is the well-known iris dataset, to which we add the variable size. The variable size corresponds to small if the length of the petal is smaller than the median of all flowers, big otherwise: # load iris dataset dat <- iris # create size variable dat$size <- ifelse(dat$Sepal.Length < median(dat$Sepal.Length), "small", "big" ) # show first 5 observations head(dat, n = 5) ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species size ## 1 5.1 3.5 1.4 0.2 setosa small ## 2 4.9 3.0 1.4 0.2 setosa small ## 3 4.7 3.2 1.3 0.2 setosa small ## 4 4.6 3.1 1.5 0.2 setosa small ## 5 5.0 3.6 1.4 0.2 setosa small ## One-proportion test For this example, we have a sample of 150 flowers and we want to test whether the proportion of small flowers is the same than the proportion of big flowers (measured by the variable size). Here are the number of flowers by size, and the corresponding proportions: # barplot library(ggplot2) ggplot(dat) + aes(x = size) + geom_bar(fill = "#0c4c8a") + theme_minimal() # counts by size table(dat$size) ## ## big small ## 77 73 # proportions by size, rounded to 2 decimals round(prop.table(table(dat$size)), 2) ## ## big small ## 0.51 0.49 Among the 150 flowers forming our sample, 51% and 49% are big and small, respectively. To test whether the proportions are the same among both sizes, we use the prop.test() function which accepts the following arguments: • number of successes • number of observations/trials • expected probability (the one we want to test against) Considering (arbitrarily) that big is the success, we have:1 # one-proportion test test <- prop.test( x = 77, # number of successes n = 150, # total number of trials (77 + 73) p = 0.5 ) # we test for equal proportion so prob = 0.5 in each group test ## ## 1-sample proportions test with continuity correction ## ## data: 77 out of 150, null probability 0.5 ## X-squared = 0.06, df = 1, p-value = 0.8065 ## alternative hypothesis: true p is not equal to 0.5 ## 95 percent confidence interval: ## 0.4307558 0.5952176 ## sample estimates: ## p ## 0.5133333 We obtain an output with, among others, the null probability (0.5), the test statistic (X-squared = 0.06), the degrees of freedom (df = 1), the p-value (p-value = 0.8065) and the alternative hypothesis (true p is not equal to 0.5). The p-value is 0.806 so, at the 5% significance level, we do not reject the null hypothesis that the proportions of small and big flowers are the same. 
### Assumption of prop.test() and binom.test() Note that prop.test() uses a normal approximation to the binomial distribution. Therefore, one assumption of this test is that the sample size is large enough (usually, n > 30). If the sample size is small, it is recommended to use the exact binomial test. The exact binomial test can be performed with the binom.test() function and accepts the same arguments as the prop.test() function. For this example, suppose now that we have a sample of 12 big and 3 small flowers and we want to test whether the proportions are the same among both sizes: # barplot barplot(c(12, 3), # observed counts names.arg = c("big", "small"), # rename labels ylab = "Frequency", # y-axis label xlab = "Size" # x-axis label ) abline( h = 15 / 2, # expected counts in each level lty = 2 # dashed line ) # exact binomial test test <- binom.test( x = 12, # counts of successes n = 15, # total counts (12 + 3) p = 0.5 # expected proportion ) test ## ## Exact binomial test ## ## data: 12 and 15 ## number of successes = 12, number of trials = 15, p-value = 0.03516 ## alternative hypothesis: true probability of success is not equal to 0.5 ## 95 percent confidence interval: ## 0.5191089 0.9566880 ## sample estimates: ## probability of success ## 0.8 The p-value is 0.035 so, at the 5% significance level, we reject the null hypothesis and we conclude that the proportions of small and big flowers are significantly different. This is equivalent than concluding that the proportion of big flowers is significantly different from 0.5 (since there are only two sizes). If you want to test that the proportion of big flowers is greater than 50%, add the alternative = "greater" argument into the binom.test() function:2 test <- binom.test( x = 12, # counts of successes n = 15, # total counts (12 + 3) p = 0.5, # expected proportion alternative = "greater" # test that prop of big flowers is > 0.5 ) test ## ## Exact binomial test ## ## data: 12 and 15 ## number of successes = 12, number of trials = 15, p-value = 0.01758 ## alternative hypothesis: true probability of success is greater than 0.5 ## 95 percent confidence interval: ## 0.5602156 1.0000000 ## sample estimates: ## probability of success ## 0.8 The p-value is 0.018 so, at the 5% significance level, we reject the null hypothesis and we conclude that the proportion of big flowers is significantly larger than 50%. ## Chi-square goodness of fit test Suppose now that the qualitative variable has more than two levels as it is the case for the variable Species: # barplot ggplot(dat) + aes(x = Species) + geom_bar(fill = "#0c4c8a") + theme_minimal() # counts by Species table(dat$Species) ## ## setosa versicolor virginica ## 50 50 50 The variable Species has 3 levels, with 50 observations in each level. Suppose for this example that we want to test whether the 3 species are equally common. If they were equally common, they would be equally distributed and the expected proportions would be $$\frac{1}{3}$$ for each of the species. 
This test can be done with the chisq.test() function, accepting the following arguments: • a numeric vector representing the observed proportions • a vector of probabilities (of the same length of the observed proportions) representing the expected proportions Applied to our research question (i.e., are the 3 species equally common?), we have: # goodness of fit test test <- chisq.test(table(dat$Species), # observed proportions p = c(1 / 3, 1 / 3, 1 / 3) # expected proportions ) test ## ## Chi-squared test for given probabilities ## ## data: table(dat$Species) ## X-squared = 0, df = 2, p-value = 1 The p-value is 1 so, at the 5% significance level, we do not reject the null hypothesis that the proportions are equal among all species. This was quite obvious even before doing the statistical test given that there are exactly 50 flowers of each species, so it was easy to see that the species are equally common. We however still did the test to show how it works in practice. ### Does my distribution follow a given distribution? In the previous section, we chose the proportions ourselves. The goodness of fit test is also particularly useful to compare observed proportions with expected proportions that are based on some known distribution. Remember the hypotheses of the test: • $$H_0$$: there is no significant difference between the observed and the expected frequencies • $$H_1$$: there is a significant difference between the observed and the expected frequencies For this example, suppose that we measured the number of girls in 100 families of 5 children. We want to test whether the (observed) distribution of number girls follows a binomial distribution. #### Observed frequencies Here is the distribution of the number of girls per family in our sample of 100 families of 5 children: And the corresponding frequencies and relative frequencies (remember that the relative frequency is the frequency divided by the total sample size): # counts dat ## Girls Frequency Relative_freq ## 1 0 5 0.05 ## 2 1 12 0.12 ## 3 2 28 0.28 ## 4 3 33 0.33 ## 5 4 17 0.17 ## 6 5 5 0.05 #### Expected frequencies In order to compare the observed frequencies to a binomial distribution and see if both distributions match, we first need to determine the expected frequencies that would be obtained in case of a binomial distribution. The expected frequencies assuming a probability of 0.5 of having a girl (for each of the 5 children) are as follows: # create expected frequencies for a binomial distribution x <- 0:5 df <- data.frame( Girls = factor(x), Expected_relative_freq = dbinom(x, size = 5, prob = 0.5) ) df$Expected_freq <- df$Expected_relative_freq * 100 # *100 since there are 100 families # create barplot p <- ggplot(df, aes(x = Girls, y = Expected_freq)) + geom_bar(stat = "identity", fill = "#F8766D") + xlab("Number of girls per family") + ylab("Expected frequency") + labs(title = "Binomial distribution Bi(x, n = 5, p = 0.5)") + theme_minimal() p # expected relative frequencies and (absolute) frequencies df ## Girls Expected_relative_freq Expected_freq ## 1 0 0.03125 3.125 ## 2 1 0.15625 15.625 ## 3 2 0.31250 31.250 ## 4 3 0.31250 31.250 ## 5 4 0.15625 15.625 ## 6 5 0.03125 3.125 #### Observed vs. expected frequencies We now compare the observed frequencies to the expected frequencies to see whether the two differ significantly. If the two differ significantly, we reject the hypothesis that the number of girls per family of 5 children follows a binomial distribution. 
On the other hand, if the observed and expected frequencies are similar, we do not reject the hypothesis that the number of girls per family follows a binomial distribution. Visually we have: # create data data <- data.frame( num_girls = factor(rep(c(0:5), times = 2)), Freq = c(dat$Freq, df$Expected_freq), obs_exp = c(rep("observed", 6), rep("expected", 6)) ) # create plot ggplot() + geom_bar( data = data, aes( x = num_girls, y = Freq, fill = obs_exp ), position = "dodge", # bar next to each other stat = "identity" ) + ylab("Frequency") + xlab("Number of girls per family") + theme_minimal() + theme(legend.title = element_blank()) # remove legend title We see that the observed and expected frequencies are quite similar, so we expect that the number of girls in families of 5 children follows a binomial distribution. However, only the goodness of fit test will confirm our belief: # goodness of fit test test <- chisq.test(dat$Freq, # observed frequencies p = df$Expected_relative_freq # expected proportions ) test ## ## Chi-squared test for given probabilities ## ## data: dat$Freq ## X-squared = 3.648, df = 5, p-value = 0.6011 The p-value is 0.601 so, at the 5% significance level, we do not reject the null hypothesis that the observed and expected frequencies are equal. This is equivalent than concluding that we cannot reject the hypothesis that the number of girls in families of 5 children follows a binomial distribution (since the expected frequencies were based on a binomial distribution). Note that the goodness of fit test can of course be performed with other types of distribution than the binomial one. For instance, if you want to test whether an observed distribution follows a Poisson distribution, this test can be used to compare the observed frequencies with the expected proportions that would be obtained in case of a Poisson distribution. # By hand Now that we showed how to perform the one-proportion and goodness of fit test in R, in this section we show how to do these tests by hand. We first illustrate the one-proportion test then the Chi-square goodness of fit test. ## One-proportion test For this example, suppose that we tossed a coin 100 times and noted that it landed on heads 67 times. Following this, we want to test whether the coin is fair, that is, test whether the probability of landing on heads or tails is equal to 50%. As for many hypothesis tests, we do it through 4 easy steps: 1. State the null and alternative hypotheses 2. Compute the test-statistic (also known as t-stat) 3. Find the rejection region 4. Conclude by comparing the test-statistic with the rejection region Step 1. In our example, the null and alternative hypotheses are: • $$H_0$$: $$p_0 = 0.5$$ • $$H_1$$: $$p_0 \ne 0.5$$ where $$p_0$$ is the expected proportion of landing on heads. Step 2. The test statistic is:3 $z_{obs} = \frac{\hat{p} - p_0}{\sqrt{\frac{\hat{p}(1 - \hat{p})}{n}}} = \frac{0.67 - 0.5}{\sqrt{\frac{0.67 \cdot (1 - 0.67)}{100}}} = 3.615$ (See how to perform hypothesis tests in a Shiny app if you need more help in computing the test statistic.) Step 3. The rejection region is found via the normal distribution table. Assuming a significance level $$\alpha = 0.05$$, we have: $\pm z_{\alpha/2} = \pm z_{0.025} = \pm 1.96$ Step 4. We compare the test statistic (found in step 2) with the rejection region (found in step 3) and we conclude. Visually, we have: The test statistic lies within the rejection region (i.e., the grey shaded areas). 
Therefore, at the 5% significance level, we reject the null hypothesis and we conclude that the proportion of heads (and thus tails) is significantly different than 50%. In other words, still at the 5% significance level, we conclude that the coin is unfair. If you prefer to compute the p-value instead of comparing the t-stat and the rejection region, you can use this Shiny app to easily compute p-values for different probability distributions. After having opened the app, set the t-stat, the corresponding alternative and you will find the p-value at the top of the page. ### Verification in R Just for the sake of illustration, here is the verification of the above example in R: # one-proportion test test <- prop.test( x = 67, # number of heads n = 100, # number of trials p = 0.5 # expected probability of heads ) test ## ## 1-sample proportions test with continuity correction ## ## data: 67 out of 100, null probability 0.5 ## X-squared = 10.89, df = 1, p-value = 0.0009668 ## alternative hypothesis: true p is not equal to 0.5 ## 95 percent confidence interval: ## 0.5679099 0.7588442 ## sample estimates: ## p ## 0.67 The p-value is 0.001 so, at the 5% significance level, we reject the null hypothesis that the proportions of heads and tails are equal, and we conclude that the coin is biased. This is the same conclusion than the one found by hand. ## Goodness of fit test We now illustrate the goodness of fit test by hand with the following example. Suppose that we toss a dice 100 times, we note how many times it lands on each face (1 to 6) and we test whether the dice is fair. Here are the observed counts by dice face: ## dice_face ## 1 2 3 4 5 6 ## 15 24 10 19 19 13 With a fair dice, we would expect it to land $$\frac{100}{6} \approx 16.67$$ times on each face (this expected value is represented by the dashed line in the above plot). Although the observed frequencies are different than the expected value of 16.67: ## dice_face observed_freq expected_freq ## 1 1 15 16.67 ## 2 2 24 16.67 ## 3 3 10 16.67 ## 4 4 19 16.67 ## 5 5 19 16.67 ## 6 6 13 16.67 we need to test whether they are significantly different. For this, we perform the appropriate hypothesis test following the 4 easy steps mentioned above: 1. State the null and alternative hypotheses 2. Compute the test-statistic (also known as t-stat) 3. Find the rejection region 4. Conclude by comparing the test-statistic with the rejection region Step 1. The null and alternative hypotheses of the goodness of fit test are: • $$H_0$$: there is no significant difference between the observed and the expected frequencies • $$H_1$$: there is a significant difference between the observed and the expected frequencies Step 2. The test statistic is: $\chi^2 = \sum_{i = 1}^k \frac{(O_i - E_i)^2}{E_i}$ where $$O_i$$ is the observed frequency, $$E_i$$ is the expected frequency and $$k$$ is the number of categories (in our case, there are 6 categories, representing the 6 dice faces). This $$\chi^2$$ statistic is obtained by calculating the difference between the observed number of cases and the expected number of cases in each category. This difference is squared (to avoid negative and positive differences being compensated) and divided by the expected number of cases in that category. These values are then summed for all categories, and the total is referred to as the $$\chi^2$$ statistic. 
Large values of this test statistic lead to the rejection of the null hypothesis; small values mean that the null hypothesis cannot be rejected.4 Given our data, we have: $\chi^2 = \frac{(15 - 16.67)^2}{16.67} + \frac{(24 - 16.67)^2}{16.67} + \\ \frac{(10 - 16.67)^2}{16.67} + \frac{(19 - 16.67)^2}{16.67} + \frac{(19 - 16.67)^2}{16.67} + \\ \frac{(13 - 16.67)^2}{16.67} = 7.52$ Step 3. Whether the $\chi^2$ test statistic is small or large depends on the rejection region. The rejection region is found via the $\chi^2$ distribution table. With degrees of freedom equal to $k - 1$ (where $k$ is the number of categories) and assuming a significance level $\alpha = 0.05$, we have: $\chi^2_{\alpha; k-1} = \chi^2_{0.05; 5} = 11.0705$ Step 4. We compare the test statistic (found in step 2) with the rejection region (found in step 3) and we conclude. Visually, we have: The test statistic does not lie within the rejection region (i.e., the grey shaded area). Therefore, at the 5% significance level, we do not reject the null hypothesis that there is no significant difference between the observed and the expected frequencies. In other words, still at the 5% significance level, we cannot reject the hypothesis that the dice is fair. Again, you can use the Shiny app to easily compute the p-value given the test statistic if you prefer this method over the comparison between the t-stat and the rejection region. ### Verification in R Just for the sake of illustration, here is the verification of the above example in R: # goodness of fit test test <- chisq.test(dat$observed_freq, # observed frequencies for each dice face p = rep(1 / 6, 6) # expected probabilities for each dice face ) test ## ## Chi-squared test for given probabilities ## ## data: dat$observed_freq ## X-squared = 7.52, df = 5, p-value = 0.1847 The test statistic and degrees of freedom are exactly the same as the ones found by hand. The p-value is 0.185 which, still at the 5% significance level, leads to the same conclusion as by hand (i.e., failing to reject the null hypothesis). Thanks for reading. I hope this article helped you to understand and perform the one-proportion and goodness of fit test in R and by hand. As always, if you have a question or a suggestion related to the topic covered in this article, please add it as a comment so other readers can benefit from the discussion. 2. Similarly, this argument can also be added to the prop.test() function to test whether the observed proportion is larger than the expected proportion. Use alternative = "less" if you want to test whether the observed proportion is smaller than the expected one. 3. One assumption of this test is that $n \cdot p \ge 5$ and $n \cdot (1 - p) \ge 5$. The assumption is met so we can use the normal approximation to the binomial distribution.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509877562522888, "perplexity": 1193.693174360597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407001.36/warc/CC-MAIN-20200530005804-20200530035804-00103.warc.gz"}
https://www.physicsforums.com/threads/child-chair-and-spring-scale-on-pulley-forces.776695/
# Child, Chair, and Spring Scale on Pulley: Forces

• #1

## Homework Equations

$F=ma$

## The Attempt at a Solution

(a) Because the spring scale weighs 250 N, the effective weight on the child's side is 230 N. Therefore, $T-230 = \frac{230}{9.8} a$ and $T-250 = -\frac{250}{9.8} a$, so $a = 0.408$. This is correct, but is the solution really legitimate? The "effective weight" argument seems a bit suspicious and nonrigorous.

(b) I have no idea where to even start. How would the "effective weights" work here? (Or would it even work?)

Thanks, minimario

• #2 collinsmark, Homework Helper, Gold Member

Hello minimario, Welcome to Physics Forums! :) First, here's a link to get you started with using $\LaTeX$ on Physics Forums.

The "250 N" is not the weight of the spring scale; rather, it is the reading on the spring scale. You should consider the spring scale itself to be massless. What that means is that the tension in the rope is 250 N. Whenever you work with a massless and frictionless rope and pulley, it means that the tension of the rope on one side of the pulley is equal to the tension on the other side (I think it is valid for this problem to assume that the rope is also massless and the pulley is both massless and frictionless).

(i) Draw a free body diagram (FBD) of all the forces acting on the child+chair combination. Don't forget that the rope has two ends! (Hint: you already know the tension on the rope. Just don't forget both ends.)

(ii) What is the net force acting on the child+chair combination (i.e., sum together all the force vectors, to find the net force vector [and don't forget they are vectors, not scalars])?

(iii) What is the mass of the child+chair combination (i.e., not the weight, but the mass)?

(iv) Invoke Newton's second law of motion to find the acceleration.

[Edit: For part b, start by drawing a FBD of all the forces acting only on the child. Hint: you know the child's mass (or you can calculate it now) and the child's acceleration from previous calculations. You also know the child's weight (gravitational force) and the force from the rope that the child is holding onto. Invoke Newton's second law again, and solve for the remaining force on the child from the chair.]
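As a quick numerical check of part (a) along the lines collinsmark suggests: with two rope ends each pulling up with the 250 N scale reading, Newton's second law for the child+chair combination reproduces the 0.408 m/s² found above. The 480 N combined weight is not stated in the thread as reproduced here; it is inferred from the poster's numbers (230 N "effective weight" plus the 250 N reading), so treat it as an assumption.

```python
# Quick check of part (a), not from the original thread.
# Assumption: combined weight of child + chair is W = 480 N
# (inferred as 230 N "effective weight" + 250 N scale reading).
g = 9.8            # m/s^2
T = 250.0          # N, rope tension = spring-scale reading (both rope ends)
W = 480.0          # N, assumed combined weight of child + chair
m = W / g          # kg, mass of child + chair

a = (2 * T - W) / m   # two rope ends pull up, weight pulls down
print(round(a, 3))    # -> 0.408 m/s^2, matching part (a)
```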
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9455142617225647, "perplexity": 836.266275978409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152168.38/warc/CC-MAIN-20210727010203-20210727040203-00567.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/pc/chapter/9/lesson/9.2.2/problem/9-87
### Home > PC > Chapter 9 > Lesson 9.2.2 > Problem 9-87

9-87.

$2y + 3x = 100$
$2y = 100 - 3x$
$y=\frac{100-3x}{2}$

$\text{Area}_{\text{total}} = \text{Area}_{\text{rectangle}} + \text{Area}_{\text{triangle}}$

To find the area of the triangle, drop an altitude, forming two congruent triangles. The diagonal will be $x$. Use the Pythagorean Theorem to find the height. The area of the triangle will be $\frac{(\text{height})(\text{base})}{2}$.
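A small sketch of the algebra in the hint above, using sympy. Only the constraint step is shown; the actual area expression depends on the figure, which is not reproduced on this page.

```python
# Minimal sketch: solve the constraint 2y + 3x = 100 for y, so the total area
# can later be written as a function of x alone. (The rectangle/triangle
# dimensions come from the figure, which is not reproduced here, so the area
# expression itself is left to the reader.)
import sympy as sp

x, y = sp.symbols("x y", positive=True)
constraint = sp.Eq(2*y + 3*x, 100)

y_of_x = sp.solve(constraint, y)[0]
print(y_of_x)   # (100 - 3*x)/2, matching the hint
```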
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.885600209236145, "perplexity": 4570.515139315812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00511.warc.gz"}
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=49K20
# American Mathematical Society

AMS eContent Search Results. Matches for: msc=(49K20) AND publication=(all). Sort order: Date. Results: 1 to 24 of 24 found.

[1] R. Rautmann. Decomposition of the homogeneous space $\hat{W}^{1,2}$ with respect to the Dirichlet form $\langle\nabla u, \nabla v \rangle$ and applications. Contemporary Mathematics 666 (2016) 279-288.
[2] Graziano Crasta and Ilaria Fragalà. A $C^1$ regularity result for the inhomogeneous normalized infinity Laplacian. Proc. Amer. Math. Soc. 144 (2016) 2547-2558.
[3] Paata Ivanisvili, Nikolay N. Osipov, Dmitriy M. Stolyarov, Vasily I. Vasyunin and Pavel B. Zatitskiy. Bellman function for extremal problems in BMO. Trans. Amer. Math. Soc. 368 (2016) 3415-3468.
[4] Wei-Kuo Chen. Partial results on the convexity of the Parisi functional with PDE approach. Proc. Amer. Math. Soc. 143 (2015) 3135-3146.
[5] A. A. Logunov, L. Slavin, D. M. Stolyarov, V. Vasyunin and P. B. Zatitskiy. Weak integral conditions for BMO. Proc. Amer. Math. Soc. 143 (2015) 2913-2926.
[6] Térence Bayen, J. Frédéric Bonnans and Francisco J. Silva. Characterization of local quadratic growth for strong minima in the optimal control of semi-linear elliptic equations. Trans. Amer. Math. Soc. 366 (2014) 2063-2087.
[7] Wei Gong. Error estimates for finite element approximations of parabolic equations with measure data. Math. Comp. 82 (2013) 69-98.
[8] L. Slavin and V. Vasyunin. Sharp results in the integral-form John–Nirenberg inequality. Trans. Amer. Math. Soc. 363 (2011) 4135-4169. MR 2792983.
[9] Fredi Tröltzsch. Optimal Control of Partial Differential Equations. Graduate Studies in Mathematics 112 (2010). MR 2583281.
[10] Fredi Tröltzsch. Supplementary results on partial differential equations. Graduate Studies in Mathematics 112 (2010) 355-383.
[11] Fredi Tröltzsch. Linear-quadratic parabolic control problems. Graduate Studies in Mathematics 112 (2010) 119-179.
[12] Fredi Tröltzsch. Introduction and examples. Graduate Studies in Mathematics 112 (2010) 1-19.
[13] Fredi Tröltzsch. Linear-quadratic elliptic control problems. Graduate Studies in Mathematics 112 (2010) 21-118.
[14] Fredi Tröltzsch. Optimal control of semilinear parabolic equations. Graduate Studies in Mathematics 112 (2010) 265-321.
[15] Fredi Tröltzsch. Optimal control of semilinear elliptic equations. Graduate Studies in Mathematics 112 (2010) 181-264.
[16] Fredi Tröltzsch. Optimization problems in Banach spaces. Graduate Studies in Mathematics 112 (2010) 323-353.
[17] Leandro M. Del Pezzo and Julián Fernández Bonder. An optimization problem for the first weighted eigenvalue problem plus a potential. Proc. Amer. Math. Soc. 138 (2010) 3551-3567. MR 2661555.
[18] Yury Grabovsky and Tadele Mengesha. Sufficient conditions for strong local minima: The case of $C^{1}$ extremals. Trans. Amer. Math. Soc. 361 (2009) 1495-1541. MR 2457407.
[19] Michael G. Crandall, Changyou Wang and Yifeng Yu. Derivation of the Aronsson equation for $C^1$ Hamiltonians. Trans. Amer. Math. Soc. 361 (2009) 103-124. MR 2439400.
[20] Vladimir I. Oliker. A Minkowski-style theorem for focal functions of compact convex reflectors. Trans. Amer. Math. Soc. 360 (2008) 563-574. MR 2346462.
[21] Mahir Hadzic. A constraint variational problem arising in stellar dynamics. Quart. Appl. Math. 65 (2007) 145-153. MR 2313153.
[22] S. Ibrahim, M. Majdoub and N. Masmoudi. Double logarithmic inequality with a sharp constant. Proc. Amer. Math. Soc. 135 (2007) 87-97. MR 2280178.
[23] Gunnar Aronsson, Michael G. Crandall and Petri Juutinen. A tour of the theory of absolutely minimizing functions. Bull. Amer. Math. Soc. 41 (2004) 439-505. MR 2083637.
[24] Chaocheng Huang and David Miller. Domain functionals and exit times for Brownian motion. Proc. Amer. Math. Soc. 130 (2002) 825-831. MR 1866038.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330713748931885, "perplexity": 3374.000998552389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661781.17/warc/CC-MAIN-20160924173741-00214-ip-10-143-35-109.ec2.internal.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/192i1/C2xQ8xA4.html
Copied to clipboard ## G = C2×Q8×A4order 192 = 26·3 ### Direct product of C2, Q8 and A4 Series: Derived Chief Lower central Upper central Derived series C1 — C23 — C2×Q8×A4 Chief series C1 — C22 — C23 — C2×A4 — C22×A4 — C2×C4×A4 — C2×Q8×A4 Lower central C22 — C23 — C2×Q8×A4 Upper central C1 — C22 — C2×Q8 Generators and relations for C2×Q8×A4 G = < a,b,c,d,e,f | a2=b4=d2=e2=f3=1, c2=b2, ab=ba, ac=ca, ad=da, ae=ea, af=fa, cbc-1=b-1, bd=db, be=eb, bf=fb, cd=dc, ce=ec, cf=fc, fdf-1=de=ed, fef-1=d > Subgroups: 520 in 205 conjugacy classes, 57 normal (12 characteristic) C1, C2, C2, C2, C3, C4, C4, C22, C22, C6, C2×C4, C2×C4, Q8, Q8, C23, C23, C23, C12, A4, C2×C6, C22×C4, C22×C4, C2×Q8, C2×Q8, C24, C2×C12, C3×Q8, C2×A4, C2×A4, C23×C4, C22×Q8, C22×Q8, C4×A4, C6×Q8, C22×A4, Q8×C23, C2×C4×A4, Q8×A4, C2×Q8×A4 Quotients: C1, C2, C3, C22, C6, Q8, C23, A4, C2×C6, C2×Q8, C3×Q8, C2×A4, C22×C6, C6×Q8, C22×A4, Q8×A4, C23×A4, C2×Q8×A4 Smallest permutation representation of C2×Q8×A4 On 48 points Generators in S48 (1 11)(2 12)(3 9)(4 10)(5 25)(6 26)(7 27)(8 28)(13 37)(14 38)(15 39)(16 40)(17 21)(18 22)(19 23)(20 24)(29 35)(30 36)(31 33)(32 34)(41 47)(42 48)(43 45)(44 46) (1 2 3 4)(5 6 7 8)(9 10 11 12)(13 14 15 16)(17 18 19 20)(21 22 23 24)(25 26 27 28)(29 30 31 32)(33 34 35 36)(37 38 39 40)(41 42 43 44)(45 46 47 48) (1 31 3 29)(2 30 4 32)(5 38 7 40)(6 37 8 39)(9 35 11 33)(10 34 12 36)(13 28 15 26)(14 27 16 25)(17 43 19 41)(18 42 20 44)(21 45 23 47)(22 48 24 46) (5 25)(6 26)(7 27)(8 28)(13 37)(14 38)(15 39)(16 40)(17 21)(18 22)(19 23)(20 24)(41 47)(42 48)(43 45)(44 46) (1 11)(2 12)(3 9)(4 10)(17 21)(18 22)(19 23)(20 24)(29 35)(30 36)(31 33)(32 34)(41 47)(42 48)(43 45)(44 46) (1 5 19)(2 6 20)(3 7 17)(4 8 18)(9 27 21)(10 28 22)(11 25 23)(12 26 24)(13 46 36)(14 47 33)(15 48 34)(16 45 35)(29 40 43)(30 37 44)(31 38 41)(32 39 42) G:=sub<Sym(48)| (1,11)(2,12)(3,9)(4,10)(5,25)(6,26)(7,27)(8,28)(13,37)(14,38)(15,39)(16,40)(17,21)(18,22)(19,23)(20,24)(29,35)(30,36)(31,33)(32,34)(41,47)(42,48)(43,45)(44,46), (1,2,3,4)(5,6,7,8)(9,10,11,12)(13,14,15,16)(17,18,19,20)(21,22,23,24)(25,26,27,28)(29,30,31,32)(33,34,35,36)(37,38,39,40)(41,42,43,44)(45,46,47,48), (1,31,3,29)(2,30,4,32)(5,38,7,40)(6,37,8,39)(9,35,11,33)(10,34,12,36)(13,28,15,26)(14,27,16,25)(17,43,19,41)(18,42,20,44)(21,45,23,47)(22,48,24,46), (5,25)(6,26)(7,27)(8,28)(13,37)(14,38)(15,39)(16,40)(17,21)(18,22)(19,23)(20,24)(41,47)(42,48)(43,45)(44,46), (1,11)(2,12)(3,9)(4,10)(17,21)(18,22)(19,23)(20,24)(29,35)(30,36)(31,33)(32,34)(41,47)(42,48)(43,45)(44,46), (1,5,19)(2,6,20)(3,7,17)(4,8,18)(9,27,21)(10,28,22)(11,25,23)(12,26,24)(13,46,36)(14,47,33)(15,48,34)(16,45,35)(29,40,43)(30,37,44)(31,38,41)(32,39,42)>; G:=Group( (1,11)(2,12)(3,9)(4,10)(5,25)(6,26)(7,27)(8,28)(13,37)(14,38)(15,39)(16,40)(17,21)(18,22)(19,23)(20,24)(29,35)(30,36)(31,33)(32,34)(41,47)(42,48)(43,45)(44,46), (1,2,3,4)(5,6,7,8)(9,10,11,12)(13,14,15,16)(17,18,19,20)(21,22,23,24)(25,26,27,28)(29,30,31,32)(33,34,35,36)(37,38,39,40)(41,42,43,44)(45,46,47,48), (1,31,3,29)(2,30,4,32)(5,38,7,40)(6,37,8,39)(9,35,11,33)(10,34,12,36)(13,28,15,26)(14,27,16,25)(17,43,19,41)(18,42,20,44)(21,45,23,47)(22,48,24,46), (5,25)(6,26)(7,27)(8,28)(13,37)(14,38)(15,39)(16,40)(17,21)(18,22)(19,23)(20,24)(41,47)(42,48)(43,45)(44,46), (1,11)(2,12)(3,9)(4,10)(17,21)(18,22)(19,23)(20,24)(29,35)(30,36)(31,33)(32,34)(41,47)(42,48)(43,45)(44,46), (1,5,19)(2,6,20)(3,7,17)(4,8,18)(9,27,21)(10,28,22)(11,25,23)(12,26,24)(13,46,36)(14,47,33)(15,48,34)(16,45,35)(29,40,43)(30,37,44)(31,38,41)(32,39,42) ); 
G=PermutationGroup([[(1,11),(2,12),(3,9),(4,10),(5,25),(6,26),(7,27),(8,28),(13,37),(14,38),(15,39),(16,40),(17,21),(18,22),(19,23),(20,24),(29,35),(30,36),(31,33),(32,34),(41,47),(42,48),(43,45),(44,46)], [(1,2,3,4),(5,6,7,8),(9,10,11,12),(13,14,15,16),(17,18,19,20),(21,22,23,24),(25,26,27,28),(29,30,31,32),(33,34,35,36),(37,38,39,40),(41,42,43,44),(45,46,47,48)], [(1,31,3,29),(2,30,4,32),(5,38,7,40),(6,37,8,39),(9,35,11,33),(10,34,12,36),(13,28,15,26),(14,27,16,25),(17,43,19,41),(18,42,20,44),(21,45,23,47),(22,48,24,46)], [(5,25),(6,26),(7,27),(8,28),(13,37),(14,38),(15,39),(16,40),(17,21),(18,22),(19,23),(20,24),(41,47),(42,48),(43,45),(44,46)], [(1,11),(2,12),(3,9),(4,10),(17,21),(18,22),(19,23),(20,24),(29,35),(30,36),(31,33),(32,34),(41,47),(42,48),(43,45),(44,46)], [(1,5,19),(2,6,20),(3,7,17),(4,8,18),(9,27,21),(10,28,22),(11,25,23),(12,26,24),(13,46,36),(14,47,33),(15,48,34),(16,45,35),(29,40,43),(30,37,44),(31,38,41),(32,39,42)]]) 40 conjugacy classes class 1 2A 2B 2C 2D 2E 2F 2G 3A 3B 4A ··· 4F 4G ··· 4L 6A ··· 6F 12A ··· 12L order 1 2 2 2 2 2 2 2 3 3 4 ··· 4 4 ··· 4 6 ··· 6 12 ··· 12 size 1 1 1 1 3 3 3 3 4 4 2 ··· 2 6 ··· 6 4 ··· 4 8 ··· 8 40 irreducible representations dim 1 1 1 1 1 1 2 2 3 3 3 6 type + + + - + + + - image C1 C2 C2 C3 C6 C6 Q8 C3×Q8 A4 C2×A4 C2×A4 Q8×A4 kernel C2×Q8×A4 C2×C4×A4 Q8×A4 Q8×C23 C23×C4 C22×Q8 C2×A4 C23 C2×Q8 C2×C4 Q8 C2 # reps 1 3 4 2 6 8 2 4 1 3 4 2 Matrix representation of C2×Q8×A4 in GL5(𝔽13) 12 0 0 0 0 0 12 0 0 0 0 0 12 0 0 0 0 0 12 0 0 0 0 0 12 , 1 11 0 0 0 1 12 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 , 5 0 0 0 0 5 8 0 0 0 0 0 12 0 0 0 0 0 12 0 0 0 0 0 12 , 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 12 0 0 0 0 0 12 , 1 0 0 0 0 0 1 0 0 0 0 0 12 0 0 0 0 0 12 0 0 0 0 0 1 , 9 0 0 0 0 0 9 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 G:=sub<GL(5,GF(13))| [12,0,0,0,0,0,12,0,0,0,0,0,12,0,0,0,0,0,12,0,0,0,0,0,12],[1,1,0,0,0,11,12,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1],[5,5,0,0,0,0,8,0,0,0,0,0,12,0,0,0,0,0,12,0,0,0,0,0,12],[1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,12,0,0,0,0,0,12],[1,0,0,0,0,0,1,0,0,0,0,0,12,0,0,0,0,0,12,0,0,0,0,0,1],[9,0,0,0,0,0,9,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,0] >; C2×Q8×A4 in GAP, Magma, Sage, TeX C_2\times Q_8\times A_4 % in TeX G:=Group("C2xQ8xA4"); // GroupNames label G:=SmallGroup(192,1499); // by ID G=gap.SmallGroup(192,1499); # by ID G:=PCGroup([7,-2,-2,-2,-3,-2,-2,2,176,303,142,530,909]); // Polycyclic G:=Group<a,b,c,d,e,f|a^2=b^4=d^2=e^2=f^3=1,c^2=b^2,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,a*f=f*a,c*b*c^-1=b^-1,b*d=d*b,b*e=e*b,b*f=f*b,c*d=d*c,c*e=e*c,c*f=f*c,f*d*f^-1=d*e=e*d,f*e*f^-1=d>; // generators/relations ׿ × 𝔽
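For readers working in Python rather than GAP, Magma or Sage, here is a small sketch that rebuilds C2×Q8×A4 as a permutation group with sympy and confirms the order 192 = 2·8·12 quoted above. The two 8-point permutations used for Q8 are a standard regular-representation choice, not copied from this page, and sympy's DirectProduct, CyclicGroup and AlternatingGroup from sympy.combinatorics are assumed available.

```python
# Sketch: rebuild C2 x Q8 x A4 in sympy and check |G| = 192 = 2 * 8 * 12.
# The Q8 generators below are left-multiplication permutations on the 8 group
# elements (a regular representation of Q8); they are an assumption of this
# sketch, not taken from the page above.
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import CyclicGroup, AlternatingGroup
from sympy.combinatorics.group_constructs import DirectProduct

i = Permutation([[0, 2, 1, 3], [4, 6, 5, 7]])   # left multiplication by i
j = Permutation([[0, 4, 1, 5], [2, 7, 3, 6]])   # left multiplication by j
Q8 = PermutationGroup(i, j)

C2 = CyclicGroup(2)
A4 = AlternatingGroup(4)

G = DirectProduct(C2, Q8, A4)
print(Q8.order(), G.order(), G.is_abelian)      # expect 8, 192, False
```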
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987322688102722, "perplexity": 4546.50000495493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363327.64/warc/CC-MAIN-20211206224536-20211207014536-00014.warc.gz"}
http://tex.stackexchange.com/questions/78165/spacing-around-cdot-when-used-as-a-wildcard?answertab=oldest
# Spacing around \cdot when used as a wildcard I have to use \cdot as a wildcard indicator in an expression like: K(\cdot,y). My concern is: is the spacing around the \cdot correct? should I force something in either one side or the other of it? - Welcome to TeX.sx! If it's a placeholder, I'd use {\cdot}. But in the example you have given, the spacing is already correct because neither ( nor , is seen as an ordinary character as would 5 and x in 5 \cdot x. (There's no harm to use {\cdot} anyway, maybe as a macro: \newcommand*\wildcard{{\cdot}}.) –  Qrrbrbirlbel Oct 18 '12 at 13:21 Welcome to TeX.sx! A tip: You can use backticks to mark your inline code as I did in my edit. –  Corentin Oct 18 '12 at 13:21 Thanks @qrrbrbirlbel. Indeed, it is used as a placeholder and unfortunately it looks (at least to me) awful (not enough space around). Thus I was wondering about about its standard use. –  Acorbe Oct 18 '12 at 13:26 Thanks @Corentin for your advice. –  Acorbe Oct 18 '12 at 13:27 I don't know about a standard use, but similar to the notation of a norm a spacing to what {}\cdot{} produces seem appropriate, too. I guess, it depends on typographical consensus and an author's individual choice. Either way: Use a macro for this, so you can later change its definition without having to correct your whole document manually. –  Qrrbrbirlbel Oct 18 '12 at 14:20 show 1 more comment For what it's worth, this is converted from comments. I'm no expert on mathematical typography, so these are if anything suggestions. My first idea was K({\cdot},y) which produces the same spacing as K(\cdot,y) because neither ( nor , is treated as an ordinary character as 5 and x would be in 5 \cdot x. If you rather want to reproduce the spacing from 5 \cdot x you can use empty groups to fake them. K({}\cdot{},y) does look more like something like K(x,y) You can, of course, specify any space around the \cdot with \mspace or the primitive \mkern which both take dimensions in mu (math unit: mu = 1/18 em): {{\mkern 2mu\cdot\mkern 2mu}} I recommend to define a macro for this kind of wildcard/placeholder so that, if you ever change your opinion about the correct spacing you can just change the macro's definition (aka “The LaTeX way”). \newcommand*\wc{{\mkern 2mu\cdot\mkern 2mu}} ### Code \documentclass{article} \newcommand*\wc{{}\cdot{}} \begin{document}\noindent $$K ( \cdot , y )$$ \\ $$K ({ \cdot} , y )$$ \\ $$K ({}\cdot{}, y )$$ \\ $$K ( \wc , y )$$ \\ $$K ( x, y )$$ \end{document} ### Output - The "correct" space depends on many factors; slightly less space around the dot can be obtained by \newcommand\blank{{\mkern 2mu\cdot\mkern 2mu}}`. I definitely recommend using a command for it. –  egreg Oct 19 '12 at 6:33 thanks to the both of you. –  Acorbe Oct 19 '12 at 7:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993151426315308, "perplexity": 2093.5964262653933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654293/warc/CC-MAIN-20140305060734-00060-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/finding-wavelength-and-wave-speed-in-a-sinusoidal-wave.557243/
# Finding wavelength and wave speed in a sinusoidal wave

1. Dec 5, 2011 ### JustinLiang

1. The problem statement, all variables and given/known data
A sinusoidal wave is propagating along a stretched string that lies along the x-axis. The displacement of the string as a function of time is graphed in the figure (attachment) for particles at x = 0 m and x = 0.0900 m.
(A) What is the amplitude of the wave? 4 mm
(B) What is the period of the wave? 0.04 s
I am confused by the following questions:
(C) You are told that the two points x = 0 and x = 0.09 m are within one wavelength of each other. If the wave is moving in the +x-direction, determine the wavelength and wave speed.
(D) If the wave is moving in the -x-direction, determine the wavelength and wave speed.

2. Relevant equations
y(x,t) = A cos(kx ± ωt)
v = λf

3. The attempt at a solution
Initially I thought the wavelength would be 0.09 since it says 0 and 0.09 are one wavelength away from each other... But then I looked at the answer key and it seems my interpretation is wrong. Does anyone know what the question is asking in (C)?

2. Dec 5, 2011 ### Delphi51

It doesn't say 0 and 0.09 are one wavelength apart. If they were, they would peak simultaneously. It looks like one peaks about 0.025 s after the other. That would mean the wave went one wavelength in time 0.025 s. Can you get a wavelength or velocity out of that? If you get one, you can find the other with the wave equation.

3. Dec 5, 2011 ### Spinnor

I don't think that is correct. Depending on which way the wave is going, the velocity will be about 0.09 m/0.025 s or 0.09 m/0.015 s? Confusing graph at first.

4. Dec 5, 2011 ### JustinLiang

Could you please explain to me how you got 0.015 s for the -x direction? Isn't the time interval the same? But yeah, I think he is correct: it took 0.025 s to travel from 0 m to 0.09 m, so the velocity is about 3.6 m/s and the wavelength is 0.144 m.

5. Dec 5, 2011 ### JustinLiang

Oh I see, because we start at x = 0. Thanks for the help!

6. Sep 8, 2012 ### cvazzer42

I'm a little confused on why the wavelength is different when the wave is moving in the -x direction. Can somebody shed a little light on this for me?

7. Sep 9, 2012 ### Spinnor

It is a confusing (to me) graph. You have two separate observation points with the data graphed. There is ambiguity from the graph as to which way the wave is traveling; you can't tell from the graph. But once they tell you which way the wave is going, you can determine the wavelength and the velocity. Let a be the point x = 0 and b the point x = 0.09 m. If the wave moves to the right, the wave peaks at a and then later at b. If the wave moves to the left, it peaks at b and then later at a. From the graph you can then determine the time it takes for the wave to move 0.09 meters. From the graph you also know the frequency of the wave. You have enough information to figure out the wave velocity. Hope this was helpful. Good luck.

8. Sep 22, 2012 ### Spinnor

The graphs might be of waves in some moving medium, like waves in a flowing river. Water waves move faster downstream than upstream when viewed from the river bank.
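A short sketch of the arithmetic discussed in posts 3 and 4. The numbers are the readings quoted in the thread (period 0.04 s, point separation 0.09 m, peak delays of 0.025 s for +x and 0.015 s for -x); since the attached graph is not reproduced here, treat them as assumed inputs.

```python
# Sketch of the wavelength/speed arithmetic from the thread. The peak delays
# come from the posts, not from re-reading the (unavailable) graph.
T = 0.04          # s, period read from the graph
d = 0.09          # m, separation of the two observation points

for label, dt in [("+x direction", 0.025), ("-x direction", 0.015)]:
    v = d / dt            # wave covers 0.09 m in the time between the two peaks
    lam = v * T           # v = lambda * f  =>  lambda = v / f = v * T
    print(f"{label}: v = {v:.2f} m/s, wavelength = {lam:.3f} m")

# Expected: +x gives v = 3.60 m/s and lambda = 0.144 m (as found in post 4);
# -x gives v = 6.00 m/s and lambda = 0.240 m.
```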
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285255670547485, "perplexity": 721.9048532078558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891377.59/warc/CC-MAIN-20180122133636-20180122153636-00388.warc.gz"}
https://dsp.stackexchange.com/questions/46900/calculating-output-of-a-system-given-impulse-response-and-input
# Calculating output of a system given impulse response and input

I'm trying to calculate the output given an impulse response $h(t)=e^{-t}u(t)$ and input $x(t)=\cos(2\pi t)$. I know I need to use the convolution, but given that cosine is periodic, I don't see how I would get an answer to converge. Am I thinking about this the right way, or missing something?

• Have you tried using the Fourier transform? – Tendero Feb 2 '18 at 0:04
• Hint 1: cos(2 pi t) = ( exp( -j 2 pi t) + exp( j 2 pi t))/2 – user28715 Feb 2 '18 at 1:24

Even though you could solve this problem using other means, such as frequency-domain methods, you could also follow a direct time-domain path, as follows.

Given the impulse response $$h(t) = e^{-t} u(t)$$ of an LTI system and the applied excitation input $$x(t) = \cos(2\pi t)$$ (which is of infinite extent, from $$t=-\infty$$ to $$t=\infty$$), the output can be written as the convolution integral:

$$y(t) = x(t)\star h(t) = \int_{-\infty}^{\infty} h(t-\tau) x(\tau)d\tau = \int_{-\infty}^{\infty} e^{-(t-\tau)}u(t-\tau) \cos(2\pi \tau)d\tau$$

$$y(t) = e^{-t} \int_{-\infty}^{t} e^{\tau} \cos(2\pi \tau)d\tau$$

At this point we have to evaluate the integral, which can be found in most calculus texts or in an integral table. I will use MATLAB to evaluate it, which results in

$$\boxed{ y(t) = \frac{\cos(2\pi t) }{1 + 4\pi^2} + \frac{2\pi \sin(2\pi t) }{1 + 4\pi^2} }$$

You can simplify the result using $$a\cos(wt) + b\sin(wt) = \sqrt{a^2+b^2} \cos(wt - \tan^{-1}(b/a) )$$ into

$$\boxed{ y(t) = \frac{1 }{\sqrt{1 + 4\pi^2}} \cos(2\pi t - \tan^{-1}(2\pi))}$$

as the solution. The solution has no transient part, as the input was applied beginning from $$t=-\infty$$.
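A quick numerical cross-check of the closed form above; this is a sketch using numpy/scipy and is not part of the original answer. It evaluates y(t) = e^{-t} ∫ e^{τ} cos(2πτ) dτ by quadrature over a long but finite lower limit and compares it with the pre-simplification boxed formula.

```python
# Numerical sanity check of the boxed result. The kernel decays like
# e^{-(t - tau)}, so truncating the lower limit of the integral at t - 40
# introduces an error of order e^{-40}, i.e. negligible.
import numpy as np
from scipy.integrate import quad

def y_numeric(t):
    integrand = lambda tau: np.exp(-(t - tau)) * np.cos(2 * np.pi * tau)
    val, _ = quad(integrand, t - 40.0, t, limit=400)
    return val

def y_closed_form(t):
    return (np.cos(2 * np.pi * t) + 2 * np.pi * np.sin(2 * np.pi * t)) / (1 + 4 * np.pi**2)

for t in [0.0, 0.1, 0.37, 1.25]:
    print(t, y_numeric(t), y_closed_form(t))   # the two columns should agree closely
```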
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9627375602722168, "perplexity": 313.841016035704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153392.43/warc/CC-MAIN-20210727135323-20210727165323-00182.warc.gz"}
https://www.physicsforums.com/threads/test-of-second-partials-proof.51481/
# Test of Second Partials - Proof

1. Nov 4, 2004 ### circa415

Can anyone give me a proof or some kind of general explanation of why there is a local max at (a,b) if D>0 and fxx<0, a min if D>0 and fxx>0, etc.? My textbook doesn't give any kind of explanation at all and I'm just a little curious as to why it works.

2. Nov 5, 2004 ### Galileo

Second Derivatives Test: Suppose the second partial derivatives of $f$ are continuous on a disk with center $(a,b)$, and suppose that $f_x(a,b)=0$ and $f_y(a,b)=0$ [that is, $(a,b)$ is a critical point of $f$]. Let
$$D=D(a,b)=f_{xx}(a,b)f_{yy}(a,b)-[f_{xy}(a,b)]^2$$
(a) If D>0 and $f_{xx}(a,b)>0$, then f(a,b) is a local minimum.
(b) If D>0 and $f_{xx}(a,b)<0$, then f(a,b) is a local maximum.
(c) If D<0, then f(a,b) is not a local maximum or minimum.

Proof of part (a): We compute the second-order directional derivative of f in the direction of $\vec u = \langle h, k \rangle$. The first-order derivative is given by:
$$D_uf=f_xh+f_yk \quad \mbox{(from a different theorem)}$$
Applying this theorem a second time, we have:
$$\begin{eqnarray} D^2_uf & = & D_u(D_uf)=\frac{\partial}{\partial x}(D_uf)h+\frac{\partial}{\partial y}(D_uf)k \nonumber \\ & = & (f_{xx}h+f_{yx}k)h+(f_{xy}h+f_{yy}k)k \nonumber\\ & = & f_{xx}h^2+2f_{xy}hk+f_{yy}k^2 \quad \mbox{(by Clairaut's theorem)}\nonumber \end{eqnarray}$$
If we complete the square in this expression, we obtain:
$$D_u^2f=f_{xx}\left(h+\frac{f_{xy}}{f_{xx}}k\right)^2+\frac{k^2}{f_{xx}}(f_{xx}f_{yy}-f^2_{xy}) \quad \mbox{(Equation 1)}$$
We are given that $f_{xx}(a,b)>0$ and $D(a,b)>0$. But $f_{xx}$ and $D=f_{xx}f_{yy}-f^2_{xy}$ are continuous functions, so there is a disk B with center (a,b) and radius $\delta>0$ such that $f_{xx}>0$ and $D>0$ whenever (x,y) is in B. Therefore, by looking at Equation 1, we see that $D_u^2f(x,y)>0$ whenever (x,y) is in B. This means that if C is the curve obtained by intersecting the graph of f with the vertical plane through P(a,b,f(a,b)) in the direction of $\vec u$, then C is concave upward on an interval of length $2\delta$. This is true in the direction of every vector $\vec u$, so if we restrict (x,y) to lie in B, the graph lies above its horizontal tangent plane at P. Thus $f(x,y)\geq f(a,b)$ whenever (x,y) is in B. This shows that f(a,b) is a local minimum. Parts (b) and (c) have similar proofs.

3. Nov 5, 2004 ### HallsofIvy

Notice that what Galileo is really doing is expanding f(x,y) in a Taylor series in two variables. The derivatives at a critical point are 0, so there are no first-power terms. For x, y very close to the critical point, we can ignore higher powers, so we have only the constant term (the value of f AT the critical point) and the quadratic terms. We can always change the coordinate system to eliminate any "xy" term and be left with $ax^2 + by^2$ (with a, b positive, negative, or 0). If both a and b are positive, that's a minimum; if both are negative, a maximum. If either is 0, we need to look at higher powers.

What's REALLY happening is that, for a real-valued function from $R^2$ to $R$, the second derivative is a linear function from $R^2$ to $R^2$ which can be represented as a 2 by 2 matrix (the entries are: first row $f_{xx}$, $f_{xy}$; second row $f_{yx}$, $f_{yy}$). Since that is a symmetric matrix, we can always change the coordinate system to make it a diagonal matrix, resulting in the $ax^2 + by^2$ above (a, b are the diagonal elements). Of course the determinant of such a matrix is ab, so everything depends on whether that determinant is positive or negative. But the determinant is unchanged by the change of coordinate system, so we can just look at the determinant of the original matrix: $D = f_{xx}f_{yy} - f_{xy}^2$.

4. Nov 5, 2004 ### arildno

The second derivative test can seem rather tricky and abstract to begin with, but, as HallsofIvy said, what you really get out of it is one of the following:
1) Locally, your graph looks like a paraboloid opening upwards (i.e., you've got positive curvatures and a minimum point).
2) Locally, your graph looks like a paraboloid opening downwards (i.e., you've got negative curvatures and a maximum point).
3) Locally, your graph looks like a "saddle", neither maximum nor minimum.
4) The second derivatives test is insufficient for elucidating local behaviour (i.e., you must look at higher derivatives, assuming those exist).
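A small sympy illustration of the test stated above: it computes fxx and D = fxx·fyy − fxy² at the critical point (0,0) and classifies it. The two example functions are my own illustrative choices, not from the thread.

```python
# Sketch: apply the second derivatives test (as quoted in the thread) to two
# illustrative functions, each with a critical point at the origin.
import sympy as sp

x, y = sp.symbols("x y")

def classify(f, point=(0, 0)):
    subs = {x: point[0], y: point[1]}
    assert sp.diff(f, x).subs(subs) == 0      # confirm (0,0) is a critical point
    assert sp.diff(f, y).subs(subs) == 0
    fxx = sp.diff(f, x, 2).subs(subs)
    fyy = sp.diff(f, y, 2).subs(subs)
    fxy = sp.diff(f, x, y).subs(subs)
    D = fxx * fyy - fxy**2
    if D > 0 and fxx > 0:
        kind = "local minimum"
    elif D > 0 and fxx < 0:
        kind = "local maximum"
    elif D < 0:
        kind = "saddle (neither max nor min)"
    else:
        kind = "test inconclusive"
    return fxx, D, kind

print(classify(x**2 + x*y + y**2))   # expect D = 3 > 0, fxx = 2 > 0 -> local minimum
print(classify(x**2 - y**2))         # expect D = -4 < 0 -> saddle
```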
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538911581039429, "perplexity": 419.19038559661414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541322.19/warc/CC-MAIN-20161202170901-00437-ip-10-31-129-80.ec2.internal.warc.gz"}
http://physics.stackexchange.com/tags/majorana-fermions/new
# Tag Info

There may be some language confusion here. The "Majorana representation" referred to in the question has nothing to do with Majorana fermions. Rather, it is about the so-called "Majorana stellar representation" of spin-J pure quantum states (which can be thought of as the generalization, for J > 1/2, of the Bloch representation of spin-1/2 pure states). See e.g. ...

In short, what makes a superconductor topological is the nontrivial band structure of the Bogoliubov quasiparticles. Generally one can classify non-interacting gapped fermion systems based on single-particle band structure (as well as symmetry), and the result is the so-called ten-fold way/periodic table. The topological superconductivity mentioned in the ...

A prototypical example of an intrinsic topological superconductor is the so-called $p$-wave superconductor [more details there: What is a $p_x + i p_y$ superconductor? Relation to topological superconductors; also, Meng-Cheng wrote the spinless $p$-wave model in 2D somewhere else on this page and comments on it carefully]. You can also induce topological ...

You say about Majorana fermions: "From this I would argue that they need to have all quantum numbers equal to zero," which is not true. Charge conjugation is defined on Dirac spinors as $\psi^c := \mathrm{i}\gamma^0\gamma^2\bar\psi^T$. Being Majorana means $\psi^c = \psi$. While this would imply the spinor has zero electric charge (and zero all other ...

There is no well-established theory predicting the neutrino mass (even if of Majorana type). A particle is its own antiparticle if the field describing the particle is a real field. That means obviously that the electric charge is 0, but not necessarily that other charges are zero. For instance, if the neutrinos are Majorana neutrinos, they would still have ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896806240081787, "perplexity": 701.5385090820587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299496.98/warc/CC-MAIN-20150323172139-00195-ip-10-168-14-71.ec2.internal.warc.gz"}