url (stringlengths 15–1.13k) | text (stringlengths 100–1.04M) | metadata (stringlengths 1.06k–1.1k)
---|---|---
http://physics.stackexchange.com/users/5705/queueoverflow?tab=summary | # queueoverflow
reputation: 216
website: martin-ueding.de
location: Germany
member for: 2 years, 2 months
seen: Dec 12 at 15:34
profile views: 118
# 28 Questions
8 How to combine measurement error with statistic error 5 absolute definition of the right (i. e. not left) direction 5 Imaginary angle on simple centrifugal problem 4 Why does the conductivity $\sigma$ decrease with the temperature $T$ in a semi-conductor? 4 Lorentz Transformation via Geometry
# 722 Reputation
+20 Why does the conductivity $\sigma$ decrease with the temperature $T$ in a semi-conductor? +5 How to combine measurement error with statistic error +5 Imaginary angle on simple centrifugal problem +10 Will macroscopic object change its angular velocity after absorbing electron?
4 Riddle about speed 3 Speed Distribution 3 Which is the axis of rotation? 2 Angular displacement and the displacement vector 2 Force on Earth due to Sun's radiation pressure
# 63 Tags
9 homework × 16 3 torque × 2 8 speed × 3 3 reference-frame 5 rotational-dynamics × 3 3 rigid-body-dynamics 4 acoustics 2 classical-electrodynamics × 2 3 electromagnetism × 5 2 electromagnetic-radiation
# 22 Accounts
Ask Ubuntu 1,549 rep 21441 Stack Overflow 884 rep 520 TeX - LaTeX 865 rep 415 Physics 722 rep 216 Mathematics 538 rep 1311 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9178817868232727, "perplexity": 4571.501159770862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345776447/warc/CC-MAIN-20131218054936-00054-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://arxiv.org/abs/1404.4596?context=math.RT | math.RT
# Title:Twisting of Siegel paramodular forms
Abstract: Let $S_k(\Gamma^{\mathrm{para}}(N))$ be the space of Siegel paramodular forms of level $N$ and weight $k$. Let $p\nmid N$ and let $\chi$ be a nontrivial quadratic Dirichlet character mod $p$. Based on our previous work, we define a linear twisting map $\mathcal{T}_\chi:S_k(\Gamma^{\mathrm{para}}(N))\rightarrow S_k(\Gamma^{\mathrm{para}}(Np^4))$. We calculate an explicit expression for this twist and give the commutation relations of this map with the Hecke operators and Atkin-Lehner involution for primes $\ell\neq p$.
Comments: 64 pages. In version 2, the paper has been shortened significantly and lengthy technical proofs are given in a separate appendix. In version 3, two typos are corrected. In version 4, we have included the full details of the proof of the local twisting theorem in the paper and improved the results with the L-function theorem.
Subjects: Number Theory (math.NT); Representation Theory (math.RT)
Cite as: arXiv:1404.4596 [math.NT] (or arXiv:1404.4596v4 [math.NT] for this version)
## Submission history
From: Jennifer Johnson-Leung [view email]
[v1] Thu, 17 Apr 2014 18:07:08 UTC (24 KB)
[v2] Wed, 23 Apr 2014 05:48:11 UTC (16 KB)
[v3] Thu, 8 Jan 2015 20:03:24 UTC (16 KB)
[v4] Tue, 4 Oct 2016 00:49:11 UTC (24 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8735520243644714, "perplexity": 1017.0552917847824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670268.0/warc/CC-MAIN-20191119222644-20191120010644-00142.warc.gz"} |
http://mathoverflow.net/questions/385/deformation-theory-and-differential-graded-lie-algebras/417 | # Deformation theory and differential graded Lie algebras
There is supposed to be a philosophy that, at least over a field of characteristic zero, every "deformation problem" is somehow "governed" or "controlled" by a differential graded Lie algebra. See for example http://arxiv.org/abs/math/0507284
I've seen this idea attributed to big names like Quillen, Drinfeld, and Deligne -- so it must be true, right? ;-)
An example of this philosophy is the deformation theory of a compact complex manifold: It is "controlled" by the Kodaira-Spencer dg Lie algebra: holomorphic vector fields tensor Dolbeault complex, with differential induced by del-bar on the Dolbeault complex, and Lie bracket induced by Lie bracket on the vector fields (I think also take wedge product on the Dolbeault side).
I seem to recall that there is a general theorem which justifies this philosophy, but I don't remember the details, or where I heard about it. The statement of the theorem should be something like:
Let k be a field of characteristic zero. Given a functor F: (Local Artin k-algebras) -> (Sets) satisfying some natural conditions that a "deformation functor" should satisfy, then there exists a dg Lie algebra L such that F is isomorphic to the deformation functor of L, which is the functor that takes an algebra A and returns the set of Maurer-Cartan solutions (dx + [x,x] = 0) in (L^1 tensor mA) modulo the gauge action of (L^0 tensor mA), where mA denotes the maximal ideal of A.
Furthermore, I think such an L should be unique up to quasi-isomorphism.
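Spelled out, with the usual convention that the Maurer-Cartan equation carries a factor of $\tfrac{1}{2}$ on the bracket, the deformation functor of a dg Lie algebra $L$ over a local Artin $k$-algebra $A$ with maximal ideal $m_A$ would read

$$\mathrm{Def}_L(A) \;=\; \left\{\, x \in L^1 \otimes m_A \;:\; dx + \tfrac{1}{2}[x,x] = 0 \,\right\} \Big/ \exp\!\left(L^0 \otimes m_A\right),$$

where $\exp(L^0 \otimes m_A)$ acts by gauge transformations (the exponential makes sense because $m_A$ is nilpotent).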
Does anyone know a reference for something along these lines?
Any other nice examples of cases where this philosophy holds would also be appreciated.
-
I think another example is supposed to be something like: the Hochschild complex (maybe shifted one way or another) is the dg Lie algebra that controls the deformations of an A-infinity algebra or an A-infinity category. – Kevin H. Lin Oct 12 '09 at 23:22
I'm looking forward to a proper answer to this question, I've not come across a theorem like that before and it sounds very interesting indeed. My experience is very much weighted to the algebraic side, I should read up on geometric examples. – James Griffin Oct 13 '09 at 12:27
Jacob Lurie's ICM address (math.harvard.edu/~lurie/papers/moduli.pdf) is concerned precisely with the issues discussed here. – David Ben-Zvi Apr 6 '10 at 18:21
@DBZ, thanks! I'll take a look. – Kevin H. Lin Apr 6 '10 at 20:50
I hope to write more on this later, but for now let me make some general assertions: there are general theorems to this effect and give two references: arXiv:math/9812034, DG coalgebras as formal stacks, by Vladimir Hinich, and the survey article arXiv:math/0604504, Higher and derived stacks: a global overview, by Bertrand Toen (look at the very end to where Hinich's theorem and its generalizations are discussed).
The basic assertion, if you'd like, is the Koszul duality of the commutative and Lie operads in characteristic zero. In its simplest form it's a version of Lie's theorem: to any Lie algebra we can assign a formal group, and to every formal group we can assign a Lie algebra, and this gives an equivalence of categories. The general construction is the same: we replace Lie algebras by their homotopical analog, L∞ algebras or dg Lie algebras (the two notions are equivalent --- both are Lie algebras in a stable (∞,1)-category). We can associate to such an object the space of solutions of the Maurer-Cartan equations -- this is basically the classifying space of its formal group (i.e. formal group shifted by 1). Conversely, from any formal derived stack we can calculate its shifted tangent complex (or perhaps better to say, the Lie algebra of its loop space). These are equivalences of ∞-categories if you set everything up correctly. This is a form of Quillen's rational homotopy theory - we're passing from a simply connected space to the Lie algebra of its loop space (the Whitehead algebra of homotopy groups of X with a shift) and back.
So basically this "philosophy", with a modern understanding is just calculus or Lie theory: you can differentiate and exponentiate, and they are equivalences between commutative and Lie theories (note we're saying this geometrically, which means replacing commutative algebras by their opposite, ie appropriate spaces -- in this case formal stacks). Since any deformation/formal moduli problem, properly formulated, gives rise to a formal derived stack, it is gotten (again in characteristic zero) by exponentiating a Lie algebra.
Sorry to be so sketchy, might try to expand later, but look in Toen's article for more (though I think it's formulated there as an open question, and I think it's not so open anymore). Once you see things this way you can generalize them also in various ways -- for example, replacing commutative geometry by noncommutative geometry, you replace Lie algebras by associative algebras (see arXiv:math/0605095 by Lunts and Orlov for this philosophy) or pass to geometry over any operad with an augmentation and its dual...
-
PS the names associated with this philosophy should include also Boris Feigin in addition to those mentioned above. – David Ben-Zvi Oct 20 '09 at 0:21
Thanks!! I'll take a look at those references. – Kevin H. Lin Oct 20 '09 at 0:23
"look in Toen's article for more (though I think it's formulated there as an open question, and I think it's not so open anymore)" Hi David. Can you give a reference for that ? – DamienC Jun 23 '11 at 14:06
Hi Damien - I would look first at Jacob's ICM address, which covers this and more.. – David Ben-Zvi Jun 24 '11 at 2:54
This is an amazingly clear and compact introduction to the topic. – Mark.Neuhaus Apr 19 at 23:27
Perhaps the notes of Kontsevich's lectures are helpful.
-
Thanks, but I've already read both of those. The principle is definitely stated somewhere therein, but I don't think it's precisely formulated nor proven. Perhaps I missed it though... – Kevin H. Lin Oct 20 '09 at 0:18
Maybe http://arxiv.org/abs/0707.0889 could be of any help? It's general enough - representations of properads cover a huge variety of cases, from algebraic structures to formal differential geometric things.
-
I can offer an algebraic example generalising that of Hochschild cohomology. Let f: O → P be a morphism of operads, and assume that O has O(1)=k (although augmented should be strong enough as well, I think). Then we can form a cofibrant resolution O' of O; this has the underlying structure of the free operad on a set of generators C, and this C has a cooperad structure.
We want to deform f. Well, Hom(O',P) ⊂ Hom(FC,P) = Hom_S(C,P), where the first two Homs are in the category of operads and the final one is in the category of collections. Recall that collections underlie operads; they can be given as functors from the category of finite sets and bijections into vector spaces.
But C is a cooperad and P is an operad, and this homset looks a lot like linear maps between them, so shouldn't we have a convolution operad structure? Well, we do, but it's only a non-symmetric operad.
Non-symmetric operads have the natural structure of a pre-Lie algebra, the composition is defined by taking the sum of all possible ways of plugging one operation into another. If you haven't met pre-Lie algebras then don't worry as the anti-symmetrisation of the pre-Lie product is a Lie bracket. So our non-symmetric operad Hom(FC,P) naturally has the structure of a dg-Lie algebra.
The nature of the inclusion of Hom(O',P) into Hom(FC,P) should come as no surprise. They're precisely the Maurer-Cartan elements. So our morphism f corresponds to a MC element in a dg-Lie algebra.
Given a MC element in a graded-Lie algebra the deformations are MC elements in the dg-Lie algebra twisted by the original MC element.
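Concretely, the twisting mentioned here is the standard construction: if $x$ is a Maurer-Cartan element of a dg Lie algebra $(L, d, [\,\cdot\,,\cdot\,])$, the twisted dg Lie algebra $L^x$ has the same bracket and the differential $d_x = d + [x, -]$, and (with the $\tfrac{1}{2}$ normalization) one checks directly that

$$d_x y + \tfrac{1}{2}[y,y] = 0 \quad\Longleftrightarrow\quad d(x+y) + \tfrac{1}{2}[x+y,\, x+y] = 0,$$

so deformations of $x$ are exactly Maurer-Cartan elements of $L^x$.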
Examples
1. Let P be an endomorphism operad ⊕Hom(A⊗...⊗A,A), then the theory above is the deformation theory for O algebras.
2. Let O be the associative operad, let P be the operad for associative algebras with an action of a comm. alg R by central elements and a Lie algebra g by derivations with some compatibility conditions (in fact by a Lie-Rinehart algebra, or a Lie-algebroid). Then the deformation theory of the inclusion morphism is the study of deformations induced by Lie-algebroid actions. In fact the dg-Lie algebras involved are very often formal.
-
I'm hesitant to put this up, since I haven't actually looked at the reference I'm about to suggest, but what about Illusie's Complexe Cotangent et Déformations I & II? I've been under the impression that if I want to learn deformation theory, I should look there, though I have no clue if it contains the precise theorem you are looking for.
-
Some precise statements with proofs can be found in this paper of Manetti: http://arxiv.org/abs/math/9910071
- | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707424998283386, "perplexity": 659.0382096263675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400382386.21/warc/CC-MAIN-20141119123302-00244-ip-10-235-23-156.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/138184/area-between-y-ex-xex/138186 | # Area between $y = e^x - xe^x$
I am trying to find the area between $y = e^x - xe^x$ and $x = 0$.
I get stuck trying to find the antiderivative of $xe^x$; I don't know how to handle a function like that. I know the curves intersect at $(1, 1)$, so I am finding it from 0 to 1.
-
$\int xe^x\,dx$ is a standard integration by parts; set $u=x$ and $dv=e^x\,dx$. – Brian M. Scott Apr 28 '12 at 20:11
I have not been introduced to integration by parts. – Jordan Apr 28 '12 at 20:14
I don’t offhand see any straightforward systematic way to get the antiderivative without using integration by parts. Try taking the derivative of $2e^x-xe^x$ with respect to $x$, and see if that helps you to find the antiderivative that you need. – Brian M. Scott Apr 28 '12 at 20:23
It might help to think about what differentiates to a similar function (i.e. the whole function, y). @Mark Bennet's answer is a good hint that is straightforward and avoids integration by parts. – Ronald Apr 28 '12 at 20:31
You may not know about integration by parts, but you might be expected to use some intelligent guesswork. You know that the derivative of $e^x$ is $e^x$ so how about ...
Take the derivative of $xe^x$ and find that it is $e^x+xe^x$.
So the derivative of $xe^x-e^x$ is ... and go from there.
-
$$\int_0^1(e^x-xe^x)\,dx=\int_0^1e^x\,dx-\int_0^1xe^x\,dx=e-1-\int_0^1xe^x\,dx$$
$$\int_0^1xe^x\,dx=xe^x\Big|_0^1-\int_0^1e^x\,dx\ \ \ \ \ \ \ \text{(by parts)}$$ $$=e-(e-1)=1$$
$$\int_0^1(e^x-xe^x)\,dx=e-2$$
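A minimal SymPy sketch (assuming SymPy is available) confirming the value $e - 2 \approx 0.718$:

```python
import sympy as sp

x = sp.symbols('x')
area = sp.integrate(sp.exp(x) - x * sp.exp(x), (x, 0, 1))
print(sp.simplify(area))   # E - 2
print(float(area))         # roughly 0.718
```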
-
Integration by parts is not something introduced until the next chapter, which will not be a part of this course. – Jordan Apr 28 '12 at 20:16
First remember the formula for integration by parts. You can derive it by integrating the product rule for derivatives $$(fg)' = f(g') + (f')g \quad\xrightarrow{\int_0^1}\quad fg|_{x=0}^{1} = \int_0^1 f(g')dx + \int_0^1 (f')g dx$$ Or rephrased: $\int_0^1 (f')g dx = fg|_{x=0}^{1} - \int_0^1 f(g')dx$.
If we chose $f'=e^x \Rightarrow f=e^x$ and $g=x$ we get $$\int_0^1 e^x x dx = [e^x x]_{x=0}^{1} - \int_0^1 e^x(1)dx.$$
$$\int_0^1 e^x dx - \int_0^1 e^x x dx = - [e^x x]_{x=0}^{1} + 2 \int_0^1 e^x(1)dx$$ $$= -e + 2(e-1) = e-2$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8972572088241577, "perplexity": 187.06105340864133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758389/warc/CC-MAIN-20131218054918-00030-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://en.wikisource.org/wiki/Eight_Lectures_on_Theoretical_Physics/II | Eight Lectures on Theoretical Physics/II
SECOND LECTURE.
Thermodynamic States of Equilibrium in Dilute Solutions.
In the lecture of yesterday I sought to make clear the fact that the essential, and therefore the final division of all processes occurring in nature, is into reversible and irreversible processes, and the characteristic difference between these two kinds of processes, as I have further separated them, is that in irreversible processes the entropy increases, while in all reversible processes it remains constant. Today I am constrained to speak of some of the consequences of this law which will illustrate its rich fruitfulness. They have to do with the question of the laws of thermodynamic equilibrium. Since in nature the entropy can only increase, it follows that the state of a physical configuration which is completely isolated, and in which the entropy of the system possesses an absolute maximum, is necessarily a state of stable equilibrium, since for it no further change is possible. How deeply this law underlies all physical and chemical relations has been shown by no one better and more completely than by John Willard Gibbs, whose name, not only in America, but in the whole world will be counted among those of the most famous theoretical physicists of all times; to whom, to my sorrow, it is no longer possible for me to tender personally my respects. It would be gratuitous for me, here in the land of his activity, to expatiate fully on the progress of his ideas, but you will perhaps permit me to speak in the lecture of today of some of the important applications in which thermodynamic research, based on Gibbs works, can be advanced beyond his results.
These applications refer to the theory of dilute solutions, and we shall occupy ourselves today with these, while I show you by a definite example what fruitfulness is inherent in thermodynamic theory. I shall first characterize the problem quite generally. It has to do with the state of equilibrium of a material system of any number of arbitrary constituents in an arbitrary number of phases, at a given temperature $T$ and given pressure $p$. If the system is completely isolated, and therefore guarded against all external thermal and mechanical actions, then in any ensuing change the entropy of the system will increase:
\begin{align}&{\color{White}.(00)}\qquad&& dS > 0. \end{align}
But if, as we assume, the system stands in such relation to its surroundings that in any change which the system undergoes the temperature $T$ and the pressure $p$ are maintained constant, as, for instance, through its introduction into a calorimeter of great heat capacity and through loading with a piston of fixed weight, the inequality would suffer a change thereby. We must then take account of the fact that the surrounding bodies also, e. g., the calorimetric liquid, will be involved in the change. If we denote the entropy of the surrounding bodies by $S_{0}$, then the following more general equation holds:
\begin{align}&{\color{White}.(00)}\qquad&& dS + dS_{0} > 0. \end{align}
In this equation
\begin{align}&{\color{White}.(00)}\qquad&& dS_{0} = -\frac{Q}{T}, \end{align}
if $Q$ denote the heat which is given up in the change by the surroundings to the system. On the other hand, if $U$ denote the energy, $V$ the volume of the system, then, in accordance with the first law of thermodynamics,
\begin{align}&{\color{White}.(00)}\qquad&& Q = dU + p dV. \end{align}
Consequently, through substitution:
\begin{align}&{\color{White}.(00)}\qquad&& dS - \frac{dU + p dV}{T} > 0 \end{align}
or, since $p$ and $T$ are constant:
\begin{align}&{\color{White}.(00)}\qquad&& d \left(S - \frac{U + pV}{T} \right) > 0. \end{align}
If, therefore, we put:
\begin{align}&(1){\color{White}.0}\qquad&& S - \frac{U + pV}{T} = \Phi, \end{align}
then
\begin{align}&{\color{White}.(00)}\qquad&& d \Phi > 0, \end{align}
and we have the general law, that in every isothermal-isobaric ($T = \text{const.}$, $p = \text{const.}$) change of state of a physical system the quantity $\Phi$ increases. The absolutely stable state of equilibrium of the system is therefore characterized through the maximum of $\Phi$:
\begin{align}&(2){\color{White}.0}\qquad&& \delta \Phi = 0. \end{align}
If the system consist of numerous phases, then, because $\Phi$, in accordance with $(1)$, is linear and homogeneous in $S$, $U$ and $V$, the quantity $\Phi$ referring to the whole system is the sum of the quantities $\Phi$ referring to the individual phases. If the expression for $\Phi$ is known as a function of the independent variables for each phase of the system, then, from equation $(2)$, all questions concerning the conditions of stable equilibrium may be answered. Now, within limits, this is the case for dilute solutions. By “solution” in thermodynamics is meant each homogeneous phase, in whatever state of aggregation, which is composed of a series of different molecular complexes, each of which is represented by a definite molecular number. If the molecular number of a given complex is great with reference to all the remaining complexes, then the solution is called dilute, and the molecular complex in question is called the solvent; the remaining complexes are called the dissolved substances.
Let us now consider a dilute solution whose state is determined by the pressure $p$, the temperature $T$, and the molecular numbers $n_{0}$, $n_{1}$, $n_{2}$, $n_{3}$, $\cdots$, wherein the subscript zero refers to the solvent. Then the numbers $n_{1}$, $n_{2}$, $n_{3}$, $\cdots$ are all small with respect to $n_{0}$, and on this account the volume $V$ and the energy $U$ are linear functions of the molecular numbers:
\begin{align}&{\color{White}.(00)}\qquad& V &= n_{0}v_{0} + n_{1}v_{1} + n_{2}v_{2} + \cdots,\\&& U &= n_{0}u_{0} + n_{1}u_{1} + n_{2}u_{2} + \cdots, \end{align}
wherein the $v$'s and $u$'s depend upon $p$ and $T$ only.
From the general equation of entropy:
\begin{align}&{\color{White}.(00)}\qquad&& dS = \frac{dU + p dV}{T}, \end{align}
in which the differentials depend only upon changes in $p$ and $T$, and not in the molecular numbers, there results therefore:
\begin{align}&{\color{White}.(00)}\qquad&& dS = n_{0} \frac{du_{0} + p dv_{0}}{T} + n_{1} \frac{du_{1} + p dv_{1}}{T} + \cdots, \end{align}
and from this it follows that the expressions multiplied by $n_{0}$, $n_{1}$ $\cdots$, dependent upon $p$ and $T$ only, are complete differentials. We may therefore write:
\begin{align}&(3){\color{White}.0}\qquad&& \frac{du_{0} + p dv_{0}}{T} = ds_{0}, \quad \frac{du_{1} + p dv_{1}}{T} = ds_{1},\ \cdots \end{align}
and by integration obtain:
\begin{align}&{\color{White}.(00)}\qquad&& S = n_{0}s_{0} + n_{1}s_{1} + n_{2}s_{2} + \cdots + C. \end{align}
The constant $C$ of integration does not depend upon $p$ and $T$, but may depend upon the molecular numbers $n_{0}$, $n_{1}$, $n_{2}$, $\cdots$. In order to express this dependence generally, it suffices to know it for a special case, for fixed values of $p$ and $T$. Now every solution passes, through appropriate increase of temperature and decrease of pressure, into the state of a mixture of ideal gases, and for this case the entropy is fully known, the integration constant being, in accordance with Gibbs:
\begin{align}&{\color{White}.(00)}\qquad&& C = - R (n_{0} \log c_{0} + n_{1} \log c_{1} + \cdots), \end{align}
wherein $R$ denotes the absolute gas constant and $c_{0}$, $c_{1}$, $c_{2}$, $\cdots$ denote the “molecular concentrations”:
\begin{align}&{\color{White}.(00)}\qquad&& c_{0} = \frac{n_{0}}{n_{0} + n_{1} + n_{2} + \cdots}, \quad c_{1} = \frac{n_{1}}{n_{0} + n_{1} + n_{2} + \cdots} ,\ \cdots. \end{align}
Consequently, quite in general, the entropy of a dilute solution is:
\begin{align}&{\color{White}.(00)}\qquad&& S = n_{0}(s_{0} - R \log c_{0}) + n_{1}(s_{1} - R \log c_{1}) + \cdots, \end{align}
and, finally, from this it follows by substitution in equation $(1)$ that:
\begin{align}&(4){\color{White}.0}\qquad&& \Phi = n_{0}(\varphi_{0} - R \log c_{0}) + n_{1}(\varphi_{1} - R \log c_{1}) + \cdots, \end{align}
if we put for brevity:
\begin{align}&(5){\color{White}.0}\qquad&& \varphi_{0} = s_{0} - \frac{u_{0} + pv_{0}}{T}, \quad \varphi_{1} = s_{1} - \frac{u_{1} + pv_{1}}{T},\ \cdots \end{align}
all of which quantities depend only upon $p$ and $T$.
With the aid of the expression obtained for $\Phi$ we are enabled through equation $(2)$ to answer the question with regard to thermodynamic equilibrium. We shall first find the general law of equilibrium and then apply it to a series of particularly interesting special cases.
Every material system consisting of an arbitrary number of homogeneous phases may be represented symbolically in the following way:
\begin{align}&{\color{White}.(00)}\qquad&& n_{0} m_{0},\ n_{1} m_{1},\ \cdots \mid {n_{0}}' {m_{0}}',\ {n_{1}}' {m_{1}}',\ \cdots \mid {n_{0}}''{m_{0}}'',\ {n_{1}}''{m_{1}}'',\ \cdots \mid \cdots. \end{align}
Here the molecular numbers are denoted by $n$, the molecular weights by $m$, and the individual phases are separated from one another by vertical lines. We shall now suppose that each phase represents a dilute solution. This will be the case when each phase contains only a single molecular complex and therefore represents an absolutely pure substance; for then the concentrations of all the dissolved substances will be zero.
If now an isobaric-isothermal change in the system of such kind is possible that the molecular numbers
\begin{align}&{\color{White}.(00)}\qquad&& n_{0},\ n_{1},\ n_{2},\ \cdots,\quad {n_{0}}',\ {n_{1}}',\ {n_{2}}',\ \cdots,\quad {n_{0}}'',\ {n_{1}}'',\ {n_{2}}'',\ \cdots \end{align}
change simultaneously by the amounts
\begin{align}&{\color{White}.(00)}\qquad&& \delta n_{0},\ \delta n_{1},\ \delta n_{2}, \cdots,\quad \delta {n_{0}}',\ \delta {n_{1}}',\ \delta {n_{2}}', \cdots,\quad \delta {n_{0}}'',\ \delta {n_{1}}'',\ \delta {n_{2}}'', \cdots \end{align}
then, in accordance with equation $(2)$, equilibrium obtains with respect to the occurrence of this change if, when $T$ and $p$ are held constant, the function
\begin{align}&{\color{White}.(00)}\qquad&& \Phi + \Phi' + \Phi'' + \cdots \end{align}
is a maximum, or, in accordance with equation $(4)$:
\begin{align}&{\color{White}.(00)}\qquad&& {\textstyle\sum} (\varphi_{0} - R \log c_{0})\delta n_{0} + (\varphi_{1} - R \log c_{1})\delta n_{1} + \cdots = 0 \end{align}
(the summation ${\textstyle\sum}$ being extended over all phases of the system). Since we are only concerned in this equation with the ratios of the $\delta n$'s, we put
\begin{align}&{\color{White}.(00)}\qquad&& \delta n_{0} : \delta n_{1} : \cdots : \delta {n_{0}}' : \delta {n_{1}}' : \cdots : \delta {n_{0}}'' : \delta {n_{1}}'' : \cdots \\&&& = \nu_{0} : \nu_{1} : \cdots : {\nu_{0}}' : {\nu_{1}}' : \cdots : {\nu_{0}}'' : {\nu_{1}}'' : \cdots, \end{align}
wherein we are to understand by the simultaneously changing $\nu$'s, in the variation considered, simple integer positive or negative numbers, according as the molecular complex under consideration is formed or disappears in the change. Then the condition for equilibrium is:
\begin{align}&(6){\color{White}.0}\qquad&& {\textstyle\sum} \nu_{0} \log c_{0} + \nu_{1} \log c_{1} + \cdots = \frac{1}{R} {\textstyle\sum} \nu_{0} \varphi_{0} + \nu_{1} \varphi_{1} + \cdots = \log K. \end{align}
$K$ and the quantities $\varphi_{0}$, $\varphi_{1}$, $\varphi_{2}$, $\cdots$ depend only upon $p$ and $T$, and this dependence is to be found from the equations:
\begin{align}&{\color{White}.(00)}\qquad& \frac{\partial \log K}{\partial p} &= \frac{1}{R} {\textstyle\sum} \nu_{0} \frac{\partial \varphi_{0}}{\partial p} + \nu_{1} \frac{\partial \varphi_{1}}{\partial p} + \cdots,\\&& \frac{\partial \log K}{\partial T} &= \frac{1}{R} {\textstyle\sum} \nu_{0} \frac{\partial \varphi_{0}}{\partial T} + \nu_{1} \frac{\partial \varphi_{1}}{\partial T} + \cdots. \end{align}
Now, in accordance with $(5)$, for any infinitely small change of $p$ and $T$:
\begin{align}&{\color{White}.(00)}\qquad&& d \varphi_{0} = ds_{0} - \frac{du_{0} + p dv_{0} + v_{0} dp}{T} + \frac{u_{0} + pv_{0}}{T^{2}} \cdot dT, \end{align}
and consequently, from $(3)$:
\begin{align}&{\color{White}.(00)}\qquad&& d \varphi_{0} = \frac{u_{0} + pv_{0}}{T^{2}} dT - \frac{v_{0} dp}{T}, \end{align}
and hence:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \varphi_{0}}{\partial p} = -\frac{v_{0}}{T},\quad \frac{\partial \varphi_{0}}{\partial T} = \frac{u_{0} + pv_{0}}{T^{2}}. \end{align}
Similar equations hold for the other $\varphi$'s, and therefore we get:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \log K}{\partial p} = -\frac{1}{RT} {\textstyle\sum} \nu_{0}v_{0} + \nu_{1}v_{1} + \cdots, \\&&& \frac{\partial \log K}{\partial T} = \frac{1}{RT^{2}} {\textstyle\sum} \nu_{0}u_{0} + \nu_{1}u_{1} + \cdots + p(\nu_{0}v_{0} + \nu_{1}v_{1} + \cdots) \end{align}
or, more briefly:
\begin{align}&(7){\color{White}.0}\qquad&& \frac{\partial \log K}{\partial p} = -\frac{1}{RT} \cdot \Delta V, \quad \frac{\partial \log K}{\partial T} = \frac{\Delta Q}{RT^{2}}, \end{align}
if $\Delta V$ denote the change in the total volume of the system and $\Delta Q$ the heat which is communicated to it from outside, during the isobaric isothermal change considered. We shall now investigate the import of these relations in a series of important applications.
I. Electrolytic Dissociation of Water.
The system consists of a single phase:
\begin{align}&{\color{White}.(00)}\qquad&& n_{0}H_{2}O,\quad n_{1}\overset{+}{H},\quad n_{2}\overset{-}{HO}. \end{align}
The transformation under consideration
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} : \nu_{1} : \nu_{2} = \delta n_{0} : \delta n_{1} : \delta n_{2} \end{align}
consists in the dissociation of a molecule $H_{2}O$ into a molecule $\overset{+}{H}$ and a molecule $\overset{-}{HO}$, therefore:
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = -1,\quad \nu_{1} = 1,\quad \nu_{2} = 1. \end{align}
Hence, in accordance with $(6)$, for equilibrium:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{0} + \log c_{1} + \log c_{2} = \log K, \end{align}
or, since $c_{1} = c_{2}$ and $c_{0} = 1$, approximately:
\begin{align}&{\color{White}.(00)}\qquad&& 2 \log c_{1} = \log K. \end{align}
The dependence of the concentration $c_{1}$ upon the temperature now follows from $(7)$:
\begin{align}&{\color{White}.(00)}\qquad&& 2 \frac{\partial \log c_{1}}{\partial T} = \frac{\Delta Q}{R T^{2}} . \end{align}
$\Delta Q$, the quantity of heat which it is necessary to supply for the dissociation of a molecule of $H_{2}O$ into the ions $\overset{+}{H}$ and $\overset{-}{HO}$, is, in accordance with Arrhenius, equal to the heat of ionization in the neutralization of a strong univalent base and acid in a dilute aqueous solution, and, therefore, in accordance with the recent measurements of Wörmann,[1]
\begin{align}&{\color{White}.(00)}\qquad&& \Delta Q = 27,857 - 48.5 T \ \text{gr}.\ \text{cal}. \end{align}
Using the number $1.985$ for the ratio of the absolute gas constant $R$ to the mechanical equivalent of heat, it follows that:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \log c_{1}}{\partial T} = \frac{1}{2\cdot1.985} \left(\frac{27,857}{T^{2}} - \frac{48.5}{T}\right), \end{align}
and by integration:
\begin{align}&{\color{White}.(00)}\qquad&& \overset{10}{\log} c_{1} = - \frac{3047.3}{T} - 12.125 \overset{10}{\log} T + \text{const.} \end{align}
This dependence of the degree of dissociation upon the temperature agrees very well with the measurements of the electric conductivity of water at different temperatures by Kohlrausch and Heydweiller, Noyes, and Lundén.
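As a quick arithmetic check of the coefficient of $1/T$ in the integrated formula, using only the constants already quoted in the text (a minimal Python sketch):

```python
import math

R = 1.985                                   # cal/(mol K), value used in the text
coeff = 27857 / (2 * R * math.log(10))      # conversion of the natural log to log base 10
print(coeff)                                # ~3047, matching the 3047.3/T term above
```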
II. Dissociation of a Dissolved Electrolyte.
Let the system consist of an aqueous solution of acetic acid:
\begin{align}&{\color{White}.(00)}\qquad&& n_{0}H_{2}O,\quad n_{1}H_{4}C_{2}O_{2},\quad n_{2}\overset{+}{H},\quad n_{3}\overset{-}{H_{3}C_{2}O_{2}}. \end{align}
The change under consideration consists in the dissociation of a molecule $H_{4}C_{2}O_{2}$ into its two ions, therefore
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = 0, \quad \nu_{1} = -1, \quad \nu_{2} = 1, \quad \nu_{3} = 1. \end{align}
Hence, for the state of equilibrium, in accordance with $(6)$:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{1} + \log c_{2} + \log c_{3} = \log K, \end{align}
or, since $c_{2} = c_{3}$:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{{c_{2}}^{2}}{c_{1}} = K. \end{align}
Now the sum $c_{1} + c_{2} = c$ is to be regarded as known, since the total number of the undissociated and dissociated acid molecules is independent of the degree of dissociation. Therefore $c_{1}$ and $c_{2}$ may be calculated from $K$ and $c$. An experimental test of the equation of equilibrium is possible on account of the connection between the degree of dissociation and electrical conductivity of the solution. In accordance with the electrolytic dissociation theory of Arrhenius, the ratio of the molecular conductivity $\lambda$ of the solution in any dilution to the molecular conductivity $\lambda_{\infty}$ of the solution in infinite dilution is:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\lambda}{\lambda_{\infty}} = \frac{c_{2}}{c_{1} + c_{2}} = \frac{c_{2}}{c}, \end{align}
since electric conduction is accounted for by the dissociated molecules only. It follows then, with the aid of the last equation, that:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\lambda^{2} c}{\lambda_{\infty} - \lambda} = K \cdot \lambda_{\infty} = \text{const.} \end{align}
With unlimited decreasing $c$, $\lambda$ increases to $\lambda_{\infty}$. This “law of dilution” for binary electrolytes, first enunciated by Ostwald, has been confirmed in numerous cases by experiment, as in the case of acetic acid.
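To illustrate how the dilution law fixes the degree of dissociation, one can solve $c_2^2/(c - c_2) = K$ for $c_2$. The sketch below uses an illustrative value of $K$ in molar units; the numbers are assumptions made here for the example, not values from the lecture.

```python
import math

def degree_of_dissociation(K, c):
    """Solve c2**2 / (c - c2) = K for c2 and return alpha = c2 / c."""
    c2 = (-K + math.sqrt(K * K + 4 * K * c)) / 2   # positive root of c2**2 + K*c2 - K*c = 0
    return c2 / c

K = 1.8e-5                     # illustrative constant for acetic acid, molar units assumed
for c in (1.0, 0.1, 0.01, 0.001):
    print(c, degree_of_dissociation(K, c))
# the degree of dissociation grows toward 1 as the solution is diluted, as Ostwald's law predicts
```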
Also, the dependence of the degree of dissociation upon the temperature is indicated here in quite an analogous manner to that in the example considered above, of the dissociation of water.
III. Vaporization or Solidification of a Pure Liquid.
In equilibrium the system consists of two phases, one liquid, and one gaseous or solid:
\begin{align}&{\color{White}.(00)}\qquad&& n_{0}m_{0} \mid {n_{0}}'{m_{0}}'. \end{align}
Each phase contains only a single molecular complex (the solvent), but the molecules in both phases do not need to be the same. Now, if a liquid molecule evaporates or solidifies, then in our notation
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = - 1,\quad {\nu_{0}}' = \frac{m_{0}}{{m_{0}}'},\quad c_{0} = 1,\quad {c_{0}}' = 1, \end{align}
and consequently the condition for equilibrium, in accordance with $(6)$, is:
\begin{align}&(8){\color{White}.0}\qquad&& 0 = \log K. \end{align}
Since $K$ depends only upon $p$ and $T$, this equation therefore expresses a definite relation between $p$ and $T$: the law of dependence of the pressure of vaporization (or melting pressure) upon the temperature, or vice versa. The import of this law is obtained through the consideration of the dependence of the quantity $K$ upon $p$ and $T$. If we form the complete differential of the last equation, there results:
\begin{align}&{\color{White}.(00)}\qquad&& 0 = \frac{\partial \log K}{\partial p} dp + \frac{\partial \log K}{\partial T} dT, \end{align}
or, in accordance with $(7)$:
\begin{align}&{\color{White}.(00)}\qquad&& 0 = -\frac{\Delta V}{T} dp + \frac{\Delta Q}{T^2} dT. \end{align}
If $v_{0}$ and ${v_{0}}'$ denote the molecular volumes of the two phases, then:
\begin{align}&{\color{White}.(00)}\qquad&& \Delta V = \frac{m_{0}{v_{0}}'}{{m_{0}}'} - v_{0}, \end{align}
consequently:
\begin{align}&{\color{White}.(00)}\qquad&& \Delta Q = T\left(\frac{m_{0}{v_{0}}'}{{m_{0}}'} - v_{0}\right) \frac{dp}{dT}, \end{align}
or, referred to unit mass:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\Delta Q}{m_{0}} = T \left(\frac{{v_{0}}'}{{m_{0}}'} - \frac{v_{0}}{m_{0}}\right) \cdot \frac{dp}{dT}, \end{align}
the well-known formula of Carnot and Clapeyron.
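As a numerical illustration of the Carnot–Clapeyron formula, take water boiling at atmospheric pressure; the slope of the vapor-pressure curve and the specific volumes below are rounded handbook values assumed here for the example:

```python
# Carnot-Clapeyron estimate of the heat of vaporization of water near 100 C
T = 373.15            # K
dp_dT = 3.6e3         # Pa/K, approximate slope of the vapor-pressure curve at 100 C (assumed)
v_vapor = 1.67        # m^3/kg, specific volume of steam at 1 atm (assumed)
v_liquid = 1.0e-3     # m^3/kg, specific volume of liquid water (assumed)

heat_per_kg = T * (v_vapor - v_liquid) * dp_dT
print(heat_per_kg)    # ~2.2e6 J/kg, close to the tabulated value of about 2.26e6 J/kg
```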
IV. The Vaporization or Solidification of a Solution of Non-Volatile Substances.
Most aqueous salt solutions afford examples. The symbol of the system in this case is, since the second phase (gaseous or solid) contains only a single molecular complex:
\begin{align}&{\color{White}.(00)}\qquad&& n_{0}m_{0},\ n_{1}m_{1},\ n_{2}m_{2},\ \cdots \mid {n_{0}}'{m_{0}}'. \end{align}
The change is represented by:
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = -1,\quad \nu_{1} = 0,\quad \nu_{2} = 0,\quad \cdots\quad {\nu_{0}}' = \frac{m_{0}}{{m_{0}}'}, \end{align}
and hence the condition of equilibrium, in accordance with $(6)$, is:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{0} = \log K, \end{align}
or, since, to within small quantities of higher order:
\begin{align}&{\color{White}.(00)}\qquad& c_{0} = \frac{n_{0}}{n_{0} + n_{1} + n_{2} + \cdots} &= 1 - \frac{n_{1} + n_{2} + \cdots}{n_{0}},\\&(9)& \frac{n_{1} + n_{2} + \cdots}{n_{0}} &= \log K. \end{align}
A comparison with formula $(8)$, found in example III, shows that through the solution of a foreign substance there is involved a small departure, proportional to the total concentration, from the law of vaporization or solidification which holds for the pure solvent. One can express this either by saying: at a fixed pressure $p$, the boiling point or the freezing point $T$ of the solution is different from that ($T_{0}$) for the pure solvent, or: at a fixed temperature $T$ the vapor pressure or solidification pressure $p$ of the solution is different from that ($p_{0}$) of the pure solvent. Let us calculate the departure in both cases.
1. If $T_{0}$ be the boiling (or freezing) temperature of the pure solvent at the pressure $p$, then, in accordance with $(8)$:
\begin{align}&{\color{White}.(00)}\qquad&& (\log K)_{T = T_{0}} = 0, \end{align}
and by subtraction of $(9)$ there results:
\begin{align}&{\color{White}.(00)}\qquad&& \log K - (\log K)_{T = T_{0}} = \frac{n_{1} + n_{2} + \cdots}{n_{0}}. \end{align}
Now, since $T$ is little different from $T_{0}$, we may write in place of this equation, with the aid of $(7)$:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \log K}{\partial T} (T - T_{0}) = \frac{\Delta Q}{RT_{0}^{2}} (T - T_{0}) = \frac{n_{1} + n_{2} + \cdots}{n_{0}}, \end{align}
and from this it follows that:
\begin{align}&(10){\color{White}.}\qquad&& T - T_{0} = \frac{n_{1} + n_{2} + \cdots}{n_{0}} \cdot \frac{RT_{0}^{2}}{\Delta Q}. \end{align}
This is the law for the raising of the boiling point or for the lowering of the freezing point, first derived by van't Hoff: in the case of freezing $\Delta Q$ (the heat taken from the surroundings during the freezing of a liquid molecule) is negative. Since $n_{0}$ and $\Delta Q$ occur only as a product, it is not possible to infer anything from this formula with regard to the molecular number of the liquid solvent.
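A numerical sketch of formula $(10)$ for water as solvent, using rounded handbook values (assumed here, not given in the lecture) for the molar heat of fusion and a one-molal solution:

```python
# van't Hoff freezing-point depression for a 1-molal aqueous solution
R = 1.985             # cal/(mol K)
T0 = 273.15           # K, freezing point of pure water
dQ = -1436.0          # cal/mol, heat taken from the surroundings when a water molecule
                      # freezes (negative, since freezing releases heat); assumed value
ratio = 1.0 / 55.5    # (n1 + n2 + ...)/n0 for 1 mol of solute per kilogram of water

dT = ratio * R * T0**2 / dQ
print(dT)             # ~ -1.86 K, the familiar molal freezing-point depression of water
```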
2. If $p_{0}$ be the vapor pressure of the pure solvent at the temperature $T$, then, in accordance with $(8)$:
\begin{align}&{\color{White}.(00)}\qquad&& (\log K)_{p = p_{0}} = 0, \end{align}
and by subtraction of $(9)$ there results:
\begin{align}&{\color{White}.(00)}\qquad&& \log K - (\log K)_{p = p_{0}} = \frac{n_{1} + n_{2} + \cdots}{n_{0}}. \end{align}
Now, since $p$ and $p_{0}$ are nearly equal, with the aid of $(7)$ we may write:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \log K}{\partial p} (p - p_{0}) = - \frac{\Delta V}{RT} (p - p _{0}) = \frac{n_{1} + n_{2} + \cdots}{n_{0}}, \end{align}
and from this it follows, if $\Delta V$ be placed equal to the volume of the gaseous molecule produced in the vaporization of a liquid molecule:
\begin{align}&{\color{White}.(00)}\qquad&& \Delta V = \frac{m_{0}}{{m_{0}}'} \frac{RT}{p}, \\&&& \frac{p_{0} - p}{p} = \frac{{m_{0}}'}{m_{0}} \cdot \frac{n_{1} + n_{2} + \cdots}{n_{0}}. \end{align}
This is the law of relative depression of the vapor pressure, first derived by van't Hoff. Since $n_{0}$ and $m_{0}$ occur only as a product, it is not possible to infer from this formula anything with regard to the molecular weight of the liquid solvent. Frequently the factor ${m_{0}}'/m_{0}$ is left out in this formula; but this is not allowable when $m_{0}$ and ${m_{0}}'$ are unequal (as, e. g., in the case of water).
V. Vaporization of a Solution of Volatile Substances.
(E. g., a Sufficiently Dilute Solution of Propyl Alcohol in Water.)
The system, consisting of two phases, is represented by the following symbol:
\begin{align}&{\color{White}.(00)}\qquad&& n_{0} m_{0},\ n_{1} m_{1},\ n_{2} m_{2},\ \cdots \mid {n_{0}}'{m_{0}}',\ {n_{1}}'{m_{1}}',\ {n_{2}}'{m_{2}}',\ \cdots, \end{align}
wherein, as above, the figure $0$ refers to the solvent and the figures $1$, $2$, $3$ $\cdots$ refer to the various molecular complexes of the dissolved substances. By the addition of primes in the case of the molecular weights (${m_{0}}'$, ${m_{1}}'$, ${m_{2}}'$ $\cdots$) the possibility is left open that the various molecular complexes in the vapor may possess a different molecular weight than in the liquid.
Since the system here considered may experience various sorts of changes, there are also various conditions of equilibrium to fulfill, each of which relates to a definite sort of transformation. Let us consider first that change which consists in the vaporization of the solvent. In accordance with our scheme of notation, the following conditions hold:
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = - 1,\ \nu_{1} = 0,\ \nu_{2} = 0,\ \cdots\ \nu_{0}' = \frac{m_{0} }{ {m_{0}}'},\ {\nu_{1}}' = 0,\ {\nu_{2}}' = 0,\ \cdots, \end{align}
and, therefore, the condition of equilibrium $(6)$ becomes:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{0} + \frac{m_{0}}{{m_{0}}'} \log {c_{0}}' = \log K, \end{align}
or, if one substitutes:
\begin{align}&{\color{White}.(00)}\qquad&& c_{0} = 1 - \frac{n_{1} + n_{2} + \cdots}{n_{0}} \quad \text{and} \quad {c_{0}}' = 1 - \frac{{n_{1}}' + {n_{2}}' + \cdots}{{n_{0}}'},\\&&& \frac{n_{1} + n_{2} + \cdots}{n_{0}} - \frac{m_{0}}{{m_{0}}'} \cdot \frac{{n_{1}}' + {n_{2}}' + \cdots}{{n_{0}}'} = \log K. \end{align}
If we treat this equation upon equation $(9)$ as a model, there results an equation similar to $(10)$:
\begin{align}&{\color{White}.(00)}\qquad&& T - T_{0} = \left(\frac{n_{1} + n_{2} + \cdots}{n_{0}m_{0}} - \frac{{n_{1}}' + {n_{2}}' + \cdots}{{n_{0}}'{m_{0}}'}\right) \frac{RT_{0}^{2}m_{0}}{\Delta Q}. \end{align}
Here $\Delta Q$ is the heat effect in the vaporization of one molecule of the solvent and, therefore, $\Delta Q/m_{0}$ is the heat effect in the vaporization of a unit mass of the solvent.
We remark, once more, that the solvent always occurs in the formula through the mass only, and not through the molecular number or the molecular weight, while, on the other hand, in the case of the dissolved substances, the molecular state is characteristic on account of their influence upon vaporization. Finally, the formula contains a generalization of the law of van't Hoff, stated above, for the raising of the boiling point, in that here in place of the number of dissolved molecules in the liquid, the difference between the number of dissolved molecules in unit mass of the liquid and in unit mass of the vapor appears. According as the unit mass of liquid or the unit mass of vapor contains more dissolved molecules, there results for the solution a raising or lowering of the boiling point; in the limiting case, when both quantities are equal, and the mixture therefore boils without changing, the change in boiling point becomes equal to zero. Of course, there are corresponding laws holding for the change in the vapor pressure.
Let us consider now a change which consists in the vaporization of a dissolved molecule. For this case we have in our notation
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = 0,\ \nu_{1} = -1,\ \nu_{2} = 0\ \cdots, \ {\nu_{0}}' = 0,\ {\nu_{1}}' = \frac{m_{1}}{{m_{1}}'},\ {\nu_{2}}' = 0,\ \cdots \end{align}
and, in accordance with $(6)$, for the condition of equilibrium:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{1} + \frac{m_{1}}{{m_{1}}'} \log {c_{1}}' = \log K \end{align}
or:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{{{c_{1}}'}^{\frac{m_{1}}{{m_{1}}'}}}{c_{1}} = K. \end{align}
This equation expresses the Nernst law of distribution. If the dissolved substance possesses in both phases the same molecular weight ($m_{1} = {m_{1}}'$), then, in a state of equilibrium a fixed ratio of the concentrations $c_{1}$ and ${c_{1}}'$ in the liquid and in the vapor exists, which depends only upon the pressure and temperature. But, if the dissolved substance polymerises somewhat in the liquid, then the relation demanded in the last equation appears in place of the simple ratio.
VI. The Dissolved Substance only Passes over into the Second Phase.
This case is in a certain sense a special case of the one preceding. To it belongs that of the solubility of a slightly soluble salt, first investigated by van't Hoff, e. g., succinic acid in water. The symbol of this system is:
\begin{align}&{\color{White}.(00)}\qquad&& n_{0}H_{2}O,\ n_{1}H_{6}C_{4}O_{4} \mid {n_{0}}'H_{6}C_{4}O_{4}, \end{align}
in which we disregard the small dissociation of the acid solution. The concentrations of the individual molecular complexes are:
\begin{align}&{\color{White}.(00)}\qquad&& c_{0} = \frac{n_{0}}{n_{0} + n_{1}}, \quad c_{1} = \frac{n_{1}}{n_{0} + n_{1}}, \quad {c_{0}}' = \frac{{n_{0}}'}{{n_{0}}'} = 1. \end{align}
For the precipitation of solid succinic acid we have:
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = 0, \quad \nu_{1} = -1, \quad {\nu_{0}}' = 1, \end{align}
and, therefore, from the condition of equilibrium $(6)$:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{1} = \log K, \end{align}
hence, from $(7)$:
\begin{align}&{\color{White}.(00)}\qquad&& \Delta Q = - RT^{2} \frac{\partial \log c_{1}}{\partial T}. \end{align}
By means of this equation van't Hoff calculated the heat of solution $\Delta Q$ from the solubility of succinic acid at $0^\circ$ and at $8.5^\circ$ C. The corresponding numbers were $2.88$ and $4.22$ in an arbitrary unit. Approximately, then:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \log c_{1}}{\partial T} = \frac{\overset{e}{\log} 4.22 - \overset{e}{\log} 2.88}{8.5} = 0.04494, \end{align}
from which for $T = 273$:
\begin{align}&{\color{White}.(00)}\qquad&& \Delta Q = -1.98 \cdot 273^{2} \cdot 0.04494 = -6,600\ \text{cal}., \end{align}
that is, in the precipitation of a molecule of succinic acid, $6,600~\text{cal}.$ are given out to the surroundings. Berthelot found, however, through direct measurement, $6,700$ calories for the heat of solution.
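The arithmetic of van't Hoff's estimate can be reproduced directly from the two solubility values quoted above (a sketch using only the numbers in the text):

```python
import math

R = 1.98                          # cal/(mol K), value used in the text
c_at_0, c_at_85 = 2.88, 4.22      # solubilities at 0 C and 8.5 C (arbitrary unit)

dlogc_dT = (math.log(c_at_85) - math.log(c_at_0)) / 8.5
dQ = -R * 273**2 * dlogc_dT
print(dlogc_dT)                   # ~0.0449, as quoted
print(dQ)                         # ~ -6,600 cal, i.e. about 6,600 cal given out per gram-molecule precipitated
```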
The absorption of a gas also comes under this head, e. g., carbonic acid, in a liquid of relatively negligible vapor pressure, e. g., water at not too high a temperature. The symbol of the system is then
\begin{align}&{\color{White}.(00)}\qquad&& n_{0}H_{2}O,\ n_{1}CO_{2} \mid {n_{0}}'CO_{2}. \end{align}
The vaporization of a molecule $CO_{2}$ corresponds to the values
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = 0,\quad \nu_{1} = -1,\quad {\nu_{0}}' = 1. \end{align}
The condition of equilibrium is therefore again:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{1} = \log K, \end{align}
i. e., at a fixed temperature and a fixed pressure the concentration $c_{1}$ of the gas in the solution is constant. The change of the concentration with $p$ and $T$ is obtained through substitution in equation $(7)$. It follows from this that:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \log c_{1}}{\partial p} = \frac{\Delta V}{RT} ,\quad \frac{\partial \log c_{1}}{\partial T} = -\frac{\Delta Q}{RT^{2}}. \end{align}
$\Delta V$ is the change in volume of the system which occurs in the isobaric-isothermal vaporization of a molecule of $CO_{2}$, $\Delta Q$ the quantity of heat absorbed in the process from outside. Now, since $\Delta V$ represents approximately the volume of a molecule of gaseous carbonic acid, we may put approximately:
\begin{align}&{\color{White}.(00)}\qquad&& \Delta V = \frac{RT}{p}, \end{align}
and the equation gives:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{\partial \log c_{1}}{\partial p} = \frac{1}{p}, \end{align}
which integrated, gives:
\begin{align}&{\color{White}.(00)}\qquad&& \log c_{1} = \log p + \text{const.}, \quad c_{1} = C \cdot p, \end{align}
i. e., the concentration of the dissolved gas is proportional to the pressure of the free gas above the solution (law of Henry and Bunsen). The factor of proportionality $C$, which furnishes a measure of the solubility of the gas, depends upon the heat effect in quite the same manner as in the example previously considered.
A number of no less important relations are easily derived as by-products of those found above, e. g., the Nernst laws concerning the influence of solubility, the Arrhenius theory of isohydric solutions, etc. All such may be obtained through the application of the general condition of equilibrium $(6)$. In conclusion, there is one other case that I desire to treat here. In the historical development of the theory this has played a particularly important rôle.
VII. Osmotic Pressure.
We consider now a dilute solution separated by a membrane (permeable with regard to the solvent but impermeable as regards the dissolved substance) from the pure solvent (in the same state of aggregation), and inquire as to the condition of equilibrium. The symbol of the system considered we may again take as
\begin{align}&{\color{White}.(00)}\qquad&& n_{0}m_{0},\ n_{1}m_{1},\ n_{2}m_{2},\ \cdots \mid {n_{0}}'m_{0}. \end{align}
The condition of equilibrium is also here again expressed by equation $(6)$, valid for a change of state in which the temperature and the pressure in each phase is maintained constant. The only difference with respect to the cases treated earlier is this, that here, in the presence of a separating membrane between two phases, the pressure $p$ in the first phase may be different from the pressure $p'$ in the second phase, whereby by “pressure,” as always, is to be understood the ordinary hydrostatic or manometric pressure.
The proof of the applicability of equation $(6)$ is found in the same way as this equation was derived above, proceeding from the principle of increase of entropy. One has but to remember that, in the somewhat more general case here considered, the external work in a given change is represented by the sum $p dV + p' dV'$, where $V$ and $V'$ denote the volumes of the two individual phases, while before $V$ denoted the total volume of all phases. Accordingly, we use, instead of $(7)$, to express the dependence of the constant $K$ in $(6)$ upon the pressure:
\begin{align}&(11){\color{White}.}\qquad&& \frac{\partial \log K}{\partial p} = -\frac{\Delta V}{RT}, \quad \frac{\partial \log K}{\partial p'} = -\frac{\Delta V'}{RT}. \end{align}
We have here to do with the following change:
\begin{align}&{\color{White}.(00)}\qquad&& \nu_{0} = -1,\quad \nu_{1} = 0,\quad \nu_{2} = 0,\quad \cdots,\quad {\nu_{0}}' = 1, \end{align}
whereby is expressed, that a molecule of the solvent passes out of the solution through the membrane into the pure solvent. Hence, in accordance with $(6)$:
\begin{align}&{\color{White}.(00)}\qquad&& -\log c_{0} = \log K, \end{align}
or, since
\begin{align}&{\color{White}.(00)}\qquad&& c_{0} = 1 - \frac{n_{1} + n_{2} + \cdots}{n_{0}}, \quad \frac{n_{1} + n_{2} + \cdots}{n_{0}} = \log K. \end{align}
Here $K$ depends only upon $T$, $p$ and $p'$. If a pure solvent were present upon both sides of the membrane, we should have $c_{0} = 1$, and $p = p'$; consequently:
\begin{align}&{\color{White}.(00)}\qquad&& (\log K)_{p = p'} = 0, \end{align}
and by subtraction of the last two equations:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{n_{1} + n_{2} + \cdots}{n_{0}} = \log K - (\log K)_{p = p'} = \frac{\partial \log K}{\partial p} (p - p') \end{align}
and in accordance with $(11)$:
\begin{align}&{\color{White}.(00)}\qquad&& \frac{n_{1} + n_{2} + \cdots}{n_{0}} = -(p - p') \cdot \frac{\Delta V}{RT}. \end{align}
Here $\Delta V$ denotes the change in volume of the solution due to the loss of a molecule of the solvent ($\nu_{0} = -1$). Approximately then:
\begin{align}&{\color{White}.(00)}\qquad&& -\Delta V \cdot n_{0} = V, \end{align}
the volume of the whole solution, and
\begin{align}&{\color{White}.(00)}\qquad&& n_{1} + n_{2} + \cdots = (p - p') \cdot \frac{V}{RT}. \end{align}
If we call the difference $p - p'$ the osmotic pressure of the solution, this equation contains the well known law of osmotic pressure, due to van't Hoff.
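As a concrete illustration of van't Hoff's law, $(p - p')\,V = (n_1 + n_2 + \cdots)RT$, take a dilute solution containing 0.1 mol of dissolved molecules per litre at room temperature; the values below are assumed here for the example:

```python
# van't Hoff osmotic pressure: (p - p') * V = n_dissolved * R * T
R = 8.314           # J/(mol K)
T = 298.0           # K
n = 0.1             # mol of dissolved molecules (assumed)
V = 1.0e-3          # m^3, one litre of solution

osmotic_pressure = n * R * T / V
print(osmotic_pressure)   # ~2.5e5 Pa, roughly 2.4 atm
```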
The equations here derived, which easily permit of multiplication and generalization, have, of course, for the most part not been derived in the ways described above, but have been derived, either directly from experiment, or theoretically from the consideration of special reversible isothermal cycles to which the thermodynamic law was applied, that in such a cyclic process not only the algebraic sum of the work produced and the heat produced, but that also each of these two quantities separately, is equal to zero (first lecture). The employment of a cyclic process has the advantage over the procedure here proposed, that in it the connection between the directly measurable quantities and the requirements of the laws of thermodynamics succinctly appears in each case; but for each individual case a satisfactory cyclic process must be imagined, and one has not always the certain assurance that the thermodynamic realization of the cyclic process also actually supplies all the conditions of equilibrium. Furthermore, in the process of calculation certain terms of considerable weight frequently appear as empty ballast, since they disappear at the end in the summation over the individual phases of the process.
On the other hand, the significance of the process here employed consists therein, that the necessary and sufficient conditions of equilibrium for each individually considered case appear collectively in the single equation $(6)$, and that they are derived collectively from it in a direct manner through an unambiguous procedure. The more complicated the systems considered are, the more apparent becomes the advantage of this method, and there is no doubt in my mind that in chemical circles it will be more and more employed, especially, since in general it is now the custom to deal directly with the energies, and not with cyclic processes, in the calculation of heat effects in chemical changes.
http://www.ck12.org/book/CK-12-Trigonometry-Concepts/r1/section/5.3/
# 5.3: Identify Accurate Drawings of Triangles
Your friend is creating a new board game that involves several different triangle shaped pieces. However, the game requires accurate measurements of several different pieces that all have to fit together. She brings some of the pieces to you and asks if you can verify that her measurements of the pieces' side lengths and angles are correct.
You take out the first piece. According to your friend, the piece has sides of length 4 in, 5 in and 7 in, and the angle between the side of length 4 and the side of length 5 is \begin{align*}78^\circ\end{align*}. She's very confident in the lengths of the sides, but not quite sure if she measured the angle correctly. Is there a way to determine if your friend's game piece has the correct measurements, or did she make a mistake?
It is indeed possible to determine if your friend's measurements are correct or not. At the end of this Concept, you'll be able to tell your friend if her measurements were accurate.
### Guidance
Our extension of the analysis of triangles draws us naturally to oblique triangles. The Law of Cosines can be used to verify that drawings of oblique triangles are accurate. In a right triangle, we might use the Pythagorean Theorem to verify that all three sides are the correct length, or we might use trigonometric ratios to verify an angle measurement. However, when dealing with an obtuse or acute triangle, we must rely on the Law of Cosines.
#### Example A
In \begin{align*}\triangle{ABC}\end{align*} at the right, \begin{align*}a = 32, b = 20\end{align*}, and \begin{align*}c = 16\end{align*}. Is the drawing accurate if it labels \begin{align*}\angle{C}\end{align*} as \begin{align*}35.2^\circ\end{align*}? If not, what should \begin{align*}\angle{C}\end{align*} measure?
Solution: We will use the Law of Cosines to check whether or not \begin{align*}\angle{C}\end{align*} is \begin{align*}35.2^\circ\end{align*}.
\begin{align*}16^2 & = 20^2 + 32^2 - 2(20)(32) \cos 35.2 && \text{Law of Cosines} \\ 256 & = 400 + 1024 - 2(20)(32) \cos 35.2 && \text{Simplify squares} \\ 256 & = 400 + 1024 - 1045.94547 && \text{Multiply} \\ 256 & \neq 378.05453 && \text{Add and subtract}\end{align*}
Since \begin{align*}256 \neq 378.05453\end{align*}, we know that \begin{align*}\angle{C}\end{align*} is not \begin{align*}35.2^\circ\end{align*}. Using the Law of Cosines, we can figure out the correct measurement of \begin{align*}\angle{C}\end{align*}.
\begin{align*}16^2 & = 20^2 + 32^2 -2(20)(32) \cos C && \text{Law of Cosines} \\ 256 & = 400 + 1024 - 2(20)(32) \cos C && \text{Simplify Squares} \\ 256 & = 400 + 1024 - 1280 \cos C && \text{Multiply} \\ 256 & = 1424 - 1280 \cos C && \text{Add} \\ -1168 & = -1280 \cos C && \text{Subtract}\ 1424 \\ 0.9125 & = \cos C && \text{Divide} \\ 24.1^\circ & \approx \angle {C} && \cos^{-1}(0.9125) \end{align*}
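As a quick cross-check of the arithmetic above, a short calculator-style sketch (Python) recovers the same angle directly from the three sides:

```python
# Verify Example A: with a = 32, b = 20, c = 16, the angle C opposite side c
# satisfies c^2 = a^2 + b^2 - 2ab*cos(C).
import math

a, b, c = 32, 20, 16
cos_C = (a**2 + b**2 - c**2) / (2 * a * b)
C = math.degrees(math.acos(cos_C))
print(round(cos_C, 4), round(C, 1))   # 0.9125, 24.1 -- not the labelled 35.2 degrees
```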
For some situations, it will be necessary to utilize not only the Law of Cosines, but also the Pythagorean Theorem and trigonometric ratios to verify that a triangle or quadrilateral has been drawn accurately.
#### Example B
A builder received plans for the construction of a second-story addition on a house. The diagram shows how the architect wants the roof framed, while the length of the house is 20 ft. The builder decides to add a perpendicular support beam from the peak of the roof to the base. He estimates that the new beam should be 8.3 feet high, but he wants to double-check before he begins construction. Is the builder's estimate of 8.3 feet for the new beam correct? If not, how far off is he?
Solution: If we knew either \begin{align*}\angle{A}\end{align*} or \begin{align*}\angle{C}\end{align*}, we could use trigonometric ratios to find the height of the support beam. However, neither of these angle measures are given to us. Since we know all three sides of \begin{align*}\triangle{ABC}\end{align*}, we can use the Law of Cosines to find one of these angles. We will find \begin{align*}\angle{A}\end{align*}.
\begin{align*}14^2 & = 12^2 + 20^2 - 2(12)(20) \cos A && \text{Law of Cosines} \\ 196 & = 144 + 400 - 480 \cos A && \text{Simplify} \\ 196 & = 544 - 480 \cos A && \text{Add} \\ -348 & = -480 \cos A && \text{Subtract} \\ 0.725 & = \cos A && \text{Divide} \\ 43.5^\circ & \approx \angle{A} && \cos^{-1} (0.725) \end{align*}
Now that we know \begin{align*}\angle{A}\end{align*}, we can use it to find the length of \begin{align*}BD\end{align*}.
\begin{align*}\sin 43.5 & = \frac{x}{12} \\ 12 \sin 43.5 & = x \\ 8.3 & \approx x\end{align*}
Yes, the builder’s estimate of 8.3 feet for the support beam is accurate.
#### Example C
In \begin{align*}\triangle{CIR}, c = 63, i = 52\end{align*}, and \begin{align*}r = 41.9\end{align*}. Find the measure of all three angles.
Solution:
\begin{align*} 63^2 = 52^2 + 41.9^2 - 2 \cdot 52 \cdot 41.9 \cdot \cos C \\ 52^2 = 63^2 + 41.9^2 - 2 \cdot 63 \cdot 41.9 \cdot \cos I \\ 180^\circ - 83.5^\circ - 55.1^\circ = 41.4^\circ\\ \angle{C} \approx 83.5^\circ\\ \angle{I} \approx 55.1^\circ \\ \angle{R} \approx 41.4^\circ \end{align*}
### Vocabulary
Law of Cosines: The law of cosines relates the lengths of the sides of a triangle to the cosine of one of its angles: \begin{align*}c^2=a^2+b^2-2ab\cos C\end{align*}, where \begin{align*}C\end{align*} is the angle across from side \begin{align*}c\end{align*}. It applies to any triangle, including oblique (non-right) triangles.
### Guided Practice
1. Find \begin{align*}AD\end{align*} using the Pythagorean Theorem, Law of Cosines, trig functions, or any combination of the three.
2. Find \begin{align*}HK\end{align*} using the Pythagorean Theorem, Law of Cosines, trig functions, or any combination of the three if \begin{align*}JK = 3.6, KI = 5.2, JI = 1.9, HI = 6.7\end{align*}, and \begin{align*}\angle{KJI} = 96.3^\circ\end{align*}.
3. Use the Law of Cosines to determine whether or not the following triangle is drawn accurately. If not, determine how far the measurement of side "d" is from the correct value.
Solutions:
1. First, find \begin{align*}AB\end{align*}. \begin{align*}AB^2 = 14.2^2 + 15^2 - 2 \cdot 14.2 \cdot 15 \cdot \cos 37.4^\circ, AB = 9.4. \sin 23.3^\circ = \frac{AD}{9.4}, AD = 3.7.\end{align*}
2. \begin{align*}\angle{HJI} = 180^\circ - 96.3^\circ = 83.7^\circ\end{align*} (these two angles are a linear pair). \begin{align*}6.7^2 = HJ^2 + 1.9^2 - 2 \cdot HJ \cdot 1.9 \cdot \cos 83.7^\circ\end{align*}. This simplifies to the quadratic equation \begin{align*}HJ^2 - 0.417HJ - 41.28 = 0\end{align*}. Using the quadratic formula, we can determine that \begin{align*}HJ \approx 6.64\end{align*}. So, since \begin{align*}HJ + JK = HK, 6.64 + 3.6 \approx HK \approx 10.24\end{align*}.
3. To determine this, use the Law of Cosines and solve for \begin{align*}d\end{align*} to determine if the picture is accurate. \begin{align*}d^2 = 12^2 + 24^2 - 2 \cdot 12 \cdot 24 \cdot \cos 30^\circ, d = 14.9\end{align*}, which means \begin{align*}d\end{align*} in the picture is off by 1.9.
### Concept Problem Solution
Since your friend is certain of the lengths of the sides of the triangle, you should use those as the known quantities in the Law of Cosines and solve for the angle:
\begin{align*}7^2 = 5^2 + 4^2 - (2)(5)(4)\cos \theta\\ 49 = 25 + 16 - 40 \cos \theta\\ 49 - 25 - 16 = -40 \cos \theta\\ -\frac{8}{40} = \cos \theta\\ \cos^{-1} \left(-\frac{8}{40}\right) = \theta\\ \theta \approx 101.54^\circ \end{align*}
So as it turns out, your friend's angle is off by quite a bit: with sides of 4 in, 5 in and 7 in, the angle between the 4 in and 5 in sides must measure about \begin{align*}101.5^\circ\end{align*}, not \begin{align*}78^\circ\end{align*}. Her side lengths can all be correct, but she will need to re-measure that angle before the pieces will fit together.
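A one-line check of this angle (a Python sketch):

```python
# Angle between the 4 in and 5 in sides, opposite the 7 in side:
# cos(theta) = (4^2 + 5^2 - 7^2) / (2*4*5)
import math
theta = math.degrees(math.acos((4**2 + 5**2 - 7**2) / (2 * 4 * 5)))
print(round(theta, 2))   # 101.54
```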
### Practice
1. If you know the lengths of all three sides of a triangle and the measure of one angle, how can you determine if the triangle is drawn accurately?
Determine whether or not each triangle is labelled correctly.
Determine whether or not each described triangle is possible. Assume angles have been rounded to the nearest degree.
1. In \begin{align*}\triangle BCD\end{align*}, b=4, c=4, d=5, and \begin{align*}m\angle B=51^\circ\end{align*}.
2. In \begin{align*}\triangle ABC\end{align*}, a=7, b=4, c=9, and \begin{align*}m\angle B=34^\circ\end{align*}.
3. In \begin{align*}\triangle BCD\end{align*}, b=3, c=2, d=7, and \begin{align*}m\angle D=138^\circ\end{align*}.
4. In \begin{align*}\triangle ABC\end{align*}, a=8, b=6, c=13.97, and \begin{align*}m\angle C=172^\circ\end{align*}.
5. In \begin{align*}\triangle ABC\end{align*}, a=4, b=4, c=9, and \begin{align*}m\angle B=170^\circ\end{align*}.
6. In \begin{align*}\triangle BCD\end{align*}, b=3, c=5, d=4, and \begin{align*}m\angle C=90^\circ\end{align*}.
7. In \begin{align*}\triangle ABC\end{align*}, a=8, b=3, c=6, and \begin{align*}m\angle A=122^\circ\end{align*}.
8. If you use the Law of Cosines to solve for \begin{align*}m\angle C\end{align*} in \begin{align*}\triangle ABC\end{align*} where a=3, b=7, and c=12, you will get an error. Explain why.
### Vocabulary Language: English
law of cosines
The law of cosines is a rule relating the sides of a triangle to the cosine of one of its angles. The law of cosines states that $c^2=a^2+b^2-2ab\cos C$, where $C$ is the angle across from side $c$.
https://testbook.com/question-answer/the-eulers-equation-for-steady-flow-of-an-i--60634766293e7c2c095d75c6 | # The Euler’s equation for steady flow of an ideal fluid along a stream line is based on Newton’s
This question was previously asked in
Gujarat Engineering Service 2017 Official Paper (Civil Part 2)
1. First law of motion
2. Second law of motion
3. Third law of momentum
4. Law of friction
Option 2 : Second law of motion
## Detailed Solution
Explanation:
Euler's equation of motion:
The Euler's equation for a steady flow of an ideal fluid along a streamline is a relation between the velocity, pressure and density of a moving fluid. It is based on Newton's second law of motion, which equates the net force on a fluid element to the rate of change of its linear momentum; when the net external force is zero, linear momentum is conserved. The integration of the equation gives Bernoulli's equation in the form of energy per unit weight of the flowing fluid.
It is based on the following assumptions:
The fluid is non-viscous (i.e., the frictional losses are zero)
The fluid is homogeneous and incompressible (i.e., the mass density of the fluid is constant)
The flow is continuous, steady and along the streamline.
The velocity of the flow is uniform over the section.
No energy or force (except gravity and pressure forces) is involved in the flow.
As no external force other than gravity and pressure acts on the fluid element (the flow being non-viscous), the momentum balance along the streamline reduces to Euler's equation.
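As a small illustration of the statement that integrating Euler's equation gives Bernoulli's equation, the following sympy sketch (assuming constant density rho) performs the streamline integration term by term:

```python
# Sketch: integrating Euler's equation dp/rho + v dv + g dz = 0 along a streamline
# (constant density rho) reproduces Bernoulli's equation p/rho + v^2/2 + g*z = const.
import sympy as sp

p, v, z, rho, g = sp.symbols('p v z rho g', positive=True)
total = (sp.integrate(1/rho, (p, 0, p))    # pressure term -> p/rho
         + sp.integrate(v, (v, 0, v))      # velocity term -> v**2/2
         + sp.integrate(g, (z, 0, z)))     # elevation term -> g*z
print(total)   # p/rho + v**2/2 + g*z, constant along the streamline
```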
http://tex.stackexchange.com/questions/162878/arithmetic-overflow-in-psscaleboxto/163112 | # Arithmetic overflow in psscaleboxto
The following example, using \psscaleboxto, produces an error:
"I can't carry out that multiplication or division, since the result is out of range."
Why does it happen?
\documentclass{article}
\usepackage{pstricks}
\begin{document}
\begin{pspicture}(20,20)
\psscaleboxto(1,1) {
\psframe(0,0)(5,5)
}
\end{pspicture}
\end{document}
-
That cannot work because, inside TeX, all PSTricks objects have a width and height of 0pt. That is the reason why you can't expand "nothing" to 1x1. This will work:
\documentclass{article}
\usepackage{pstricks}
\begin{document}
\begin{pspicture}[showgrid](10,10)
\psscaleboxto(1,1){\rule{5cm}{5cm}}
\psframe*[unit=0.2,linecolor=red](5,5)(10,10)
\end{pspicture}
\end{document}
\rule has a known width and height at the TeX level.
You can use the default \resizebox or \psscalebox if you have a pspicture environment:
\documentclass{article}
\usepackage{pstricks,graphicx}
\begin{document}
\resizebox{4cm}{2cm}{%
\begin{pspicture}[showgrid](10,10)
\psframe*[linecolor=red](5,5)(10,10)
\psdot[dotstyle=x,dotscale=5](3,3)
\end{pspicture}%
}
\psscaleboxto(2,4){%
\begin{pspicture}[showgrid](10,10)
\psframe*[linecolor=red](5,5)(10,10)
\psdot[dotstyle=x,dotscale=5](3,3)
\end{pspicture}%
}
\end{document}
-
I see that putting the psframe inside the scalebox also works, as long as the \rule is there: "\psscaleboxto(1,1){ \rule{5cm}{5cm} \psframe*[linecolor=red](5,5)(10,10) }" Is there a way to make the black "rule" disappear? – Erel Segal-Halevi Mar 1 '14 at 19:52
What exactly do you want to achieve? With the options xunit=..,yunit=.. you can do the same with objects. – Herbert Mar 1 '14 at 20:06
I have not a single object but an arrangement of several objects (frames, dots, etc), and I want to reproduce the same arrangement in a smaller scale, in order to illustrate some recursive construction (similar to a fractal). – Erel Segal-Halevi Mar 1 '14 at 20:34
see my edit ... – Herbert Mar 1 '14 at 20:59
I see, so I just have to put the pspicture inside the psscalebox. – Erel Segal-Halevi Mar 1 '14 at 21:03
http://math.stackexchange.com/questions/69905/the-cantor-set-is-homeomorphic-to-infinite-product-of-0-1-with-itself-cy | # The Cantor set is homeomorphic to infinite product of $\{0,1\}$ with itself - cylinder basis - and its topology
I know the Cantor set probably comes up in homework questions all the time but I didn't find many clues - that I understood at least.
I am, for a homework problem, supposed to show that the Cantor set is homeomorphic to the infinite product (I am assuming countably infinite?) of $\{0,1\}$ with itself.
So members of this two-point space(?) are things like $(0,0,0,1)$ and $(0,1,1,1,1,1,1)$, etc.
Firstly, I think that a homeomorphism (the 'topological isomorhism') is a mapping between two topologies (for the Cantor sets which topology is this? discrete?) that have continuous, bijective functions.
So I am pretty lost and don't even know what more to say! :( I have seen something like this in reading some texts, something about $$f: \sum_{i=1}^{+\infty}\,\frac{a_i}{3^i} \mapsto \sum_{i=1}^{+\infty}\,\frac{a_i}{2^{i+1}} ,$$ for $a_i = 0,2$. But in some ways this seems to be a 'complement' of what I need.... Apparently I am to use ternary numbers represented using only $0$'s and $1$'s in; for example, $0.a_1\,a_2\,\ldots = 0.01011101$?
Thanks much for any help starting out!
Here is the verbatim homework question:
The standard measure on the Cantor set is given by the Cantor $\phi$ function which is constant on missing thirds and dyadic on ternary rationals.
Show the Cantor set is homeomorphic to the infinite product of $\{0,1\}$ with itself.
How should we topologize this product?
(Hint: this product is the same as the set of all infinite binary sequences)
Fix a binary $n$-tuple $(a_1,\ldots, a_n)$ (for e.g., $(0,1,1,0,0,0)$ if $n = 6$).
Show that the Cantor measure of points ($b_k$) with $b_k=a_k$ for $k \leq n$ and $b_k \in \{0,1\}$ arbitrary for $k>n$, is exactly $1/2^n$. These are called cylinders. (They are the open sets, but also closed!)
-
I’m going to assume that Cantor set here refers to the standard middle-thirds Cantor set $C$ described here. It can be described as the set of real numbers in $[0,1]$ having ternary expansions using only the digits $0$ and $2$, i.e., real numbers of the form $$\sum_{n=1}^\infty \frac{a_n}{3^n},$$ where each $a_n$ is either $0$ or $2$.
For each positive integer $n$ let $D_n = \{0,1\}$ with the discrete topology, and let $$X = \prod_{n=1}^\infty D_n$$ with the product topology. Elements of $X$ are infinite sequences of $0$’s and $1$’s, so $(0,0,0,1)$ and $(0,1,1,1,1,1,1)$ are not elements of $X$; if you pad these with an infinite string of $0$’s to get $(0,0,0,1,0,0,0,0,\dots)$ and $(0,1,1,1,1,1,1,0,0,0,0,\dots)$, however, you do get points of $X$. A more interesting point of $X$ is the sequence $(p_n)_n$, where $p_n = 1$ if $n$ is prime, and $p_n = 0$ if $n$ is not prime.
Your problem is to show that $C$, with the topology that it inherits from $\mathbb{R}$, is homeomorphic to $X$. To do that, you must find a bijection $h:C\to X$ such that both $h$ and $h^{-1}$ are continuous. The suggestion that you found is to let $$h\left(\sum_{n=1}^\infty\frac{a_n}{3^n}\right) = \left(\frac{a_1}2,\frac{a_2}2,\frac{a_3}2,\dots\right).$$ Note that $$\frac{a_n}2 = \begin{cases}0,&\text{if }a_n=0\\1,&\text{if }a_n=2,\end{cases}$$ so this really does define a point in $X$. This really is a bijection: if $b = (b_n)_n \in X$, $$h^{-1}(b) = \sum_{n=1}^\infty\frac{2b_n}{3^n}.$$
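For a concrete feel for this map, here is a small computational sketch (Python) that applies $h$ and $h^{-1}$ to truncated expansions; the particular digit string is just an example:

```python
# Illustrate h and its inverse on truncated expansions: a point of the Cantor set
# with ternary digits a_n in {0, 2} maps to the binary sequence (a_n / 2).
from fractions import Fraction

def cantor_point(ternary_digits):            # digits in {0, 2}
    return sum(Fraction(d, 3**n) for n, d in enumerate(ternary_digits, start=1))

def h(ternary_digits):                        # the homeomorphism, truncated
    return tuple(d // 2 for d in ternary_digits)

def h_inverse(binary_digits):
    return cantor_point(tuple(2 * b for b in binary_digits))

digits = (2, 0, 2, 2, 0)                      # 0.20220... in base 3
x = cantor_point(digits)
print(x, h(digits), h_inverse(h(digits)) == x)   # 62/81 (1, 0, 1, 1, 0) True
```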
-
@nate: The answer to your first question is yes, provided that you look only at series in which all of the $a_n$’s are $0$ or $2$. When you remove $(1/3,2/3)$, you get rid of all of the numbers whose only ternary expansions begin $0.1$. When you remove $(1/9,2/9)\cup(7/9,8/9)$ you get rid of those whose only ternary expansions begin $0.01$ or $0.21$. And so on. For your second question: no, $D_3 = \{0,1\}$. It’s just one of the factor spaces in the infinite product. A sequence of $0$’s and $1$’s is a member of that product. – Brian M. Scott Oct 5 '11 at 0:17
I guess I am getting it, albeit slowly. My main confusion (for another posting when I get it worded well) is from the "cantor set" page on wikipedia where the author says to take the binary representation of $3/5_{10} \mapsto 0.10011001..._{2}$ and replace all the 1's by 2's. In base-3 (with 0,1,2), 3/5 is 0.12101210... I guess this will have to be another posting or look into it some more - this exchange 1's by 2's. Or is this a trick to just get rid of the middle thirds? – nate Oct 5 '11 at 0:55
@nate: It’s just a trick to get rid of the middle thirds. When you replace the $1$’s in a binary expansion by $2$’s and interpret the result in ternary, you will gave a different number. In fact, the two binary expansions of a dyadic rational will give you different numbers: $1/2_\text{ten}=0.10000\dots_\text{two}$ gives you $0.20000\dots_\text{three}=2/3_\text{ten}$, while $1/2_\text{ten}=0.01111\dots_\text{two}$ gives you $0.02222\dots_\text{three}=1/3_\text{ten}$: you’ve split $1/2_\text{ten}$ in two. – Brian M. Scott Oct 5 '11 at 1:11
Fascinating! Well I do recall seeing somewhere (I'll have to read up on it) the possibility for non-injective relations between number systems. Well I fully understand my homework problem and see how/why it is asking for base-3 numbers using 0's and 1's, and not 2's. Thank you much! – nate Oct 5 '11 at 1:18
Note that the $1/3$-Cantor set in $[0,1]$ can be represented as the set of real numbers of the form $\sum_{n=1}^\infty a_n/3^n$ where $a_n\in\{0,2\}$ for each $n\in\mathbb{N}$. A homeomorphism you are looking for is the function $f$ which maps the point $\sum_{n=1}^\infty a_n/3^n$ in the Cantor set to the sequence $(a_n/2)_{n=1}^\infty$ in the product $\{0,1\}^\mathbb{N}$. The product $\{0,1\}^\mathbb{N}$ consists of countably infinite sequences of $0$'s and $1$'s. Note that no finite tuple such as $(0,0,0,1)$ is in $\{0,1\}^\mathbb{N}$. The product is topologized so that each factor $\{0,1\}$ is given the discrete topology and then the product is given the product topology.
You want to prove that $f$ is a continuous and open bijection. The bijectiveness is very easy to show. For the continuity you may want to use the fact that the product topology of $\{0,1\}^\mathbb{N}$ is generated by the sets of the form $U(N,a)=\{(a_n)_{n=1}^\infty\in\{0,1\}^\mathbb{N}:a_N=a\}$ where $N\in\mathbb{N}$ and $a\in\{0,1\}$, and hence it suffices to show that the preimages of these sets $U(N,a)$ are open in the Cantor set. Finally to show that $f$ is open you can use the following general fact: a continuous bijection from a compact space to a Hausdorff space is open.
-
Thank you too for the effort. I need to read more about how to show $f$ is open. However, I am a little stuck on proving the bijectiveness. I got the injectivity by saying that any two different preimage elements result in different elements in the image set. But how about the surjectivity? That is still hard..... Any advice? – nate Oct 5 '11 at 1:58
@nate: If you know that the Cantor set is the same as $\{\sum_{n=1}^\infty a_n/3^n:\forall n\in\mathbb{N}(a_n\in\{0,2\})\}$, then the surjectivity is easy to see: pick $y=(a_n)_{n=1}^\infty\in\{0,1\}^\mathbb{N}$, then $x=\sum_{n=1}^\infty 2a_n/3^n$ is in the Cantor set and $f(x)=y$. – LostInMath Oct 5 '11 at 9:33
@nate: The proof of the last claim is a nice little interplay between compactness and closedness. First note that a bijection is open iff it is closed (follows easily from 'a subset is open iff the complement is closed'). Let $f:X\to Y$ be a continuous bijection between a compact and a Hausdorff space. Pick a closed subset $F$ of $X$. Since a closed subset of a compact space is compact and $f$ is continuous, $f(F)$ is compact. Since a compact subset of a Hausdorff space is closed, $f(F)$ is closed in $Y$. Hence $f$ is a closed (equivalently open) function. – LostInMath Oct 5 '11 at 9:43
The Cantor set consists of numbers whose ternary expansion uses only $0$s and $2$s. So there's a "natural" bijection between the cantor set and $\{0,1\}^\omega$, or rather $\{0,2\}^\omega$. Everything else should just "work out".
Note that $\{0,1\}^\omega$ consists of all infinite sequences of $0$ and $1$.
-
https://slideplayer.com/slide/6448109/ | # Current: There are two kinds of current
Current
There are two kinds of current:
1. Direct current
2. Alternating current
Direct current
DC flows in one direction. DC is the kind of current that comes from batteries.
Alternating current
AC flows back and forth rapidly. In North America it flows back and forth 60 times a second. This is called 60 Hertz.
Why do we use AC instead of DC?
AC can be more easily transmitted over long distances than DC. Why? Because of transformers.
Transformers
Step-up transformers: turn lower voltages into higher ones. Fewer turns on the primary side than the secondary one.
Step-down transformers: turn higher voltages into lower ones. More turns on the primary side than the secondary one.
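The slides do not state the formula, but the usual ideal-transformer relation Vs/Vp = Ns/Np is what makes both statements work; a tiny sketch (Python, with made-up numbers):

```python
# Ideal-transformer relation (an assumption, not stated on the slides): Vs / Vp = Ns / Np.
def secondary_voltage(v_primary, n_primary, n_secondary):
    return v_primary * n_secondary / n_primary

print(secondary_voltage(120, 100, 2000))   # step-up:   120 V -> 2400.0 V
print(secondary_voltage(120, 2000, 100))   # step-down: 120 V -> 6.0 V
```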
Electromagnetic Induction
Electromagnetic induction describes the production of electricity by using a magnetic field. This is how generators "make" electricity. Generators convert mechanical energy into electrical energy.
POWER
Power is the rate at which devices use energy.
The unit of power measurement is the watt (W). It measures how many units of energy are used per second. Energy is measured in joules (J).
The power rating of a device is directly proportional to the current flowing through the device and the voltage in the circuit.
In other words... The Power Formula: Power = current x voltage, or P = I x V.
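A small worked example of the power formula (a Python sketch; the 0.5 A and 120 V figures are made up):

```python
# Power formula from the slide: P = I x V  (watts = amperes x volts)
def power(current_amps, voltage_volts):
    return current_amps * voltage_volts

print(power(0.5, 120))   # 60.0 W -- e.g. a 60 W light bulb drawing 0.5 A at 120 V
```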
Math Time!!
Now, stop writing (and complaining) and watch this.
Download ppt "Current Current There are two kinds of current: There are two kinds of current:"
Similar presentations | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116727471351624, "perplexity": 750.0570540533741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585671.36/warc/CC-MAIN-20211023095849-20211023125849-00319.warc.gz"} |
https://www-subatech.in2p3.fr/fr/?view=seminar&id=390 | # Neutrinoless double beta decay search in XENONnT and future prospects
## Maxime Pierre
### Subatech (équipe Xénon)
With the lowest background level ever reached by detectors searching for rare events, XENON1T proved to be the most sensitive dark matter direct detection experiment on earth. The unprecedentedly low level of radioactivity reached made the XENON1T experiment suitable also for other interesting rare-event searches, including the neutrinoless double beta decay (0νββ) of 136Xe. Furthermore, in the context of the advancement of the XENON program, the next-generation experiment, XENONnT, designed with a high level of background reduction aiming to increase its predecessor's sensitivity in rare-event searches, is currently in its commissioning phase in the underground National Laboratory of Gran Sasso (LNGS): it will host 5.9 tonnes of liquid xenon as a target mass. In this talk I will present my contribution to the ongoing commissioning of XENONnT and to the 0νββ search in a large-scale LXe dual-phase Time Projection Chamber (TPC).
# Dynamics of the critical fluctuations in heavy-ion collisions
## Gregoire Pihan
### Subatech (équipe Théorie)
It is now known that a deconfined state of the strongly interacting matter can be formed under extreme conditions of temperature and density in heavy-ion collisions: the quark-gluon plasma (QGP). Intensive work has been done to understand its fascinating properties but many questions remain unanswered. In particular, theoretical studies suggest the existence of a critical point in the phase diagram of the QCD matter. For the moment, the supposed location of the critical point is outside the region accessible from first-principle lattice calculations (μB/T ≲ 2). Therefore, we need to rely on understanding the experimental data obtained from heavy-ion collisions. Heavy-ion collisions are extremely fast and out-of-equilibrium phenomena. This has a huge impact on the observables that are supposed to carry information about the critical point, such as the higher-order cumulants of the net-baryon density, net-charge density or net-strangeness density. A reliable study of the impact of the dynamics on the critical fluctuations is then required to fully address the following question: are we able to experimentally prove the existence of the critical point with heavy-ion collisions? We investigate the diffusive dynamics of the critical fluctuations of the conserved charges in the case of a relativistic heavy-ion collision undergoing a Bjorken-type expansion. The fluctuation observables are inferred from the study of coupled stochastic diffusion equations which incorporate both diffusive dynamics and intrinsic fluctuations. In the vicinity of the critical point, the critical behavior of the fluctuations is encoded in the Ginzburg-Landau free energy functional parametrized using a non-universal mapping to the 3D Ising model. Far from the critical region, the behavior of the fluctuations corresponds to lattice calculations of the susceptibilities in equilibrium. We present the coupling between the fluctuations of different conserved charges in heavy-ion collisions for trajectories passing near the critical point.
http://mathhelpforum.com/calculus/176815-another-optimization-problem-assistance.html | # Thread: Another optimization problem assistance
1. ## Another optimization problem assistance
At 2:30 pm, Bob heads north from an intersection at 120 km/h. Alvin is driving east at 96 km/h and reaches the same destination at 5:10 pm. When are they closest to each other.
$d(t)=\sqrt{(120t)^2+(256-96t)^2}$
Is that correct? The answer is supposedly 3:32 PM, so I'm not sure if I understood it incorrectly or if the book had a typo.
Thanks for the assistance!
2. t = 0 at 2:30
minimizing the quantity $(120t)^2 + (256-96t)^2$ will also minimize its square root.
book is correct.
3. Originally Posted by skeeter
t = 0 at 2:30
minimizing the quantity $(120t)^2 + (256-96t)^2$ will also minimize its square root.
book is correct.
Ah, thanks! Missed that.
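For completeness, a short sympy sketch that carries out the minimization and converts t back to clock time:

```python
# Minimize (120 t)^2 + (256 - 96 t)^2, with t measured in hours after 2:30 pm.
import sympy as sp

t = sp.symbols('t', real=True)
d2 = (120*t)**2 + (256 - 96*t)**2
t_min = sp.solve(sp.diff(d2, t), t)[0]      # 128/123 hours (exact rational)
minutes = float(t_min * 60)
print(t_min, round(minutes), "minutes after 2:30 pm")   # about 62 min -> 3:32 pm
```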
https://math.stackexchange.com/questions/3132483/intersection-of-two-subspaces-is-the-span-of-the-vector-b | # Intersection of two subspaces is the span of the vector b
Let the vectors $$u_1=\begin{bmatrix}1\\2\\3\end{bmatrix}$$, $$u_2=\begin{bmatrix}2\\3\\4\end{bmatrix}$$, $$v_1=\begin{bmatrix}1\\1\\2\end{bmatrix}$$, $$v_2=\begin{bmatrix}2\\2\\3\end{bmatrix}$$ $$\in \mathbb{R}^3$$. Let the subspaces be $$U=span(u_1,u_2)$$, $$V=span(v_1,v_2)$$
1) Show that $$U\cap V=span(b)$$, where $$b=\begin{bmatrix}1\\1\\1\end{bmatrix}$$.
My approach: So U is the span of the two vectors $$u_1, u_2$$, which is the linear combination of the vectors. So they span a plane in $$\mathbb{R}^3$$. Which means U is the set of vectors of the following form: $$U=\{u=\begin{bmatrix}\alpha'+2\beta'\\2\alpha'+3\beta'\\3\alpha'+4\beta'\end{bmatrix} | \alpha',\beta'\in \mathbb{R} \}$$,
The same approach goes for V, so V is the set of following vectors: $$V=\{v=\begin{bmatrix}\alpha+2\beta\\\alpha+2\beta\\2\alpha+3\beta\end{bmatrix} | \alpha,\beta\in \mathbb{R} \}$$,
So $$U\cap V$$ is the set of elements contained in both U and V. In that case I should find the $$\alpha',\beta',\alpha,\beta$$ where $$u=v$$.
Doing this, creating a total matrix $$(u-v|0)$$, I get the following equations: $$\alpha'+\alpha=0$$, $$\beta'-\beta=0$$ and $$\alpha+\beta=0$$.
So honestly, now I'm a bit lost. I was hoping that, putting these equations back in the vectors v and u, I would get some scalar ($$\alpha',\beta',\alpha,\beta$$) times the vector b: $$k\begin{bmatrix}1\\1\\1\end{bmatrix}$$, and then I could conclude that the span was the linear combination of that one vector. But that only works for the vector v.
Could somebody help me telling me why my approach is wrong, and what the best way of doing this?
We wish to find a basis of the intersection of $$U=\operatorname{Span}\{u_1, u_2\}$$ and $$V=\operatorname{Span}\{v_1, v_2\}$$ where \begin{align*} u_1 &= \left\langle1,\,2,\,3\right\rangle & u_2 &= \left\langle2,\,3,\,4\right\rangle & v_1 &= \left\langle1,\,1,\,2\right\rangle & v_2 &= \left\langle2,\,2,\,3\right\rangle \end{align*} Let's start by inserting our vectors into the columns of a matrix $$\left[ \begin{array}{cc|cc} u_1 & u_2 & v_1 & v_2 \end{array}\right] = \left[\begin{array}{rr|rr} 1 & 2 & 1 & 2 \\ 2 & 3 & 1 & 2 \\ 3 & 4 & 2 & 3 \end{array}\right]$$ The nullity of this matrix is the dimension of $$U\cap V$$. Row reducing this matrix gives $$\operatorname{rref}\left[ \begin{array}{cc|cc} u_1 & u_2 & v_1 & v_2 \end{array}\right]=\left[\begin{array}{rr|rr} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{array}\right]$$ We immediately see that $$\dim(U\cap V)=1$$. Moreover, this reduced form tells us that $$u_1-u_2-v_1+v_2=0$$ which is equivalent to $$u_1-u_2=v_1-v_2=\left\langle-1,\,-1,\,-1\right\rangle$$ This gives $$U\cap V=\operatorname{Span}\{\left\langle-1,\,-1,\,-1\right\rangle\}$$.
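A quick computational confirmation of the row reduction and the null space (a sympy sketch):

```python
# Confirm U ∩ V = span{(1, 1, 1)} by row-reducing [u1 u2 | v1 v2] and reading off its null space.
import sympy as sp

M = sp.Matrix([[1, 2, 1, 2],
               [2, 3, 1, 2],
               [3, 4, 2, 3]])
print(M.rref()[0])            # the reduced row echelon form shown above
coeffs = M.nullspace()[0]     # (1, -1, -1, 1): u1 - u2 - v1 + v2 = 0
u1, u2 = sp.Matrix([1, 2, 3]), sp.Matrix([2, 3, 4])
print((coeffs[0]*u1 + coeffs[1]*u2).T)   # Matrix([[-1, -1, -1]]) -- a multiple of b = (1, 1, 1)
```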
Never mind, I figured it out. I made a mistake reading my equations from the total matrix. Instead of $$\alpha' +\alpha=0$$ it should be $$\alpha' +\beta=0$$.
http://dlmf.nist.gov/27.11 | # §27.11 Asymptotic Formulas: Partial Sums
The behavior of a number-theoretic function $f(n)$ for large $n$ is often difficult to determine because the function values can fluctuate considerably as $n$ increases. It is more fruitful to study partial sums and seek asymptotic formulas of the form
27.11.1 $\sum_{n\leq x}f(n)=F(x)+O\!\left(g(x)\right),$
where $F(x)$ is a known function of $x$, and $O\!\left(g(x)\right)$ represents the error, a function of smaller order than $F(x)$ for all $x$ in some prescribed range. For example, Dirichlet (1849) proves that for all $x\geq 1$,
27.11.2 $\sum_{n\leq x}d(n)=x\log x+(2\gamma-1)x+O\!\left(\sqrt{x}\right),$
where $\gamma$ is Euler’s constant (§5.2(ii)). Dirichlet’s divisor problem (unsolved in 2009) is to determine the least number $\theta_{0}$ such that the error term in (27.11.2) is $O\!\left(x^{\theta}\right)$ for all $\theta>\theta_{0}$. Kolesnik (1969) proves that $\theta_{0}\leq\frac{12}{37}$.
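A brute-force numerical check of (27.11.2) (a Python sketch, using the identity $\sum_{n\leq x}d(n)=\sum_{i\leq x}\lfloor x/i\rfloor$ and a hard-coded value of $\gamma$):

```python
# Check sum_{n<=x} d(n) against x*log(x) + (2*gamma - 1)*x for a modest x.
import math

x = 100_000
gamma = 0.5772156649015329          # Euler's constant
d_sum = 0
for i in range(1, x + 1):           # each i divides floor(x/i) of the integers up to x
    d_sum += x // i
main_term = x * math.log(x) + (2 * gamma - 1) * x
print(d_sum, round(main_term), d_sum - round(main_term))   # difference is within O(sqrt(x))
```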
Equations (27.11.3)–(27.11.11) list further asymptotic formulas related to some of the functions listed in §27.2. They are valid for all $x\geq 2$. The error terms given here are not necessarily the best known.
27.11.3 $\sum_{n\leq x}\frac{d(n)}{n}=\frac{1}{2}(\log x)^{2}+2\gamma\log x+O\!\left(1\right),$
where $\gamma$ again is Euler’s constant.
27.11.4 $\sum_{n\leq x}\sigma_{1}(n)=\frac{\pi^{2}}{12}x^{2}+O\!\left(x\log x\right).$
27.11.5 $\sum_{n\leq x}\sigma_{\alpha}(n)=\frac{\zeta(\alpha+1)}{\alpha+1}x^{\alpha+1}+O\!\left(x^{\beta}\right),$ $\alpha>0$, $\alpha\neq 1$, $\beta=\max(1,\alpha)$.
27.11.6 $\sum_{n\leq x}\phi(n)=\frac{3}{\pi^{2}}x^{2}+O\!\left(x\log x\right).$
27.11.7 $\sum_{n\leq x}\frac{\phi(n)}{n}=\frac{6}{\pi^{2}}x+O\!\left(\log x\right).$
27.11.8 $\sum_{p\leq x}\frac{1}{p}=\log\log x+A+O\!\left(\frac{1}{\log x}\right),$
where $A$ is a constant.
27.11.9 $\sum_{\substack{p\leq x\\ p\equiv h\pmod{k}}}\frac{1}{p}=\frac{1}{\phi(k)}\log\log x+B+O\!\left(\frac{1}{\log x}\right),$
where $\left(h,k\right)=1$, $k>0$, and $B$ is a constant depending on $h$ and $k$.
27.11.10 $\sum_{p\leq x}\frac{\log p}{p}=\log x+O\!\left(1\right).$
27.11.11 $\sum_{\substack{p\leq x\\ p\equiv h\pmod{k}}}\frac{\log p}{p}=\frac{1}{\phi(k)}\log x+O\!\left(1\right),$
where $\left(h,k\right)=1$, $k>0$.
Letting $x\to\infty$ in (27.11.9) or in (27.11.11) we see that there are infinitely many primes $p\equiv h\pmod{k}$ if $h,k$ are coprime; this is Dirichlet’s theorem on primes in arithmetic progressions.
27.11.12 $\sum_{n\leq x}\mu(n)=O\!\left(xe^{-C\sqrt{\log x}}\right),$ $x\to\infty$,
for some positive constant $C$,
27.11.13 $\lim_{x\to\infty}\frac{1}{x}\sum_{n\leq x}\mu(n)=0,$
27.11.14 $\lim_{x\to\infty}\sum_{n\leq x}\frac{\mu(n)}{n}=0,$
27.11.15 $\lim_{x\to\infty}\sum_{n\leq x}\frac{\mu(n)\log n}{n}=-1.$
Each of (27.11.13)–(27.11.15) is equivalent to the prime number theorem (27.2.3). The prime number theorem for arithmetic progressions—an extension of (27.2.3) and first proved in de la Vallée Poussin (1896a, b)—states that if $\left(h,k\right)=1$, then the number of primes $p\leq x$ with $p\equiv h\pmod{k}$ is asymptotic to $x/(\phi(k)\log x)$ as $x\to\infty$.
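The partial sums in (27.11.13) and (27.11.14) can at least be illustrated numerically with a simple Möbius sieve (a Python sketch):

```python
# Illustrate (27.11.13) and (27.11.14): the averages of mu(n) and of mu(n)/n tend to 0.
def mobius_sieve(limit):
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):      # each prime factor flips the sign
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0                          # squareful numbers get mu = 0
    return mu

x = 200_000
mu = mobius_sieve(x)
print(sum(mu[1:]) / x)                             # (1/x) * sum mu(n): close to 0
print(sum(mu[n] / n for n in range(1, x + 1)))     # sum mu(n)/n: close to 0
```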
https://www.physicsforums.com/threads/parallel-axis-theorem-for-area.841477/ | Homework Help: Parallel axis theorem for area
1. Nov 5, 2015
goldfish9776
1. The problem statement, all variables and given/known data
why is the y bar 0? according to the diagram, y' has a certain value, it's not 0! can someone help to explain?
2. Relevant equations
3. The attempt at a solution
2. Nov 5, 2015
HallsofIvy
$\overline{y}= 0$ because the y' axis, not the y axis, is defined as the vertical line passing through the center of mass.
3. Nov 5, 2015
goldfish9776
it passes thru the centroid, why is it 0?
4. Nov 5, 2015
haruspex
The shape can be thought of as made up of many little areas like A. Each little area has its own (x', y') coordinates relative to the centroid. y-bar is here defined as the average value of y' across all these little areas. By definition of centroid, that average is zero.
5. Nov 5, 2015
goldfish9776
why is only the second integral = 0? why isn't the first integral equal to 0 also?
6. Nov 5, 2015
SteamKing
Staff Emeritus
Because in the first integral, y' is squared.
When y' is by itself, you are adding products of y' dA on either side of the centroidal axis, so some products are negative and some are positive. When you calculate y'^2 dA, all of the products are positive, so their sum adds up to a positive, non-zero result.
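A tiny numerical illustration of this point (a Python sketch using a rectangle split into thin strips; the dimensions are arbitrary): the first moment about the centroidal axis cancels to zero while the second moment gives the familiar b*h^3/12.

```python
# Rectangle of height h split into thin horizontal strips: about the centroidal axis,
# the first moment  sum(y' dA)  is ~0, while  sum(y'^2 dA)  gives I = b*h^3/12.
b, h, N = 2.0, 3.0, 100_000          # breadth, height, number of strips
dA = b * h / N
ys = [(-h / 2) + (i + 0.5) * h / N for i in range(N)]   # y' of each strip's midpoint

first_moment = sum(y * dA for y in ys)
second_moment = sum(y * y * dA for y in ys)
print(first_moment)                   # ~0 (positive and negative strips cancel)
print(second_moment, b * h**3 / 12)   # both ~4.5
```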
http://math-sciences.org/event/axel-maas-jena/ | # Axel Maas (Jena)
# Axel Maas (Jena)
## March 6, 2012 @ 4:00 pm - 5:00 pm UTC+0
“Correlation functions and gauge-fixing beyond perturbation theory”
Correlation functions are very useful tools in the intermediate steps from the elementary degrees of freedom to observable physics. However, in gauge theories they depend on the chosen gauge. Beyond perturbation theory, the thus necessary fixing of the gauge becomes complicated due to the presence of the Gribov-Singer ambiguity.
The origins and properties of this ambiguity will be presented, with particular focus on the non-perturbative generalization of the Landau gauge. The consequences for methods and the gauge-dependent correlation functions will be detailed. Finally, the current challenges in a full formal resolution of the ambiguity will be outlined.
## Venue
Room 205
2-5, Kirkby Place
Plymouth, PL4 6DT United Kingdom
http://www.algebra.com/cgi-bin/show-question-source.mpl?solution=138939 | Question 185187
<font face="Times New Roman" size="+2">
I'm presuming that the six partitions add up to one complete circle. Given that a circle is 360 degrees, the last segment must be half of that, or 180. The one before that, 90, before that 45, 22.5, 11.25, and finally, the first one must have been 5.625.
Symbolically:
$$\frac{360}{2^6}$$
John
$$e^{i\pi} + 1 = 0$$
https://socratic.org/questions/how-do-you-find-the-discriminant-and-how-many-solutions-does-4x-2-4x-11-0-have | Algebra
# How do you find the discriminant and how many solutions does 4x^2-4x+11=0 have?
May 6, 2015
The discriminant is given as:
$\Delta = {b}^{2} - 4 a c$
In your equation (written in the general form: $a {x}^{2} + b x + c = 0$) you have:
$a = 4$
$b = - 4$
$c = 11$
So: $\Delta = {\left(- 4\right)}^{2} - 4 \cdot \left(4 \cdot 11\right) = - 160 < 0$ This means that you cannot have REAL solutions to your equation (however, you can have two COMPLEX solutions!).
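A quick check of both the discriminant and the two complex solutions (a Python sketch):

```python
# Discriminant and (complex) roots of 4x^2 - 4x + 11 = 0.
import cmath

a, b, c = 4, -4, 11
disc = b**2 - 4*a*c
roots = [(-b + sign * cmath.sqrt(disc)) / (2*a) for sign in (1, -1)]
print(disc)      # -160  -> no real solutions
print(roots)     # [(0.5+1.5811...j), (0.5-1.5811...j)]
```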
https://cs.stackexchange.com/questions/41791/a-totally-ordered-set-of-functions | # A totally-ordered set of functions
When we analyze algorithms using the $O$ notation, we usually use only a small set of the space of all functions. E.g., we use $\Theta(n)$ but not $\Theta(2n)$, as these two are equally well represented by $\Theta(n)$. This makes me ask whether it is possible to define a set of "representative functions" which are totally ordered by the $o$-notation?
Concretely, let $F$ be the space of all positive, monotonically increasing real functions on $N$ (the natural numbers). I am looking for a subset $G\subseteq F$ with the following properties:
1. For every two functions $g_1,g_2 \in G$, either $g_1(n) = o(g_2(n))$ or $g_2(n) = o(g_1(n))$.
2. For every function $f\in F$, there is a function $g\in G$ with $g(n)=\Theta(f(n))$
Does there exist such a set? If so, how can it be represented (e.g. how many real parameters are required?)
• Maybe this question would be better on math.stackexchange. – Tom Cornebize Apr 24 '15 at 13:02
• @TomCornebize I thought of asking this in math.SE, but found it difficult to understand the motivation. The motivation is from computer science. – Erel Segal-Halevi Apr 24 '15 at 13:05
• The answer to the existence question should be "yes", assuming the axiom of choice, but this does not necessarily imply representability. Note that even for functions $\mathbb{N}\to\mathbb{N}$ you would have consider monstrosities such as the fast-growing hierarchies (en.wikipedia.org/wiki/Fast-growing_hierarchy). – Klaus Draeger Apr 24 '15 at 14:04
• It may be easier to formulate a normal form; drop all non-dominant summands and constant factors, for instance. But then, the set of all functions is uncountable while that of those we can write down (with any fixed syntax and semantics) is countable, so this "visual" approach won't cover most functions. – Raphael Apr 24 '15 at 14:06
• Ah, but I found this: mathoverflow.net/questions/45510/… – Erel Segal-Halevi Apr 24 '15 at 14:16
This question is very natural since most if not all common functions that appear in the runtime-analysis of algorithms form a totally ordered set in terms of little $$o$$-notation or big $$\Omega$$-notation such as shown in this answer by Robert S. Barnes or this answer by Kelalaka.
However, there is no such totally-ordered set of representative functions for all positive increasing functions.
The basic reason is the first property, when applied to $$F$$, does not define a total-order.
Define two strictly increasing $$f_1(n)$$ and $$f_2(n)$$ from $$\Bbb N$$ to $$\Bbb N$$.
$$f_1(n)=2^{(2\lceil\frac {n+1}2\rceil)^2+n}$$ $$f_2(n)=2^{(2\lfloor\frac {n}2\rfloor+1)^2+n}$$
Then $$\sup\lim _{n\to\infty}\frac {f_1(n)}{f_2(n)} \ge \lim_{n=2i+1\to\infty}\frac {f_1(n)}{f_2(n)} =\lim_{i\to\infty}\frac {2^{(2(i+1))^2+2i+1}}{2^{(2i+1)^2+2i+1}} =\lim_{i\to\infty}2^{4i+3}=\infty$$
$$\sup\lim _{n\to\infty}\frac {f_2(n)}{f_1(n)} \ge \lim_{n=2i\to\infty}\frac {f_2(n)}{f_1(n)} =\lim_{i\to\infty}\frac {2^{(2i+1)^2+2i}}{2^{(2i)^2+2i}} =\lim_{i\to\infty}2^{4i+1} =\infty$$
In plain words, neither of $$f_1(n)$$ and $$f_2(n)$$ grows asymptotically the same or faster than the other one ignoring a constant factor.
Let $$F$$ and $$G$$ be defined as in the question. Suppose $$G$$ exists. Then there is $$g_1, g_2\in G$$ such that $$g_1(n)=\Theta(f_1(n))$$ and $$g_2(n)=\Theta(f_2(n))$$. Then $$g_1(n)>c_1(f_1(n))$$ for some constant $$c_1>0$$ and $$g_2(n) for some constant $$d_2$$. (In usual cases, we have to say "for $$n$$ large enough". However, that clause can be omitted since all values here are positive. Anyway, we can add that clause as well.)
$$\sup\lim _{n\to\infty}\frac {g_1(n)}{g_2(n)} \ge \sup\lim_{n\to\infty}\frac {c_1f_1(n)}{d_2f_2(n)} \ge \frac{c_1}{d_2}\sup\lim_{n\to\infty}\frac {f_1(n)}{f_2(n)} =\infty$$
Symmetrically, $$\sup\lim_{n\to\infty}\dfrac {g_2(n)}{g_1(n)}=\infty$$. We see that $$g_1\not=g_2$$ but neither $$g_1(n) = o(g_2(n))$$ nor $$g_2(n) = o(g_1(n))$$. This contradition shows $$G$$ does not exist.
How about if we drop the first requirement so that the functions in $$G$$ are not required to dominate each other?
Then this question becomes closely related to the question Big O notation and the maximal set of comparable functions as found by OP, where a couple of enlightening answers explain the situation pretty well, indicating there is hardly any hope for a nice positive answer. Even countably-infinitely real parameters cannot parametrize continuously a set of representative functions in the equivalence classes of increasing functions where two functions are in the same equivalence class if they are related by $$\Theta$$ and each function only uses finitely-many parameters.
Explicit rigorous answers to the following exercises might be long. However, the ideas should be simple for experienced users.
(Exercise 1.) For any given positive integer $$n$$, construct $$n$$ increasing functions from $$\Bbb N$$ to $$\Bbb N$$ such that for any two of them, $$f_1$$ and $$f_2$$, neither $$f_1=O(f_2)$$ nor $$f_2=O(f_1)$$.
(Exercise 2.) Construct a set of infinitely many increasing functions from $$\Bbb N$$ to $$\Bbb N$$ such that for any two of them, $$f_1$$ and $$f_2$$, neither $$f_1=O(f_2)$$ nor $$f_2=O(f_1)$$.
(Exercise 3.) Let $$h(n)$$ be a given function from $$\Bbb N$$ to $$\Bbb N$$. Construct a set of infinitely many increasing functions from $$\Bbb N$$ to $$\Bbb N$$ such that for any two of them, $$f_1$$ and $$f_2$$, neither $$f_1=O(h\ f_2)$$ nor $$f_2=O(h\ f_1)$$.
• Cool example! I didn't think that there could be functions that are incomparable by the $O$ notation. We could define a relaxed notation that allows functions to differ not only by a constant factor but also by an exponential function of $n$. But then we could find another pair of functions that is incomparable by the new notation too. – Erel Segal-Halevi Jan 17 '19 at 10:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 55, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.95745450258255, "perplexity": 415.06741002705206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524604.46/warc/CC-MAIN-20200404165658-20200404195658-00102.warc.gz"} |
http://math.stackexchange.com/questions/250665/can-f-be-extended-to-a-continuous-map | # Can f be extended to a continuous map?
Here is an old Berkeley Preliminary Exam question (Spring 79).
Let $f : \mathbb{R}^n-\{0\} \rightarrow \mathbb{R}$ be differentiable. Suppose
$$\lim_{x\rightarrow0}\frac{\partial f}{\partial x_j}(x)$$ exists for each $j=1, \cdots ,n$.
(1) Can $f$ be extended to a continuous map from $\mathbb{R}^n$ to $\mathbb{R}$?
(2) Assuming continuity at the origin, is $f$ differentiable from $\mathbb{R}^n$ to $\mathbb{R}$?
End of question.
In the book by De Souza, the following solution is given for (1)
No, with the counter example $f(x,y)=\frac{xy}{x^2+y^2}$ for $(x,y)\neq (0,0)$.
This is not an extendable function, but $\lim_{x\rightarrow0}\frac{\partial f}{\partial x_j}(x)$ does not exists. I think the solution is wrong, any other correct counter example?
Thanks
-
Let $n =1$ and $f(x) = \text{signum}(x)$. Surely $f$ satisfies the conditions and surely it cannot be extended to a continuous function.
Though it feels like cheating...
-
It did feel like cheating, haha, I can't think of something in n=2 yet. – KWO Dec 4 '12 at 12:53
I now believe that the book might be wrong, conclusion in (1) might be correct. – KWO Dec 4 '12 at 12:58
Sorry, I retract my previous comment. – KWO Dec 4 '12 at 13:00
In polar coordinates, with $\theta \in [0,2\pi), r \in (0,\infty)$: $f(r,\theta) = \sin \theta$. This takes every value in $[-1,1]$ in every neighbourhood of $(0,0)$.
Edited to add: As KWO points out in the comments, this example is flawed.
-
I think your example can be extended to continuous function, right? Just define f(0,0)=0 will do. – KWO Dec 4 '12 at 14:11
@KWO: Read my last sentence again. – TonyK Dec 4 '12 at 15:24
According to your argument $f_{\theta}=cos \theta$ also have trouble converging to zero, hence the hypothesis not satisfied. – KWO Dec 5 '12 at 3:39
The hypothesis of the problem is that the limit of partial derivative exist when x goes to 0, in your construction, the limit of partial derivative with respect to theta does not exist when x goes to zero. – KWO Dec 5 '12 at 10:51
OK, sorry. You are right (except that it's $f_x$ and $f_y$ that must tend to $0$, not $f_\theta$.) – TonyK Dec 5 '12 at 11:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9229357838630676, "perplexity": 409.4173227056958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828010.65/warc/CC-MAIN-20160723071028-00125-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://de.maplesoft.com/support/help/maple/view.aspx?path=RegularChains%2FChainTools%2FChain | RegularChains[ChainTools] - Maple Programming Help
Home : Support : Online Help : Mathematics : Factorization and Solving Equations : RegularChains : ChainTools Subpackage : RegularChains/ChainTools/Chain
RegularChains[ChainTools]
Chain
constructs regular chains
Calling Sequence Chain(lp, rc, R)
Parameters
lp - list of polynomials of R rc - regular chain of R R - polynomial ring
Description
• The command Chain(lp, rc, R) returns the regular chain obtained by extending rc with lp.
• It is assumed that lp is a list of non-constant polynomials sorted in increasing main variable, and that any main variable of a polynomial in lp is strictly greater than any algebraic variable of rc.
• It is also assumed that the polynomials of rc together with those of lp form a regular chain.
• The function Chain allows the user to build a regular chain without performing any expensive check and without splitting or simplifying. On the contrary, the functions Construct and ListConstruct check their input completely. In addition, they simplify the input polynomials and they may also factorize some of them, leading to a list of regular chains (that is, a split) rather than a single one.
• The function Chain is used by some algorithms where one tries to split the computations as little as possible. This is the case for the function EquiprojectableDecomposition.
• This command is part of the RegularChains[ChainTools] package, so it can be used in the form Chain(..) only after executing the command with(RegularChains[ChainTools]). However, it can always be accessed through the long form of the command by using RegularChains[ChainTools][Chain](..).
Examples
> $\mathrm{with}\left(\mathrm{RegularChains}\right):$
> $\mathrm{with}\left(\mathrm{ChainTools}\right):$
> $R≔\mathrm{PolynomialRing}\left(\left[t,x,y,z\right]\right)$
${R}{≔}{\mathrm{polynomial_ring}}$ (1)
> $\mathrm{pz}≔{z}^{2}+2z+1$
${\mathrm{pz}}{≔}{{z}}^{{2}}{+}{2}{}{z}{+}{1}$ (2)
> $\mathrm{py}≔z{y}^{2}+1$
${\mathrm{py}}{≔}{z}{}{{y}}^{{2}}{+}{1}$ (3)
> $\mathrm{pt}≔t\left(x+y\right)+y+z$
${\mathrm{pt}}{≔}{t}{}\left({x}{+}{y}\right){+}{y}{+}{z}$ (4)
> $\mathrm{qy}≔\mathrm{expand}\left(3z\mathrm{py}\right)$
${\mathrm{qy}}{≔}{3}{}{{y}}^{{2}}{}{{z}}^{{2}}{+}{3}{}{z}$ (5)
> $\mathrm{qt}≔\mathrm{expand}\left({\left(x+y\right)}^{2}\mathrm{pt}\right)$
${\mathrm{qt}}{≔}{t}{}{{x}}^{{3}}{+}{3}{}{t}{}{{x}}^{{2}}{}{y}{+}{3}{}{t}{}{x}{}{{y}}^{{2}}{+}{t}{}{{y}}^{{3}}{+}{{x}}^{{2}}{}{y}{+}{{x}}^{{2}}{}{z}{+}{2}{}{x}{}{{y}}^{{2}}{+}{2}{}{x}{}{y}{}{z}{+}{{y}}^{{3}}{+}{z}{}{{y}}^{{2}}$ (6)
> $\mathrm{rc}≔\mathrm{Empty}\left(R\right)$
${\mathrm{rc}}{≔}{\mathrm{regular_chain}}$ (7)
> $\mathrm{rc1}≔\mathrm{Chain}\left(\left[\mathrm{pz},\mathrm{qy},\mathrm{qt}\right],\mathrm{rc},R\right)$
${\mathrm{rc1}}{≔}{\mathrm{regular_chain}}$ (8)
> $\mathrm{Equations}\left(\mathrm{rc1},R\right)$
$\left[\left({{x}}^{{3}}{+}{3}{}{y}{}{{x}}^{{2}}{+}{3}{}{{y}}^{{2}}{}{x}{+}{{y}}^{{3}}\right){}{t}{+}\left({y}{+}{z}\right){}{{x}}^{{2}}{+}\left({2}{}{{y}}^{{2}}{+}{2}{}{z}{}{y}\right){}{x}{+}{{y}}^{{3}}{+}{z}{}{{y}}^{{2}}{,}{3}{}{{z}}^{{2}}{}{{y}}^{{2}}{+}{3}{}{z}{,}{{z}}^{{2}}{+}{2}{}{z}{+}{1}\right]$ (9)
> $\mathrm{lrc}≔\mathrm{ListConstruct}\left(\left[\mathrm{pz},\mathrm{qy},\mathrm{qt}\right],\mathrm{rc},R\right)$
${\mathrm{lrc}}{≔}\left[{\mathrm{regular_chain}}{,}{\mathrm{regular_chain}}\right]$ (10)
> $\mathrm{map}\left(\mathrm{Equations},\mathrm{lrc},R\right)$
$\left[\left[{t}{,}{y}{-}{1}{,}{z}{+}{1}\right]{,}\left[\left({x}{-}{1}\right){}{t}{-}{2}{,}{y}{+}{1}{,}{z}{+}{1}\right]\right]$ (11) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9288222789764404, "perplexity": 1352.8493400129807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192778.51/warc/CC-MAIN-20200919142021-20200919172021-00347.warc.gz"} |
https://destevez.net/2020/06/reverse-engineering-the-dscs-iii-convolutional-encoder/ | # Reverse-engineering the DSCS-III convolutional encoder
One thing I left open in my post yesterday was the convolutional encoder used for FEC in the DSCS-III X-band beacon data. I haven’t seen that the details of the convolutional encoder are described in Coppola’s Master’s thesis, but in a situation such as this one, it is quite easy to use some linear algebra to find the convolutional encoder specification. Here I explain how it is done.
What makes finding the convolutional encoder specifications quite easy is that the encoder is systematic, which means that its input is a subset of the output. As we saw yesterday, the input bits, which we will call $$x_n$$ are sent as the even bits of the output, while the odd bits of the output, which we will call $$y_n$$, are computed in terms of $$x_n$$ by the encoder.
The general formula for such a convolutional encoder is simple:$y_n = \sum_{j = 0}^{k-1} a_j x_{n-j},$where all the arithmetic is done in $$GF(2)$$. When applying this to a real world system, the data starts at some point, say at $$n = 0$$, and we put formally $$x_n = 0$$ for $$n < 0$$ so that the formula above makes sense. In the usual implementation of the convolutional encoder by using a shift register, this corresponds to starting with the register filled with zeros.
In the formula above, it is usually assumed that $$a_{k-1} \neq 0$$, so that the number $$k$$, which is called the constraint length, is uniquely defined. Assume for a moment that we now the number $$k$$, or at least have an upper bound $$l$$ for it. Then we can try to collect $$l$$ row vectors formed by $$l$$ consecutive input bits $$v_j = (x_{n_j}, x_{n_j+1}, …, x_{n_j+l-1})$$, $$j = 1,\ldots,l$$, in such a way that $$\{v_j\}$$ are linearly independent. In the unlikely case that this is not possible, then the conclusion is that the input has not “enough variety” to determine the convolutional code uniquely.
If we write the matrix $$A$$ which has the vectors $$v_j$$ as rows, put $$\alpha = (a_{k-1},\ldots,a_0)^T$$ and $$\beta = (y_{n_1+l-1}, y_{n_2+l-1}, \ldots, y_{n_l+l-1})^T$$, then $$A\alpha = \beta$$. Since the matrix $$A$$ is invertible and $$\beta$$ is known, we can solve this linear equation to find $$\alpha$$, which gives us the convolutional code specification.
What about finding an upper bound $$l$$ for the constraint length? Well, first of all maybe this isn’t even necessary. Constraint lengths are usually not very large (the CCSDS convolutional code has a constraint length of 7), so we can proceed by guessing a large enough value for $$l$$, say $$l = 20$$. This is inefficient, but it will always yield a candidate solution. The candidate solution needs to be checked against the full set of input data $$x_n$$ and output data $$y_n$$. If it works, then we’ve found the convolutional code. If it doesn’t work, then the constraint length is actually greater than $$l$$, so we can use a larger guess for $$l$$ and try again.
Alternatively, we can use the following approach. We consider a large integer $$M$$ (that should be guaranteed to be larger than the constraint length), an index $$t$$, and the vectors $$\gamma = (y_t, y_{t+1}, \ldots, y_{t+M})^T$$, and $$w_j = (x_{t – j}, x_{t-j+1},\ldots, x_{t-j+M})^T$$, for $$j = 0, 1, \ldots$$. Note that $$\gamma$$ is a linear combination of $$w_0, \ldots, w_{k-1}$$. So to find $$k$$ we start only with $$w_0$$ and keep adding vectors $$w_1, w_2, \ldots$$ as necessary until we obtain a set of vectors such that $$\gamma$$ is a linear combination of them. Note that this condition corresponds to whether an overdetermined linear system has a solution, so it can be checked by the usual methods of Gaussian elimination or computing determinants.
In the case of the DSCS-III beacon, finding the constraint length is more simple, since the data has lots of zeros. In this case, we can look at one point in the input where a one appears followed by many zeros (looking at the figures in the last post, the bit at position 80 is a good choice), and then look at which position in the output after this point a one appears for the last time (which happens at position 89). By doing so, we find the length of the shift register, which is precisely the constraint length.
So for DSCS-III the constraint length is 10. It is easy to choose a linearly independent set of 10 vectors formed by 10 consecutive input bits (choosing them about the point where ones start to appear in the input is enough). By doing so, we can solve the linear system introduced above and find that the convolutional code is$y_n = x_n + x_{n-1} + x_{n-2}+ x_{n-5} + x_{n-6} + x_{n-9}.$I have checked that this is correct by going over the full dataset. Since all the frames start by and end with a bunch of zeros, we need not care about (and cannot distinguish) whether this code is applied in a streaming fashion, or separately per data frame. I have updated the Jupyter notebook to include the relevant calculations.
The fact that the convolutional code was systematic has made our life much easier. Usually, convolutional codes are not systematic, so we don’t have direct access to the input of the encoder. Still, similar linear algebra techniques can be used to reverse-engineer the code. For more details see this post by Gonzalo Carracedo EA1IYR about some work regarding reverse-engineering of convolutional codes that he originally presented in STARcon 2019.
## 2 Replies to “Reverse-engineering the DSCS-III convolutional encoder”
1. Scott -- K4KDR says:
Thanks very much for the interesting report! I do not understand all of the theory (or math), but appreciate the explanation very much.
Do we know, or is it reasonable to assume, that the same polynomial might be used on other satellites from the same family?
In general is it common for the same code to be used across spacecraft within a generation of similar units?
1. From reading the Master’s thesis, I understand that all the DSCS-III satellites have the same kind of beacon. This type of convolutional code might also have been used in similar spacecraft, or maybe not. It’s the first time I find a systematic convolutional code being used in practise.
It’s common for an engineering solution or technique to be re-used in related designs. This not only applies to convolutional codes, but to almost anything you can think of.
Finally, these days pretty much everybody uses the same convolutional code: the CCSDS r=1/2, k=7 code. This code is sometimes called the Voyager code, because, well, it dates back to the Voyager missions.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8389552235603333, "perplexity": 261.14329418945067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890566.2/warc/CC-MAIN-20200706222442-20200707012442-00264.warc.gz"} |
https://www.jiskha.com/search/index.cgi?query=a+certain+reaction+has+the+following+generaln+form&page=27 | # a certain reaction has the following generaln form
74,460 results, page 27
1. ### Chemistry
Phosphorous trichloride (PCl3) is produced from the reaction of white phosphorous (P4) and chlorine: P4(s) + 6Cl2(g) --> 4 PCl3(g) . A sample of PCl3 of mass 280.7 g was collected from the reaction of 70.86 g of P4 with excess chlorine. What is the percentage yield of the ...
2. ### Chemistry 2
An industrial chemical reaction has a rate law: rate = k [C]. The activation energy for the reaction is 3.0000 x 10^4 J/mol and A = 2.000 x 10^-3. What must be the reaction temperature if a rate of 1.000 x 10^-9 M/s is required and [C] must be kept at 0.03000M. Hint: What must...
3. ### LA
In the poems Concrete Cat, Limerick and Haiku chose different forms to express their thoughts and feelings about the subject. Imagine that their poetic forms changed how would each poem be different if its form was exchanged with another form write a paragraph discussing how ...
4. ### goverment help
Which of the following is not a minor party in the history of the United States? Know-Nothing Free Soil Labour Socialist Which of the following is the most common party system in the world today? a group of people seeking to advance their ideas an independent agency pushing ...
5. ### Chemistry Rate Reaction
What is the rate of the reaction: H2SeO3 + 6I- + 4H+ Se + 2I3- + 3H2O given that the rate law for the reaction at 0°C is rate = (5.0 × 10^5 L5mol-5s-1)[H2SeO3][I-]^3[H+]^2 The reactant concentrations are [H2SeO3] = 1.5 × 10^-2 M, [I-] = 2.4 × 10^-3 M, [H+] = 1.5 × 10^-3 M.
6. ### Chemistry
Use Lewis Symbols to show the reaction of atoms to form arsine, AsH3. Indicate bonding pairs in the Lewis formula of AsH3 are bonding and which are lone pairs.
7. ### chemistry
Ammonia reacts with oxygen gas to form nitrogen monoxide and water. At constant temperture and pressure, how many of nitrogen monoxide can be made by the reaction of 800. ml of oxygen gas?
8. ### CHEMISTRY
Write a balanced chemical equation for the reaction of solid vanadium(V)({\rm V}) oxide with hydrogen gas to form solid vanadium(III)({\rm III}) oxide and liquid water.
9. ### chemistry quick help!
In the balanced double replacement reaction of calcium chloride CaCl2 and aluminum sulfate Al2(SO4)3, how many grams of calcium sulfate can be produced if you start the reaction with 4.19 grams of calcium chloride and the reaction goes to completion?
10. ### Chem hwk
In the balanced double replacement reaction of calcium chloride CaCl2 and aluminum sulfate Al2(SO4)3, how many grams of calcium sulfate can be produced if you start the reaction with 18.97 grams of calcium chloride and the reaction goes to completion?
11. ### Hard Math
Why are these poor models for a parabola, Where a ball starts a a certain point and then is hit to reach a maximum height (vertex) and then lands at a certain point i. y = -0.002x(x - 437.1) ii. y = -0.5x + 216x + 3 iii. y = -0.002x + 0.879x + 3.981 iv. y = -0.002x + 0.8732x...
12. ### math
i don't understand standard form math and i have to questions what is nine hundredths in standard form and sixteen and six tenths in standard form?
13. ### Chemistry
The following reaction can be used to determine the amount of calcium ion present in milk. Before the reaction below is carried out ca2+ is precipitated out of a sample of milk as CaC2O4(s). The CaC2O4(s) is then titrated with MnO4-. 2MnO4-(aq) + 5CaC2O4(s) + 16H+(aq) = 2Mn2+(...
14. ### Chemistry
What is the value of the rate constant for a reaction in which 75.0 percent of the compound in a given sample decomposes in 80.0 min? Assume first order kinetics. Use the integrated rate law for a first order reaction. Remember at 75 percent reaction, 25 percent of the ...
15. ### Chemistry
What is the value of the rate constant for a reaction in which 75.0 percent of the compound in a given sample decomposes in 80.0 min? Assume first order kinetics. Use the integrated rate law for a first order reaction. Remember at 75 percent reaction, 25 percent of the ...
16. ### chem
In the reaction of hydrogen peroxide with iron (II) ion in acidic solution to form iron (III) ion and water, the oxidizing agent is____. Is it hydrogen peroxide?
17. ### Chemistry
Aluminum sulfate reacts with calcium hydroxide (from lime) to form aluminum hydroxide and calcium sulfate. Write the balanced formula unit equation for the reaction.
18. ### Anatomy
I have two question 1.In a dehydration synthesis reaction which of the following occurs a. hydrogen and oxygen are removed from the reactant b. water is added to the reactants c. water is broken down into hydrogen and oxygen d. both a and c I think it would definitely be a but...
19. ### AP Chemistry, Chem, science
Can anyone please help me with this problem?? =/ The reaction between solid sodium and iron (III) oxide is one in a series of reactions that inflates an automobile airbag. If 100g of sodium and 100g of iron(III) oxide are used in this reaction, determine: a. the balanced ...
20. ### chemistry
(a) (i) Draw the abbreviated structural formula, in truncated form, of the compound that would be produced from the reaction of one molecule of glycerol with one molecule each of A, B and C .Circle and name the new functional group(s) produced. What type of reaction is ...
21. ### math
convert from polar form into rectangular form. type the equation in rectangular form (simplified): 1) rsecθ = 3 2) r=5/cosθ-2sinθ 3) r=5cscθ
22. ### Chemistry
Three solutions are mixed together to form a single solution. One contains 0.2 mol Pb(CH3COO)2, the second contains 0.1 mol Na2S, and the third contains 0.1 mol CaCl2. Write the net ionic equations for the precipitation reaction or reactions that occur.
23. ### Chemistry
Consider the following reaction: CO(g)+H2O(g)⇌CO2(g)+H2(g) Kp=0.0611 at 2000 K A reaction mixture initially contains a CO partial pressure of 1348torr and a H2O partial pressure of 1780torr at 2000 K. Calculate the equilibrium partial pressure of CO2. Calculate the ...
24. ### math
It takes 28 minutes for a certain bacteria population to double. If there are 4,241,763 bacteria in this population at 1:00 p.m., which of the following is closest to the number of bacteria in millions at 4:30 pm on the same day?
25. ### chemistry
Sodium nitrate reacts with copper (II) hydroxide. I know how to write it down: NaNo3+CuOH2----> NO REACTION The problem is that I don't understand why it's no reaction I know it's double replacement and we have to look at activity series of metal to figure out If we can ...
26. ### Physics
Which one of the following will form a standing wave: (1) two waves of the same amplitude and speed moving in the same direction (2) two waves of the same amplitude and speed moving in the opposite directions, (3) any two traveling waves can form standing wave if they have ...
27. ### chem
18.4 mg/L of toluene is in a groundwater plume. Compute how much of the following would be necessary for full mineralization of the toluene if each reaction were the only reaction responsible for accepting the electrons from toluene. You may ignore biomass growth. Express the ...
28. ### Math
1. Write the following ratio in simplest form: 32 min:36 min 8:9 8:36 32:9 128:144 2. Marie saved $51. On Wednesday, she spent$8 of her savings. What ratio represents the portion of her total savings that she still has left? 43:8 8:51 43:51 59:51 3. The price of 8.4 ounces of...
What is Gibbs free energy? A. The energy lost as heat to the surrounding molecules B. The usable energy released or absorbed by a reaction C. The energy in the form of kinetic energy in a system D. The energy contained within the bonds of molecules Do not give me a website to ...
30. ### math
A certain sum is deposited in a bank which gives compound intrest at certain rate. The intrest on the amount is rs.440. In the first 2 years and rs.728 in the first 3 years what is the rate of interest per annum.
31. ### cutural diversity
Should United States government policy favor certain kinds of immigrants? Should [citizenship] preference be given to the neediest applicants? The most talented? The most oppressed? The richest? Should applications from certain countries be given priority
32. ### Physics
Suppose under the certain conditions, the maximum force of friction that could act on a certain car was 3.35*10^3 N. The mass of the car is 857kg. What is the maximum possible centripetal acceleration of the car going around a bend? I know the equation for this is a=(4piR)/(T^...
33. ### chemistry
The isomerization reaction H3CNC(g) <=> H3CCN(g) was found to be first order in H3CNC. At 600 K, the concentration of H3CNC is 75% of its original value after 205 s. (a) What is the rate constant for this isomerization reaction at 600 K? (b) At what time will 33% of the ...
34. ### Chemistry
A chemical reaction occurring in a cylinder equipped with a movable piston produces 0.58 mol of a gaseous product. If the cylinder contained 0.11 mol of gas before the reaction and had an initial volume of 2.1 L, what was it's volume after the reaction. Am I doing this right ...
35. ### Chemistry/Equilibrium calculations
At 1200 K, the approximate temperature of vehicle exhaust gases, Kp for the reaction 2CO2(g)-> <- 2CO(g)+ O2(g) is about 1 E-13. Assuming that the exhaust gas (total pressure 1 bar) contains 0.2% CO, 12% CO2 and 3% O2 by volume, is the system at equilibrium with respect ...
36. ### Chemistry
When the oxide of generic metal M is heated at 25.0 °C, only a negligible amount of M is produced. O2M(s) ----> M(s) +O2(g) Delta G^o = 290.3 kJ/mol When this reaction is coupled to the conversion of graphite to carbon dioxide, it becomes spontaneous. What is the chemical ...
37. ### Algebra
Please check all of my answers and tell me if they are right. 1. What is the simplified form of the following expression? 7m^2 + 6.5n – 4n + 2.5m^2 – n (1 point) A. 9.5m^2 + 1.5n*** B. 4.5m^2 + 1.5n C. 1.5m^2 – 4.2n D. 9.5m^2 – 1.5n 2. Simplify the ...
A mass of 11.6g of phosphoric acid was produced from the reaction of 10g of P4O10 with 12g of water. What was the percent yield for this reaction?
39. ### chemistry
Explain why the rate of a simple chemical reaction such as NO(g)+1/2O2(g)=NO2(g) is likey to be most rapid at the beginning of the reaction.
40. ### chemistry
consider the reaction in which 410g of Ca(NO3)2 react with just the right amount of lithium metal in a single replacement reaction ?
41. ### Chemistry
The rate constant of a particular reaction is halved when the temperature is decreased from 323 K to 273 K. Calculate the activation energy for the reaction.
42. ### Chemistry
Consider the reaction when aqueous solutions of manganese(II) sulfate and barium bromide are combined. The net ionic equation for this reaction is:???
43. ### chemistry
Represent this reaction with a balance chemical equation. Solid aluminium hydride is formed by a combination reaction of its two elements.
44. ### chemistry
2 HI(g) H2(g) + I2(g) The reaction above has an initial concentration of 2.00 M HI. (No other species are initially present.) If 22.4% of the HI has reacted when equilibrium is established, calculate Kc for the reaction.
45. ### chemistry
The reaction x products follows Ist order kinetics in 40 minutes the concetration of x changes from 0.1M to 0.025M then the rate of reaction when conc of x is 0.01
46. ### CHEMISTRY
We have this reaction : HCO3- + OH- => CO3^-2 + H2O Specify the conjugate pairs and , which direction the reaction equilibrium shift ? and How ( as a rule ) ?
47. ### chemistry
Which term has the same numerical value for the forward reaction as it has for the reverse reaction, but with opposite sign? a. ^E ( delta E) b. Ea1 c. Ea’ d. Ea2
48. ### Science
If the concentration of AgCl drops from 1.000 M to 0.655 M in the first 30.0 s of the reaction, what is the average rate of reaction over this time interval?
49. ### science
2KClO3 - 2KCl + 3O2 is formed ! now which kind of reaction is this ? and how can i say it is that particular reaction took place here?
50. ### biology, molecular
Consider the following 45 base-pair (bp) DNA sequence: 1 10 20 30 40 | . | . | . | . | . 5’-CGCACCTGTGTTGATCACCTAGCCGATCCACGGTGGATCCAAGGC-3’ ||||||||||||||||||||||||||||||||||||||||||||| 3’-GCGTGGACACAACTAGTGGATCGGCTAGGTGCCACCTAGGTTCCG-5’ Tip: If you want to see a ...
51. ### Anatomy
I'm posting this again today hoping someone can clarify this for me- I have two question 1.In a dehydration synthesis reaction which of the following occur- a. hydrogen and oxygen are removed from the reactant b. water is added to the reactants c. water is broken down into ...
10. Which statement is most accurate regarding reinforcement theory? A. Attainable goals lead to high levels of motivation under certain conditions. B. Perceptions of unfairness are viewed as punishments and reduce productivity. C. Management style is based on the extent to ...
53. ### MATH
Need Answer to the Following Please: Using the 9 digits, 1, 2, 3, 4, 5, 6,7, 8 and 9 you can arrange four different digits to form a four-digit number that is NOT divisible by 7. The digits 1238 cannot be arranged to create a four-digit number that is divisible by 7. The ...
54. ### chemistry
predict the identity of the precipitate that forms. calcium nitrate, Ca(NO3)2, and sodium chloride , NaCl why does the answer come out as no reaction? couldnt it form to be CaCl2 or NaNO3 i thought it had something to do with the fact that Na and NO3 are both soluble so they ...
55. ### chemistry
Calculate ∆T for the reaction. Assume the initial temperature of both reactants is 25.0◦C. Calculate the volume of the reaction mixture. Calculate the mass of the reaction mixture. Assume the density of the mixture is 1.03 g mL^(-1). Calculate the heat transferred ...
56. ### chemistry
Methane (CH4) is the main component of natural gas. It is burned for fuel in a combustion reaction. The unbalanced combustion reaction for methane is shown below. CH4 + O2 CO2 + H2O + heat When the reaction is balanced, how many carbon dioxide molecules are produced for every ...
57. ### chemistry
A certain first order reaction has a rate constant of 5.68 x 10–3 s–1 at 139°C and a rate constant of 8.99 x 10–1 s–1 at a temperature of 215°C. Ea = 111 kJ mol– q1) What is the value of the rate constant at 275°C? q2) At what temperature does the rate ...
58. ### chemistry
In the REDOX reaction in basic solution Zn-->Zn(OH)4+H2 I know that the first half reaction is 4H20+Zn-->Zn(OH)4 +4H+e but the second one is either 2e+2H2O--->H2+2OH (WHICH IS THE ANSWER GIVEN ON THE WORKSHEET) OR 2e+2H-->H2 I think both leads to the same answer= ...
59. ### alegebra
1. What is the factored form of 4x 2 + 12x + 5? (1 point) (2x + 4)(2x + 3) (4x + 5)(x + 1) (2x + 1)(2x + 5) (4x + 1)(x + 5) 2. What is the factored form of 2x 2 + x – 3? (1 point) (2x + 3)(x – 1) (2x + 1)(x – 3) (2x – 3)(x + 1) (2x – 1)(x + 3) 3. The area of a ...
60. ### chemisrty
Consider the reaction between sodium metal and chlorine gas to form sodium chloride (table salt). 2Na(s) + Cl2(g) 2NaCl(s) If 3.6 moles of chlorine react with sufficient sodium, how many grams of sodium chloride will be formed?
61. ### chemisrty
I have no idea how to do this activity..can someone please help!? In this activity you will investigate how chemical reactions are limited by the amount of reactants present. You will be using paper clips to represent atoms. You can describe the quantities of reactants and ...
62. ### chemistry
(1) How many grams of copper, cu are there in 2.55 mol of cu? (2) Determine the percent composition of each of the following (a) ethylalcohol CH3CH2OH. (b) vitamin C, ascorbic acid C6H8O6. (c) Coteine, C18H21NO3. (3) A 27.5g sample of a compound containing carbon and hydrogen ...
63. ### General
I just have a question for the people who help the students. Do you guys pick certain people to answer or does it take more time to figure out or answer certain problems over others? Or does it just have to do with the fact that some tutors and volunteers only teach certain ...
64. ### marketing
You have been hired as a marketing consultant to Johannesburg Burger Supply, Inc., and you wish to come up with a unit price for its hamburgers in order to maximize its weekly revenue. To make life as simple as possible, you assume that the demand equation for Johannesburg ...
NOTE: before you answer this question, please be aware that there aren't meant to be any full stops after the '2' or the 'i=0'. Also the underscores '-' represent how the following number is meant to be a lower case number. The ^ represents root numbers. PLEASE HELP ME Expand ...
66. ### chemistry
1) A Galvanic cell runs on the following reaction: Co (s) + Cu2+ (aq) → Co2+ (aq) + Cu (s) Draw a diagram for this Galvanic cell, labeling the electron flow, the anode and cathode, and the positive and negative sides of the Galvanic cell. 2) A Galvanic cell runs on the ...
67. ### math
Find the standard form of the equation of the circle having the following properties: Center at the origin Containing the point (-4,1)
68. ### Chemistry
Which of the following elements does not form bonds easily because it has a full outer shell? a. Aluminum b. Carbon c. Helium d. Calcium
69. ### Chemistry
which of the following elements does not form bonds easily because it has a full outer shell? a. Aluminum b. Carbon c. Helium d. Calcium
70. ### Chemistry
Which of the following elements does not form bonds easily because it has a full outer shell? A. Aluminum B. Carbon C. Helium D. Calcium
71. ### Math
Simplify the following expression, and rewrite it in an equivalent form with positive exponents. 24x^3y^-3/72x^-5y^-1 My answer was going to be: 24/72*x^3/x^-5=y^=3/y^-1
72. ### Math
Simplify the following expression, and rewrite it in an equivalent form with positive exponents. -15x^4y/17x^2y7 A. -3x^2/y^6 B.3x^2/y^6 C.-3x^6y^8 D. 52x^2/y^6
73. ### math
what would the following equation be in vertex form. y=-9x squared+72x+81. I have tried for hours and i cannot figure it out
74. ### chem
Which ionic compound is expected to form from combining the following pairs of elements? yttrium and oxygen i put YO but its wrong
75. ### math
How do i solve algebraic methods in addition/ subtraction problems such as rewriting the following equations in standard form. 5x + 4 = -9y
76. ### Calc
Find the derivative of the following function using the appropriate form of the Fundamental Theorem of Calculus. intergral s^2/(1+3s^4) ds from sqrtx to 1 F'(x)=?
77. ### Algebra
Give the list of elements and write in the form with condition the following sets: the set of natural numbers that are multiples of 3
78. ### maths
Calculate the lengths of the straight lines joining the following pairs of points, leaving your answers in surd form: (a) A(1,-5), B(6,10); (b) A(4,-3), B(7,6).
79. ### maths
Calculate the lengths of the straight lines joining the following pairs of points, leaving your answers in surd form: (a) A(4,-3), B(7,6).
80. ### URGENT CHEM
Write a balanced equation for the following reduction-oxidation reaction. SO3(2-) + MnO4(-) --- SO4(2–) + Mn(2+) numbers in parenthesis means the charge
81. ### chemistry
Which of the following compounds would result in an exchange reaction if mixed in an aqueous solution with (NH4)2S? a. SrCl2 b. Li2C2O4 c. CH3CO2Ag d. MnSO4 I don't know what to look for.
82. ### Chemistry
When calcium oxelate is treated with an acidic solution of potassium permanganate, the following reaction occurs: CaC2O4(s) + MnO4-(aq) = Ca+2(aq) + CO2(g) + Mn+2(aq) Balance the equation.
83. ### 11th grade
If the volume of water produced during the reaction doubled, what would happen to the ratio of hydrogen to oxygen in the following equation: 2 H2+ O2 --> 2 H2O
84. ### Chemistry
I need to know how to right a net ionic equation for the following chemical reaction: 5 H2C2O4 + 3 H2SO4 + 2 KMnO4 = K2SO4 + 2 MnSO4 + 8 H2O + 10 CO2
85. ### chem
IF a reation generally takes 40 seconds to occur at 45 degrees C, how long should the reaction take a t the following temperatures? A. 35C B. 55C C. 65C
86. ### Chemistry
Identify the atom that increases in oxidation number in the following redox reaction. 2MnO2 + 2K2CO3 + O2 2KMnO4 + 2CO2 *A.) Mn B.) O C.) K D.) C Please show work Thank you.
87. ### physical science
Help please. Aisha drops an antacid tablet in water and times how long it takes to dissolve. Which of the following will decrease the reaction rate?
88. ### Chemistry
What happens if sodium acetate (CH3COONa) is added to the following reaction? CH3COOH H+ + CH3COO- a) [H+] increases b) [H+] decreases c) a precipitate is formed d) [H+] does not change
89. ### english
Which one of the following is a correct example of the singular possessive case? A. king's rights B. audiences' reaction C. women's club D. who's job I think it's C.
90. ### chemistry
What would Qc be if the volume of the container decreased by a factor of 1.774 for the following reaction? PH3(g) + BCl3(g) → PH3BCl3(s) KC = 534.8 at 353.0oC
91. ### Science(chemistry)
Give one word/term for the following descriptions during this type of reaction,the appearance,colour or shape of a substance is changed
92. ### chemistry
Use Hess's Law and a table of heats of formation to determine the enthalpy for the following reaction. 3SO2(g) + 3H2O(g) + 3/2O2(g) --->3H2SO4(l) ΔH = ?
93. ### chemistry
Use Hess's Law and a table of heats of formation to determine the enthalpy for the following reaction. 3SO2(g) + 3H2O(g) + 3/2O2(g) --->3H2SO4(l) ΔH = ?
94. ### Biology
Which one of the following reactions occurs in the cristae of the mitochondria? A. The prep reaction B. Glycolysis C. The electron transport chain D. The citric acid cycle B?
95. ### chemistry
Could you please tell me the objective, theory and mechanism,result, precautions and applications of following experiment : reaction of 4-aminotoluene and o-vanillin(preparation of azomethine)
96. ### chemistry
What mass of carbon dioxide can be produced from 4 moles of sodium bicarbonate according to the following unbalanced reaction, NaHCO3 --> Na2CO3 + CO2 + H2O?
97. ### CHEM 1411
What mass of Gallium chloride is formed by the reaction of 2.6 L of a 1.44 M solution of HCl according to the following equation: 2Ga+ 6HCl= 2GaCl3+3H2
98. ### chem 1411
what mass of H2 is produced by the reaction of 118.5 mL of a 0.8775M solution of H3PO4 according to the following equation: 2Cr+2H3PO4= 3H2+2CrPO4
99. ### chemistry
Determine the order of reducing agents (strongest to weakest). A) Mn(s) > Zn(s) > Cr(s) > Fe(s) B) Fe(s) > Cr(s) > Zn(s) > Mn(s) C) Mn(s) > Fe(s) > Cr(s) > Zn(s) D) Zn(s) > Mn(s) > Cr(s) > Fe(s) I know the half reaction for the following: - ...
100. ### Chemistry
Given the following reaction for photosynthesis, 6CO2 + 6H2O --> C6H12O6 + 6O2, how many liters of O2 can be produced from 100 g of water at standard temperature and pressure? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.823166012763977, "perplexity": 3043.298775878611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945584.75/warc/CC-MAIN-20180422100104-20180422120104-00431.warc.gz"} |
https://en.wikipedia.org/wiki/Maximum_likelihood_principle | # Maximum likelihood estimation
(Redirected from Maximum likelihood principle)
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model given observations, by finding the parameter values that maximize the likelihood of making the observations given the parameters. MLE can be seen as a special case of the maximum a posteriori estimation (MAP) that assumes a uniform prior distribution of the parameters, or as a variant of the MAP that ignores the prior and which therefore is unregularized.
The method of maximum likelihood corresponds to many well-known estimation methods in statistics. For example, one may be interested in the heights of adult female penguins, but is unable to measure the height of every single penguin in a population due to cost or time constraints. Assuming that the heights are normally distributed with some unknown mean and variance, the mean and variance can be estimated with MLE while only knowing the heights of some sample of the overall population. MLE would accomplish this by taking the mean and variance as parameters and finding particular parametric values that make the observed results the most probable given the model.
In general, for a fixed set of data and underlying statistical model, the method of maximum likelihood selects the set of values of the model parameters that maximizes the likelihood function. Intuitively, this maximizes the "agreement" of the selected model with the observed data, and for discrete random variables it indeed maximizes the probability of the observed data under the resulting distribution. Maximum likelihood estimation gives a unified approach to estimation, which is well-defined in the case of the normal distribution and many other problems.
## Principles
Suppose there is a sample x1, x2, …, xn of n independent and identically distributed observations, coming from a distribution with an unknown probability density function f0(·). It is however surmised that the function f0 belongs to a certain family of distributions { f(·| θ), θ ∈ Θ } (where θ is a vector of parameters for this family), called the parametric model, so that f0 = f(·| θ0). The value θ0 is unknown and is referred to as the true value of the parameter vector. It is desirable to find an estimator ${\displaystyle \scriptstyle {\hat {\theta }}}$ which would be as close to the true value θ0 as possible. Either or both the observed variables xi and the parameter θ can be vectors.
To use the method of maximum likelihood, one first specifies the joint density function for all observations. For an independent and identically distributed sample, this joint density function is
${\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )=f(x_{1}\mid \theta )\times f(x_{2}\mid \theta )\times \cdots \times f(x_{n}\mid \theta ).}$
Now we look at this function from a different perspective by considering the observed values x1, x2, …, xn to be fixed "parameters" of this function, whereas θ will be the function's variable and allowed to vary freely; this same function will be called the likelihood:
${\displaystyle {\mathcal {L}}(\theta \,;\,x_{1},\ldots ,x_{n})=f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )=\prod _{i=1}^{n}f(x_{i}\mid \theta ).}$
Note that " ${\displaystyle ;}$ " denotes a separation between the two categories of input arguments: the parameters ${\displaystyle \theta }$ and the observations ${\displaystyle x_{1},\ldots ,x_{n}}$.
In practice, it is often more convenient when working with the natural logarithm of the likelihood function, called the log-likelihood:
${\displaystyle \ln {\mathcal {L}}(\theta \,;\,x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),}$
or the average log-likelihood:
${\displaystyle {\hat {\ell }}={\frac {1}{n}}\ln {\mathcal {L}}.}$
The hat over indicates that it is akin to some estimator. Indeed, ${\displaystyle \scriptstyle {\hat {\ell }}}$ estimates the expected log-likelihood of a single observation in the model.
The method of maximum likelihood estimates θ0 by finding a value of θ that maximizes ${\displaystyle {\hat {\ell }}(\theta ;x)}$. This method of estimation defines a maximum likelihood estimator (MLE) of θ0:
${\displaystyle \{{\hat {\theta }}_{\mathrm {mle} }\}\subseteq \{{\underset {\theta \in \Theta }{\operatorname {arg\,max} }}\ {\hat {\ell }}(\theta \,;\,x_{1},\ldots ,x_{n})\},}$
if a maximum exists. An MLE estimate is the same regardless of whether we maximize the likelihood or the log-likelihood function, since log is a monotonically increasing function.
For many models, a maximum likelihood estimator can be found as an explicit function of the observed data x1, ..., xn. For many other models, however, no closed-form solution to the maximization problem is known or available, and an MLE has to be found numerically using optimization methods. For some problems, there may be multiple estimates that maximize the likelihood. For other problems, no maximum likelihood estimate exists – either the log-likelihood function increases without ever reaching a supremum value, or the supremum does exist but is outside the bounds of ${\displaystyle \Theta }$, the set of acceptable parameter values.
In the exposition above, it is assumed that the data are independent and identically distributed. The method can be applied however to a broader setting, as long as it is possible to write the joint density function f(x1, …, xn | θ), and its parameter θ has a finite dimension which does not depend on the sample size n. In a simpler extension, an allowance can be made for data heterogeneity, so that the joint density is equal to f1(x1 | θ) · f2(x2|θ) · ··· · fn(xn | θ). Put another way, we are now assuming that each observation xi comes from a random variable that has its own distribution function fi. In the more complicated case of time series models, the independence assumption may have to be dropped as well.
A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter θ that maximizes the probability of θ given the data, given by Bayes' theorem:
${\displaystyle P(\theta \mid x_{1},x_{2},\ldots ,x_{n})={\frac {f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )P(\theta )}{P(x_{1},x_{2},\ldots ,x_{n})}}}$
where ${\displaystyle P(\theta )}$ is the prior distribution for the parameter θ and where ${\displaystyle P(x_{1},x_{2},\ldots ,x_{n})}$ is the probability of the data averaged over all parameters. Since the denominator is independent of θ, the Bayesian estimator is obtained by maximizing ${\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )P(\theta )}$ with respect to θ. If we further assume that the prior ${\displaystyle P(\theta )}$ is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function ${\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )}$. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution ${\displaystyle P(\theta )}$.
## Properties
A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of θ, the objective function (c.f. loss function)
${\displaystyle {\hat {\ell }}(\theta \,;\,x)={\frac {1}{n}}\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),}$
this being the sample analogue of the expected log-likelihood ${\displaystyle \ell (\theta )=\operatorname {E} [\,\ln f(x_{i}\mid \theta )\,]}$, where this expectation is taken with respect to the true density ${\displaystyle f(\cdot \mid \theta _{0})}$.
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.[1] However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:
• Consistency: the sequence of MLEs converges in probability to the value being estimated.
• Asymptotic normality: as the sample size increases, the distribution of the MLE tends to the Gaussian distribution with mean ${\displaystyle \theta }$ and covariance matrix equal to the inverse of the Fisher information matrix.
• Efficiency, i.e., it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound).
• Second-order efficiency after correction for bias.
### Consistency
Under the conditions outlined below, the maximum likelihood estimator is consistent. The consistency means that having a sufficiently large number of observations n, it is possible to find the value of θ0 with arbitrary precision. In mathematical terms this means that as n goes to infinity the estimator ${\displaystyle \scriptstyle {\hat {\theta }}}$ converges in probability to its true value:
${\displaystyle {\begin{matrix}{}\\{\hat {\theta }}_{\mathrm {mle} }\ {\xrightarrow {p}}\ \theta _{0}.\\{}\end{matrix}}}$
(1)
Under slightly stronger conditions, the estimator converges almost surely (or strongly) to:
${\displaystyle {\begin{matrix}{}\\{\hat {\theta }}_{\mathrm {mle} }\ {\xrightarrow {\text{a.s.}}}\ \theta _{0}.\\{}\end{matrix}}}$
(2)
To establish consistency, the following conditions are sufficient:[2]
1. Identification of the model:
${\displaystyle \theta \neq \theta _{0}\quad \Leftrightarrow \quad f(\cdot \mid \theta )\neq f(\cdot \mid \theta _{0}).}$
In other words, different parameter values θ correspond to different distributions within the model. If this condition did not hold, there would be some value θ1 such that θ0 and θ1 generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data—these parameters would have been observationally equivalent.
The identification condition is absolutely necessary for the ML estimator to be consistent. When this condition holds, the limiting likelihood function (θ|·) has unique global maximum at θ0.
2. Compactness: the parameter space Θ of the model is compact.
The identification condition establishes that the log-likelihood has a unique global maximum. Compactness implies that the likelihood cannot approach the maximum value arbitrarily close at some other point (as demonstrated for example in the picture on the right).
Compactness is only a sufficient condition and not a necessary condition. Compactness can be replaced by some other conditions, such as:
• both concavity of the log-likelihood function and compactness of some (nonempty) upper level sets of the log-likelihood function, or
• existence of a compact neighborhood N of θ0 such that outside of N the log-likelihood function is less than the maximum by at least some ε > 0.
3. Continuity: the function ln f(x|θ) is continuous in θ for almost all values of x:
${\displaystyle \Pr \!{\big [}\;\ln f(x\mid \theta )\;\in \;\mathbb {C} ^{0}(\Theta )\;{\big ]}=1.}$
The continuity here can be replaced with a slightly weaker condition of upper semi-continuity.
4. Dominance: there exists D(x) integrable with respect to the distribution f(x|θ0) such that
${\displaystyle {\big |}\ln f(x\mid \theta ){\big |}<D(x)\quad {\text{for all }}\theta \in \Theta .}$
By the uniform law of large numbers, the dominance condition together with continuity establish the uniform convergence in probability of the log-likelihood:
${\displaystyle \sup _{\theta \in \Theta }{\big |}{\hat {\ell }}(\theta \mid x)-\ell (\theta )\,{\big |}\ {\xrightarrow {p}}\ 0.}$
The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case the uniform convergence in probability can be checked by showing that the sequence ${\displaystyle \scriptstyle {\hat {\ell }}(\theta \mid x)}$ is stochastically equicontinuous.
If one wants to demonstrate that the ML estimator ${\displaystyle \scriptstyle {\hat {\theta }}}$ converges to θ0 almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:
${\displaystyle \sup _{\theta \in \Theta }{\big \|}\;{\hat {\ell }}(x\mid \theta )-\ell (\theta )\;{\big \|}\ {\xrightarrow {\text{a.s.}}}\ 0.}$
### Asymptotic normality
In a wide range of situations, maximum likelihood parameter estimates exhibit asymptotic normality – that is, they are equal to the true parameters plus a random error that is approximately normal (given sufficient data), and the error's variance decays as 1/n. For this property to hold, it is necessary that the estimator does not suffer from the following issues:
#### Estimate on boundary
Sometimes the maximum likelihood estimate lies on the boundary of the set of possible parameters, or (if the boundary is not, strictly speaking, allowed) the likelihood gets larger and larger as the parameter approaches the boundary. Standard asymptotic theory needs the assumption that the true parameter value lies away from the boundary. If we have enough data, the maximum likelihood estimate will keep away from the boundary too. But with smaller samples, the estimate can lie on the boundary. In such cases, the asymptotic theory clearly does not give a practically useful approximation. Examples here would be variance-component models, where each component of variance, σ2, must satisfy the constraint σ2 ≥ 0.
#### Data boundary parameter-dependent
For the theory to apply in a simple way, the set of data values which has positive probability (or positive probability density) should not depend on the unknown parameter. A simple example where such parameter-dependence does hold is the case of estimating θ from a set of independent identically distributed observations when the common distribution is uniform on the range (0,θ). For estimation purposes the relevant range of θ is such that θ cannot be less than the largest observation. Because the interval (0,θ) does not include its end-point (and so is not compact), there exists no maximum for the likelihood function: for any admissible estimate of θ (i.e., any value exceeding the largest observation), there exists a smaller admissible estimate that has greater likelihood. In contrast, the interval [0,θ] includes the end-point θ and is compact, in which case the maximum likelihood estimator exists. However, in this case, the maximum likelihood estimator is biased. Asymptotically, this maximum likelihood estimator is not normally distributed.[3]
#### Nuisance parameters
For maximum likelihood estimations, a model may have a number of nuisance parameters. For the asymptotic behaviour outlined to hold, the number of nuisance parameters should not increase with the number of observations (the sample size). A well-known example of this case is where observations occur as pairs, where the observations in each pair have a different (unknown) mean but otherwise the observations are independent and normally distributed with a common variance. Here for 2N observations, there are N + 1 parameters. It is well known that the maximum likelihood estimate for the variance does not converge to the true value of the variance.
#### Increasing information
For the asymptotics to hold in cases where the assumption of independent identically distributed observations does not hold, a basic requirement is that the amount of information in the data increases indefinitely as the sample size increases. Such a requirement may not be met if either there is too much dependence in the data (for example, if new observations are essentially identical to existing observations), or if new independent observations are subject to an increasing observation error.
Some regularity conditions which ensure this behavior are:
1. The first and second derivatives of the log-likelihood function exist (are “well defined”).
2. The Fisher information matrix is non-singular.
3. The Fisher information matrix is continuous as a function of the parameters, θ.
4. The maximum likelihood estimator is consistent.
Suppose that conditions for consistency of maximum likelihood estimator are satisfied, and[4]
1. θ0 ∈ interior(Θ);
2. f(x | θ) > 0 and is twice continuously differentiable in Θ in some neighborhood N of θ0;
3. ∫ supθN||∇θf(x | θ)||dx < ∞, and ∫ supθN||∇θθf(x | θ)||dx < ∞;
4. I = E[∇θln f(x | θ0) ∇θln f(x | θ0)′] exists and is nonsingular;
5. E[supθN||∇θθln f(x | θ)||] < ∞.
Then the maximum likelihood estimator has asymptotically normal distribution:
${\displaystyle {\sqrt {n}}{\big (}{\hat {\theta }}_{\mathrm {mle} }-\theta _{0}{\big )}\ {\xrightarrow {d}}\ {\mathcal {N}}(0,\,I^{-1}).}$
##### Sketch of proof
Since the log-likelihood function is differentiable, and ${\displaystyle \theta _{0}}$ lies in the interior of the parameter set ${\displaystyle \Theta }$, in the maximum the first-order condition will be satisfied:
${\displaystyle \nabla _{\!\theta }\,{\hat {\ell }}({\hat {\theta }}\mid x)={\frac {1}{n}}\sum _{i=1}^{n}\nabla _{\!\theta }\ln f(x_{i}\mid {\hat {\theta }})=0.}$
When the log-likelihood is twice differentiable, this expression can be expanded into a Taylor series around the point ${\displaystyle \theta =\theta _{0}}$:
${\displaystyle 0={\frac {1}{n}}\sum _{i=1}^{n}\nabla _{\!\theta }\ln f(x_{i}\mid \theta _{0})+{\Bigg [}\,{\frac {1}{n}}\sum _{i=1}^{n}\nabla _{\!\theta \theta }\ln f(x_{i}\mid {\tilde {\theta }})\,{\Bigg ]}({\hat {\theta }}-\theta _{0}),}$
where ${\displaystyle {\tilde {\theta }}}$ is some point intermediate between ${\displaystyle \theta _{0}}$ and ${\displaystyle {\hat {\theta }}}$. From this expression we can derive that
${\displaystyle {\sqrt {n}}({\hat {\theta }}-\theta _{0})={\Bigg [}\,{-{\frac {1}{n}}\sum _{i=1}^{n}\nabla _{\!\theta \theta }\ln f(x_{i}\mid {\tilde {\theta }})}\,{\Bigg ]}^{-1}{\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}\nabla _{\!\theta }\ln f(x_{i}\mid \theta _{0})}$
Here the expression in square brackets converges in probability to ${\displaystyle H=\mathbb {E} \left[-\nabla _{\theta \theta }\ln f(x|\theta _{0})\right]}$ by the law of large numbers. The continuous mapping theorem ensures that the inverse of this expression also converges in probability, to ${\displaystyle H^{-1}}$. The second sum, by the central limit theorem, converges in distribution to a multivariate normal with mean zero and variance matrix equal to the Fisher information ${\displaystyle I}$. Thus, applying Slutsky's theorem to the whole expression, we obtain that
${\displaystyle {\sqrt {n}}({\hat {\theta }}-\theta _{0})\ \ {\xrightarrow {d}}\ \ {\mathcal {N}}{\big (}0,\ H^{-1}IH^{-1}{\big )}.}$
Finally, the information equality guarantees that when the model is correctly specified, matrix ${\displaystyle H}$ will be equal to the Fisher information ${\displaystyle I}$, so that the variance expression simplifies to just ${\displaystyle I^{-1}}$.
### Functional invariance
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, if ${\displaystyle {\widehat {\theta }}}$ is the MLE for θ, and if g(θ) is any transformation of θ, then the MLE for α = g(θ) is by definition
${\displaystyle {\widehat {\alpha }}=g(\,{\widehat {\theta }}\,).\,}$
It maximizes the so-called profile likelihood:
${\displaystyle {\bar {L}}(\alpha )=\sup _{\theta :\alpha =g(\theta )}L(\theta ).\,}$
The MLE is also invariant with respect to certain transformations of the data. If Y = g(X) where g is one to one and does not depend on the parameters to be estimated, then the density functions satisfy
${\displaystyle f_{Y}(y)={\frac {f_{X}(x)}{|g'(x)|}}}$
and hence the likelihood functions for X and Y differ only by a factor that does not depend on the model parameters.
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data.
### Higher-order properties
The standard asymptotics tells that the maximum likelihood estimator is √n-consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound:
${\displaystyle {\sqrt {n}}({\hat {\theta }}_{\text{mle}}-\theta _{0})\ \ {\xrightarrow {d}}\ \ {\mathcal {N}}(0,\ I^{-1}),}$
where I is the Fisher information matrix:
${\displaystyle I_{jk}=\operatorname {E} _{X}{\bigg [}\;{-{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}\,\partial \theta _{k}}}}\;{\bigg ]}.}$
In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order n−1/2. However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that θmle has bias of order n−1. This bias is equal to (componentwise)[5]
${\displaystyle b_{s}\equiv \operatorname {E} [({\hat {\theta }}_{\mathrm {mle} }-\theta _{0})_{s}]={\frac {1}{n}}\cdot I^{si}I^{jk}{\big (}{\tfrac {1}{2}}K_{ijk}+J_{j,ik}{\big )}}$
where Einstein's summation convention over the repeating indices has been adopted; Ijk denotes the j,k-th component of the inverse Fisher information matrix I−1, and
${\displaystyle {\tfrac {1}{2}}K_{ijk}+J_{j,ik}=\operatorname {E} _{X}{\bigg [}\;{\frac {1}{2}}{\frac {\partial ^{3}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\,\partial \theta _{j}\,\partial \theta _{k}}}+{\frac {\partial \ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}}}{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\,\partial \theta _{k}}}\;{\bigg ]}.}$
Using these formulas it is possible to estimate the second-order bias of the maximum likelihood estimator, and correct for that bias by subtracting it:
${\displaystyle {\hat {\theta }}_{\mathrm {mle} }^{*}={\hat {\theta }}_{\mathrm {mle} }-{\hat {b}}.}$
This estimator is unbiased up to the terms of order n−1, and is called the bias-corrected maximum likelihood estimator.
This bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order n−2. It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, as was shown by Kano (1996), the maximum likelihood estimator is not third-order efficient.
## Examples
### Discrete uniform distribution
Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random (see uniform distribution); thus, the sample size is 1. If n is unknown, then the maximum likelihood estimator ${\displaystyle {\hat {n}}}$ of n is the number m on the drawn ticket. (The likelihood is 0 for n < m, 1/n for n ≥ m, and this is greatest when n = m. Note that the maximum likelihood estimate of n occurs at the lower extreme of possible values {m, m + 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) The expected value of the number m on the drawn ticket, and therefore the expected value of ${\displaystyle {\hat {n}}}$, is (n + 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator for n will systematically underestimate n by (n − 1)/2.
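A short calculation makes the bias explicit, using nothing beyond the uniform distribution of the drawn ticket:

$$\operatorname{E}[{\hat n}] = \operatorname{E}[m] = \sum_{m=1}^{n} m \cdot \frac{1}{n} = \frac{n+1}{2}, \qquad \operatorname{E}[{\hat n}] - n = \frac{n+1}{2} - n = -\frac{n-1}{2}.$$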
### Discrete distribution, finite parameter space
Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a HEAD p. The goal then becomes to determine p.
Suppose the coin is tossed 80 times: i.e., the sample might be something like x1 = H, x2 = T, …, x80 = T, and the count of the number of HEADS "H" is observed.
The probability of tossing TAILS is 1 − p (so here p is θ above). Suppose the outcome is 49 HEADS and 31 TAILS, and suppose the coin was taken from a box containing three coins: one which gives HEADS with probability p = 1/3, one which gives HEADS with probability p = 1/2 and another which gives HEADS with probability p = 2/3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80, number of successes equal to 49 but different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:
{\displaystyle {\begin{aligned}\Pr(\mathrm {H} =49\mid p=1/3)&={\binom {80}{49}}(1/3)^{49}(1-1/3)^{31}\approx 0.000,\\[6pt]\Pr(\mathrm {H} =49\mid p=1/2)&={\binom {80}{49}}(1/2)^{49}(1-1/2)^{31}\approx 0.012,\\[6pt]\Pr(\mathrm {H} =49\mid p=2/3)&={\binom {80}{49}}(2/3)^{49}(1-2/3)^{31}\approx 0.054.\end{aligned}}}
The likelihood is maximized when p = 2/3, and so this is the maximum likelihood estimate for p.
### Discrete distribution, continuous parameter space
Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1. The likelihood function to be maximised is
${\displaystyle L(p)=f_{D}(\mathrm {H} =49\mid p)={\binom {80}{49}}p^{49}(1-p)^{31},}$
and the maximisation is over all possible values 0 ≤ p ≤ 1.
[Figure: likelihood function for the proportion parameter of a binomial process (n = 10)]
One way to maximize this function is by differentiating with respect to p and setting to zero:
{\displaystyle {\begin{aligned}{0}&{}={\frac {\partial }{\partial p}}\left({\binom {80}{49}}p^{49}(1-p)^{31}\right),\\[8pt]{0}&{}=49p^{48}(1-p)^{31}-31p^{49}(1-p)^{30}\\[8pt]&{}=p^{48}(1-p)^{30}\left[49(1-p)-31p\right]\\[8pt]&{}=p^{48}(1-p)^{30}\left[49-80p\right]\end{aligned}}}
which has solutions p = 0, p = 1, and p = 49/80. The solution which maximizes the likelihood is clearly p = 49/80 (since p = 0 and p = 1 result in a likelihood of zero). Thus the maximum likelihood estimator for p is 49/80.
This result is easily generalized by substituting a letter such as t in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields the maximum likelihood estimator t / n for any sequence of n Bernoulli trials resulting in t 'successes'.
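A quick way to see this is to differentiate the log-likelihood directly (a sketch, assuming 0 < p < 1):

$$\frac{d}{dp}\Big[t\ln p + (n-t)\ln(1-p)\Big] = \frac{t}{p} - \frac{n-t}{1-p} = 0 \;\Longrightarrow\; t(1-p) = (n-t)\,p \;\Longrightarrow\; \hat{p} = \frac{t}{n}.$$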
### Continuous distribution, continuous parameter space
For the normal distribution ${\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}$ which has probability density function
${\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp {\left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)},}$
the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is
${\displaystyle f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right),}$
or more conveniently:
${\displaystyle f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}}{2\sigma ^{2}}}\right),}$
where ${\displaystyle {\bar {x}}}$ is the sample mean.
This family of distributions has two parameters: θ = (μσ), so we maximize the likelihood, ${\displaystyle {\mathcal {L}}(\mu ,\sigma )=f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma )}$, over both parameters simultaneously, or if possible, individually.
Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily a strictly increasing function of the parameters). This log-likelihood can be written as follows:
${\displaystyle \log({\mathcal {L}}(\mu ,\sigma ))=(-n/2)\log(2\pi \sigma ^{2})-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}}$
(Note: the log-likelihood is closely related to information entropy and Fisher information.)
We now compute the derivatives of this log likelihood as follows.
{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \mu }}\log({\mathcal {L}}(\mu ,\sigma ))=0-{\frac {-2n({\bar {x}}-\mu )}{2\sigma ^{2}}}.\end{aligned}}}
This is solved by
${\displaystyle {\hat {\mu }}={\bar {x}}=\sum _{i=1}^{n}{\frac {x_{i}}{n}}.}$
This is indeed the maximum of the function since it is the only turning point in μ and the second derivative is strictly less than zero. Its expectation value is equal to the parameter μ of the given distribution,
${\displaystyle E\left[{\widehat {\mu }}\right]=\mu ,\,}$
which means that the maximum likelihood estimator ${\displaystyle {\widehat {\mu }}}$ is unbiased.
Similarly we differentiate the log likelihood with respect to σ and equate to zero:
{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \sigma }}\log \left(\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}}{2\sigma ^{2}}}\right)\right)\\[6pt]&={\frac {\partial }{\partial \sigma }}\left({\frac {n}{2}}\log \left({\frac {1}{2\pi \sigma ^{2}}}\right)-{\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}}{2\sigma ^{2}}}\right)\\[6pt]&=-{\frac {n}{\sigma }}+{\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}+n({\bar {x}}-\mu )^{2}}{\sigma ^{3}}}\end{aligned}}}
which is solved by
${\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.}$
Inserting the estimate ${\displaystyle \mu ={\widehat {\mu }}}$ we obtain
${\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}x_{i}x_{j}.}$
To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error) ${\displaystyle \delta _{i}\equiv \mu -x_{i}}$. Expressing the estimate in these variables yields
${\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(\mu -\delta _{i})^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}(\mu -\delta _{i})(\mu -\delta _{j}).}$
Simplifying the expression above, utilizing the facts that ${\displaystyle E\left[\delta _{i}\right]=0}$ and ${\displaystyle E[\delta _{i}^{2}]=\sigma ^{2}}$, allows us to obtain
${\displaystyle E\left[{\widehat {\sigma }}^{2}\right]={\frac {n-1}{n}}\sigma ^{2}.}$
This means that the estimator ${\displaystyle {\widehat {\sigma }}^{2}}$ is biased (and so is ${\displaystyle {\widehat {\sigma }}}$). However, ${\displaystyle {\widehat {\sigma }}^{2}}$ is consistent.
Formally we say that the maximum likelihood estimator for ${\displaystyle \theta =(\mu ,\sigma ^{2})}$ is:
${\displaystyle {\widehat {\theta }}=\left({\widehat {\mu }},{\widehat {\sigma }}^{2}\right).}$
In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
The normal log likelihood at its maximum takes a particularly simple form:
${\displaystyle \log({\mathcal {L}}({\hat {\mu }},{\hat {\sigma }}))={\frac {-n}{2}}(\log(2\pi {\hat {\sigma }}^{2})+1)}$
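This follows by substituting the estimate ${\displaystyle {\widehat {\sigma }}^{2}}$ back into the log-likelihood, since the residual sum of squares equals $n\hat{\sigma}^{2}$:

$$\log(\mathcal{L}(\hat{\mu},\hat{\sigma})) = -\frac{n}{2}\log(2\pi\hat{\sigma}^{2}) - \frac{1}{2\hat{\sigma}^{2}}\sum_{i=1}^{n}(x_i-\bar{x})^2 = -\frac{n}{2}\log(2\pi\hat{\sigma}^{2}) - \frac{n}{2} = \frac{-n}{2}\big(\log(2\pi\hat{\sigma}^{2})+1\big).$$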
This maximum log likelihood can be shown to be the same for more general least squares, even for non-linear least squares. This is often used in determining likelihood-based approximate confidence intervals and confidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
## Non-independent variables
It may be the case that variables are correlated, that is, not independent. Two random variables X and Y are independent if and only if their joint probability density function is the product of the individual probability density functions, i.e.
${\displaystyle f(x,y)=f(x)f(y)\,}$
Suppose one constructs an order-n Gaussian vector out of random variables ${\displaystyle (x_{1},\ldots ,x_{n})\,}$, where each variable has means given by ${\displaystyle (\mu _{1},\ldots ,\mu _{n})\,}$. Furthermore, let the covariance matrix be denoted by ${\displaystyle \Sigma }$.
The joint probability density function of these n random variables is then given by:
${\displaystyle f(x_{1},\ldots ,x_{n})={\frac {1}{(2\pi )^{n/2}{\sqrt {{\text{det}}(\Sigma )}}}}\exp \left(-{\frac {1}{2}}\left[x_{1}-\mu _{1},\ldots ,x_{n}-\mu _{n}\right]\Sigma ^{-1}\left[x_{1}-\mu _{1},\ldots ,x_{n}-\mu _{n}\right]^{T}\right)}$
In the two variable case, the joint probability density function is given by:
${\displaystyle f(x,y)={\frac {1}{2\pi \sigma _{x}\sigma _{y}{\sqrt {1-\rho ^{2}}}}}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {(x-\mu _{x})^{2}}{\sigma _{x}^{2}}}-{\frac {2\rho (x-\mu _{x})(y-\mu _{y})}{\sigma _{x}\sigma _{y}}}+{\frac {(y-\mu _{y})^{2}}{\sigma _{y}^{2}}}\right)\right]}$
In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section Principles, using this density.
## Iterative procedures
Consider problems where both states ${\displaystyle x_{i}}$ and parameters such as ${\displaystyle \sigma ^{2}}$ require to be estimated. Iterative procedures such as Expectation-maximization algorithms may be used to solve joint state-parameter estimation problems.
For example, suppose that n samples of state estimates ${\displaystyle {\hat {x}}_{i}}$ together with a sample mean ${\displaystyle {\bar {x}}}$ have been calculated by either a minimum-variance Kalman filter or a minimum-variance smoother using a previous variance estimate ${\displaystyle {\widehat {\sigma }}^{2}}$. Then the next variance iterate may be obtained from the maximum likelihood estimate calculation
${\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}({\hat {x}}_{i}-{\bar {x}})^{2}.}$
The convergence of MLEs within filtering and smoothing EM algorithms has been studied in the literature.[6][7][8]
## Applications
Maximum likelihood estimation is used for a wide range of statistical models, and these uses arise in applications across a widespread set of fields.
## History
[Photo: Ronald Fisher in 1913]
Maximum-likelihood estimation was recommended, analyzed (with fruitless attempts at proofs) and widely popularized by Ronald Fisher between 1912 and 1922[11] (although it had been used earlier by Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth).[12]
Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called "Wilks' theorem".[13] The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent samples is χ² distributed, which enables determination of a confidence region around any one estimate of the parameters. The only difficult part of the proof depends on the expected value of the Fisher information matrix, which is provided by a theorem by Fisher.[14] Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[15]
Some of the theory behind maximum likelihood estimation was developed for Bayesian statistics.[11]
Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[16]
## References
1. ^ Pfanzagl (1994, p. 206)
2. ^ Newey & McFadden (1994, Theorem 2.5.)
3. ^ Lehmann & Casella (1998)
4. ^ Newey & McFadden (1994, Theorem 3.3.)
5. ^ Cox & Snell (1968, formula (20))
6. ^ Einicke, G.A.; Malos, J.T.; Reid, D.C.; Hainsworth, D.W. (January 2009). "Riccati Equation and EM Algorithm Convergence for Inertial Navigation Alignment". IEEE Trans. Signal Processing. 57 (1): 370–375. doi:10.1109/TSP.2008.2007090.
7. ^ Einicke, G.A.; Falco, G.; Malos, J.T. (May 2010). "EM Algorithm State Matrix Estimation for Navigation". IEEE Signal Processing Letters. 17 (5): 437–440. doi:10.1109/LSP.2010.2043151.
8. ^ Einicke, G.A.; Falco, G.; Dunn, M.T.; Reid, D.C. (May 2012). "Iterative Smoother-Based Variance Estimation". IEEE Signal Processing Letters. 19 (5): 275–278. doi:10.1109/LSP.2012.2190278.
9. ^ Sijbers, Jan; den Dekker, A.J. (2004). "Maximum Likelihood estimation of signal amplitude and noise variance from MR data". Magnetic Resonance in Medicine. 51 (3): 586–594. doi:10.1002/mrm.10728. PMID 15004801.
10. ^ Sijbers, Jan; den Dekker, A.J.; Scheunders, P.; Van Dyck, D. (1998). "Maximum Likelihood estimation of Rician distribution parameters". IEEE Transactions on Medical Imaging. 17 (3): 357–361. doi:10.1109/42.712125. PMID 9735899.
11. ^ a b Pfanzagl, Johann, with the assistance of R. Hamböker (1994). Parametric statistical theory. Walter de Gruyter, Berlin, DE. pp. 207–208. ISBN 3-11-013863-8.
12. ^
13. ^ Wilks, S. S. (1938). The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses. Annals of Mathematical Statistics, 9: 60–62. doi:10.1214/aoms/1177732360.
14. ^ Owen, Art B. (2001). Empirical Likelihood. London: Chapman & Hall/Boca Raton, FL: CRC Press. ISBN 978-1584880714.
15. ^ Wilks, Samuel S. (1962), Mathematical Statistics, New York: John Wiley & Sons. ISBN 978-0471946502.
16. ^ Savage (1976), Pratt (1976), Stigler (1978, 1986, 1999), Hald (1998, 1999), and Aldrich (1997)
https://www.dragonwasrobot.com/mathematics/2015/08/03/an-introduction-to-horner-s-method.html
#### prerequisites: A basic knowledge of Haskell or similar.
“What makes the desert beautiful,”
said the little prince,
“is that somewhere it hides a well…”
Antoine de Saint-Expuéry, The Little Prince
### 1. Introduction
The goal of this blog post is to introduce Horner’s method for polynomial evaluation and polynomial division, and subsequently prove an equivalence relation between these two types of application.
The blog post is structured as follows. In Section 2, we argue for the application of Horner’s method for polynomial evaluation, and subsequently derive its definition. Having covered polynomial evaluation, we then argue for the application of Horner’s method for polynomial division and derive its definition in Section 3. Lastly, we state and prove an equivalence relation between the definition of polynomial evaluation and the definition of polynomial division using Horner’s method in Section 4. The blog post is concluded in Section 5.
### 2. Polynomial evaluation using Horner’s method
In order to understand the advantages of using Horner’s method for evaluating a polynomial, we first examine how this is usually done. If we let $p(x) = 7x^4 + 2x^3 + 5x^2 + 4x + 6$ and $x = 3$, then we would evaluate $p(3)$ one term at a time and sum all the intermediate results. However, by doing so we are unfortunately performing redundant operations when evaluating the exponents, as can be seen when we unfold the evaluation of the exponents,
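writing each power of $3$ as repeated multiplication:

$$p(3) = 7 \cdot (3 \cdot 3 \cdot 3 \cdot 3) + 2 \cdot (3 \cdot 3 \cdot 3) + 5 \cdot (3 \cdot 3) + 4 \cdot 3 + 6$$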
Here, the evaluation of the largest exponent, $3^4 = (3 \cdot 3 \cdot 3 \cdot 3)$, also calculates all exponents of a lesser degree, i.e., $3^3 = (3 \cdot 3 \cdot 3)$ and $3^2 = (3 \cdot 3)$, as its intermediate results. Luckily, we can transform the formula of the polynomial $p$ in such a way that the operations calculating the exponents are shared across the terms. In fact, since the number of multiplications by $3$ decreases by $1$ for each term, we can nest the multiplications across the terms like so,
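with the innermost parentheses corresponding to the highest-degree terms:

$$p(3) = (((7 \cdot 3 + 2) \cdot 3 + 5) \cdot 3 + 4) \cdot 3 + 6$$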
thus removing any redundant multiplications used for evaluating the exponents. Formula \ref{eq:polynomial-evaluation-horner-example-formula} now exhibits a simple inductive structure which adds one multiplication and one addition for each term in the polynomial $p$. As a result, we can now evaluate the final formula of $p$,
by repeatedly performing a multiplication and an addition, starting from the innermost set of parentheses,
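which plays out as:

$$\begin{aligned}
p(3) &= (((7 \cdot 3 + 2) \cdot 3 + 5) \cdot 3 + 4) \cdot 3 + 6\\
     &= ((23 \cdot 3 + 5) \cdot 3 + 4) \cdot 3 + 6\\
     &= (74 \cdot 3 + 4) \cdot 3 + 6\\
     &= 226 \cdot 3 + 6\\
     &= 684
\end{aligned}$$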
If we compare the number of operations performed in the first and last equation of Formula \ref{eq:polynomial-evaluation-horner-example-formula}, we count a total of $14$ in the former and $8$ in the latter. The difference of $6$ operations corresponds exactly to the number of multiplications required to evaluate the exponents in the first equation. Thus, our transformation of the polynomial formula into its inductive form, has removed the computational overhead of evaluating each of the exponents in sequence. Lastly, it has even been proved that the number of additions and multiplications used in this procedure, are indeed the smallest number possible for evaluating a polynomial.1
Upon closer examination of the intermediate results of Formula \ref{eq:polynomial-evaluation-horner-example-calculation}, we can make out a recursive substitution scheme happening under the hood,
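namely:

$$\begin{aligned}
7 &= 7\\
23 &= 7 \cdot 3 + 2\\
74 &= 23 \cdot 3 + 5\\
226 &= 74 \cdot 3 + 4\\
684 &= 226 \cdot 3 + 6
\end{aligned}$$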
where each intermediate result is the result of multiplying the previous result by $3$ and adding the next coefficient. If we assign the intermediate values on the left-hand side, $(7, 23, 74, 226, 684)$, to the variable $b_i$, assign the value $3$ to the variable $k$, and lastly assign the values corresponding to the coefficients of $p$, $(7, 2, 5, 4, 6)$, to the variable $a_i$, we can restate Formula \ref{eq:polynomial-evaluation-horner-example-substitution-numbers} like so,
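in terms of $b_i$, $k$ and $a_i$:

$$\begin{aligned}
b_4 &= a_4\\
b_3 &= b_4 \cdot k + a_3\\
b_2 &= b_3 \cdot k + a_2\\
b_1 &= b_2 \cdot k + a_1\\
b_0 &= b_1 \cdot k + a_0
\end{aligned}$$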
Formula \ref{eq:polynomial-evaluation-horner-example-substitution-variables} now reflects a recursively structured, and easily generalizable, substitution procedure where $b_4 = a_4$ is the base case, and the inductive case is defined in terms of the next coefficient in the polynomial and the preceding intermediate result, $b_3 = b_4 \cdot k + a_3$. The procedure terminates when it reaches the last term of the polynomial $p$, where $b_0 = b_1 \cdot k + a_0$ is the result of evaluating $p(k)$.
We call the above procedure Horner’s method2 for polynomial evaluation, and formalize it in Haskell by first representing a polynomial as a list of integers,
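The concrete representation might look like this (the type name `Polynomial` and the choice of `Integer` are assumptions; the post only specifies "a list of integers"):

```haskell
type Polynomial = [Integer]
```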
for which we define the procedure,
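a minimal sketch of such a definition, reconstructed from the description that follows (the exact signature is an assumption):

```haskell
hornersPolyEvalAcc :: Polynomial -> Integer -> Integer -> Integer
hornersPolyEvalAcc []       _ a = a                                    -- base case: return the accumulator
hornersPolyEvalAcc (c : cs) x a = hornersPolyEvalAcc cs x (a * x + c)  -- multiply by x, then add the coefficient
```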
which takes a polynomial, cs, corresponding to $a_i$, an integer, x, corresponding to $k$, and an accumulator, a, corresponding to the intermediate result $b_i$. As described above, it returns the final value of the accumulator (result), a, in the base case, and multiplies a by x for each recursive call and adds the coefficient c. Lastly, we define a wrapper procedure,
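again as a sketch matching the description below:

```haskell
hornersPolyEval :: Polynomial -> Integer -> Integer
hornersPolyEval cs x = hornersPolyEvalAcc cs x 0  -- start the accumulator at 0
```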
which initializes the accumulator to 0. As a result, we now can evaluate the example polynomial of Formula \ref{eq:polynomial-evaluation-horner-example-inductive},
for $x = 3$, by passing the coefficients of $p$ as the list [7, 2, 5, 4, 6], along with the value of $x$, 3, to hornersPolyEval like so, hornersPolyEval [7, 2, 5, 4, 6] 3, giving the expected result, 684.
Having formalized Horner’s method for polynomial evaluation, as the procedures hornersPolyEvalAcc and hornersPolyEval, we now define Horner’s method for polynomial division.
### 3. Polynomial division using Horner’s method
Now that we have used Horner’s method as an efficient procedure for evaluating a polynomial, using a recursive substitution scheme, we move on to examine its use for polynomial division.
According to the definition of polynomial division, when dividing two polynomials, $p$ and $d$, $\frac{p(x)}{d(x)}$, where $d \not= 0$, the result is a quotient, $q$, and a remainder, $r$, satisfying the relation,
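namely:

$$p(x) = q(x) \cdot d(x) + r(x)$$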
where $r$ has a degree less than $d$. In this blog post, we restrict ourselves to division with a binomial, $x - k$, which means that $r$ is always a constant, and $0$ in the case where $d$ divides $p$.
One procedure for polynomial division is polynomial long division, which we can use to divide the polynomial $p(x) = 2x^3 + 4x^2 + 11x + 3$ with the binomial $d(x) = x - 2$, giving us the following result,3
where we can read the quotient, $2x^2 + 8x + 27$, from the line above the numerator, $2x^3 + 4x^2 + 11x + 3$, and we can read the remainder, $57$, from the value at the bottom of the calculation. Lastly, we can verify the calculations by checking that the relation in Formula \ref{eq:poly-div-relation} is satisfied,
If we examine the intermediate results of the procedure, $(2, 8, 27, 57)$, i.e., the leftmost values of each step of the procedure, we can make out a similar recursive substitution scheme to what we saw in the case of polynomial evaluation,
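this time reading off the values $2, 8, 27, 57$:

$$\begin{aligned}
2 &= 2\\
8 &= 2 \cdot 2 + 4\\
27 &= 8 \cdot 2 + 11\\
57 &= 27 \cdot 2 + 3
\end{aligned}$$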
where each intermediate result is equal to the previous result multiplied by the second term of the denominator, $x - 2$, plus the next coefficient. This time, we assign the intermediate results on the left to the variable $b_{i-1}$, the last result to the variable $r$, the second term of the denominator to the variable $k$, and the coefficients of $p$ to the variable $a_i$, which yields the following set of equations,
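(with $a_3, a_2, a_1, a_0 = 2, 4, 11, 3$ and $k = 2$ in the example):

$$\begin{aligned}
b_2 &= a_3\\
b_1 &= b_2 \cdot k + a_2\\
b_0 &= b_1 \cdot k + a_1\\
r &= b_0 \cdot k + a_0
\end{aligned}$$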
These equations strongly suggest that we can divide $p$ with $d$ using the same recursive substitution procedure, as described in the evaluation case, spending just one addition and multiplication per term, which again reduces the number of operations to a minimum. Furthermore, we can put the substitution scheme above in a tabular format, similar to polynomial long division,
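one common layout is the following (the middle row, holding the products $b_i \cdot k$, is an inferred detail of the scheme):

$$\begin{array}{c|cccc}
  & 2 & 4 & 11 & 3\\
2 &   & 4 & 16 & 54\\
\hline
  & 2 & 8 & 27 & 57
\end{array}$$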
where the coefficients of the polynomial are located at the top row, the second term of the denominator to the far left, and the coefficients of the resulting quotient, $b_2,b_1,b_0$, and the remainder, $r$, at the bottom row of the table.
We formalize the tabular representation in Formula \ref{eq:horner-div-abstract} as the following procedure,
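a sketch of what such a procedure could look like (the signature is assumed; the behaviour follows the description below):

```haskell
hornersPolyDivAcc :: Polynomial -> Integer -> Integer -> Polynomial
hornersPolyDivAcc []       _ _ = []
hornersPolyDivAcc (c : cs) x a = b : hornersPolyDivAcc cs x b  -- record every intermediate result
  where b = a * x + c
```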
which performs the exact same substitution scheme as in hornersPolyEvalAcc, except that it also aggregates the intermediate results and adds them to the result polynomial. Likewise, we define a wrapper function,
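for instance:

```haskell
hornersPolyDiv :: Polynomial -> Integer -> Polynomial
hornersPolyDiv []       _ = []
hornersPolyDiv (c : cs) x = c : hornersPolyDivAcc cs x c  -- the first coefficient seeds the accumulator
```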
which sets the initial accumulator to the first coefficient and adds it to the result polynomial. Now, if we wanted to divide our initial polynomial $p(x) = 2x^3 + 4x^2 + 11x + 3$ with the binomial $d(x) = x - 2$, we would pass the list [2, 4, 11, 3] as the input polynomial cs and 2 as the input value x to hornersPolyDiv, from which we would get the result list [2, 8, 27, 57], where [2, 8, 27] are the coefficients of the quotient and 57 is the remainder. Thus, we have now defined Horner’s method for polynomial division as the procedures hornersPolyDivAcc and hornersPolyDiv.
### 4. Equivalence of the two Horner procedures
Due to the strong similarity between the procedure for polynomial evaluation and the procedure for polynomial division, we are interested in stating an equivalence relation between the two. As such, we note that the last element in the result polynomial of hornersPolyDiv is equal to the result of hornersPolyEval when given the same input,
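that is, as an executable property (the name `equivHorner` is made up here; the claim holds for every non-empty list of coefficients):

```haskell
equivHorner :: Polynomial -> Integer -> Bool
equivHorner cs x = last (hornersPolyDiv cs x) == hornersPolyEval cs x  -- for non-empty cs
```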
Proving the relation requires us to first prove a similar equivalence relation between the underlying procedures hornersPolyEvalAcc and hornersPolyDivAcc, parameterized over the accumulator,
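which could be phrased like this (prepending the accumulator covers the empty-polynomial case; the thesis' exact formulation may differ):

```haskell
equivHornerAcc :: Polynomial -> Integer -> Integer -> Bool
equivHornerAcc cs x a = last (a : hornersPolyDivAcc cs x a) == hornersPolyEvalAcc cs x a
```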
The equivalence can be proved by first proving the underlying theorem, using structural induction on the polynomial, cs', followed by case analysis on the polynomial, cs, in the original theorem.4 Incidentally, the above theorem also proves an implementation-specific version of the polynomial remainder theorem.
### 5. Conclusion
In this post, we have introduced Horner’s method for polynomial evaluation and polynomial division. Furthermore, we have also stated and proved an equivalence relation between the definition of Horner’s method for polynomial evaluation and polynomial division.
In our next post, we show how we can obtain Taylor polynomials using Horner’s method.
1. See “On Two Problems in Abstract Algebra Connected with Horner’s Rule” (1954) by Alexander Markowich Ostrowski and “Methods of computing values of polynomials” (1966) by Victor Ya Pan.
2. See “A new method of solving numerical equations of all orders, by continuous approximation” (1819) by William G. Horner.
3. Unfortunately, MathJax does not support partial horizontal lines (\cline) and thus we cannot format polynomial long division in the traditional way.
4. While we do not show each step of the proof in this post, a Coq implementation of Horner's method and accompanying equivalence proof can be found in my Master's thesis.
https://www.physicsforums.com/threads/maxwell-equations.242776/
# Maxwell Equations
1. Jun 30, 2008
### ghery
Hi:
In electromagnetism, Maxwell's equations originally were 6; with the aid of vector analysis, these equations were simplified and became 4; after that, with the aid of special relativity and tensor analysis (for the electromagnetic tensor), they became 2.
Now I have seen (I don't remember where) that these two equations became just one without any loss of information. Does anybody know how to derive this equation? What is that equation? And, by the way, what other mathematical tools do I need in order to derive it?
2. Jun 30, 2008
### ismaili
I guess what you want is Maxwell equations written in terms of differential forms. However, it takes two. You may first try
http://en.wikipedia.org/wiki/Maxwell's_equations
As for the basic introduction of forms, you can read Ryder's QFT book.
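For reference, in the language of differential forms the two equations are (up to sign and unit conventions, which vary by author)

$$\mathrm{d}F = 0, \qquad \mathrm{d}{\star}F = {\star}J,$$

where $F$ is the field-strength 2-form and $J$ the current 1-form. The "single equation" version the original poster may be remembering comes from geometric algebra (spacetime algebra), where both combine into $\nabla F = J$ for the bivector field $F$; Clifford/geometric algebra is then the extra mathematical tool needed.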
https://www.askmehelpdesk.com/small-claims/wage-garnishment-married-couple-84769.html
My wages are already being garnished by one creditor. And another creditor is going to sue both me and my wife. If I'm already being garnished, can they go to my wife and garnish her wages? And then we would both be garnished. Can that happen?
http://www.physicsforums.com/showthread.php?s=e89d3e75552cd4015a4d0c04e69b498f&p=4748678
# Calculating a simple generator's output power (wattage)
by Steve S
Tags: generator, output, power, simple, wattage
P: 5 Hi All, I am thinking about building a simple electrical generator (to use wave power), and I am trying to make sure I clearly understand the theory and expected results before starting the project. I have two basic questions: 1) How do I calculate the generator's maximum current and wattage, if I know the induced emf? 2) How is this generator's output power related to / limited by the input mechanical power?
At the moment the concept is a simple renewable energy source which is the prime mover, acting to drive a magnet up and down a cylindrical coil, with N turns. So I believe this should be a simple classical problem. I am clear that the voltage induced is calculated by Faraday's law, and I am comfortable with how this would be developed. Based on my initial setup of 200 turns, a 6000 gauss magnet, a cylinder of radius 4 cm and a magnet travel speed of 0.25 m/s, I get a voltage of ε = 4.824 V.
I am comfortable with Ohm's law; however, where I am confused is how I calculate the actual current and the generator's maximum wattage. Suppose for argument's sake I have a 1 ohm load resistor, and neglect the impedance of the generator coil. I think that the induced current would be: 4.824 V / 1 ohm = 4.824 A, and as P = V^2 / R, P = 23.27 W. But if I reduced the load's resistance to say 0.5 ohm, I get: (4.824 V)^2 / 0.5 ohm = 46.54 W. Similarly, if I put in a load resistance of 0.01 ohm, I get a figure of 232.7 W.
So I am unclear as to how the generator can seemingly produce more power by reducing the load resistance. Surely the actual power produced is limited by the amount of input energy coming in from the magnet? Is there a way to calculate the maximum theoretical output and if so, can someone provide some guidance on how to link the input mechanical energy to the output power? For reference I have also posted this in the Physics forum, and so far haven't had much success.
HW Helper Thanks P: 5,361 If you "try" to draw more output power than the mechanical input, then its rotation will slow down, producing a lower voltage so that output power will always be less than input. A generator becomes much harder to turn when you have it connected to low resistance load. There is no magic here. You need to have wires thick enough to carry the maximum current without overheating.
P: 5 Thanks for the response. I'm aware this would happen with a conventionally driven alternator, where the prime mover is a gas turbine or diesel engine - but in the case I'm looking at, the mechanical input power is essentially fixed, i.e. the magnet's physical action is caused by gravity, or a flow of water etc.
P: 283
Calculating a simple generator's output power (wattage)
Quote by Steve S Thanks for the response. I'm aware this would happen with a conventionally driven alternator, where the prime move is a gas turbine or diesel engine - but in the case i'm looking at the mechanical input power is essentially fixed i.e. the magnets physical action is caused by gravity, or a flow of water etc..
Here is a mechanical analogy. You are pushing a 10 lb sled with a certain amount of force and a certain speed. If I put another 50 lbs on the sled a few things can happen.
1. you don't change the input force. Therefore your speed decreases.
2. you increase your input force such that the speed stays the same.
So in the case of the generator, if you are applying a constant force to the generator, it will have to slow down when the load increases.
P: 5 I understand your mechanical analogy - but the generation system I am describing is one that creates an impulse of power - it is not a traditional rotating magnet in a field (or vice versa). In my scenario a magnet is dropping, due to gravity, through a coil - so would there be no effect that would cause the magnet's speed to slow? Or does the EMF field created cause a magnetic physical force acting upwards on the magnet to slow its descent?
P: 283
Quote by Steve S Or does the EMF field created cause a magnetic physical force acting upwards on the magnet to slow its descent?
That is correct. If it were not, you would have broken the laws of physics as we know it.
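A rough way to quantify the limit (a sketch that ignores the coil's inductance and treats the coil as having some internal resistance $r$): with load $R$,

$$I=\frac{\varepsilon}{R+r},\qquad P_{\text{load}}=I^{2}R=\frac{\varepsilon^{2}R}{(R+r)^{2}}\le\frac{\varepsilon^{2}}{4r},$$

with the maximum reached at $R=r$. On top of that, by Lenz's law the induced current opposes the magnet's motion, so heavy loading slows the magnet and reduces $\varepsilon$ itself; the electrical output can never exceed the mechanical power the falling magnet delivers.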
http://mathoverflow.net/questions/81794/is-any-presheaf-on-a-local-ring-a-sheaf-stalks-points-for-some-grothendieck-top
Is any presheaf on a local ring a sheaf? Stalks/points for some Grothendieck topologies
I was able to recover the original statement, which is: Let A be a local ring (for Zariski; henselian for Nisnevich and strictly henselian for étale) and $\{U_i\rightarrow Spec A\}$ a covering. Then there exists an $i$ and a morphism $Spec A\rightarrow U_i$ (open/étale) lifting the identity along $U_i\rightarrow Spec A$. (For Zariski this is just the statement below, that each covering of $Spec A$ already includes the identity.) Thus the global sections are unchanged for a presheaf and its associated sheaf.
It remains to ask why this implies that the suitable notion of points in the étale site are the geometric points, but I guess this is too fuzzy to be an actual question.
As seen in the comments the original question didn't really make sense. Sorry for any inconvenience.
The question:
Is any presheaf (of sets, maybe more structure) on the spectrum of a (edit: reduced) local ring A (with the Zariski topology) automatically a sheaf?
The motivation:
For the Zariski and Nisnevich topology, stalks are defined for every "classical point" $Spec k\rightarrow Spec A$, whereas for the étale topology we take geometric points $Spec k^s\rightarrow Spec A$, where $k^s$ is the separable closure. The reason is somewhat that we want sheafification (and other constructions) to work and hence want the small bits from which we build the sheaf associated to a presheaf (i.e. stalks) to fulfill the sheaf properties to begin with. The above question is definitely false for the étale topology, but we should be able to fix it, if we require A to be strictly henselian, which then inevitably leads to the definition of geometric points.
How to tackle the question:
It would be nice to have a proof which refrains from using stalks itself, as this would appear a bit circular.
What I have so far:
For any covering of the whole space $Spec A$ by standard open sets $D(f_i)$, we know that the maximal ideal $m$ must be contained in at least one of the $D(f_i)$, which means $f_i$ is not an element of $m$, which implies $f_i$ is a unit and hence $D(f_i)=Spec A$. This reasoning fails for open subsets $D(f)$ of $Spec A$, since $A_f$ doesn't need to be local (or does it?).
edit: One extra condition should be that $F(\emptyset)$ is the final object. Hopefully there aren't too many similar conditions I forgot.
This already fails for the spectrum $X$ of a field whenever $F(X)=F(\varnothing)$ is not the final object. – user2035 Nov 24 '11 at 11:32
Valid point a-fortiori. As I said maybe there are some minor extra conditions. Let us just assume $F(\emptyset)$ is the final object. – Simon Markett Nov 24 '11 at 11:48
Doesn't the etale topology usually include open sets which are $n$ copies of the base space, for instance? You need to account for those. Accounting for that, I'm not sure that it's even true for the spec of a field in general, because the sheaf condition says something about non-Galois extensions. There tend to be a LOT of presheafs you can construct on any given site. This should only work if you have very few open sets. – Will Sawin Nov 24 '11 at 11:58
Let $X$ be the spectrum of a one-dimensional local ring having two minimal prime ideals $x_1,x_2$. Now choose $F(X)=F(\{x_1,x_2\})\ne F(\{x_1\})\times F(\{x_2\})$. – user2035 Nov 24 '11 at 11:59
There are lots of one-dimensional reduced local rings having more than one minimal prime. You could take $\mathbb{Z}[x]_{(2,x)} / (2x)$ or $k[[x, y]]/ (xy)$ ($k$ any field), for instance. – Neil Epstein Nov 24 '11 at 14:41
https://www.physicsforums.com/threads/enthalpy-of-combustion-for-magnesium.52811/
# Enthalpy of Combustion for Magnesium
1. Nov 15, 2004
### decamij
I placed some magnesium ribbon in HCl and measured the temperature change. How would I find the change in enthalpy per mole of magnesium in the following reaction:
Mg + 2HCl → H2 + MgCl2
If I know the following information: change in temperature = 19°C
mass of ribbon = 0.5 g
volume of HCl = 100 mL
I can't use any known values (i.e. like the ones you'd find in a textbook). I must use the experimental values above. However, I can use specific heat capacity values, c, if I must.
2. Nov 15, 2004
### Sirus
$$Q=mc\Delta t$$
where Q is the enthalpy change. You cannot just plug numbers into this formula though; remember that it applies to the entire system, and can be used in conjuction with
$$\mbox{heat gain}=-\mbox{(heat loss)}$$
How is heat energy transfered in the reaction? (what gains, what loses?)
3. Nov 15, 2004
### pack_rat2
Did you perform that experiment in a "bomb calorimeter" (or a suitable improvised version thereof)? If not, the temperature change is meaningless, because an unknown amount of heat was dissipated to the surroundings.
4. Nov 15, 2004
### Staff: Mentor
As Pack_rat2 alluded to, ideally one is doing this in an adiabatic system so that heat is not lost to the environment outside the reaction vessel (presumably a test tube). One might have to correct for the mass of the reaction vessel as well if it is heated.
From the mass of HCl solution and temperature, one can determine the change in enthalpy of the solution.
Then assuming that all the heat originated from the chemical reaction - one can determine the energy per unit mass or mole of Mg (the other known quantity).
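As a rough illustration with the numbers from the first post (treating the 100 mL of dilute acid as water, so density ≈ 1 g/mL and $c \approx 4.18\ \mathrm{J\,g^{-1}\,{}^{\circ}C^{-1}}$, and ignoring the calorimeter itself — these are assumptions, not measured values):

$$q \approx mc\Delta T = (100\ \mathrm{g})(4.18\ \mathrm{J\,g^{-1}\,{}^{\circ}C^{-1}})(19\ {}^{\circ}\mathrm{C}) \approx 7.9\ \mathrm{kJ}$$

$$n_{\mathrm{Mg}} = \frac{0.5\ \mathrm{g}}{24.3\ \mathrm{g\,mol^{-1}}} \approx 0.021\ \mathrm{mol}, \qquad \Delta H \approx -\frac{7.9\ \mathrm{kJ}}{0.021\ \mathrm{mol}} \approx -3.9\times 10^{2}\ \mathrm{kJ\,mol^{-1}}$$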
5. Nov 16, 2004
### decamij
But what would I use for the specific heat capacity of HCl? I can't find that in my textbook.
6. Sep 23, 2008
### nekoooo
The Specific heat capacity of HCl (s) is 3.93 Jg-1C-1
7. Sep 24, 2008
### Staff: Mentor
I suppose reaction was going on in a relatively diluted solution of the acid. If so, use specific heat of water.
8. Sep 30, 2008
### Gimpinald
Yea I would just use the specific heat capacity of water. My class just did this lab the other day. We used 0.5 M HCl and the teacher did some calculations on the board to show us that only about 0.2%(or somewhere around there) of the solution was HCl and the rest was water.
9. Oct 1, 2008
### Staff: Mentor
More like 1.8% (closer to 2 than to 0.2).
https://www.physicsforums.com/threads/best-numerical-technique.22055/
# Best numerical technique?
1. Apr 22, 2004
### Xishan
Best numerical technique???
I've recently used Simpson's (1/3) rule for the numerical solution of the 'intersecting cylinders' problem. I've found that it isn't very accurate no matter how many intervals I take (I have even taken 1,000,000 intervals!), but I still end up with some error.
I'd appreciate anyone helping me with this matter.
2. Apr 25, 2004
### NSX
What is the intersecting cylinders problem?
3. Apr 26, 2004
### Xishan
It is discussed in the thread called 'intersecting cylinders' in this forum. The results of the numerical integration do converge, but always with some error.
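For readers following along, a small self-contained sketch (not Xishan's actual code) shows the point: composite Simpson's rule converges very fast for smooth integrands, but only slowly when the integrand has an unbounded derivative at an endpoint — as the square-root terms in the intersecting-cylinders volume do — so a residual error remains even with very many intervals.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with an even number n of subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Smooth integrand: error drops roughly as h^4.
for n in (4, 16, 64, 256):
    print(n, abs(simpson(math.sin, 0.0, math.pi, n) - 2.0))

# Integrand with an unbounded derivative at x = 1 (like the circular
# cross-sections in the intersecting-cylinders problem): much slower decay.
for n in (4, 16, 64, 256, 1024):
    err = abs(simpson(lambda x: math.sqrt(1.0 - x * x), 0.0, 1.0, n) - math.pi / 4)
    print(n, err)
```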
http://mathoverflow.net/revisions/52106/list | MathOverflow will be down for maintenance for approximately 3 hours, starting Monday evening (06/24/2013) at approximately 9:00 PM Eastern time (UTC-4).
The number of divisions of $\mathbb{R}^3$ by $k \ge 0$ planes in general position starts 1,2,4,8, then 15, etc. For $\mathbb{R}^6$ it is 1,2,4,8,16,32,64 then 127. In general for $\mathbb{R}^N$ it is the sum of the binomial coefficients from $\binom{k}{0}$ up to $\binom{k}{N}$ and hence it agrees with $2^k$ for terms 0,1,2, up to N before starting to fall off.
Other answers: Of course for prime $p$, $2^{p-1} = 1 \bmod p$, but there are only 2 known cases, $p = 1093$ and $p = 3511$, where $2^{p-1} = 1 \bmod{p^2}$. So primes and primes with $2^{p-1} \ne 1 \bmod{p^2}$ agree for the first 182 primes.
For "listed in the OEIS" there are a couple which go from 1 to 99 then skip 100: undulating numbers in base 10, and cents you can have in US coins without having change for a dollar (the latter being 1–99 along with $105, 106, 107, 108, 109, 115, 116, 117, 118, 119$).
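A quick sketch (added for illustration) of the first family mentioned above — the number of regions into which $k$ hyperplanes in general position divide $\mathbb{R}^N$, i.e. the partial binomial sum:

```python
from math import comb

def regions(N, k):
    """Regions of R^N cut out by k hyperplanes in general position:
    C(k,0) + C(k,1) + ... + C(k,N)."""
    return sum(comb(k, i) for i in range(N + 1))

print([regions(3, k) for k in range(6)])   # 1, 2, 4, 8, 15, 26  (agrees with 2^k up to k = 3)
print([regions(6, k) for k in range(9)])   # 1, 2, 4, 8, 16, 32, 64, 127, 247
```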
https://www.theorie.physik.uni-muenchen.de/activities/research_seminars/strings_and_fields/archive_wise1112/2012_01_26/ | # Supersymmetric SO(10) unification with light sparticle spectrum
Marek Olechowski (Warsaw U.)
26.01.2012 at 16:15
Supersymmetric SO(10) GUT model with negative $\mu$ is investigated. The present experimental constraints may be fulfilled in this model only when the soft masses of scalars and gauginos are non-universal. Appropriate non- universalities can be naturally present in SO(10) models. The tension between the experimental data on BR($b\to s\gamma$) and $(g-2)_\mu$ and the recent limits from LHC is investigated. It is shown that all the constraints may be satisfied simultaneously. The relic abundance of the LSP in the preferred part of the parameter space may be consistent with the WMAP data. The sparticle spectrum in the considered model is relatively light and may be explored by the LHC experiments (in contrast to SO(10) models with positive $\mu$).
Arnold Sommerfeld Center
Theresienstrasse 37
Room 348/349 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859940767288208, "perplexity": 1770.9843430135045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513009.81/warc/CC-MAIN-20181020163619-20181020185119-00471.warc.gz"} |
https://forum.allaboutcircuits.com/threads/about-current.7352/ | #### nikhilthunderbird
Joined Oct 6, 2007
1
Does the current in a conductor increase when the velocity of the electrons in the conductor increases?
#### SgtWookie
Joined Jul 17, 2007
22,221
Current = quantity of electrons flowing
Voltage = electrical pressure
#### bloguetronica
Joined Apr 27, 2007
1,424
And the velocity of the electrons (the net mean velocity) is quite the same for all cases: 8 m/s, independently of current and voltage. Of course the electromagnetic force that drives the electrons is instantaneous. That is why a light bulb will light immediately, and not take 8 seconds to light if it has an 8 m cable.
https://math.stackexchange.com/questions/3372931/h1-inner-product-for-vector-valuled-functions | # $H^1$ Inner Product for vector valuled functions
I am not sure how to find the $$H^{1}(D)$$ inner product for two functions $$u,v: D \rightarrow \mathbb{R}^2$$, ($$D \subset \mathbb{R}^2$$). The inner product for scalar functions is defined as:
$$\int_{D} f \; g \; dx + \int_{D} \nabla f \cdot \nabla g \; dx$$
For extending this definition to vector valued functions, I found this link (Inner product for vector - valued functions) but it treats only the first term ($$L^2$$ norm). For the second term ($$H^1$$ seminorm), I tried to look up definitions of inner product for matrices but found multiple answers. Can someone please tell me which is the correct way to compute this?
Edit: I need to compute it for calculating the Gramian Matrix for a finite set of vector valued functions, with respect to the $$H^1(D)$$ norm. Is this the right way to do it?
Thank you!
$$H^1(D;\mathbb{R}^2)=\{u \in L^2(D;\mathbb{R}^2) : \nabla u \in L^2(D;\mathbb{R}^{2 \times 2}) \}$$
Let $$u,v :D \to \mathbb{R}^2$$. Let us denote by $$u\cdot v=\sum_{i} u_i v_i$$ the vector scalar product and by $$A:B=\sum_{i,j} a_{ij} b_{ij}$$ the matrix scalar product. The inner product of $$H^1(D;\mathbb{R}^2)$$ is given by
$$(u,v)_{H^1}=\int_D u \cdot v + \nabla u : \nabla v \text{ d}x.$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658125042915344, "perplexity": 109.5884560848227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305317.17/warc/CC-MAIN-20220127223432-20220128013432-00060.warc.gz"} |
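For the Gramian computation mentioned in the question's edit, here is a rough finite-difference sketch (my own illustration, not part of the answer). It samples the vector fields on a uniform grid over $D$, approximates the gradients with `numpy.gradient`, and replaces the integrals by Riemann sums; the function names and the discretization itself are assumptions.

```python
import numpy as np

def h1_inner(u, v, hx, hy):
    """Discrete H^1(D; R^2) inner product of two vector fields sampled on a grid.
    u, v have shape (2, nx, ny): two components on an nx-by-ny grid with
    spacings hx, hy.  Gradients are finite differences; integrals are sums."""
    cell = hx * hy
    l2_part = np.sum(u * v) * cell                    # \int u . v dx
    semi = 0.0
    for i in range(2):                                # loop over the two components
        dux, duy = np.gradient(u[i], hx, hy)
        dvx, dvy = np.gradient(v[i], hx, hy)
        semi += np.sum(dux * dvx + duy * dvy) * cell  # grad u : grad v
    return l2_part + semi

def gram_matrix(fields, hx, hy):
    """Gram matrix of a finite set of vector fields w.r.t. the discrete H^1 product."""
    n = len(fields)
    return np.array([[h1_inner(fields[a], fields[b], hx, hy) for b in range(n)]
                     for a in range(n)])
```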
https://brilliant.org/discussions/thread/help-needed-19/ | ×
# help needed!!!
Can any one tell me how to solve test papers with accuracy as I am a beginner in jee preparation and I have no experience of solving the paper fast with accurate answers...
Note by Sarvesh Dubey
2 years, 5 months ago
Sort by:
I'll post a note soon regarding your problem. Just have a little patience. Thanks. @sarvesh dubey
- 2 years, 5 months ago
Okk thanx!!!
- 2 years, 5 months ago
The idea is that there is no one way of achieving the above. Another thing to realize is that everyone begins by making mistakes. You improve over time, learn from your previous mistakes, and get better. Don't put yourself under the pressure of solving it fast and accurately during the initial stages of the preparation. Firstly, your aim should be building your concepts, getting everything correct, and realizing what went wrong when something was incorrect. Once you have gained enough of an idea about the topic, your speed will increase automatically, and as you practice more you will find yourself where you want to be. It is the beginning and you have a long way to go. Don't lose your courage. All the best!!
- 2 years, 5 months ago
Building the concepts!!! All say this, but in simple language what does this mean??? I have heard everyone say this... please can you tell me how to build the concepts?
- 2 years, 5 months ago
"Building Concepts" has no one line definition or explanation. Concept refers to an idea of what something is or how that something works. To me the notion of building concepts is getting the complete idea of a system, its components and their interaction so that when you analyse a part of the system you know how it is affected and what affects it. Also if it gets affected how would other parts react.
If it seems complicated then think of the system as a physical system say a pulley system or a system of two blocks one upon the other moving over the ground. Try and analyse the scenario. If you feel that you have analysed all aspects then congratulations, you have conceptually gotten the idea.
Building concepts is nothing but analyzing a variety of systems so when a new system springs up you can probably correlate to a few of them you have done and solve the given problem.
Although intuitively a system is related to a more physical scenario, the notion of a concept and a related system is valid in general in all fields.
- 2 years, 5 months ago
- 2 years, 5 months ago | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8891789317131042, "perplexity": 904.9748042423142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825510.59/warc/CC-MAIN-20171023020721-20171023040721-00244.warc.gz"} |
https://brilliant.org/discussions/thread/problems-with-the-preview-window/ | ×
# Problems with the preview window
I accidentally entered a ## block in one of my solutions, which caused the last paragraph to go really large. However, that part of my solution was rendered normally in the preview window and I couldn't notice it before I published my solution.
Here's how the preview window renders text within the ## blocks:
If the image doesn't load, go to: http://s30.postimg.org/rn92r31e9/Untitled.png
When the solution is finalized (or the same text is posted as a comment), here's how it renders:
If the image doesn't load, go to: http://s29.postimg.org/3m7p649tz/Untitled.png
Is this a bug?
Note by Sreejato Bhattacharya
3 years, 5 months ago
Sort by:
## I Think It Works and Is Not A Glitch
· 3 years, 5 months ago
Once again, note that I said it works for the preview window of a comment. Does it work for you in the preview window of a solution? · 3 years, 5 months ago
## lola
· 10 months, 2 weeks ago
## lola
· 10 months, 2 weeks ago
# It works!!!!!!
· 3 years, 3 months ago
# awesome
· 3 years, 3 months ago
I think this is a ##feature## · 3 years, 5 months ago
## BugTest
· 3 years, 5 months ago
tried to create this: Rich Text Editor · 3 years, 5 months ago
## It doesn't work for me. I can view it normally after previewing.
· 3 years, 5 months ago
Are you talking about previewing comments? Well that works just fine. I'm having problems with the preview window of a solution. · 3 years, 5 months ago
Oh, misread, sorry. · 3 years, 5 months ago
https://chem.libretexts.org/Core/Inorganic_Chemistry/Crystallography/Fundamental_Crystallography/Normal_subgroup | # Normal subgroup
A subgroup H of a group G is normal in G (H $\triangleleft$ G) if gH = Hg for any g ∈ G. Equivalently, H ⊂ G is normal if and only if $gHg^{-1} = H$ for any g ∈ G, i.e., if and only if each conjugacy class of G is either entirely inside H or entirely outside H. This is equivalent to saying that H is invariant under all inner automorphisms of G.
The property gH = Hg means that left and right cosets of H in G coincide. From this one sees that the cosets form a group with the operation $g_1H * g_2H = g_1g_2H$, which is called the factor group or quotient group of G by H, denoted by G/H.
In the special case that a subgroup H has only two cosets in G (namely H and gH for some g not contained in H), the subgroup H is always normal in G.
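As a concrete illustration of the definition (added here, not part of the original page), one can check $gHg^{-1} = H$ by brute force in a small permutation group; the permutation encoding below is an arbitrary choice.

```python
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p(q(i)); permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_normal(H, G):
    """H is normal in G iff g H g^{-1} = H for every g in G."""
    H = set(H)
    return all({compose(compose(g, h), inverse(g)) for h in H} == H for g in G)

S3 = set(permutations(range(3)))
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}     # the even permutations, index 2 in S3
H2 = {(0, 1, 2), (1, 0, 2)}                # subgroup generated by one transposition

print(is_normal(A3, S3))   # True  -- a subgroup of index 2 is always normal
print(is_normal(H2, S3))   # False -- conjugation moves the transposition around
```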
### Connection with homomorphisms
If f is a homomorphism from G to another group, then the kernel of f is a normal subgroup of G. Conversely, every normal subgroup H $\triangleleft$ G arises as the kernel of a homomorphism, namely of the projection homomorphism G → G/H defined by mapping g to its coset gH.
### Example
The group T containing all the translations of a space group G is a normal subgroup in G called the translation subgroup of G. The factor group G/T is isomorphic to the point group P of G. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9232276082038879, "perplexity": 456.7288276907167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423903.35/warc/CC-MAIN-20170722062617-20170722082617-00413.warc.gz"} |
https://pennstate.pure.elsevier.com/en/publications/right-division-in-moufang-loops | # Right division in Moufang loops
Maria de Lourdes M. Giuliani, Kenneth W. Johnson
Research output: Contribution to journal › Article › peer-review
1 Scopus citations
## Abstract
If (G, ·) is a group, and the operation (*) is defined by x * y = x · y⁻¹, then by direct verification (G, *) is a quasigroup which satisfies the identity (x * y) * (z * y) = x * z. Conversely, if one starts with a quasigroup satisfying the latter identity the group (G, ·) can be constructed, so that in effect (G, ·) is determined by its right division operation. Here the analogous situation is examined for a Moufang loop. Subtleties arise which are not present in the group case since there is a choice of defining identities and the identities produced by replacing loop multiplication by right division give identities in which loop inverses appear. However, it is possible with further work to obtain an identity in terms of (*) alone. The construction of the Moufang loop from a quasigroup satisfying this identity is significantly more difficult than in the group case, and it was first carried out using the software Prover9. Subsequently a purely algebraic proof of the construction was obtained.
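A small sanity check of the group case described in the abstract (added for illustration, not from the paper): define x * y = x · y⁻¹ in a concrete group and verify the identity (x * y) * (z * y) = x * z.

```python
from itertools import permutations, product

def compose(p, q):                 # group operation: composition of permutations
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def star(x, y):                    # right division: x * y = x . y^{-1}
    return compose(x, inverse(y))

G = list(permutations(range(3)))   # S3 as a test group
assert all(star(star(x, y), star(z, y)) == star(x, z)
           for x, y, z in product(G, repeat=3))
print("(x*y)*(z*y) = x*z holds for every x, y, z in S3")
```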
Original language: English (US) · Pages: 209–215 · Number of pages: 7 · Journal: Commentationes Mathematicae Universitatis Carolinae · Volume: 51 · Issue: 2 · Published: 2010
## All Science Journal Classification (ASJC) codes
• Mathematics(all)
http://math.stackexchange.com/questions/351984/stopping-time-proof | # Stopping time proof
Let $\{X_t, t \ge 0\}$ be a continuous stochastic process and adapted to the filtration $\{\mathcal{F}_t,t\ge 0 \}$ and consider
$$\alpha = \inf\{t, |X_t|>1\},$$ the first time the process $X_t$ leaves the interval $[-1,1]$. Then can you help me to show that $\alpha$ is in fact a stopping time?
-
## 2 Answers
Then can you help me to show that $\alpha$ is in fact a stopping time?
Well, not very easily I am afraid, since $\alpha$ is not always a stopping time for the filtration $(\mathcal F_t)_{t\geqslant0}$ defined by $\mathcal F_t=\sigma(X_s;0\leqslant s\leqslant t)$.
For a counterexample, consider some Bernoulli random variable $U$ such that $P(U=1)=\frac12$ and $P(U=0)=\frac12$ and define the process $(X_t)_{t\geqslant0}$ as follows:
• If $U=1$ then $X_t=t$ for every $t\geqslant0$.
• If $U=0$ then $X_t=t$ for every $0\leqslant t\lt1$ and $X_t=2-t$ for every $t\geqslant1$.
Then $\alpha=\inf\{t;|X_t|\gt1\}$ is $\alpha=1$ if $U=1$ and $\alpha=3$ if $U=0$ hence $\{\alpha\leqslant1\}=\{U=1\}$. For every $s\leqslant1$, the random variable $X_s=s$ is deterministic, hence $\mathcal F_1=\{\varnothing,\Omega\}$ is the trivial sigma-algebra. The event $\{U=1\}$ is not in $\mathcal F_1$ hence $\alpha$ is not a stopping time for the filtration $(\mathcal F_t)_{t\geqslant0}$.
-
Thanks Did, indeed $\alpha$ is not stopping time in this example. – Ron Jul 24 '14 at 17:58
@Ilya, thanks I forgot this. – Ron Jul 24 '14 at 18:24
(+1). It might be worth mentioning that $\alpha$ is a stopping time with respect to the filtration $(\mathcal{F}_{t+})_{t \geq 0}$. – saz Aug 22 '14 at 20:11
@saz Quite so. Thanks. – Did Aug 22 '14 at 23:13
The proof below is not correct. See an example by Did. The flaw in the proof: in fact we have $$\{\alpha>t\} = \bigcup_n\{\alpha \geq t+\frac1n\} = \bigcup_{n}\bigcap_{s\in [0,t+\frac1n]}\{|X_s|\leq 1\}$$ and there appear events $\{|X_s|\leq 1\}$ for $s>t$.
By the definition, a random variable $\tau:\Omega\to [0,\infty]$ is a stopping time if and only if $$\{\tau \leq t\}\in \mathscr F_t$$ for any $t\in [0,\infty)$. We have in your case $$\{\alpha \leq t\} = \Omega\setminus \{\alpha > t\} = \Omega\setminus \bigcap_{s\in [0,t]}\{|X_s|\leq 1\} = \Omega\setminus\bigcap_{s\in \Bbb Q\cap [0,t]}\left\{|X_s|\leq 1\right\} \in \mathscr F_t$$ where we pass to the intersection over rational numbers only since $X$ has continuous trajectories.
-
Thanks a lot for the proof. – Ron Apr 5 '13 at 11:54
@Shaik: you're welcome – Ilya Apr 5 '13 at 11:55
Is it always true that we pass intersection over rational numbers in your proof? If $\{X_t, t \ge 0\}$ is right continuous, then even in that case do we pass intersection over rational numbers? – Ron Apr 17 '13 at 12:46
Sorry but the proof does not work since $\{\alpha > t\}$ is not always $$\bigcap_{s\in [0,t]}\{|X_s|\leq 1\}.$$ – Did Jul 24 '14 at 15:58
@Did: you certainly have a fan :) I'll read your answer now. – Ilya Jul 24 '14 at 16:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9608108401298523, "perplexity": 148.88276594407887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824201.56/warc/CC-MAIN-20160723071024-00111-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://chemwiki.ucdavis.edu/Wikitexts/University_of_California_Davis/UCD_Chem_105/Lab_1%3A_Analog_Electronics | Interesting online CONFCHEM discussion going on right now on the ChemWiki and
greater STEMWiki Hyperlibary project. Come join the discussion.
# Lab 1: Analog Electronics
In modern analytical chemistry, the quantity to be measured, for instance, the intensity of a light passing through a solution, is converted into an electrical signal which is then amplified or modified to operate a device which can visually display the numerical value of the measured quantity. A simple example is a pH meter in which the potential of the glass electrode responds to the concentration (more precisely, the activity) of hydrogen ions in solution. When an observed electrical response is continuous, it is called an analog response.
### The Operational Amplifier
The most important element in analog instrumentation is the operational amplifier. In this experiment, we will examine the properties and uses of operational amplifiers. The operational amplifier is a device with two inputs and one output. It is normally represented by a triangle; the input designated (+) is non-inverting, and that designated (-) inverting (Fig. 1). The output voltage Vo is measured with respect to ground, the common terminal on the power supply. Vs is the voltage of the inverting input (-) with respect to the non-inverting one (+). The fundamental property of this amplifier is that the output voltage Vo is the inverted, amplified value of the voltage Vs; thus, if the gain of the amplifier is A, Vo = -AVs
Figure 1: Schematic of an operational amplifier; the numbers refer to pins on the 741 solid state type.
The power taken from the amplifier is derived from a power supply which usually delivers +12 V and −12 V in the case of solid-state devices. Note that the power supply is normally not shown in circuit diagrams. The following important properties are those of an ideal operational amplifier; it is important to keep them in mind in order to understand the applications:
(a) The gain of the amplifier is infinity; real devices have values of A ranging from 10^4 to 10^8. In the ideal device, a very small value of Vs results in the maximum value of Vo, which is determined by the limits of the power supply (±12 V).
(b) The input impedance is infinity; in real devices, this is in the range 10^5 to 10^13 Ω.
(c) The output impedance is zero; in practice, it is determined by the power supply.
(d) The properties of the device must be independent of time; in practice, real devices produce some noise and drift with respect to time. As a result, the amplifier must be periodically balanced using a separate nulling circuit.
### Apparatus
The apparatus for this experiment consists of:
(a) A protoboard including a 741 operational amplifier for setting up the operational amplifier circuits, connected to a ±12 volt power supply
(b) A variable d.c. power supply
(c) A digital multimeter
(d) A function generator for applying input voltages
(e) An oscilloscope for observing the output voltage when it varies with time
(f) An assortment of resistors, capacitors, and shielded cables with BNC connectors.
For instructions on operation of the oscilloscope and function generator, see the appendix at the end of the experiment. The operational amplifier is a small integrated circuit about 1 cm² in area with 8 numbered tabs in the following configuration. The tabs have the following purpose:
1. offset null
2. inverting input (−)
3. non-inverting input (+)
4. negative (−) terminal of power supply
5. offset null
6. output
7. positive (+) terminal of power supply
8. none
The nulling circuit will not be attached in this experiment, and tabs 1 and 5 will not be used. Remember to define a common or ground terminal on the protoboard and to connect that point to the common terminal on the power supply.
### EXPERIMENTAL ASSIGNMENTS
#### 1. Reactive Circuits
Suppose that we are to measure a signal from a device by connecting the output of some transducer to a data acquisition or display device (such as an oscilloscope or an analog-to-digital converter). Most often we will use a shielded cable, such as typical coaxial "BNC" cable, which has a central conductor surrounded by insulation that is, in turn, wrapped by metal braid and more insulation. The metal braid serves as the circuit common return and, because electromagnetic waves do not penetrate into the enclosed metal objects, it also acts as a shield, greatly reducing noise pick-up by the inner conductor. Since the cable consists of two metal surfaces separated by an insulator, it has stray capacitance (as well as a small inductance). Capacitance (and inductance) in circuits causes frequency-dependent effects because the impedance of a capacitive circuit depends on the frequency of electric signals:
X_C = 1 / (2πfC)    (1)
A simple circuit to allow the examination of the effect of capacitance is depicted below. When charging or discharging a capacitor, the voltages of the input Vin and output Vo,C are not proportional to each other, resulting in a phase shift:
φ = arctan(X_C / R)    (2)
which is also frequency dependent.
##### Circuitry
The series RC circuit illustrates the effects of capacitance on signals of different frequencies. When the resistor and capacitor are connected in series, the total voltage drop across the two components must always equal the instantaneous voltage applied as Vin. Since the capacitive reactance varies with frequency, however, the relative proportion of the voltage drops across the resistor and capacitor will vary. Looking across the resistor (at Vo,R), we have what is referred to as a high-pass filter. Looking across the capacitor (at Vo,C), we have a low-pass filter.
##### Procedure
First build a typical high-pass filter using C = 0.01 µF and R = 10 kΩ. Do not use the breadboard for this circuit. Use the holes in the wooden parts holder to support the circuit. Apply a 5.0 V p/p sinusoidal signal for Vin. Use the function generator as the power supply and the oscilloscope to make measurements of voltage and frequency. (Start at a frequency of ~10 kHz to set the voltage.) Be careful to make sure that the common leads from the O-scope and function generator are always together. Both devices connect their common points to earth ground via some circuitry. Unless these two common points are connected, ground loops can cause spurious behavior during your measurements. Thus, one has to re-wire the circuit a bit differently to measure across the capacitor or resistor. Refer to Figure 4 above. Measure Vo,R and Vo,C at frequencies of 5.0 Hz, 10.0 Hz, 50 Hz, 100 Hz, 500 Hz, 1.00 kHz, 5.00 kHz, 10.0 kHz, and 50.0 kHz. Make sure to check your input amplitude each time you change frequencies, since the signal generator output will vary slightly, especially at low frequencies. Make a semi-log plot of Vo,R vs frequency and another plot using Vo,C vs frequency. Now apply a 5.0 V square wave input and sketch the waveform observed for both Vo,R and Vo,C at frequencies of 50 kHz, 10 kHz, 1 kHz, and 100 Hz. Discuss the results in terms of capacitor reactance and total circuit impedance, pulse width, and RC time constant.
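Before (or after) taking the data, it can help to see what the ideal voltage-divider model predicts at the listed frequencies. The sketch below is an idealized calculation based on equations (1) and (2) only — it ignores generator output impedance, scope loading, and component tolerances:

```python
import math

C = 0.01e-6      # F
R = 10e3         # ohm
V_in = 5.0       # V peak-to-peak

for f in (5.0, 10.0, 50.0, 100.0, 500.0, 1e3, 5e3, 10e3, 50e3):
    Xc = 1.0 / (2.0 * math.pi * f * C)          # capacitive reactance, eq. (1)
    Z = math.hypot(R, Xc)                        # magnitude of the series impedance
    Vo_R = V_in * R / Z                          # high-pass output (across R)
    Vo_C = V_in * Xc / Z                         # low-pass output (across C)
    phase = math.degrees(math.atan(Xc / R))      # phase shift, eq. (2)
    print(f"{f:8.0f} Hz   Vo_R = {Vo_R:4.2f} V   Vo_C = {Vo_C:4.2f} V   phi = {phase:5.1f} deg")
```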
#### 2. THE VOLTAGE AMPLIFIER
##### Circuitry
Now we turn to basic operational amplifier circuits, the voltage amplifier. In order to operate the amplifier in a stable fashion, a feedback path is always provided between the output and inverting input (−). The amplifier then provides a current through the feedback path, if, which maintains the voltage between the inputs, Vs, at zero. A typical operating configuration is shown in Fig. 5. An input voltage Vi supplies an input current ii through the input impedance Zi. Similarly, the feedback current if is determined by the magnitude of the feedback impedance Zf. Since the non-inverting input has been connected to ground, the voltage Vs measured at S is zero. Now, by Kirchhoff's law, the sum of the currents at S must be zero. Since no current flows into the amplifier when it behaves ideally,
if = -ii    (3)
Since if = Vo / Zf, and ii = Vi / Zi, the following relationship holds between the output and input voltages:
Vo = -(Zf / Zi) Vi    (4)
The operational amplifier is now acting as a voltage amplifier; that is to say, the input voltage is multiplied by the ratio Zf / Zi and also inverted in sign. For example, if Zf is a 100 kΩ resistor and Zi a 1 kΩ resistor, the amplification factor is 100. Obviously, the operational amplifier can also be used to attenuate the input voltage when Zf < Zi.
In summary, the following rules should be remembered in analyzing a circuit using an operational amplifier with feedback:
1. No current flows into either input
2. The voltage Vs between the two inputs is zero
3. The input current ii is equal and opposite to the feedback current if at point S.
##### Procedure
In the first application, the impedances, Z, will be simple resistances and the input signals will be DC voltages. Figure 6 shows the breadboard set up as the circuit in Figure 5. Use the variable voltage supply (VVS) to apply -1.0 V for Vi. For Rf = 10 kΩ and 22 kΩ, measure Vo with either the DVM or the oscilloscope, with Ri equal to 4.7, 10, 22, 47 and 100 kΩ. (If you use the oscilloscope, remember to set it to DC.) Plot Vo against Rf / Ri. Discuss the slope and any errors in the measurements. Next, using Ri = 10 kΩ and Rf = 100 kΩ, apply a 0.1 V, 1.0 kHz sinusoidal input using the wave function generator. Increase the frequency of the input signal to 5.0 kHz, 10 kHz, 50 kHz, 100 kHz, 500 kHz and 1.0 MHz. Record the amplitude and phase relationship of the input and output signals. Explain any frequency dependence in terms of the bandwidth product (gain times bandwidth) and reactive impedances. Would this circuit be useful for amplifying 1 MHz signals from a piece of analytical instrumentation, such as a Fourier transform mass spectrometer?
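The expected DC outputs for these resistor combinations follow directly from equation (4); the sketch below assumes an ideal op-amp (infinite gain, no offset) and so is only a prediction to compare against the measured values:

```python
V_i = -1.0                                  # V, applied DC input
for R_f in (10e3, 22e3):                    # feedback resistors, ohm
    for R_i in (4.7e3, 10e3, 22e3, 47e3, 100e3):
        V_o = -(R_f / R_i) * V_i            # eq. (4), ideal inverting amplifier
        print(f"Rf = {R_f/1e3:5.1f} k   Ri = {R_i/1e3:5.1f} k   Vo = {V_o:+.3f} V")
```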
#### 3. DIFFERENTIAL AMPLIFIER
##### Circuitry
Figure 7
The purpose of this amplifier is to provide an output voltage proportional to the weighted difference between V1 and V2. The voltage at T, which is equal to the voltage at S, is given by
VT = VS = i2 Rf = V2 Rf / (Ri + Rf)    (5)
The input current i1 is given by
i1 = (V1 - VS) / Ri = V1/Ri - V2 Rf / (Ri (Ri + Rf))    (6)
The feedback current if is given by
if = (Vo - VS) / Rf    (7)
Since if = -i1, the following equation for Vo is obtained:
Vo = (Rf / Ri)(V2 - V1)    (8)
##### Procedure
Set Ri = Rf = 10 kΩ and V1 = 1.2 V. Measure Vo for V2 = 10, 5, 2, 0, -3, -5, and -7 V. Change Rf to 47 kΩ and repeat with V2 = 3, 2, 1, 0 and -0.5 V. Plot Vo against V2 - V1, and comment on the behavior of this circuit. You should use the voltage divider made from the appropriate resistors and the fixed 12 volt power supply to provide the constant 1.2 V for V1.
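A prediction of the measurements in this procedure, using the ideal difference-amplifier relation Vo = (Rf / Ri)(V2 − V1) from equation (8); note that a real output cannot exceed the ±12 V supply rails, so large predicted values will clip in practice (this sketch is an idealization):

```python
def diff_amp_vo(V1, V2, R_i, R_f):
    """Ideal difference-amplifier output, eq. (8): Vo = (Rf/Ri)(V2 - V1)."""
    return (R_f / R_i) * (V2 - V1)

V1, R_i = 1.2, 10e3
for R_f, V2_values in ((10e3, (10, 5, 2, 0, -3, -5, -7)),
                       (47e3, (3, 2, 1, 0, -0.5))):
    for V2 in V2_values:
        Vo = diff_amp_vo(V1, V2, R_i, R_f)   # real output clips near +/-12 V
        print(f"Rf = {R_f/1e3:.0f} k   V2 = {V2:+5.1f} V   Vo = {Vo:+7.2f} V")
```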
#### 4. THE INTEGRATOR
##### Circuitry
Figure 8
The integrator produces an output voltage that is proportional to the integral of the input voltage with respect to time. When the reset switch is opened, the current through the feedback loop results in a charge Qf being built up on the feedback capacitor Cf:
Qf = Cf Vo = ∫₀ᵗ if dt    (9)
Since if = -ii = -Vi / Ri,
Vo = -(1 / (Ri Cf)) ∫₀ᵗ Vi dt    (10)
If the input voltage does not change with time, then
Vo = -(Vi / (Ri Cf)) t    (11)
where t is the time measured from the moment when the reset switch was opened. It follows that the output voltage varies linearly with time. When the reset switch is closed, the feedback loop contains no impedance and the output voltage drops to zero, that is, to the voltage at S.
##### Procedure
(i) (Note: Rf is not used in this step. Leave this resistor off the circuit. See above diagram.) First, the effect of the integrator circuit on a DC signal will be observed. Use the following values for the circuit: Ri = 1 MΩ, Cf = 1 µF, and Vi = 0.5 V DC. Connect the output to the DVM (Digital Voltage Meter) and observe the change in Vo with respect to time when the reset switch/cable is unplugged. Record the value of Vo at 5-second intervals until it reaches a minimum value (after approximately 30 seconds). Compare the observed behavior of the system with that expected on the basis of equation (11). (ii) To observe the effect of the integrator circuit on an AC signal, use the Wavetech Signal Generator to generate the input voltage signal. The dual-channel oscilloscope is used to monitor both the input and the output voltage signals. To eliminate the integration of any small DC offset voltages, an additional resistor is placed in the feedback loop. Use the following circuit and values: Ri = 10 kΩ, Cf = 0.01 µF, Rf = 100 kΩ. First set Vi to 1 V peak-to-peak at 10 kHz using the sinusoidal waveform. The input signal should appear as shown in the diagram below. Draw in the output voltage signal which you observe on the oscilloscope, paying close attention to any phase shift between Vi and Vo. Then switch Vi to the square wave function, as shown in the diagram below. Draw in the observed output signal. Last, repeat the measurement using the triangular waveform. Carefully estimate and record any phase shift between the input and output signals. Why is Rf chosen to be so much larger than Ri?
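For part (i), equation (11) predicts a linear ramp at −Vi/(RiCf) volts per second until the output saturates near the negative supply rail. The sketch below (an idealization; the −12 V clamp is only a crude stand-in for real saturation) gives numbers to compare with the 5-second readings:

```python
R_i = 1e6        # ohm
C_f = 1e-6       # F
V_i = 0.5        # V, DC input

rate = -V_i / (R_i * C_f)                 # eq. (11): -0.5 V/s with these values
for t in range(0, 35, 5):
    Vo = max(rate * t, -12.0)             # crude saturation at the negative rail
    print(f"t = {t:2d} s   Vo = {Vo:6.2f} V")
```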
### Lab Report
Summarize the results in your lab book in an orderly fashion. Answer all questions at the appropriate point in the report, and discuss errors. How to write your Lab Report - See the general outline, as described on pages 1-3 of this lab manual - Your tables should include both calculated and experimental values for all sections.
https://stats.stackexchange.com/questions/340557/ctep-is-generally-greater-than-varp-frac12-cdot100-p-p-being-a | # $CTE(p)$ is generally greater than $VaR(p+\frac{1}{2}\cdot(100-p))$, $p$ being a percentile
Let's assume we are in the insurance business and the values we are observing are losses.
So there is a general statement that says the Conditional Tail Expectation at percentile $p$ is usually greater than the Value at Risk (percentile value) of $p+\frac{1}{2}(100-p)$
In other words, $CTE(p)> VaR(p+\frac{1}{2}(100-p))$
For example, let $p$ be 90%. Then this statement becomes $CTE(90)>VaR(95)$
I have performed quite a few simulations based on an exponential distribution and have not found a case for which this is false.
Thus, I am wondering:
1. What is the basis for this statement?
2. When will this be false (Cases where there aren't many values beyond $p$)?
• Value at RIsk has different (but related) definitions in banking and insurance. Are you referring to the Tail VaR common in insurance applications? (It might be even less ambiguous to actually give the integrals for Var and CTE) – Glen_b Apr 14 '18 at 23:14
• @Glen_b Yes you are right. I am referring to Value at Risk from banking and insurance. Can you enlighten me what is different in the definition of Value at Risk for banking and insurance? – user101998 Apr 15 '18 at 2:46
• Value at risk for a bank typically relates to underperforming assets, but for an insurer it's larger than expected liabilities. The bank is looking at the left tail of the asset performance while the insurer is looking at the right tail of the aggregate claims distribution. This difference is why I was asking which one you were doing, since it would affect the definitions required to try to answer the question. – Glen_b Apr 15 '18 at 11:49
• Great observation. So if these were losses from an insurer's perspective, this question becomes clear right? Let me edit it first. – user101998 Apr 16 '18 at 2:20
## 1 Answer
If we define the Conditional Tail Expectation at $t$ as $E[X|X>t],$ then $\mathrm{CTE}(90)$ will be $$E[X|X>x_{90}],$$ where $x_{90}$ is the 90th percentile of $X.$ For the exponential distribution, this expectation is very simple: $$E[X|X>x_{90}] = x_{90} + \beta,$$ where $\beta$ is the scale parameter (and also the mean). This is from the memoryless property.
Also for an exponential distribution, we know the quantiles are given by $x_{p} = -\beta \ \mathrm{ln} \left(1-p \right),$ where $p$ is in decimal form.
So we have $$E[X|X>x_{90}]=x_{90} + \beta=-\beta \ \mathrm{ln} \left(1-0.9 \right)+\beta \approx 3.303 \beta$$
Now using the quantile formula, we have for Value-at-Risk that $$\mathrm{VaR}(95)=-\beta \ \mathrm{ln} \left(1-0.95 \right) \approx 2.996 \beta$$
So in the case of the exponential distribution the claim of $$\mathrm{CTE} \left( 90 \right) > \mathrm{VaR} \left(95 \right)$$ will always be true.
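A quick Monte Carlo check of these closed-form values (my own addition, with $\beta = 1$ as an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
x = rng.exponential(beta, size=1_000_000)   # simulated exponential losses

var90 = np.quantile(x, 0.90)
var95 = np.quantile(x, 0.95)
cte90 = x[x > var90].mean()                 # empirical E[X | X > x_90]

print("VaR(95) ~", round(var95, 3), " (theory", round(2.996 * beta, 3), ")")
print("CTE(90) ~", round(cte90, 3), " (theory", round(3.303 * beta, 3), ")")
print("CTE(90) > VaR(95):", cte90 > var95)
```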
• Thanks, but there is no way in general to assert the truth value of this statement? – user101998 Apr 17 '18 at 15:45
• It is not true in general. Suppose $X$ is uniform on $[0,1].$ Then the two quantities are equal. Suppose the density is $f(x)=2x$ on $[0,1].$ Then you will find that the CTE is slightly smaller than the VaR. – soakley Apr 17 '18 at 22:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8816420435905457, "perplexity": 310.7519524103913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141185851.16/warc/CC-MAIN-20201126001926-20201126031926-00323.warc.gz"} |
https://www.physicsforums.com/threads/magnetic-force-problem.70782/ | # Homework Help: Magnetic Force Problem
1. Apr 9, 2005
### bbbbbev
Hi. Ok, here's the problem:
An electron that has velocity v = (3.6 × 10^6 m/s) i + (3.7 × 10^6 m/s) j moves through a magnetic field B = (0.03 T) i - (0.15 T) j.
(a) Find the force on the electron.
I know how to find the force from scalar numbers (using the equation F_mag = q x v x Bsin(phi)), but I can't figure out how to do it with vectors. I know that the answer is going to be in the "k" direction, but I don't understand how to get a k from an i and a j, and I can't find how to do it in the book or on any website.
I tried finding the force in the i direction and then the force in the j direction and doing vector addition, but that didn't work because the resultant vector is not in the k direction. I guess the real problem is that I don't know how to add j and i vectors to get a k vector. Could someone please help? Thanks alot! Beverly
2. Apr 9, 2005
### quasar987
Howdy Beverly. I'm sure you have seen written somewhere the magnetic force in terms of vector as
$$F_{mag} = q(\vec{v} \times \vec{B})$$
This means that the vector force has a magnitude given by vBsinO (like you did) and a direction given by the right hand rule.
Learn about the right hand rule here.
Last edited: Apr 9, 2005
3. Apr 9, 2005
### bbbbbev
Oh, thanks. I think I get it. Can I just find force in the i direction and then find force in the j direction and then multiply them together to get the magnitude in the k direction? I tried doing this:
F_i = q x v_i x B_i
F_i = (1.6E-19C)(3.6e6m/s)(0.03T)
F_i = 1.728E-14 N
F_j = q x v_j x B_j
F_j = (1.6E-19C)(3.7e6m/s)(-0.15T)
F_j = -8.88E-14 N
Then I multiplied F_i x F_j to get F_k, but that answer was incorrect. Am I understanding the right hand rule thing wrong??
Beverly
4. Apr 9, 2005
### jdavel
bbbbbev,
No, you can't do it that way. Go back to the equation you started with:
F =qvBsin(phi) where phi is the angle between the directions of v and B.
Can you figure out what phi is?
5. Apr 9, 2005
### quasar987
The scalar force IS the magnitude of the vector force. The right hand rule only adds to it by telling you the direction of the force based on the directions of the vectors v and B.
6. Apr 11, 2005
### bbbbbev
Thanks! I got it. I figured out phi and just used that equation. Thanks guys. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429649114608765, "perplexity": 713.6186895063763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945940.14/warc/CC-MAIN-20180423085920-20180423105920-00225.warc.gz"} |
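For anyone checking the numbers later: a short sketch (not from the original thread) that evaluates F = qv × B for the values in the problem, remembering that the electron's charge is negative; only the k-component survives, as discussed above.

```python
import numpy as np

q = -1.6e-19                            # C, electron charge (negative)
v = np.array([3.6e6, 3.7e6, 0.0])       # m/s
B = np.array([0.03, -0.15, 0.0])        # T

F = q * np.cross(v, B)                  # magnetic part of the Lorentz force
print(F)                                # ~ [0, 0, 1.0e-13] N, i.e. along +k
```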
https://brilliant.org/problems/composite-dilemma/ | # Composite Dilemma
Number Theory
Which of the following values is a possible value of $$n$$ such that $$n, n+1, n+2,\ldots ,n+200$$ are all composite?
Notation: $$!$$ denotes the factorial notation. For example, $$8! = 1\times2\times3\times\cdots\times8$$.
https://cs.stackexchange.com/questions/11875/solving-a-problem-related-to-convolution | # Solving a problem related to convolution
I have this confusion related to solving this problem
You’ve been working with some physicists who need to study, as part of their experimental design, the interactions among large numbers of very small charged particles. Basically, their setup works as follows. They have an inert lattice structure, and they use this for placing charged particles at regular spacing along a straight line. Thus we can model their structure as consisting of the points $$\{1, 2, 3, \cdots , n\}$$ on the real line; and at each of these points $$j$$, they have a particle with charge $$q_j$$. (Each charge can be either positive or negative.)
They want to study the total force on each particle, by measuring it and then comparing it to a computational prediction. This computational part is where they need your help. The total net force on particle $$j$$, by Coulomb’s Law, is equal to
$$F_j = \sum_{i < j}\frac{Cq_iq_j}{(j-i)^2} - \sum_{i > j}\frac{Cq_iq_j}{(j-i)^2}$$
They’ve written the following simple program to compute $$F_j$$ for all $$j$$:
For j = 1, 2, ..., n
    Initialize F_j to 0
    For i = 1, 2, ..., n
        If i < j then
            Add Cq_iq_j / (j - i)^2 to F_j
        Elseif i > j then
            Subtract Cq_iq_j / (j - i)^2 from F_j
        Endif
    Endfor
    Output F_j
Endfor
It’s not hard to analyze the running time of this program: each invocation of the inner loop, over i, takes $$O(n)$$ time, and this inner loop is invoked $$O(n)$$ times total, so the overall running time is $$O(n^2)$$.
The trouble is, for the large values of $$n$$ they’re working with, the program takes several minutes to run. On the other hand, their experimental setup is optimized so that they can throw down $$n$$ particles, perform the measurements, and be ready to handle n more particles within a few seconds. So they’d really like it if there were a way to compute all the forces $$F_j$$ much more quickly, so as to keep up with the rate of the experiment.
Help them out by designing an algorithm that computes all the forces $$F_j$$ in $$O(n \log n)$$ time.
I am sure that this problem is solved by convolution which takes time $$O(n \log n)$$. However, I am not being able to proceed and see how it's converted to a problem related to convolution. Any suggestions?
• if this is a class exercise [apparently] it would be more polite to say so & give more details eg specific class, book, etc. – vzn Jun 10 '13 at 3:04
• This problem appears in J. Kleinberg, E. Tardos, Algorithm Design, Addison Wesley (2006), Divide and Conquer chapter as one of the excercises. – 89f3a1c Jul 22 '19 at 15:36
Hint: Consider the two sequences $q_1,\ldots,q_n$ and $\ldots,-1/9,-1/4,-1,0,1,1/4,1/9,\ldots$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8969078063964844, "perplexity": 334.4552285610758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885059.50/warc/CC-MAIN-20201024223210-20201025013210-00557.warc.gz"} |
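One way to turn that hint into an $O(n \log n)$ computation (a sketch of my own, using numpy; the test charges are arbitrary): the inner sum in $F_j$ is a convolution of the charge sequence with the signed kernel $\pm 1/d^2$, which zero-padded FFTs evaluate in $O(n \log n)$.

```python
import numpy as np

def forces(q, C=1.0):
    """All net forces F_j in O(n log n) via FFT-based convolution.
    Kernel k(d) = 1/d^2 for d > 0, -1/d^2 for d < 0, 0 for d = 0, so that
    F_j = C * q_j * sum_i q_i * k(j - i)."""
    q = np.asarray(q, dtype=float)
    n = len(q)
    d = np.arange(-(n - 1), n)                       # offsets j - i
    kernel = np.zeros(2 * n - 1)
    kernel[d > 0] = 1.0 / d[d > 0] ** 2
    kernel[d < 0] = -1.0 / d[d < 0] ** 2
    size = 4 * n                                      # >= length of the linear convolution
    conv = np.fft.irfft(np.fft.rfft(q, size) * np.fft.rfft(kernel, size), size)
    s = conv[n - 1: 2 * n - 1]                        # entry j holds sum_i q_i k(j - i)
    return C * q * s

q = np.array([1.0, -2.0, 3.0, -4.0, 5.0])
print(forces(q))     # matches the O(n^2) double loop, but in O(n log n)
```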
https://www.physicsforums.com/threads/complex-number-problem.313239/ | # Complex number problem
1. May 11, 2009
### aks_sky
Calculate (-1) ^ i
I tried using the formula x^ni = cos (ln (x)^n) + i sin (ln (x)^n)
but I cannot solve it. I used MATLAB to get this answer: 0.0432139182637723 + 0i,
but I don't know how to solve it with steps. Can I get some assistance please?
thank you.
2. May 11, 2009
### jbunniii
Do you know how to write -1 in polar form?
3. May 11, 2009
### aks_sky
yup the polar form will just be cos (theta) + i sin (theta) and the modulus here is 1.. correct?
4. May 11, 2009
### jbunniii
Well, that's a particular complex number, but it's actually expressed in rectangular form x + iy, where x = cos(theta) and y = sin(theta).
Do you know how to write -1 in terms of "e", i.e., do you know what a complex exponential is? It would help to know what background can be assumed for this exercise.
5. May 11, 2009
### aks_sky
well what i did was... x = ln (-1)^i
which is.. i ln (-1)
then in terms of "e" i will get... e ^ i ln (-1)
which gives me cos (ln (-1)) + i sin (ln (-1))
but i cant go any further to get the answer
6. May 11, 2009
### jbunniii
What I was trying to get at is, have you been exposed to Euler's famous formula:
$$e^{i\pi} = -1$$
If so, then you can easily use this to get the answer you want.
7. May 11, 2009
### aks_sky
Yup, I know that formula... but how do I use it here? I tried to use that formula too but it didn't work. Maybe I did something wrong?
8. May 11, 2009
### jbunniii
Well, you're trying to find (-1)^i, right? So what is the natural thing do to both sides of Euler's formula?
9. May 11, 2009
### aks_sky
um not sure exactly.
10. May 11, 2009
### jbunniii
Oh, come on!
What operation do you do to -1 to obtain (-1)^i? (This isn't a trick question!) Just do that operation to both sides of Euler!
11. May 11, 2009
### Matterwave
What jbunnii is trying to say is:
$$(-1) = (e^{i\pi})$$
$$(-1)^i = ...$$
Use basic algebra here.
12. May 11, 2009
### aks_sky
we take logs of both sides
13. May 11, 2009
### aks_sky
Ohhh, yup, I get what you're asking.
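For anyone who wants a numerical cross-check (my addition): the principal value is $(-1)^i = (e^{i\pi})^i = e^{-\pi}$, which matches the MATLAB number quoted in the first post.

```python
import cmath, math

print(math.exp(-math.pi))              # 0.04321391826377224  = e^(-pi)
print((-1) ** 1j)                      # (0.0432139182637722...+0j), Python's principal value
print(cmath.exp(1j * cmath.pi * 1j))   # same value computed as (e^{i*pi})^i = e^{i*pi*i}
```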
https://sta.co/portal/mod/glossary/showentry.php?eid=330
#### Descendant
The degree of the ecliptic (zodiac) that meets the western horizon, and which denotes the 7th house cusp. So called because planets at this point descend beneath the horizon and are no longer visible to the naked eye.
http://math.stackexchange.com/questions/253273/example-of-a-compact-set-whose-set-of-limit-points-is-countably-infinite | # Example of a compact set whose set of limit points is countably infinite.
I need to find an example of a compact set whose set of limit points is countably infinite.
HINT: Start with the set $A=\{0\}\cup\left\{\frac1n:n\in\Bbb Z^+\right\}$; that’s a compact set with one limit point. Now add to $A$ a sequence converging to each of the points $\frac1n$. You can do this in $\Bbb R$, but it’s a little easier to describe and visualize if you first embed $A$ in $\Bbb R^2$ and then let the new sequence converge vertically to the points $\langle\frac1n,0\rangle$.
Excellent, thanks a lot. – Jorge Dec 7 '12 at 19:51
@Jorge: You’re welcome. – Brian M. Scott Dec 7 '12 at 19:55
Hint: In $\mathbb{R}$ we want the set to be closed and bounded. Make a set whose limit points are, say, $0$ and $1,1/2,1/3,1/4,1/5,\dots$.
To make a set with the right limit points, you might start with making a set whose only limit points are $1$, $2$, $3$, and so on. (This is of course not compact.) Then produce the desired set by using the reciprocal, not forgetting the reciprocal of "$\infty$."
Excellent, thank you. – Jorge Dec 7 '12 at 19:50
$$A = \{(0,0)\} \cup \{ (1/m, 1/n),(1/m,0),(0, 1/n) : m , n \in \mathbb{N} \}$$ The set of limit points of $A$ is $\{(0,0)\} \cup \{(1/m,0) : m \in \mathbb{N} \} \cup \{ (0, 1/n) : n \in \mathbb{N} \}$, which is countably infinite, and $A$ is compact because it is bounded and closed. (The point $(0,0)$ must be included in $A$ for it to be closed, since $(1/m,0)\to(0,0)$.)
https://dcc.ligo.org/LIGO-P1800333/public | # Searches for Continuous Gravitational Waves from Fifteen Supernova Remnants and Fomalhaut b with Advanced LIGO
Document #:
LIGO-P1800333-v5
Document type:
P - Publications
Abstract:
We describe directed searches for continuous gravitational waves from sixteen well localized candidate neutron stars assuming none of the stars has a binary companion. The searches were directed toward fifteen supernova remnants and Fomalhaut~b, an extrasolar planet candidate which has been suggested to be a nearby old neutron star. Each search covered a broad band of frequencies and first and second time derivatives. After coherently integrating spans of data from the first Advanced LIGO observing run of 3.5--53.7~days per search, applying data-based vetoes and discounting known instrumental artifacts, we found no astrophysical signals. We set upper limits on intrinsic gravitational wave strain as strict as $1\times10^{-25}$, on fiducial neutron star ellipticity as strict as $2\times10^{-9}$, and on fiducial $r$-mode amplitude as strict as $3\times10^{-8}$.
Notes and Changes:
v5: Changes in response to referee comments.
Added data files for the figures. Due to detector calibration uncertainties and the statistical nature of the upper limits, these limits are much more uncertain than the precision of the numbers in the files. Caution is therefore advised in performing any direct comparison against the data values made available on this page. Repeated values in the second columns of all data files indicate bands where no upper limit was set, for reasons described in the text, and the (nonphysical) value in the second column was chosen for visibility in each plot.
Publication Information:
https://arxiv.org/abs/1812.11656
https://stats.stackexchange.com/questions/319390/deciding-which-points-to-be-used-for-density-estimation | # Deciding which points to be used for density estimation
Given a set of data with their frequency in a data set, like: $$x_1, f_{x_1}, x_2, f_{x_2}, x_3, f_{x_3}, \cdots, x_n, f_{x_n}$$, where $f_i$ is the frequency of value $x_i$ in a data set $\{x_i\}$.The definition of frequency here is: $$f_i =\frac{n_i}{N} =\frac{n_i}{n_1+n_2+n_3+\cdots}$$ If we only know a limited number of data values and their frequency, how can we estimate the density distribution of this dataset?
An example question:
If our observation is $x_1, f_{x_1}, x_5, f_{x_5}, x_8, f_{x_8}$. How can we estimate a density function $\hat{f}$, such that $\hat{f}(x_i)$ is as close to the ground truth $f_i$ as possible.
Further, if we have the ability to decide which points to be observed, how can we decide which points to observe to get the best density estimation? Example: we can observe $5$ data value and their frequency, to get the best density estimation, which $5$ points should we observe?
I search the internet and didn't find much useful resources. I do not need a precise solution (I know my question is not well defined mathematically), can you just give me some insight about which field I should look into to solve this problem, or is there any similar problems that have been discussed before?
• Thank you, kernel density estimation solves the first part of my question, but when we can choose which points to observe, how do we make the decision? Say we know all the data frequency in a data set $f_1,f_2,...f_n$, but we can only observe $k$ points in the dataset to do the density estimation, what should be our strategy? Should we choose the data points with the highest frequency? – llxxee Dec 18 '17 at 16:38
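A minimal sketch of the kernel-density route mentioned in the comment above, assuming the observed values and their relative frequencies sit in two NumPy arrays (the numbers below are made up for illustration). SciPy's gaussian_kde accepts per-point weights in version 1.2 and later, which is an assumption worth checking against the installed version:

import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical observed values x_i and their frequencies f_i (summing to 1).
values = np.array([0.2, 1.0, 1.7, 2.5, 3.1])
freqs  = np.array([0.10, 0.35, 0.25, 0.20, 0.10])

# Weighted Gaussian KDE: each observed point contributes in proportion to its frequency.
kde = gaussian_kde(values, weights=freqs)

# Evaluate the estimated density on a grid around the data.
grid = np.linspace(values.min() - 1.0, values.max() + 1.0, 200)
density = kde(grid)
print(density.max(), grid[density.argmax()])

Which points are worth observing (the second part of the question) is a separate design problem; the sketch only covers turning a handful of (value, frequency) pairs into a smooth density estimate.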
http://tex.stackexchange.com/questions/63852/question-mark-instead-of-citation-number | # Question mark instead of citation number
I've browsed the forums and found a number of posts that have addressed this issue, but none of the solutions seem to work for me. I have the following script that I just copied from the bibtex home page to get familiar with it. Instead of the citation number I get a question mark. I compile using Latex+Bibtex+Latex+Latex+PDFLatex+ViewPDF just as has been previously suggested and the problem persists.
\documentclass[11pt]{article}
\usepackage{cite}
\begin{document}
\title{My Article}
\author{Nobody Jr.}
\date{Today}
\maketitle
Blablabla said Nobody ~\cite{Nobody06}.
\bibliography{mybib}{}
\bibliographystyle{plain}
\end{document}
My bibliography (Bib.bbl)
@misc{ Nobody06,
author = "Nobody Jr",
title = "My Article",
year = "2006" }
Looking at previous posts one thing that is concerning is that my .bbl looks empty as shown below. Further, I don't have a .blg
\begin{thebibliography}{}
\end{thebibliography}
not addressing the question itself, ..., but if the ~ before \cite is intended to keep the cross-reference from being broken to a new line, the input shown -- Nobody ~\cite -- won't do that. the space character preceding the ~ will (1) happily allow a line break, and (2) double the width of the space before the xref when it's printed. should be Nobody~\cite to have the no-break effect. – barbara beeton Jul 19 '12 at 17:57
Since this question comes up so often, I thought I'd try to supplement ArTourter's correct answer with a more general comment.
What does a question mark mean
It means that somewhere along the line the combination of LaTeX and BibTeX has failed to find and format the citation data you need for the citation: LaTeX can see you want to cite something, but doesn't know how to do so.
Missing citations show up differently in biblatex
If you are using biblatex you will not see a question mark, but instead you will see your citation key in bold. For example, if you have an item in your .bib file with the key Jones1999 you will see Jones1999 in your PDF.
How does this all work
To work out what's happening, you need to understand how the process is (supposed to) work. Imagine LaTeX and BibTeX as two separate people. LaTeX is a typesetter. BibTeX is an archivist. Roughly the process is supposed to run as follows:
1. LaTeX (the typesetter) reads the manuscript through and gives three pieces of information to BibTeX (the archivist): a list of the references that need to be cited, extracted from the \cite commands; a note of a file where those references can be found, extracted from the \bibliography command; a note of the sort of formatting required, extracted from the \bibliographystyle command.
2. BibTeX then goes off, looks up the data in the file it has been told to read, consults a file that tells it how to format the data, and generates a new file containing that data in a form that has been organised so that LaTeX can use it (the .bbl file).
3. LaTeX then has to take that data and typeset the document - and may indeed need more than one 'run' to do so properly (because there may be internal relationships within the data, or with the rest of the manuscript, which BibTeX neither knows or cares about, but which matter for typesetting.
Your question-mark tells you that something has gone wrong with this process.
More biblatex and biber notes:
• If you are using biblatex, the style information is located in the options passed to the biblatex package, and the raw data is in the \addbibresource command.
• If you are using biblatex, the stage described as BibTeX in this answer is generally replaced with a different, and more cunning, archivist, Biber.
What to do
The first thing to do is to make sure that you have actually gone through the whole process at least once: that is why, to deal with any new citation, you will always need at least a LaTeX run (to prepare the information that needs to be handed to BibTeX), one BibTeX run, and one or more subsequent LaTeX runs. So first, make sure you have done that.
If you still have problems, then something has gone wrong somewhere. And it's nearly always something about the flow of information.
Your first port of call is the BibTeX log (.blg) file. That will usually give you the information you need to diagnose the problem. So open that file (which will be called blah.blg where 'blah' is the name of your source file).
In a roughly logical order:
1. BibTeX did not find the style file. That's the file that tells it how to format references. In this case you will have an error, and BibTeX will complain I couldn't open the style file badstyle.bst. If you are trying to use a standard style, that's almost certainly because you have not spelled the style correctly in your \bibliographystyle command - so go and check that. If you are trying to use a non-standard style, it's probably because you've put it somewhere TeX can't find it. (For testing purposes, I find, it's wise to remember that it will always be found if it's in the same directory as your source file; but if you are installing using the facilities of your TeX system -- as an inexperienced person should be - you are unlikely to get that problem.)
2. BibTeX did not find the database file. That's the .bib file containing the data. In that case the log file will say I couldn't open database file badfile.bib, and will then warn you that it didn't find database files. The cure is the same: go back and check you have spelled the filename correctly, and that it is somewhere TeX can find it (if in doubt, put it in the folder with your source file).
3. BibTeX found the file, but it doesn't contain citation data for the thing you are trying cite. Now you will just get, in the log-file: Warning--I didn't find a database entry for "yourcitation". That's what happened to you. You might think that you should have got a type 2 error: but you didn't because as it happens there is a file called mybib.bib hanging around on the system (as kpsewhich mybib.bib will reveal) -- so BibTeX found where it was supposed to look, but couldn't find the data it needed there. But essentially the order of diagnosis is the same: check you have the right file name in your \bibliography command. If that's all right, then there is something wrong with that file, or with your citation command. The most likely error here is that you've either forgotten to include the data in your .bib file, or you have more than one .bib file that you use and you've sent BibTeX to the wrong one, or you've mis-spelled the citation label (e.g. you've done \cite{nobdoy06} for \cite{nobody06}.
4. There's something wrong with the formatting of your entry in the .bib file. That's not uncommon: it's easy (for instance) to forget a comma. In that case you should have errors from BibTeX, and in particular something like I was expecting a ',' or a '}' and you will be told that it was skipping whatever remains of this entry. Whether that actually stops any citation being produced may depend on the error; I think BibTeX usually manages to produce something -- but biblatex can get totally stumped. Anyway, check and correct the particular entry.
biblatex and biber notes
If you are using biblatex, then generally you will also be using the Biber program instead of BiBTeX program to process your bibliography, but the same general principles apply.
Summary
The order of diagnosis is as follows:
1. Have I run LaTex, BibTeX (or Biber), LaTeX, LaTeX?
2. Look at the .blg file, which will help mightily in answering the following questions.
3. Has BibTeX/Biber found my style file? (Check you have a valid \bibliographystyle command and that there is a .bst with the same name where it can be found.)
4. Has Bibtex/Biber found my database? (Check the \bibliography names it correctly and it is able to be found.)
5. Has it found the right database?
6. Does the database contain an entry which matches the citation I have actually typed?
7. Is that entry valid?
8. Finally: When you have changed something, don't forget that you will need to go through the same LaTeX -- BibTeX (or Biber) -- LaTeX -- LaTeX run all over again to get it straight. (That's not actually quite true: but until you have more of a feel for the process it's a safe assumption to make.)
Excellent answer. :) – egreg Jul 19 '12 at 9:02
You say it in the first sentence, this question comes very often and now I know where to send people looking for an answer... – matth Jul 19 '12 at 9:05
Thanks for the very insightful post. – user16747 Jul 19 '12 at 19:53
fwiw, i'll scrape the answer to improve the faq answer on the same topic (the current faq answer doesn't even touch on biblatex/biber, since i've never used either...). my reuse doesn't add much to the coverage of your work, but it helps me -- ok? – wasteofspace Oct 26 at 18:49
The syntax for the \bibliography{} command is \bibliography{file1,file2,...}
in your case you seem to be calling a file called mybib when your bib file is in fact Bib.
Also note that the bibtex file should have the .bib extension. The .bbl file will be created by bibtex.
You should therefore rename your bibliography file mybib.bib and get rid of the extra {} in the \bibliography{mybib}{} call, and then recompile. This should fix your problem.
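For completeness, a corrected version of the original minimal example, assuming the bibliography file has been renamed mybib.bib and sits next to the .tex file:

\documentclass[11pt]{article}
\usepackage{cite}
\begin{document}
\title{My Article}
\author{Nobody Jr.}
\date{Today}
\maketitle
Blablabla said Nobody~\cite{Nobody06}.
\bibliographystyle{plain}
\bibliography{mybib}
\end{document}

Run latex, then bibtex, then latex twice more, and the question mark should be replaced by the citation number.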
http://www.reference.com/browse/wiki/Normal_bundle
Normal bundle
In differential geometry, a field of mathematics, a normal bundle is a particular kind of vector bundle, complementary to the tangent bundle, and coming from an embedding (or immersion).
Definition
Riemannian manifold
Let $(M,g)$ be a Riemannian manifold, and $S \subset M$ a Riemannian submanifold. Define, for a given $p \in S$, a vector $n \in \mathrm{T}_p M$ to be normal to $S$ whenever $g(n,v)=0$ for all $v \in \mathrm{T}_p S$ (so that $n$ is orthogonal to $\mathrm{T}_p S$). The set $\mathrm{N}_p S$ of all such $n$ is then called the normal space to $S$ at $p$.
Just as the total space of the tangent bundle to a manifold is constructed from all tangent spaces to the manifold, the total space of the normal bundle $\mathrm{N}S$ to $S$ is defined as
$$\mathrm{N}S := \coprod_{p \in S} \mathrm{N}_p S .$$
The conormal bundle is defined as the dual bundle to the normal bundle. It can be realised naturally as a sub-bundle of the cotangent bundle.
General definition
More abstractly, given an immersion $i\colon N \to M$ (for instance an embedding), one can define a normal bundle of $N$ in $M$ by, at each point of $N$, taking the quotient space of the tangent space on $M$ by the tangent space on $N$. For a Riemannian manifold one can identify this quotient with the orthogonal complement, but in general one cannot (such a choice is equivalent to a section of the projection $V \to V/W$).
Thus the normal bundle is in general a quotient of the tangent bundle of the ambient space restricted to the subspace.
Formally, the normal bundle to $N$ in $M$ is a quotient bundle of the tangent bundle on $M$: one has the short exact sequence of vector bundles on $N$:
$$0 \to TN \to TM\vert_{i(N)} \to T_{M/N} := TM\vert_{i(N)} / TN \to 0$$
where $TM\vert_{i(N)}$ is the restriction of the tangent bundle on $M$ to $N$ (properly, the pullback $i^*TM$ of the tangent bundle on $M$ to a vector bundle on $N$ via the map $i$).
Stable normal bundle
Abstract manifolds have a canonical tangent bundle, but do not have a normal bundle: only an embedding (or immersion) of a manifold in another yields a normal bundle. However, since every compact manifold can be embedded in $\mathbf{R}^N$, by the Whitney embedding theorem, every manifold admits a normal bundle, given such an embedding.
There is in general no natural choice of embedding, but for a given $M$, any two embeddings in $\mathbf{R}^N$ for sufficiently large $N$ are regular homotopic, and hence induce the same normal bundle. The resulting class of normal bundles (it is a class of bundles and not a specific bundle because $N$ could vary) is called the stable normal bundle.
Dual to tangent bundle
The normal bundle is dual to the tangent bundle in the sense of K-theory: by the above short exact sequence,
$$[TN] + [T_{M/N}] = [TM]$$
in the Grothendieck group. In case of an immersion in $\mathbf{R}^N$, the tangent bundle of the ambient space is trivial (since $\mathbf{R}^N$ is contractible, hence parallelizable), so $[TN] + [T_{M/N}] = 0$, and thus $[T_{M/N}] = -[TN]$.
This is useful in the computation of characteristic classes, and allows one to prove lower bounds on immersability and embeddability of manifolds in Euclidean space.
http://mathhelpforum.com/calculus/158260-hyperplane.html | ## Hyperplane
Prove that a hyperplane $H \subset \mathbb{R}^n$ and its associated half-spaces $H^+$ and $H^-$ are convex sets.
https://fahmyalhafidz.com/dzqurt/ | # 3 + $$\sqrt{2}$$ is a/an __________ number. A. natural B. irrational
3 + $$\sqrt{2}$$ is a/an __________ number.
A. natural
B. irrational
C. rational
D. whole
The explanation: $$\sqrt{2}$$ = 1.414213…
Since the decimal expansion of $$\sqrt{2}$$ is non-terminating and non-recurring, the result of 3 + $$\sqrt{2}$$ is also non-terminating and non-recurring. Equivalently, if 3 + $$\sqrt{2}$$ were rational, then $$\sqrt{2}$$ = (3 + $$\sqrt{2}$$) - 3 would be rational, which it is not. So we can say that 3 + $$\sqrt{2}$$ is an irrational number (option B).
https://www.physicsforums.com/threads/complex-fourier-series-and-phase-spectra.102273/ | # Complex Fourier Series and Phase Spectra
1. Dec 1, 2005
### mathwurkz
Please check my solution and I need help on understanding the second part of the question.
Q:Obtain the complex form of the fourier series of the sawtooth function.
$$f(t) = \frac{2t}{T} \ \ \ 0 < t < 2T\\$$
So if the period is 2l = 2T then l = T
$$\\ c_n = \frac{1}{2l} \int_{-l}^{l} f(x) e^{-in\pi x / l} dx \\ c_n = \frac{1}{2T} \int_{-T}^{T} \frac{2t}{T} e^{-in\pi t / T} dt = \frac{1}{T^2} \int_{-T}^{T} te^{-in\pi t / T} dt \\$$
Then I used integration by parts.
$$\\ u = t \ \ \ du = dt\ \ \ dv = e^{-in\pi t / T} dt \ \ \ v = -\frac{T}{in \pi }e^{-in\pi t / T}\\$$
And here we go...
$$\\ = \frac{1}{T^2}\left[-t\frac{T}{in\pi }e^{-in\pi t / T}|_{-T}^{T} + \frac{T}{in\pi } \int_{-T}^{T} e^{-in\pi t / T} dt\right]\\$$
I skip a few steps too much nitty gritty and then I arrive at...
$$\\ \frac{1}{in\pi } \left[ \left(-e^{-in\pi } - e^{in\pi }\right) - \frac{1}{in \pi} \left( e^{-in\pi } - e^{in\pi }\right)\right]\\$$
and since I know:
$$\\ e^{\pm iat} = \cos{at} \pm i \sin{at} \\$$
then it becomes
$$\frac{1}{in\pi } \left[ \left(-\cos{n\pi } - \cos{n\pi }\right) - \frac{1}{in \pi} \left( \cos{n\pi } - \cos{n\pi }\right)\right]\\$$
and the end result is...
$$c_n = -\frac{2(-1)^n}{in\pi }\\$$
and the fourier series is...
$$f(t) = \sum_{n=-\infty}^{\infty} -\frac{2(-1)^n}{in\pi } e^{in\pi t / T}$$
The second part of the question, the one I do not understand, asks to find and plot the discrete amplitude and phase spectra for f(t) above. In general, a complex quantity G(t) can be written $$G(t) = ||G(t)||e^{i\phi (t)}$$ where ||G(t)|| is the amplitude and $$\phi (t)$$ is the phase angle. When these quantities are functions of angular frequency, w, the plots resulting from their graphs vs w are called spectra.
hints: $$\omega _0 = \frac{\pi}{T} \ \ \ c_n = ||c_n||e^{i \phi}\\$$
So what I did was find ||c_n||...
$$||c_n|| = \frac{2}{\pi n} = \frac{2}{T \omega _0 n}\\$$
Is this right though? And how do I go about finding the phase angle? and then plotting it vs omega. thanks for any help.
Last edited: Dec 1, 2005
2. Dec 1, 2005
### bigplanet401
\begin{align} \frac{1}{in\pi } \left[ (-e^{-in\pi } &- e^{in\pi }) - \frac{1}{in \pi} ( e^{-in\pi} - e^{in\pi })\right]\notag\\ &\neq \frac{1}{in\pi } \left[ \left(-\cos{n\pi } - \cos{n\pi }\right) - \frac{1}{in \pi} {\color{red} ( \cos{n\pi } - \cos{n\pi })} \right]\notag \end{align}
Sines?
Last edited: Dec 1, 2005
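For the spectra question, one general route, sketched while taking the posted coefficient at face value (the sine/cosine slip flagged in the reply just above changes the value of $c_n$, but not the polar-form step): since $1/i = -i$,
$$c_n = -\frac{2(-1)^n}{in\pi} = \frac{2(-1)^n}{n\pi}\, i ,$$
so
$$||c_n|| = \frac{2}{\pi |n|}, \qquad \phi_n = \arg c_n = \pm\frac{\pi}{2},$$
with the sign of $\phi_n$ given by the sign of $(-1)^n/n$. Plotting $||c_n||$ and $\phi_n$ against $\omega_n = n\omega_0 = n\pi/T$ gives the discrete amplitude and phase spectra.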
https://www.physicsforums.com/threads/vanishing-ricci-tensor-in-3-dimensions.638537/ | # Vanishing Ricci Tensor in 3 Dimensions
1. Sep 24, 2012
### Airsteve0
In my general relativity course my professor recommended that it would be useful to convince ourselves that in 3 dimensions the vacuum field equations are trivial because the vanishing of the Ricci tensor implies the vanishing of the full Riemann tensor. However, I am unsure of how to show this mathematically; if someone could help me or point me in the right direction I would appreciate it, thanks.
2. Sep 24, 2012
### samalkhaiat
Look at
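For reference, the standard identity usually invoked here (not necessarily what the previous post was pointing to): in three dimensions the Weyl tensor vanishes identically, so the Riemann tensor is built algebraically out of the Ricci tensor and the metric,
$$R_{abcd} = g_{ac}R_{bd} + g_{bd}R_{ac} - g_{ad}R_{bc} - g_{bc}R_{ad} - \frac{R}{2}\left(g_{ac}g_{bd} - g_{ad}g_{bc}\right).$$
Setting $R_{ab} = 0$ (and hence $R = 0$) in this expression makes every component of $R_{abcd}$ vanish, which is why the three-dimensional vacuum field equations only admit (locally) flat solutions.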
https://www.physicsforums.com/threads/christoffel-symbols-homework.192961/ | # Homework Help: Christoffel Symbols homework
1. Oct 21, 2007
### n1mrod
Hey Guys,
I'm new here on the forum, and I hope someone can help me out.
I'm solving one of my GR homework exercises and I'm asked to find the christoffel symbols corresponding to cylindrical coordinates.
I'll post my work and please correct if you see mistakes.
I found the metric to be dR^2 + (R^2)(dtheta^2) + dZ^2
therefore
Gab=
1 0 0
0 R^-2 0
0 0 1
Can somebody kind of explain to me how to proceed with these calculations?
Thanks a lot!
Fred.
Last edited: Oct 22, 2007
2. Oct 21, 2007
### timur
I think the element in G is R^2, not R^-2. Once you find the metric, you can calculate the Christoffel symbol by using the direct formula (which involves derivatives of Gab). Or you can use the change of coordinate formula for the Christoffel symbol.
3. Oct 21, 2007
### n1mrod
Hello!
It absolutely is, I just saw I forgot to write, I wrote Gab as being the inverse metric already, so there I believe it is R^-2, shouldn't it?
4. Oct 21, 2007
### timur
Yes.
5. Oct 22, 2007
### cristo
Staff Emeritus
Where's the e come from here?
Incidentally, by G_ab, in your post, you mean g_ab; the metric tensor. You should use the correct notation from the beginning of your studies, as you will find another important tensor denoted G_ab later on!
Last edited: Oct 22, 2007
6. Oct 22, 2007
### CompuChip
Also note the difference between g_ab and g^ab - one is the metric tensor (which?) and the other one is its inverse. Difference is a power of (-1) (if it's diagonal) - not entirely unimportant. If you use the right one but write down the wrong one, people will get confused. If you use the wrong one and write that down correctly... better
7. Oct 22, 2007
### n1mrod
Hey, the e was wrong, I mispelled when I typed in, sorry.
Also, I used G_ab as the inverse. I'm sorry I'm just not used how to type these things here on the forum, but I used G_ab as the inverse, that's why its R^-2 .
8. Oct 22, 2007
### n1mrod
thanks! I know that I did a mistake there, the one I wrote is the inverse already, I'll fix my notations for next time. But still, I don't know how to proceed to calculate the symbols once I have the inverse metric.
9. Oct 22, 2007
### cristo
Staff Emeritus
Ahh, ok.
Yea, the point of my post wasn't to be picky and say "you should use an _" but, more importantly, that the metric tensor is denoted by a small g. When most people look at G_ab they immediately think of the Einstein tensor. Anyway, it may have been a little bit of a pedantic point!
10. Oct 22, 2007
### n1mrod
ahhh okay, so my mistake was even bigger heheh. Hey, please I asked people to correct me, you weren't being picky, you were correcting me :D
I'm just starting with General Relativity and I really want to learn, but as you know it's not very easy to get used to the nomenclature since it's very different from "regular" Physics.
So I believe what I wrote is g_ab right? The inverse of the metric?
11. Oct 22, 2007
### Mentz114
According to my books,
$$g_{\mu\nu}$$ is the metric.
$$g^{\mu\nu}$$ is the inverse metric.
Christoffels are
$$\Gamma^{m}_{ab} = \frac{1}{2}g^{mk}(g_{ak,b}+g_{bk,a}-g_{ab,k})$$
with summation over k, and the ,n means differentiated wrt $$x^n$$
If you click on the text above, you'll get a window with the Tex source.
Last edited: Oct 22, 2007
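As a check, a sketch of what that formula gives for the cylindrical metric quoted earlier in the thread, $ds^2 = dR^2 + R^2\,d\theta^2 + dZ^2$ with coordinates $(R,\theta,Z)$: the only coordinate-dependent component is $g_{\theta\theta} = R^2$, so the only nonvanishing Christoffel symbols are
$$\Gamma^{R}_{\theta\theta} = -R, \qquad \Gamma^{\theta}_{R\theta} = \Gamma^{\theta}_{\theta R} = \frac{1}{R},$$
and every symbol involving $Z$, or built only from $R$ and $Z$, vanishes because those metric components are constant.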
12. Oct 22, 2007
### cristo
Staff Emeritus
I have no idea what I was doing this morning! Of course, the metric tensor is denoted g_ab, and g^ab is the inverse metric tensor. Thus, what the OP has written in the opening post should g^ab=(..and then the matrix he writes out...)
Apologies for that, and thanks, Mentz for spotting it!
13. Oct 22, 2007
### n1mrod
Okay now I understand it, I went to my professor this morning on my class, and also with the help you guys gave I was able to figure out.. now I can move forward =P
thanks a lot for the help!
http://math.stackexchange.com/questions/245798/definite-integral-over-triple-products-of-higher-order-bessel-functions | # Definite integral over triple products of higher order Bessel functions.
As a follow up to this question I am also interested in a symbolic closed form for this integral
$$\int_0^\infty d r \,r^2\, j_{n_1}( k_1 r)\, j_{n_2}( k_2 r)\, j_{n_3}( k_3 r)\,,$$ where $j_n(r)$ is the $n^{\rm th}$ order spherical Bessel function, $k_1$,$k_2$ and $k_3$ are real positive numbers and $n_1,n_2$ and $n_3$ are positive integers.
The spherical Bessel function $j_n$ can be defined by $$j_n(x) = (-x)^n \left(\frac{1}{x}\frac{d}{dx}\right)^n\,\frac{\sin x}{x}.$$
Clue
As an answer to this question, @joriki provided a nice solution for $n_1=n_2=n_3=0$.
If I am to believe Mathematica again, for instance $$\int_0^\infty d r \,r^2\, j_2( r)\, j_2( 2 r)\, j_2( 3 r)=-\frac{\pi}{48}$$ and $$\int_0^\infty d r \,r^2\, j_2( r)\, j_4( 2 r)\, j_4( 3 r)=-\frac{\pi}{48}$$ so the integral seems possible. On the other hand, if some $n_i$ are odd the integral seems ill defined.
I would guess that for odd indices the answer is $\pi/(8k_1 k_2 k_3)$ times some function of the signs of $n_1$, $n_2$ and $n_3$.
Update
My guess seems to be wrong. Symbolic integration for the first $8\times8\times 8$ values of $(n_1,n_2,n_3)$ yields (with $k_1=k_2=k_3=1$)
You should try to use integration by parts on the Bessel function given by the mentioned formula. – Phira Nov 27 '12 at 17:22
@Phira thanks for the advice; it seems difficult to do in practice since 3 such functions are involved?? – chris Nov 27 '12 at 17:26
R. Mehrem gives this as equation 5.14 in the paper The Plane Wave Expansion, Infinite Integrals and Identities involving Spherical Bessel Functions (arXiv:0909.0494v4); the equation itself (an image in the original answer) is not reproduced here, but its ingredients are as follows.
The symbols in curly braces are Wigner 6j-symbols, and the angle-bracketed letters are Clebsch-Gordan coefficients. The P_l is a Legendre polynomial, and Delta is (k1^2 + k2^2 - k3^2)/(2 k1 k2), coming from applying the law of cosines to a triangle in k-space (it is just cos theta_12). The beta function is zero unless the k1, k2, k3 combination is such that it forms a closed triangle (i.e. the side lengths satisfy the triangle inequality).
Please summarize the relevant information from the paper. – Null Dec 17 '14 at 7:01
Thank you for your answer. – chris Dec 17 '14 at 9:33
There is also a more recent paper that appears to have a simpler approach: http://www.ams.org/journals/qam/2013-71-03/S0033-569X-2012-01300-8/
I'm not sure if you can access it without institutional credentials, though (I got it through my institution). If you can, the relevant equations are (5) and (9), with q=2.
http://math.stackexchange.com/questions/linked/183341
### How to solve this recurrence relation? $f_n = 3f_{n-1} + 12(-1)^n$
How to solve this particular recurrence relation ? $$f_n = 3f_{n-1} + 12(-1)^n,\quad f_1 = 0$$ such that $f_2 = 12, f_3 = 24$ and so on. I tried out a lot but due to $(-1)^n$ I am not able to ...
### Finding the closed form for a sequence
My teacher isn't great with explaining his work and the book we have doesn't cover anything like this. He wants us to find a closed form for the sequence defined by: $P_{0} = 0$ $P_{1} = 1$ ...
### If $x_1<x_2$ are arbitrary real numbers, and $x_n=\frac{1}{2}(x_{n-2}+x_{n-1})$ for $n>2$, show that $(x_n)$ is convergent.
If $x_1<x_2$ are arbitrary real numbers, and $x_n=\frac{1}{2}(x_{n-2}+x_{n-1})$ for $n>2$, show that $(x_n)$ is convergent. What is the limit? The back of my textbook says that ...
### Solving a recurrence relation with the characteristic equation
I have some trouble solving this due to not seeing the steps to be able to feed it into the characteristic equation. $$T(n) = 4T(n-2) +n + 2^nn^2\ \text{with}\ \ T(0)=0,\ T(1)=1$$ (don't have to ...
### Closed Form of Recurrence Relation
I have a recurrence relation defined as: $$f(k)=\frac{[f(k-1)]^2}{f(k-2)}$$ Wolfram Alpha shows that the closed form for this relation is: $$f(k)=\exp{(c_2k+c_1)}$$ I'm not really sure how to go ...
### How to deal with linear recurrence that it's characteristic polynomial has multiple roots?
example , $$a_n=6a_{n-1}-9a_{n-2},a_0=0,a_1=1$$ what is the $a_n$? In fact, I want to know there are any way to deal with this situation.
### Find n that : $1+5u_nu_{n+1}=k^2, k \in N$
Let ${u_n}$ be such that: $$\begin{cases}u_1=20;\\u_2=30;\\ u_{n+2}=3u_{n+1}-u_{n},\; n \in \mathbb N^*.\end{cases}$$ Find $n$ such that: $$1+5u_nu_{n+1}=k^2,\; k \in \mathbb N.$$
### How does one rewrite a recursive function to be strictly non-recursive?
Given the recursive function: $$f(0) = \frac{x^2}{2} + \frac{x}{2}, f(n) = \frac{f(n-1)}{2} + \frac{x}{2}$$ where $x$ = some integer How would one rewrite this function to be strictly ...
What is the general solution to the recurrence: $x(n + 2) = 6x(n + 1) - 9x(n)$ for $n \geq 0$; with $x(0) = 0; x(1) = 1$? Solution. The first few values of $x(n)$ are $0,1,6,27,...$ The auxiliary ...
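A quick sketch of where that auxiliary-equation approach leads (a reconstruction, not the original answer's wording): the characteristic equation is $r^2 - 6r + 9 = (r-3)^2 = 0$, a repeated root $r = 3$, so the general solution is
$$x(n) = (A + Bn)\,3^n .$$
The initial conditions give $A = 0$ from $x(0) = 0$ and $3B = 1$ from $x(1) = 1$, hence
$$x(n) = n\,3^{\,n-1},$$
which reproduces $0, 1, 6, 27, \ldots$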
https://www.physicsforums.com/threads/death-by-impact.174852/ | Death by impact?
1. Jun 23, 2007
Evil_Klown
Hi everyone -- I'm brand new here and didn't really know where to post this question. It's just that some friends and I were drinking beer last night and we began to wonder about something.
If you were on a stationary (non orbiting) platform 100,000 miles from earth and you stepped off of it -- would you fall to earth?
Would the Earth move away from you too quickly to affect you?
Would it take you years to hit Earth -- or days?
Would you go into orbit around the earth ... or just slam straight in?
We're not nearly smart enough to attempt a solution -- we were just wondering. Can anyone help? If so -- thanks.
Evil_Klown
2. Jun 23, 2007
cristo
Staff Emeritus
I don't see how such a stationary platform could exist. What is the platform attached to? You are talking about dropping a body off this platform, and whether or not it would be affected by the earth's graviational field (it would). However, so would this platform, and thus it souldn't be stationary.
3. Jun 23, 2007
chaoseverlasting
If the platform is not orbiting the planet, but its position is constant over time wrt the center of the planet, then depending on the mass of the body, the body would take a finite amount of time to fall to the planet providing you can neglect the gravitational effects of other heavenly bodies.
The gravitational force experienced by the body would be given by $$F_g=\frac{GMm}{r^2}$$. Since here $$r = 10^5 \text{ miles} \approx 1.6\times 10^{8}\ \mathrm{m}$$, initially the force experienced would be very small, but would increase as the body came closer to the planet, hence increasing its acceleration and reducing the time taken.
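A rough numerical sketch of that fall time (an illustrative estimate, not taken from the thread): assume a drop from rest at 100,000 miles, Newtonian two-body gravity only, and ignore the atmosphere and the Earth's own motion. The closed-form radial free-fall time then gives a figure on the order of a day:

import math

GM = 3.986e14            # Earth's gravitational parameter, m^3/s^2
rf = 6.371e6             # Earth's mean radius, m
r0 = 1.609e8             # 100,000 miles, converted to metres

# Radial free-fall time from rest at r0 down to rf in an inverse-square field;
# note the result is independent of the falling body's mass.
x = rf / r0
t = math.sqrt(r0**3 / (2 * GM)) * (math.acos(math.sqrt(x)) + math.sqrt(x * (1 - x)))

print(f"fall time ~ {t:.3e} s ~ {t / 86400:.2f} days")   # roughly 1.3 days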
4. Jun 23, 2007
HallsofIvy
Staff Emeritus
I'm going to assume you have a space station "stationary" in the sense that it is constantly directly above a fixed point on earth. At a particular altitude (stationary orbit) that position is itself an orbit. At any other altitude, something, such as rockets, would be required to keep it in position. What would happen would depend on the precise altitude. Below stationary orbit a person "stepping off" the space station would fall down- his horizontal speed would not be sufficient to keep him in "orbit". Of course, he would have some horizontal speed and would not fall "straight" down- he would impact at some distance to the west of the point directly below the space platform. If the station were above "stationary orbit" altitude, his speed would be too great for orbit and he would, in fact, "fall up"- he would move away from the earth.
5. Jun 24, 2007
loom91
As others have pointed out, the details of his motion will depend critically on the initial conditions, the motion of the platform in this case. Remember that 'stationary' is not a well-defined concept. It immediately begs the question: stationary with respect to what?
Stationary with respect to Earth: this is known as a geostationary satellite. If it is at the altitude where it can be kept in orbit simply by Earth's gravitation and no expenditure of energy (such as by firing thrusters) is needed, then stepping out of it will not affect you. You will be stationary with respect to the satellite and join it in its geostationary orbit. If thrusters are being used, then the result will depend on the position of the platform as explained by HallsofIvy.
But note that even if you do fall down, you will not accelerate indefinitely. The viscous drag of air will cause you to accelerate only up to a maximum velocity, known as the terminal velocity. In fact, as the density of the atmosphere increases, you may even decelerate. The bottom line is that you will be completely vapourised long before you have a chance to hit the ground. This is not a recommended experiment in physics.
Molu
http://math.stackexchange.com/questions/182585/compactly-supported-continuous-function-is-uniformly-continuous | # Compactly supported continuous function is uniformly continuous.
What is the most general space where compactly supported continuous functions are uniformly continuous? I managed to prove this for metric spaces but I am interested if it also holds in more general uniform spaces. Thanks
Uniform continuity does not make sense in a general topological space. But it can be generalized to uniform spaces. – azarel Aug 14 '12 at 20:41
I said uniform space, not topological space. – Nicolas Bourbaki Aug 15 '12 at 17:40
Indeed, let $(x_\alpha)$ and $(y_\alpha)$ be two nets that approach each other according to the uniform structure (that is, $(x_\alpha, y_\alpha)$ converges to the diagonal). We have to prove that $f(x_\alpha) - f(y_\alpha) \to 0$. Since $f$ is bounded, by choosing subnets we may assume that both $f(x_\alpha)$ and $f(y_\alpha)$ converge to $\xi$ and $\eta$, respectively. Assume that $\xi \neq \eta$, and, say, $\xi \neq 0$. Then $x_\alpha$ is eventually inside the compact support of $f$, hence by a choice of subnets we may assume that $x_\alpha \to x$. But since $(y_\alpha)$ approaches $(x_\alpha)$, $x_\alpha \to x$ implies $y_\alpha \to x$ (this is the definition of topology induced by the uniform structure). Hence, by continuity of $f$ at $x$, both $f(x_\alpha)$ and $f(y_\alpha)$ converge to $f(x)$, so $\eta = \xi$, contradicting the assumption. Therefore $f(x_\alpha) - f(y_\alpha) \to 0$, i.e. $f$ is uniformly continuous.
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/5/lesson/5.2.4/problem/5-99
5-99.
This is Hana's Definition of the Derivative. Notice that Δx is equivalent to h.
f(x) = 5 − 3x where c = 1
so f '(x) = −3 and f '(1) = −3
See hint in part (a).
This is Ana's Definition of the Derivative.
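A sketch of the limit computation both forms of the definition lead to here (assuming, as the captions suggest, that one version takes $h \to 0$ and the other takes $x \to c$; the exact wording of "Hana's" and "Ana's" definitions is in the textbook and not reproduced here):
$$f'(1) = \lim_{h\to 0}\frac{f(1+h)-f(1)}{h} = \lim_{h\to 0}\frac{\bigl(5-3(1+h)\bigr)-2}{h} = \lim_{h\to 0}\frac{-3h}{h} = -3,$$
$$f'(1) = \lim_{x\to 1}\frac{f(x)-f(1)}{x-1} = \lim_{x\to 1}\frac{(5-3x)-2}{x-1} = \lim_{x\to 1}\frac{-3(x-1)}{x-1} = -3.$$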
http://mathhelpforum.com/advanced-statistics/164819-independent-random-variables.html | # Math Help - Independent Random Variables
1. ## Independent Random Variables
"Let X and Y be two independent random variables with the same probability density function given by
f(x)= (
e^(-x) if 0<x<inf.
0 elsewhere
)
Show that g, the probability density function of X/Y, is given by
g(t)=(
1/(1+t)^2 if 0<t<inf.
0 t<=inf.
"
Thanks!
2. Originally Posted by DEUCSB
"Let X and Y be two independent random variables with the same probability density function given by
f(x)= (
e^(-x) if 0<x<inf.
0 elsewhere
)
Show that g, the probability density function of X/Y, is given by
g(t)=(
1/(1+t)^2 if 0<t<inf.
0 t<=inf.
"
Thanks!
What have you tried? Where are you stuck?
3. I have found the density functions for both of x and y but dont know how to convert that to X/Y
4. Originally Posted by DEUCSB
I have found the density functions for both of x and y but dont know how to convert that to X/Y
You were told the density functions. Have you done any research eg. Google the key words
ratio distribution
quotient random variables
etc.
What have you been taught in class regarding the algebra of random variables? eg. Change of variable theorem.
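One standard route, sketched (this uses the CDF idea the replies point to, together with the independence of X and Y): for $t > 0$,
$$P\!\left(\tfrac{X}{Y}\le t\right)=\int_0^\infty P(X\le ty)\,e^{-y}\,dy=\int_0^\infty\left(1-e^{-ty}\right)e^{-y}\,dy=1-\frac{1}{1+t},$$
and differentiating with respect to $t$ gives $g(t) = \frac{1}{(1+t)^2}$ for $t > 0$, with $g(t) = 0$ for $t \le 0$.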
https://www.physicsforums.com/threads/cantors-definition-of-reals.185075/ | # Cantor's definition of reals
1. Sep 16, 2007
### grossgermany
Hi I've got a tough analysis proof. If I can do this on my own I might as well be Cantor himself.
The first step is :
Let C and N denote the collection of every cauchy sequence and null sequence (consisting of rationals) , prove that N is a subset of C
Second step is :
Prove N induces an equivalence relation on C.
Last edited: Sep 16, 2007
2. Sep 16, 2007
### morphism
What thoughts have you had so far?
3. Sep 16, 2007
### Hurkyl
Staff Emeritus
Do you understand the definitions of all the technical terms? Can you write them down?
Anyways, you certainly have at least one definition wrong. C and N, as you've defined them, are not sets1. So it doesn't make sense to ask if one is a subset of the other.
1: At least, not in a useful sense.
4. Sep 16, 2007
### grossgermany
Sorry I meant the C and N are the collection of all cauchy sequences
I think step one is trivially true, since a null sequence is defined as xn such that there exists N: for any n>N, |xn|<epsilon/2, therefore for any m,n>N, |xm-xn|<=|xm-0|+|0-xn|=|xm|+|xn|<epsilon by the triangle inequality
Therefore a null sequence is necessarily Cauchy.
I'm not sure where the significance of 'collection of sequences consisting of rationals' comes into play
5. Sep 16, 2007
### HallsofIvy
Staff Emeritus
What do you mean by "the cauchy sequence"? I'm pretty sure there's more than one! Do you mean the set of all cauchy sequences of rational numbers? I'm also not sure what you mean by "the null sequence (consisting of rationals)". Do you mean the set of all sequences of rational numbers converging to 0?
Do you know the theorem that says every convergent sequence is a Cauchy sequence?
The phrase "of rational numbers" is necessary because you haven't yet defined real numbers!
Last edited: Sep 16, 2007
6. Sep 16, 2007
### grossgermany
thanks and you are right, I misquoted, it's the collection of all such sequences, but what property of the rationals is necessary? What doesn't work if I define them using integers?
7. Sep 16, 2007
### morphism
The exercise is attempting to construct the real numbers starting from the rational numbers. One way to do this is to take limits of Cauchy sequences of rational numbers. The problem is that not all Cauchy sequences of rational numbers converge to a rational number, so we try to 'fill in the gaps' using what we would call 'real numbers'.
Now that you've shown that null sequences are Cauchy, you know that N sits in C. How do we usually use a subset of a set to induce an equivalence relation?
Here's a hint. Consider the set of integers Z. The set of even integers 2Z is a subset of Z. We can let 2Z induce the following equivalence relation on Z: n~m iff n-m is even, i.e. is in 2Z. This is the familiar "mod 2" equivalence relation. Under it we do not distinguish between any two even numbers or any two odd numbers.
Last edited: Sep 16, 2007
8. Sep 16, 2007
### grossgermany
I'm not sure about the definition of induce, why is it called induce?
we first define n~m iff n-m is in N, then prove ~ is reflexive, transitive and symmetric, and that's it i guess?
edited:
Done!I've proven ~ is reflexive, transitive and symmetric
Last edited: Sep 16, 2007
9. Sep 16, 2007
### grossgermany
I'm completely clueless on this one
Let R := C/N, i.e., the collection of cosets of C under the equivalence
relation induced by N. Define addition and multiplication
on R so that R is a field with respect to these operations.
10. Sep 16, 2007
### morphism
There's no special meaning behind "induce" -- it's used in the normal English sense.
About your definition of ~: n and m are supposed to be sequences, so what's your definition of n-m? And did you manage to prove that ~ satisfies those 3 properties?
Do you understand what we're trying to do? Intuitively, what's going on is that we have some sequences of rational numbers that are trying to converge (because they're Cauchy) but are not finding a rational number to converge to. What we're trying to do is to artificially add a point that can be the limit of such a sequence. How would we go about doing this? Well, if two sequences are 'trying' to converge to the same 'real number', then their tails will be very close to each other, i.e. if x_n and y_n are two such sequences, then for some large enough n, |x_n - y_n| is very small (it goes to zero as n goes to infinity).
11. Sep 16, 2007
### morphism
Hopefully you can see that an element of C/N is an equivalence class of Cauchy sequences. And if you read my previous post, you'll see that each equivalence class consists of those Cauchy sequences that are trying to converge to the same limit. Essentially, we're using the entire equivalence class to be the real number the sequences are trying to converge to.
Maybe an example will help shed some light on what's going on. Consider the three Cauchy sequences of rationals: (x_n) = (1/n), (y_n) = (1/2^n) and (z_n) = (-1/n). We know that each of them converges to 0. In particular, the tails of all three sequences are close to each other. So these three sequences lie in the same equivalence class in C/N. This equivalence class is what we would call the real number 0, and denote by [(x_n)] or [(y_n)] or [(z_n)] or [(0, 0, 0, ...)] or [(any sequence that converges to 0)].
On the other hand, consider the sequence (w_n) defined recursively by w_1 = 1 and w_{n+1} = 1 + 1/(1 + w_n). This sequence of rational numbers is trying to converge to $\sqrt2$, but cannot do this because $\sqrt2$ is not a rational number, so it does not exist in our universe of rational numbers. In C/N, we can think of [(w_n)] as all Cauchy sequences of rationals that are trying to converge to $\sqrt2$. Keeping this in mind, we can then define $\sqrt2$ to be this equivalence class of sequences.
I hope everything's clearer now.
Last edited: Sep 16, 2007
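(A small illustration of the sequence (w_n) from the post above, added here as a Python sketch; using exact rational arithmetic via fractions is my own choice, not something from the thread.)

```python
from fractions import Fraction

# w_1 = 1, w_{n+1} = 1 + 1/(1 + w_n): a Cauchy sequence of rational numbers
# whose squares approach 2, although it has no rational limit.
w = Fraction(1)
for n in range(1, 8):
    print(n, w, float(w), float(w * w))
    w = 1 + 1 / (1 + w)
```

The printed values 1, 3/2, 7/5, 17/12, 41/29, ... get closer and closer to each other while w*w approaches 2, which is exactly the "trying to converge to sqrt(2)" behaviour described above.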
12. Sep 16, 2007
### grossgermany
thanks a lot for your help. i never thought of it that way. Stupid me, I spent 50 hours last week reading baby rudin and didn't see the whole picture.
I've never learned about equivalence classes and couldn't find them in baby rudin either.
I defined n-m as n_x - m_x where x is the index of the sequence; the difference of two sequences is the difference between their corresponding elements. I proved the 3 properties using properties of the absolute value.
Last edited: Sep 16, 2007
13. Sep 16, 2007
### grossgermany
x~y iff x-y belongs to null set N, by definition of equivalence
therefore C-C/N=N which is trivially a subset of N
therefore C/N~C
Brilliant, now I know C~C/N, but how do I define a addition and multiplication operation on C/N so as to make it a field isomorphic to R?
edit: I got it! I can define addition of two cauchy sequences in C/N as the addition of the limits of these two cauchy sequences.
similarly for multiplication.
Last edited: Sep 16, 2007
14. Sep 17, 2007
### morphism
I had a detailed reply, but my browser lost it. Damn keyboard shortcuts.
Anyway, what do you mean by C-C/N=N and C/N~C? C/N is a collection of equivalence classes of sequences, i.e. C/N = {[(x_n)] : (x_n) is in C}, where [(x_n)] = {(y_n) in C : (y_n) ~ (x_n)}. And because equivalence classes can have many representatives (e.g. in Z/2Z, [0]=[-2]=[2]=[2342435423554] etc.), you will have to check that the addition and multiplication operations you put on C/N are well-defined, i.e. do not depend on which representative one chooses. (For instance, if we define + on Z/2Z by [n]+[m]=[n+m], we must prove that if [n]=[n'] and [m]=[m'], then [n]+[m]=[n']+[m'].)
Also, you said that you defined the addition of two cauchy sequences in C/N (actually, you should say "two elements in C/N") as the addition of their limits, but we do not know if these limits exist. So unless you don't mean what I think you mean, this would not be a very good definition.
Finally, after you define + and *, make sure you check that R together with these two operations is indeed a field, i.e. check that it satisfies the axioms of a field (e.g. addition is commutative and multiplication is associative, etc.).
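(To make the "well-defined operations on equivalence classes" point concrete, here is a small Python sketch; representing a Cauchy sequence as a function n -> Fraction and the helper names add/mul are my own choices, not part of the thread.)

```python
from fractions import Fraction

# A sequence of rationals is represented as a function n -> Fraction (n >= 1).
def add(x, y):   # termwise sum of two sequences
    return lambda n: x(n) + y(n)

def mul(x, y):   # termwise product of two sequences
    return lambda n: x(n) * y(n)

# Two different representatives of the equivalence class of "0".
zero1 = lambda n: Fraction(1, n)
zero2 = lambda n: Fraction(-1, 2 ** n)

# A representative of the class of "1": the constant sequence.
one = lambda n: Fraction(1)

# Adding "1" to either representative of "0" must land in the same class,
# i.e. the difference of the two results must be a null sequence.
s1, s2 = add(one, zero1), add(one, zero2)
for n in [1, 10, 100, 1000]:
    print(n, float(s1(n) - s2(n)))   # tends to 0, so [s1] = [s2] in C/N
```

This is exactly the "does not depend on which representative one chooses" check: the resulting class is the same no matter which Cauchy sequence we pick from each class.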
15. Sep 17, 2007
### grossgermany
I thought C/N is the set C without elements in N, that's my interpretation of the operator /
Now I remember Z/2Z is all even integers.
I see, So C/N means the collection of all equivalence classes in C(collection of all cauchy sequences): an equivalence class is defined as if A-B belongs to N, then A and B are in the same equivalence class.
This is interesting because Z/2Z gives you one equivalence class, but C/N gives you infinitely many equivalence classes, since you have an equivalence class which has one representative cauchy sequence which converges to 0, and those converging to 3 and those converging to sqrt2.
We know that Cauchy sequences converges in R, can we somehow uses that result here?
Last edited: Sep 17, 2007
16. Sep 17, 2007
### morphism
No, we can't use the fact that Cauchy sequences converge in R, because we don't know what R is! This is the point of the exercise: constructing R. The way it's being done will trivially give us the fact that Cauchy sequences converge in R.
(By the way, C\N would be the set C minus N.)
17. Sep 18, 2007
### mathwonk
another satisfied baby rudin user. when are people going to stop banging their heads against that book? there are so many better ones out there. as have often been listed hereabouts. this particular construction is even in van der waerdens algebra book.
https://planetmath.org/QuotientRepresentations | # quotient representations
We assume that all representations ($G$-modules) are finite-dimensional.
###### Definition 1
If $N_{1}$ and $N_{2}$ are $G$-modules over a field $k$ (i.e. representations of $G$ in $N_{1}$ and $N_{2}$), then a map $\varphi:N_{1}\to N_{2}$ is a $G$-map if $\varphi$ is $k$-linear and preserves the $G$-action, i.e. if
$\varphi(\sigma\cdot x)=\sigma\cdot\varphi(x)\qquad\text{for all }\sigma\in G,\ x\in N_{1}$
$G$-maps have subrepresentations, also called $G$-submodules, as their kernel and image. To see this, let $\varphi:N_{1}\to N_{2}$ be a $G$-map; let $M_{1}\subset N_{1}$ and $M_{2}\subset N_{2}$ be the kernel and image respectively of $\varphi$. $M_{1}$ is a submodule of $N_{1}$ if it is stable under the action of $G$, and indeed
$x\in M_{1}\Rightarrow\varphi(\sigma\cdot x)=\sigma\cdot\varphi(x)=0\Rightarrow\sigma\cdot x\in M_{1}$
$M_{2}$ is a submodule of $N_{2}$ if it is stable under the action of $G$, and indeed
$y=\varphi(x)\in M_{2}\Rightarrow\sigma\cdot y=\sigma\cdot\varphi(x)=\varphi(\sigma\cdot x)\Rightarrow\sigma\cdot y\in M_{2}$
Finally, we define the intuitive concept of a quotient $G$-module. Suppose $N^{\prime}\subset N$ is a $G$-submodule. Then $N/N^{\prime}$ is a finite-dimensional vector space. We can define an action of $G$ on $N/N^{\prime}$ via $\sigma(n+N^{\prime})=\sigma(n)+\sigma(N^{\prime})=\sigma(n)+N^{\prime}$, so that $n+N^{\prime}$ is well-defined under the action and $N/N^{\prime}$ is a $G$-module.
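A short verification, in the notation above, that this action does not depend on the chosen coset representative: if $n+N^{\prime}=m+N^{\prime}$ then $n-m\in N^{\prime}$, and since $N^{\prime}$ is a $G$-submodule, $\sigma\cdot(n-m)\in N^{\prime}$ for every $\sigma\in G$; hence
$\sigma(n)+N^{\prime}=\sigma(m)+\sigma\cdot(n-m)+N^{\prime}=\sigma(m)+N^{\prime}$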
http://www.hk3338.com/Classes/Alg/Circles.aspx | Paul's Online Notes
### Section 3-3 : Circles
In this section we are going to take a quick look at circles. However, before we do that we need to give a quick formula that hopefully you’ll recall seeing at some point in the past.
Given two points $$\left( {{x_1},{y_1}} \right)$$ and $$\left( {{x_2},{y_2}} \right)$$ the distance between them is given by,
$d = \sqrt {{{\left( {{x_2} - {x_1}} \right)}^2} + {{\left( {{y_2} - {y_1}} \right)}^2}}$
So, why did we remind you of this formula? Well, let’s recall just what a circle is. A circle is all the points that are the same distance, $$r$$ – called the radius, from a point, $$\left( {h,k} \right)$$ - called the center. In other words, if $$\left( {x,y} \right)$$ is any point that is on the circle then it has a distance of $$r$$ from the center, $$\left( {h,k} \right)$$.
If we use the distance formula on these two points we would get,
$r = \sqrt {{{\left( {x - h} \right)}^2} + {{\left( {y - k} \right)}^2}}$
Or, if we square both sides we get,
${\left( {x - h} \right)^2} + {\left( {y - k} \right)^2} = {r^2}$
This is the standard form of the equation of a circle with radius $$r$$ and center $$\left( {h,k} \right)$$.
Example 1 Write down the equation of a circle with radius 8 and center $$\left( { - 4,7} \right)$$.
Okay, in this case we have $$r = 8$$, $$h = - 4$$ and $$k = 7$$ so all we need to do is plug them into the standard form of the equation of the circle.
\begin{align*}{\left( {x - \left( { - 4} \right)} \right)^2} + {\left( {y - 7} \right)^2} & = {8^2}\\ {\left( {x + 4} \right)^2} + {\left( {y - 7} \right)^2} & = 64\end{align*}
Do not square out the two terms on the left. Leaving these terms as they are will allow us to quickly identify the equation as that of a circle and to quickly identify the radius and center of the circle.
Graphing circles is a fairly simple process once we know the radius and center. In order to graph a circle all we really need is the right most, left most, top most and bottom most points on the circle. Once we know these it’s easy to sketch in the circle.
Nicely enough for us these points are easy to find. Since these are points on the circle we know that they must be a distance of $$r$$ from the center. Therefore, the points will have the following coordinates.
\begin{align*}& {\mbox{right most point : }}\left( {h + r,k} \right)\\ & {\mbox{left most point : }}\left( {h - r,k} \right)\\ & {\mbox{top most point : }}\left( {h,k + r} \right)\\ & {\mbox{bottom most point : }}\left( {h,k - r} \right)\end{align*}
In other words all we need to do is add $$r$$ on to the $$x$$ coordinate or $$y$$ coordinate of the point to get the right most or top most point respectively and subtract $$r$$ from the $$x$$ coordinate or $$y$$ coordinate to get the left most or bottom most points.
Let’s graph some circles.
Example 2 Determine the center and radius of each of the following circles and sketch the graph of the circle.
1. $${x^2} + {y^2} = 1$$
2. $${x^2} + {\left( {y - 3} \right)^2} = 4$$
3. $${\left( {x - 1} \right)^2} + {\left( {y + 4} \right)^2} = 16$$
In all of these all that we really need to do is compare the equation to the standard form and identify the radius and center. Once that is done find the four points talked about above and sketch in the circle.
a $${x^2} + {y^2} = 1$$
In this case it’s just $$x$$ and $$y$$ squared by themselves. The only way that we could have this is to have both $$h$$ and $$k$$ be zero. So, the center and radius is,
${\mbox{center}} = \left( {0,0} \right)\hspace{0.25in}\hspace{0.25in}{\mbox{radius}} = \sqrt 1 = 1$
Don’t forget that the radius is the square root of the number on the other side of the equal sign. Here is a sketch of this circle.
A circle centered at the origin with radius 1 (i.e. this circle) is called the unit circle. The unit circle is very useful in a Trigonometry class.
b $${x^2} + {\left( {y - 3} \right)^2} = 4$$
In this part, it looks like the $$x$$ coordinate of the center is zero as with the previous part. However, this time there is something more with the $$y$$ term and so comparing this term to the standard form of the circle we can see that the $$y$$ coordinate of the center must be 3. The center and radius of this circle is then,
${\mbox{center}} = \left( {0,3} \right)\hspace{0.25in}\hspace{0.25in}{\mbox{radius}} = \sqrt 4 = 2$
Here is a sketch of the circle. The center is marked with a red cross in this graph.
c $${\left( {x - 1} \right)^2} + {\left( {y + 4} \right)^2} = 16$$
For this part neither of the coordinates of the center are zero. By comparing our equation with the standard form it’s fairly easy to see (hopefully…) that the $$x$$ coordinate of the center is 1. The $$y$$ coordinate isn’t too bad either, but we do need to be a little careful. In this case the term is $${\left( {y + 4} \right)^2}$$ and in the standard form the term is $${\left( {y - k} \right)^2}$$. Note that the signs are different. The only way that this can happen is if $$k$$ is negative. So, the $$y$$ coordinate of the center must be -4.
The center and radius for this circle are,
${\mbox{center}} = \left( {1, - 4} \right)\hspace{0.25in}\hspace{0.25in}{\mbox{radius}} = \sqrt {16} = 4$
Here is a sketch of this circle with the center marked with a red cross.
So, we’ve seen how to deal with circles that are already in the standard form. However, not all circles will start out in the standard form. So, let’s take a look at how to put a circle in the standard form.
Example 3 Determine the center and radius of each of the following.
1. $${x^2} + {y^2} + 8x + 7 = 0$$
2. $${x^2} + {y^2} - 3x + 10y - 1 = 0$$
Neither of these equations are in standard form and so to determine the center and radius we’ll need to put it into standard form. We actually already know how to do this. Back when we were solving quadratic equations we saw a way to turn a quadratic polynomial into a perfect square. The process was called completing the square.
This is exactly what we want to do here, although in this case we aren’t solving anything and we’re going to have to deal with the fact that we’ve got both $$x$$ and $$y$$ in the equation. Let’s step through the process with the first part.
a $${x^2} + {y^2} + 8x + 7 = 0$$
We’ll go through the process in a step by step fashion with this one.
Step 1 : First get the constant on one side by itself and at the same time group the $$x$$ terms together and the $$y$$ terms together.
${x^2} + 8x + {y^2} = - 7$
In this case there was only one term with a $$y$$ in it and two with $$x$$’s in them.
Step 2 : For each variable with two terms complete the square on those terms.
So, in this case that means that we only need to complete the square on the $$x$$ terms. Recall how this is done. We first take half the coefficient of the $$x$$ and square it.
${\left( {\frac{8}{2}} \right)^2} = {\left( 4 \right)^2} = 16$
We then add this to both sides of the equation.
${x^2} + 8x + 16 + {y^2} = - 7 + 16 = 9$
Now, the first three terms will factor as a perfect square.
${\left( {x + 4} \right)^2} + {y^2} = 9$
Step 3 : This is now the standard form of the equation of a circle and so we can pick the center and radius right off this. They are,
${\mbox{center}} = \left( { - 4,0} \right)\hspace{0.25in}{\mbox{radius}} = \sqrt 9 = 3$
b $${x^2} + {y^2} - 3x + 10y - 1 = 0$$
In this part we’ll go through the process a little quicker. First get terms properly grouped and placed.
$\underbrace {\,\,\,\,\,{x^2} - 3x\,\,\,\,\,}_{{\mbox{complete the square}}} + \underbrace {\,\,\,{y^2} + 10y\,\,\,}_{{\mbox{complete the square}}} = 1$
Now, as noted above we’ll need to complete the square twice here, once for the $$x$$ terms and once for the $$y$$ terms. Let’s first get the numbers that we’ll need to add to both sides.
${\left( { - \frac{3}{2}} \right)^2} = \frac{9}{4}\hspace{0.25in}\hspace{0.25in}\hspace{0.25in}\hspace{0.25in}{\left( {\frac{{10}}{2}} \right)^2} = {\left( 5 \right)^2} = 25$
Now, add these to both sides of the equation.
$\underbrace {{x^2} - 3x + \frac{9}{4}}_{{\mbox{factor this}}} + \underbrace {{y^2} + 10y + 25}_{{\mbox{factor this}}} = 1 + \frac{9}{4} + 25 = \frac{{113}}{4}$
When adding the numbers to both sides make sure and place them properly. This means that we need to put the number from the coefficient of the $$x$$ with the $$x$$ terms and the number from the coefficient of the $$y$$ with the $$y$$ terms. This placement is important since this will be the only way that the quadratics will factor as we need them to factor.
Now, factor the quadratics as show above. This will give the standard form of the equation of the circle.
${\left( {x - \frac{3}{2}} \right)^2} + {\left( {y + 5} \right)^2} = \frac{{113}}{4}$
This looks a little messier than the equations that we’ve seen to this point. However, this is something that will happen on occasion so don’t get excited about it. Here is the center and radius for this circle.
${\mbox{center}} = \left( {\frac{3}{2}, - 5} \right)\hspace{0.25in}\hspace{0.25in}{\mbox{radius}} = \sqrt {\frac{{113}}{4}} = \frac{{\sqrt {113} }}{2}$
Do not get excited about the messy radius or fractions in the center coordinates. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8889328241348267, "perplexity": 203.1405298048343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00109.warc.gz"} |
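As a practical aside (a short Python sketch of my own; the function name and the argument convention $${x^2} + {y^2} + Dx + Ey + F = 0$$ are assumptions, not part of the original notes), the completing-the-square computation above can be automated:

```python
import math

def circle_center_radius(D, E, F):
    """Center and radius of x^2 + y^2 + D*x + E*y + F = 0, obtained by
    completing the square: (x + D/2)^2 + (y + E/2)^2 = D^2/4 + E^2/4 - F."""
    h, k = -D / 2, -E / 2
    rhs = D * D / 4 + E * E / 4 - F
    if rhs <= 0:
        raise ValueError("not a circle with positive radius")
    return (h, k), math.sqrt(rhs)

# Example 3(a): x^2 + y^2 + 8x + 7 = 0        ->  center (-4, 0), radius 3
print(circle_center_radius(8, 0, 7))
# Example 3(b): x^2 + y^2 - 3x + 10y - 1 = 0  ->  center (1.5, -5), radius sqrt(113)/2
print(circle_center_radius(-3, 10, -1))
```

Both outputs match the centers and radii worked out by hand above.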
https://www.arxiv-vanity.com/papers/1210.5532/ | # Refinability of splines derived from regular tessellations
Jörg Peters
###### Abstract
Splines can be constructed by convolving the indicator function of a cell whose shifts tessellate . This paper presents simple, non-algebraic criteria that imply that, for regular shift-invariant tessellations, only a small subset of such spline families yield nested spaces: primarily the well-known tensor-product and box splines. Among the many non-refinable constructions are hex-splines and their generalization to the Voronoi cells of non-Cartesian root lattices.
## 1 Introduction
Univariate uniform B-splines can be defined by repeated convolution, starting with the indicator functions (an indicator function takes on the value one on the interval but is zero otherwise) of the intervals or cells delineated by knots. This construction implies local support and delivers a number of desirable properties (see [dB78, dB87]) that have made B-splines the representation of choice in modeling and analysis. In particular, B-splines are refinable. That is, they can be exactly represented as linear combinations of B-splines with a finer knot sequence. Refinability is a key ingredient of multi-resolution and adaptive and sparse representation of data. Refinability also guarantees monotone decay of error when shrinking the intervals.
By tensoring univariate B-splines, we can obtain splines on Cartesian grids in any dimension. Box-splines [dHR93] generalize tensoring by allowing convolution in directions other than orthogonal ones. As a prominent example in two variables, the linear 3-direction box-spline consists of linear pieces over each of six equilateral triangles surrounding one vertex. Shifts of this ‘hat function’ on an equilateral triangulation sum to one. Convolution of the hat function with itself results in a twice continuously differentiable function of degree 4; and -fold convolution yields a function of degree with smoothness . Since this progression skips odd orders of smoothness, van der Ville et al. [vBU04] proposed to directly convolve the indicator function of the hexagon and build splines customized to the hexagonal tessellation of the plane (cf. Fig. 1a). They went on to show that the resulting hex-splines share a number of desirable properties familiar from box-splines. But the authors did not settle whether the splines were refinable [vdVU10], i.e. whether hex-splines of the given hexagonal tessellation can be represented as linear combinations of hex-splines based on a scaled-down hexagonal tessellation, say . Generalizing the analysis of hex-splines,
• this paper presents simple non-algebraic criteria necessary for regular shift-invariant tessellations to admit refinable indicator functions.
For example, such a tessellation must contain, for every cell facet , the plane through . Therefore, requiring refinability, even of just the constant spline, strongly restricts allowable tessellations.
• In contrast to tensor-product and box splines, we show that hex-splines and similar constructions can only be scaled, but not refined: scaled hex-spline spaces are not nested.
• The analysis extends to overcomplete families (superpositions) of spline spaces.
The following example illustrates how non-refinability leads to loss of monotonicity of the approximation error under scaling: for one or more steps halving the scale can increase the error. By contrast, nested spaces guarantee monotonically decreasing error.
###### Example 1
Let be the space of indicator functions over a regular tessellation by hexagons of diameter and such that, at each level of scaling, the origin is the center of one hexagon. Denote by the indicator function in whose support hexagon is centered at the origin. does not contain a linear combination of functions that can replicate since the supports of the six relevant scaled indicator functions are bisected by the boundary of the support of (see Fig. 1b). Correspondingly, the approximation error to from is where is the area of the hexagon with diameter . Since the error from is by construction zero, the scaling by has increased the error. By carefully adding to an increasing number of scaled-down copies, small increases in the error can be distributed over multiple consecutive steps.
Overview. Section 2 reviews tessellations induced by lattices, hex-splines and their generalizations. Section 3 exhibits two non-algebraic criteria, chosen for their simplicity, for testing whether a tessellation can support a refinable space of splines that are constructed by convolution of indicator functions of its cells. Section 4 extends this investigation to a multiple covering of by distinct families of indicator functions.
## 2 Splines from lattice Voronoi cells
A -dimensional lattice is a discrete subgroup of full rank in a -dimensional Euclidean vector space. Alternatively, such a lattice may be viewed as inducing a tessellation of space into identical cells without -dimensional overlap222 A common convention is to define the cells to be half-open sets so that they do not overlap on facets, but nevertheless cover.. The tessellation is then generated by the translational shifts of one cell. For example, lattice points can serve as sites of Voronoi cells. The Euclidean plane admits three highly symmetric shift-invariant tessellations: partition into equilateral triangles, squares, or hexagons respectively. Repeated convolution starting with the indicator function of any of these polygons yields spline functions of local support and increasing degree. The regular partition into squares gives rise to uniform tensor-product B-splines and the regular triangulation and its hexagonal dual to box splines.
An interesting additional type of spline arises from convolving the indicator function of the hexagon with itself. Such hex-splines, a family of splines supported on a local -neighborhood, were developed and analyzed by van De Ville et al. [vBU04]. That paper compares hex-splines to tensor-product splines and uses the Fourier transform of hex-splines to derive, for low frequencies, the approximation order, as a combination of the projection into the hex-spline space and a quasi-interpolation error. [CvB05] derived quasi-interpolation formulas and showed promising results when applying hex-splines to the reconstruction of images (see also [CvU06, Cv07, Cv08]). Van De Ville et al. [vBU04]. also observed that hexagons are Voronoi cells of a lattice and that the cell can be split into three quadrilaterals, using one of two choices of the central split. Thus can be split into three constant box splines whose mixed convolution yields higher-order splines [Kim08b, ME10]. However, while box-splines are refinable, we will see that hex-splines are not refinable in a shift-invariant way.
## 3 Refinability constraints
We consider a polyhedral tessellation of into unpartitioned -dimensional units, called cells, that are bounded by a finite number of -dimensional facets. We denote by the space of indicator functions of the cells of and by the space of indicator functions on some scaled-down copy of . The space is refinable if each indicator function in can be represented as a linear combination of functions in .
Establishing whether a tessellation admits a refinable space of indicator functions therefore requires proving the existence of weights such that a linear combination of elements in with these weights reproduces each element in . Proposition 1 below provides a much simpler necessary condition that avoids such algebraic analysis. While our focus is on shift-invariant tessellations, Proposition 1 applies more generally and also to cell boundaries of co-dimension greater than 1. Its proof uses the notion of a straddling a facet of a cell . A cell straddles a facet of if there exists a point on , a unit vector normal to at and such that both and .
###### Proposition 1
Let be a polyhedral tessellation of and its scaled-down copy. Then is refinable only if every facet of is the union of facets of .
Proof Assume that a facet of a cell in is not a union of facets of . Then, since is a tessellation, some cell of must straddle . Let be the indicator function of and the indicator function of . Then, in order to reproduce the unit step of across , must simultaneously take on both the value and the value .
Translation-invariant or shift-invariant tessellations are a special case of transitive tilings where every cell can be mapped to every other cell by translation, without rotation.
###### Proposition 2
If is a shift-invariant tessellation, is refinable only if, contains, for each facet , the hyperplane through .
Proof The coarser-scaled copies of contain enlarged copies of every facet in . By Proposition 1 these copies must be a union of facets of . Therefore a shifted copy of every facet is strictly contained in the interior of and so extended by some coarser facet. Shift-invariance then implies that every facet lies strictly inside such an extension. Ever coarser tessellations provide a sequence of extensions of in any direction by any amount.
By inspection of the three regular tessellations of the plane, only the Cartesian grid and the uniform triangulation satisfy Proposition 2, but not the partition into hexagons.
###### Corollary 1
Hex splines are not refinable.
We can generalize this observation by simplifying the inspection criterion.
We say that two abutting facets and of a cell meet with an obtuse angle if, for , there exist unit vectors orthogonal to and outward pointing so that .
###### Proposition 3
Let be a tessellation of by shifts of one polyhedral cell . If two facets and of meet with an obtuse angle and if , the reflection of across , is a cell of then is not refinable.
Proof Assume is refinable under the given conditions. Let be the reflection of across (the plane through) . Since must not overlap , obtuse angles exceeding , such as the reentrant corner of an L-shaped cell, cannot occur in . Denote by the reflection of across and by the common intersection of , and (see Fig. 2).
Within , by reflection, the facets and meet at with an obtuse angle. By Proposition 2 the extension of lies in . Since the outward-pointing normal of with respect to is , and meet at with an acute angle. Therefore extends into and splits . This contradicts the definition of a cell as an unpartitioned unit and hence the initial assumption.
The next Proposition 3 allows us to quickly decide which of the (symmetric crystallographic) root lattices , , , , , , [CS98] are suitable for building refinable splines by convolution of their nearest-neighbor (Voronoi) cells.
###### Corollary 2
Splines obtained by convolving the Voronoi cell of a non-Cartesian crystallographic root lattice are not refinable.
Proof We test whether the Voronoi cells of the root lattices contain a pair of abutting faces that meet with an obtuse angle. We may assume that one Voronoi site (cell center) is at the origin. By definition of a Voronoi cell, the position vectors and of two adjacent nearest neighbors, as identified by their root system, are normal to the corresponding abutting bisector facets. Therefore these facets meet with an obtuse angle if .
The lattice is traditionally defined via an embedding in , . More convenient for our purpose is the alternative geometric construction in via the generator matrix of Theorem 1 of [KP10]. Here is the identity matrix, the matrix of ones and . Denoting the th coordinate vector by , we choose and on the Cartesian grid, and map them via to the nearest neighbors of the origin. The inner product of the images of and is
A_n e_1 ⋅ A_n(e_1+e_2) = (n + 4cn + c²n)/n = (2/n)(n + √(n+1) − 1) > 0.
For the lattice, the computation is identical except that . The inner product is .
For the lattice, defined in dimensions, the generator matrix is (see e.g. Section 7 of [KP11]) and
D_n e_1 ⋅ D_n(e_1+e_2) = 3 > 0.
Since is the generator of , the inner product for is .
For , the Cartesian cube lattice has an inner product of identifying its uniform tensor-product B-spline constructions as potentially refinable (which indeed they are). On the other hand, splitting each cube by adding the diagonal directions of the full root system [Kim08a] yields the inner product .
For , we select the root vectors and with inner product . For , we select the root vectors and with inner product . For , we select the root vectors and with inner product .
The equilateral triangulation in is dual to the ‘honeycomb lattice’ which is not a standard lattice. The equilateral triangulation yields an inner product of compatible with refinability and indeed plays host to the refinable ‘hat’ function.
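To illustrate the inner-product test used in this proof in the two planar cases, here is a small numerical sketch (the choice of nearest-neighbor vectors is mine and is not taken from the paper): for the square lattice, adjacent nearest neighbors are orthogonal, while for the triangular lattice, whose Voronoi cells are the hexagons of Corollary 1, they meet at 60 degrees, so the inner product is positive and the corresponding facets meet with an obtuse angle.

```python
import numpy as np

def neighbor_inner_product(v1, v2):
    """Inner product of the position vectors of two adjacent nearest neighbors
    of the origin; a positive value means the corresponding Voronoi facets meet
    with an obtuse angle, which rules out refinability by Proposition 3."""
    return float(np.dot(v1, v2))

# Square lattice Z^2: adjacent nearest neighbors e1 and e2.
print("square lattice:", neighbor_inner_product(
    np.array([1.0, 0.0]), np.array([0.0, 1.0])))         # 0.0 -> no obstruction

# Triangular lattice (hexagonal Voronoi cells): neighbors at 0 and 60 degrees.
print("triangular lattice:", neighbor_inner_product(
    np.array([1.0, 0.0]),
    np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])))    # 0.5 > 0 -> not refinable
```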
## 4 Overcomplete spaces
Since the evaluation of hex-splines by convolving three families of box splines already makes use of a large number of terms, it is reasonable to investigate whether superposition of several families of hex-splines are refinable as a family. That is, we consider a family of distinct shift-invariant tessellations obtained by shifts of . Their union covers -fold. We check refinability of the family, i.e. whether each member of the family can be expressed as a linear combination of the scaled-down copies of splines of the family. Example 2 makes this concrete for . | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9153450131416321, "perplexity": 965.9409274780854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00409.warc.gz"} |
http://www.vincenzofiore.it/neural-networks/ | # Neural Models
Teaching material and code examples for neural network programming and parameter regression with artificial neural systems (in Matlab).
I usually run through the code several times before publishing it and especially the parts that have been used in a course/workshop have been commented, but do let me know if you find any bug or have any particular question. Please consider I am not a programmer, but a cognitive neuroscientist, so these examples do not represent the best possible practice for coding.
By the way, if you think there is a better way to code the same functions or if you simply want to share your code for neural networks, please feel free to comment!
## Minimal onset detector and parameter regression
The following code examples represent four distinct codes to solve in different ways the task of generating a small onset detector neural network, as portrayed in the figure below: An arbitrary input reaches two units performing a simple (identical) computation. The output unit (neuron 1) receives the excitatory input and, with a very short delay due to the …
## Mean field activity
In this case a layer of interconnected leaky neurons is characterised by a positive tanh as a transfer function. The time component is slower than the one we have in spiking neurons (the decay of the leaky is at least 100 times slower), as a consequence each of these slow units can be conceived as simulating the average activity …
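The downloadable code for these examples is in Matlab; purely as a taste of the dynamics described here, the following is a hedged Python sketch of a single leaky unit with a positive tanh transfer function driven by a step input (all constants are arbitrary choices of mine, not taken from the published code):

```python
import numpy as np

# Euler integration of one leaky "mean field" unit:
#   tau * du/dt = -u + input,   activity = max(0, tanh(u))
dt, tau = 0.001, 0.05                    # time step and (slow) decay constant, in seconds
steps = int(2.0 / dt)
u = 0.0
activity = np.zeros(steps)
for t in range(steps):
    inp = 1.0 if 0.5 <= t * dt < 1.5 else 0.0   # step input between 0.5 s and 1.5 s
    u += dt / tau * (-u + inp)
    activity[t] = max(0.0, np.tanh(u))

print("peak activity:", activity.max())                 # rises toward tanh(1) ~ 0.76
print("activity at the end of the run:", activity[-1])  # decays back toward 0
```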
## Spiking neurons, cluster example
Quick example to create a cluster of interconnected spiking neurons controlled by a leaky integrator for the action potential. A simple input is provided characterised by the same numerosity of the neural cluster. All parameters can be changes to see the effect of different types of connectivity on the overall activity. NB the simulation relies on …
## Layer of clusters with lateral inhibitions
Download the compressed folder to run the simulation in Matlab: zip_files. The main file “cluster_competition” calls several functions to build a structure of clusters and the relative connections. In a way this is the equivalent of the simulations presented in the mean field example, where this time we have a cluster of spiking neurons per each single unit simulating the mean field …
## Simple time series regression using genetic algorithms
It is not surprising that artificial neural networks have been primarily developed as a tool to approximate, estimate and forecast the evolution of time series in the future starting from a dataset describing the past. Indeed, a record of neural activity in a single neuron (spiking) or population (mean field) is just one of the many possible examples …
## Pattern raiders: regressions of complex time series
NB In this example I use most of the code and concepts already presented when explaining simple time series regressions using genetic algorithms. Please refer to that text for the basic explanations. In the previous example we were happy with the idea to get rid of many features in the target time series that were considered unimportant. …
## Basal Ganglia: package to simulate mean field neural dynamics
Very briefly, you can download here a Matlab code to simulate the neural activity in a single, 3-channel, cortico-striatal loop. The parameters used in this example have been hand tuned to replicate the dynamics described in Fiore et al. 2016,Changing pattern in the basal ganglia: motor switching under reduced dopaminergic drive. http://www.nature.com/articles/srep23327 As a start, you can …
https://lucatrevisan.wordpress.com/tag/pseudorandomness/ | # What’s New at the Simons Institute
This Thursday (December 15) is the deadline to apply for post-doctoral positions at the Simons Institute for the next academic year. In Fall 2017 we will run a single program, instead of two programs in parallel, on optimization. The program will be double the size of our typical programs, and will focus on the interplay between discrete and continuous methods in optimization. In Spring 2018 there will be a program on real-time decision-making and one on theoretical neuroscience.
In a few weeks, our Spring 2017 programs will start. The program on pseudorandomness will have a mix of complexity theorists and number theorists, and it will happen at the same time as a program on analytic number theory at MSRI. It will start with a series of lectures in the third week of January. The program on foundations of machine learning has been much anticipated, and it will also start with a series of lectures, in the fourth week of January, which will bring to Berkeley an impressive collection of machine learning practitioners whose research is both applied and rigorous. All lectures will be streamed live, and I encourage you to set up viewing parties.
During the current semester, the Simons Institute had for the first time a journalist in residence. The inaugural resident journalist has been Erica Klarreich, whom many of you probably know for her outstanding writing on mathematics and computer science for Quanta Magazine.
Next semester, our resident journalist will be Pulitzer-prize winner John Markoff, who just retired from the New York Times.
# Small-bias Distributions and DNFs
Two of my favorite challenges in unconditional derandomization are to find logarithmic-seed pseudorandom generators which are good against:
1. log-space randomized algorithms
2. ${AC^0}$, that is, constant depth circuits of polynomial size
Regarding the first challenge, the best known pseudorandom generator remains Nisan’s, from 1990, which requires a seed of ${O(\log^2 n)}$ bits. Maddeningly, even if we look at width-3 oblivious branching programs, that is, non-uniform algorithms that use only ${\log_2 3}$ bits of memory, nobody knows how to beat Nisan’s generator.
Regarding the second challenge, Nisan showed in 1988 that for every ${d}$ there is a pseudorandom generator of seed length ${O(\log^{2d+6} n)}$ against depth-${d}$ circuits of size ${n}$. The simplest case is that of depth-2 circuits, or, without loss of generality, of disjunctive-normal-form (DNF) formulas. When specialized to DNF formulas, Nisan’s generator has seed length ${O(\log^{10} n)}$, but better constructions are known in this case.
Luby, Velickovic and Wigderson improved the seed length to ${O(\log^4 n)}$ in 1993. Bazzi's celebrated proof of the depth-2 case of the Linial-Nisan conjecture implies that a ${O(\log^2 m/\delta)}$-wise independent distribution "${\delta}$-fools" every ${m}$-term DNF, by which we mean that for every such DNF formula ${\phi}$ and every such distribution ${X}$ we have
$\displaystyle \left| \mathop{\mathbb P}_{x\sim X} [ \phi(x) = 1] - \mathop{\mathbb P}_{x\sim U} [\phi(x) = 1] \right| \leq \delta$
where ${U}$ is the uniform distribution over assignments. This leads to a pseudorandom generator that ${\delta}$-fools ${n}$-variable, ${m}$-term DNF formulas and whose seed length is ${O(\log n \cdot \log^2 m/\delta)}$, which is ${O(\log^3 n)}$ when ${m,n,\delta^{-1}}$ are polynomially related.
In a new paper with Anindya De, Omid Etesami, and Madhur Tulsiani, we show that an ${n}$-variable, ${m}$-term DNF can be ${\delta}$-fooled by a generator of seed length ${O(\log n + \log^2 m/\delta \cdot \log\log m/\delta)}$, which is ${O(\log^{2+o(1)} n)}$ when ${n,m,\delta^{-1}}$ are polynomially related.
Our approach is similar to the one in Razborov's proof of Bazzi's result, but we use small-bias distributions instead of ${k}$-wise independent distributions.
# The Large Deviation of Fourwise Independent Random Variables
Suppose ${X_1,\ldots,X_N}$ are mutually independent unbiased ${\pm 1}$ random variables. Then we know everything about the distribution of
$\displaystyle | X_1 + \ldots + X_N | \ \ \ \ \ (1)$
either by using the central limit theorem or by doing calculations by hand using binomial coefficients and Stirling's approximation. In particular, we know that (1) takes the values ${1,\ldots, \sqrt N}$ with probability ${\Theta(1/\sqrt N)}$ each, and so with constant probability (1) is at most ${O(\sqrt N)}$.
The last statement can be proved from scratch using only pairwise independence. We compute
$\displaystyle \mathop{\mathbb E} \left| \sum_i X_i \right|^2 = N$
so that
$\displaystyle \mathop{\mathbb P} \left[ \left|\sum_i X_i \right| \geq c \cdot \sqrt N \right] = \mathop{\mathbb P} \left[ \left|\sum_i X_i \right|^2 \geq c^2 \cdot N \right] \leq \frac 1 {c^2}$
It is also true that (1) is at least ${\Omega(\sqrt N)}$ with constant probability, and this is trickier to prove.
First of all, note that a proof based on pairwise independence is not possible any more. If ${(X_1,\ldots,X_N)}$ is a random row of an Hadamard matrix, then ${\sum_i X_i = N}$ with probability ${1/N}$, and ${\sum_i X_i =0}$ with probability ${1-1/N}$.
Happily, four-wise independence suffices.
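To see the Hadamard example above concretely, here is a short sketch (mine; it uses the standard Sylvester construction of a ${\pm 1}$ Hadamard matrix) checking that the sum of the entries of a uniformly random row is ${N}$ with probability ${1/N}$ and ${0}$ otherwise:

```python
import numpy as np

def sylvester_hadamard(k):
    """The 2^k x 2^k +/-1 Hadamard matrix built by Sylvester doubling H -> [[H, H], [H, -H]]."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

N = 16
H = sylvester_hadamard(4)
row_sums = H.sum(axis=1)
print(row_sums)                 # the all-ones row sums to N, every other row sums to 0
print(np.mean(row_sums == N))   # 1/N
print(np.mean(row_sums == 0))   # 1 - 1/N
```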
# Distinguishers from linear functions
In the last post we introduced the following problem: we are given a length-increasing function, the hardest case being a function ${G: \{ 0,1 \}^{n-1} \rightarrow \{ 0,1 \}^n}$ whose output is one bit longer than the input, and we want to construct a distinguisher ${D}$ such that the advantage or distinguishing probability of ${D}$
$\displaystyle \left| \mathop{\mathbb P}_{z \in \{ 0,1 \}^{n-1}} [D(G(z)) =1 ] - \mathop{\mathbb P}_{x \in \{ 0,1 \}^{n}} [D(x) =1 ] \right| \ \ \ \ \ (1)$
is as large as possible relative to the circuit complexity of ${D}$.
I will show how to achieve advantage ${\epsilon}$ with a circuit of size ${O(\epsilon^2 n 2^n)}$. Getting rid of the suboptimal factor of ${n}$ is a bit more complicated. These results are in this paper.
# Efficiently Correlating with a Real-valued Function and Breaking PRGs
Suppose we have a length-increasing function ${G: \{ 0,1 \}^{n-1} \rightarrow \{ 0,1 \}^n}$, which we think of as a pseudorandom generator mapping a shorter seed into a longer output.
Then the distribution of ${G(z)}$ for a random seed ${z}$ is not uniform (in particular, it is concentrated on at most ${2^{n-1}}$ of the ${2^n}$ elements of ${\{ 0,1 \}^n}$). We say that a statistical test ${D: \{ 0,1 \}^n \rightarrow \{ 0,1 \}}$ has advantage ${\epsilon}$ in distinguishing the output of ${G}$ from the uniform distribution if
$\displaystyle \left| \mathop{\mathbb P}_{z \in \{ 0,1 \}^{n-1}} [D(G(z)) =1 ] - \mathop{\mathbb P}_{x \in \{ 0,1 \}^{n}} [D(x) =1 ] \right| \geq \epsilon \ \ \ \ \ (1)$
If the left-hand side of (1) is at most ${\epsilon}$ for every ${D}$ computable by a circuit of size ${S}$, then we say that ${G}$ is ${\epsilon}$-pseudorandom against circuits of size ${S}$, or that it is an ${(S,\epsilon)}$-secure pseudorandom generator.
How secure can a pseudorandom generator possibly be? This question (if we make no assumption on the efficiency of ${G}$) is related to the question in the previous post on approximating a boolean function via small circuits. Both questions, in fact, are special cases of the question of how much an arbitrary real-valued function must correlate with functions computed by small circuits, which is answered in a new paper with Anindya De and Madhur Tulsiani.
# CS276 Lecture 14: Pseudorandom Functions from Pseudorandom Generators
Summary
Today we show how to construct a pseudorandom function from a pseudorandom generator.
# CS276 Lecture 3: Pseudorandom Generators
Scribed by Bharath Ramsundar
Summary
Last time we introduced the setting of one-time symmetric key encryption, defined the notion of semantic security, and proved its equivalence to message indistinguishability.
Today we complete the proof of equivalence (found in the notes for last class), discuss the notion of pseudorandom generator, and see that it is precisely the primitive that is needed in order to have message-indistinguishable (and hence semantically secure) one-time encryption. Finally, we shall introduce the basic definition of security for protocols which send multiple messages with the same key.
1. Pseudorandom Generators And One-Time Encryption
Intuitively, a Pseudorandom Generator is a function that takes a short random string and stretches it to a longer string which is almost random, in the sense that reasonably complex algorithms cannot differentiate the new string from truly random strings with more than negligible probability.
Definition 1 [Pseudorandom Generator] A function ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is a ${(t,\epsilon)}$-secure pseudorandom generator if for every boolean function ${T}$ of complexity at most ${t}$ we have
$\displaystyle \left | {\mathbb P}_{x\sim U_k } [ T(G(x)) = 1] - {\mathbb P} _{x\sim U_m} [ T(x) = 1] \right| \leq \epsilon \ \ \ \ \ (1)$
(We use the notation ${U_n}$ for the uniform distribution over ${\{ 0,1 \}^n}$.)
The definition is interesting when ${m> k}$ (otherwise the generator can simply output the first m bits of the input, and satisfy the definition with ${\epsilon=0}$ and arbitrarily large ${t}$). Typical parameters we may be interested in are ${k=128}$, ${m=2^{20}}$, ${t=2^{60}}$ and ${\epsilon = 2^{-40}}$, that is we want ${k}$ to be very small, ${m}$ to be large, ${t}$ to be huge, and ${\epsilon}$ to be tiny. There are some unavoidable trade-offs between these parameters.
Lemma 2 If ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is ${(t,2^{-k-1})}$ pseudorandom with ${t = O(m)}$, then ${k\geq m-1}$.
Proof: Pick an arbitrary ${y \in \{ 0,1 \}^k}$. Define
$\displaystyle T_y(x) = 1 \Leftrightarrow x = G(y)$
It is clear that we may implement ${T}$ with an algorithm of complexity ${O(m)}$: all this algorithm has to do is store the value of ${G(y)}$ (which takes space ${O(m)}$) and compare its input to the stored value (which takes time ${O(m)}$) for total complexity of ${O(m)}$. Now, note that
$\displaystyle {\mathbb P}_{x\sim U_k } [ T(G(x)) = 1] \geq \frac{1}{2^k}$
since ${G(x) = G(y)}$ at least when ${x = y}$. Similarly, note that ${{\mathbb P} _{x\sim U_m} [ T(x) = 1] = \frac{1}{2^m}}$ since ${T(x) = 1}$ only when ${x = G(y)}$. Now, by the pseudorandomness of ${G}$, we have that ${\frac{1}{2^k} - \frac{1}{2^m} \leq \frac{1}{2^{k+1}}}$. With some rearranging, this expression implies that
$\displaystyle \frac{1}{2^{k+1}} \leq \frac{1}{2^m}$
which then implies ${m \leq k + 1 }$ and consequently ${k \geq m - 1}$. ◻
Exercise 1 Prove that if ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is ${(t,\epsilon)}$ pseudorandom, and ${k < m}$, then
$\displaystyle t \cdot \frac 1 \epsilon \leq O( m \cdot 2^k)$
Suppose we have a pseudorandom generator as above. Consider the following encryption scheme:
• Given a key ${K\in \{ 0,1 \}^k}$ and a message ${M \in \{ 0,1 \}^m}$,
$\displaystyle Enc(K,M) := M \oplus G(K)$
• Given a ciphertext ${C\in \{ 0,1 \}^m}$ and a key ${K\in \{ 0,1 \}^k}$,
$\displaystyle Dec(K,C) = C \oplus G(K)$
(The XOR operation is applied bit-wise.)
It’s clear by construction that the encryption scheme is correct. Regarding the security, we have
Lemma 3 If ${G}$ is ${(t,\epsilon)}$-pseudorandom, then ${(Enc,Dec)}$ as defined above is ${(t-m,2\epsilon)}$-message indistinguishable for one-time encryption.
Proof: Suppose that ${(Enc,Dec)}$ is not ${(t-m, 2\epsilon)}$-message indistinguishable for one-time encryption. Then ${\exists}$ messages ${M_1, M_2}$ and ${\exists}$ algorithm ${T}$ of complexity at most ${t - m}$ such that
$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(Enc(K, M_1)) = 1] - {\mathbb P}_{K \sim U_k} [T(Enc(K, M_2)) = 1] \right | > 2\epsilon$
By using the definition of ${Enc}$ we obtain
$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1)) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2)) = 1] \right | > 2\epsilon$
Now, we can add and subtract the term ${{\mathbb P}_{R \sim U_m} [T(R) = 1]}$ and use the triangle inequality to obtain that ${\left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right |}$ added to ${\left | {\mathbb P}_{R \sim U_m} [T(R) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2) = 1] \right |}$ is greater than ${2\epsilon}$. At least one of the two terms in the previous expression must be greater than ${\epsilon}$. Suppose without loss of generality that the first term is greater than ${\epsilon}$:
$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1)) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right | > \epsilon$
Now define ${T'(X) = T(X \oplus M_1)}$. Then since ${H(X) = X \oplus M_1}$ is a bijection, ${{\mathbb P}_{R \sim U_m} [T'(R) = 1] = {\mathbb P}_{R \sim U_m} [T(R) = 1]}$. Consequently,
$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T'(G(K)) = 1] - {\mathbb P}_{R \sim U_m} [T'(R) = 1] \right | > \epsilon$
Thus, since the complexity of ${T}$ is at most ${t - m}$ and ${T'}$ is ${T}$ plus an xor operation (which takes time ${m}$), ${T'}$ is of complexity at most ${t}$. Thus, ${G}$ is not ${(t, \epsilon)}$-pseudorandom since there exists an algorithm ${T'}$ of complexity at most ${t}$ that can distinguish between ${G}$‘s output and random strings with probability greater than ${\epsilon}$. Contradiction. Thus ${(Enc, Dec)}$ is ${(t-m, 2\epsilon)}$-message indistinguishable. ◻
2. Security for Multiple Encryptions: Plain Version
In the real world, we often need to send more than just one message. Consequently, we have to create new definitions of security for such situations, where we use the same key to send multiple messages. There are in fact multiple possible definitions of security in this scenario. Today we shall only introduce the simplest definition.
Definition 4 [Message indistinguishability for multiple encryptions] ${(Enc,Dec)}$ is ${(t,\epsilon)}$-message indistinguishable for ${c}$ encryptions if for every ${2c}$ messages ${M_1,\ldots,M_c}$, ${M'_1,\ldots,M'_c}$ and every ${T}$ of complexity ${\leq t}$ we have
$\displaystyle | {\mathbb P} [ T(Enc(K,M_1), \ldots,Enc(K,M_c)) = 1]$
$\displaystyle -{\mathbb P} [ T(Enc(K,M'_1), \ldots,Enc(K,M'_c)) = 1] | \leq \epsilon$
Similarly, we define semantic security, and the asymptotic versions.
Exercise 2 Prove that no encryption scheme ${(Enc,Dec)}$ in which ${Enc()}$ is deterministic (such as the scheme for one-time encryption described above) can be secure even for 2 encryptions.
Encryption in some versions of Microsoft Office is deterministic and thus fails to satisfy this definition. (This is just a symptom of bigger problems; the schemes in those versions of Office are considered completely broken.)
If we allow the encryption algorithm to keep state information, then a pseudorandom generator is sufficient to meet this definition. Indeed, usually pseudorandom generators designed for such applications, including RC4, are optimized for this kind of “stateful multiple encryption.”
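To make the "stateful" idea concrete, here is a small hedged sketch (mine, not from the lecture): the encryptor remembers how much of the pseudorandom stream has already been consumed, so no pad bytes are ever reused across messages. It reuses the toy G from the sketch above.

```python
class StatefulEncryptor:
    """Keeps an offset into the pseudorandom stream so pad bytes are never reused."""
    def __init__(self, key: bytes):
        self.key = key
        self.offset = 0

    def encrypt(self, message: bytes) -> bytes:
        # Generate the stream up to the bytes we need, then discard the prefix
        # already spent on earlier messages.
        pad = G(self.key, self.offset + len(message))[self.offset:]
        self.offset += len(message)
        return bytes(mi ^ pi for mi, pi in zip(message, pad))
```

The receiver must track the same offset (or be told how many bytes were sent before) in order to decrypt.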
Next time, we shall consider a stronger model of multiple message security which will be secure against Chosen Plaintext Attacks.
https://gmatclub.com/forum/how-many-positive-integers-less-than-1000-are-multiples-of-44535.html
# How many positive integers less than 1000 are multiples of 5
GMAT Instructor
Joined: 04 Jul 2006
Posts: 1253
Updated on: 30 Apr 2015, 03:34
Difficulty: 95% (hard)
Question Stats: 43% (02:12) correct, 57% (02:07) wrong, based on 550 sessions
How many positive integers less than 1000 are multiples of 5 but NOT of 4 or 7?
(A) 114
(B) 121
(C) 122
(D) 129
(E) 136
Originally posted by kevincan on 14 Apr 2007, 23:34.
Senior Manager
Joined: 01 Jan 2007
Posts: 308
15 Apr 2007, 01:42
kevincan wrote:
How many positive integers less than 1000 are multiples of 5 but NOT of 4 or 7?
(A) 114 (B) 121 (C) 122 (D) 129 (E) 136
There are 1000/5=200 multiples of 5 . now we have to find the number of multiples of 5 and 4 that is 20 between 0 to 1000 and 5 and 7 that is 35 between 0 to 1000.
multiples of 20=1000/20=50 multiples. and multiples of 35= 1000/35 =28.5 so 28 multiples of 35.
So multiples of 5 between o to 1000 not including multiples of 20 and 35 are equal to 200-78 = 122 multiples.
Javed.
Cheers!
Manager
Joined: 27 Feb 2007
Posts: 68
15 Apr 2007, 06:48
It's 129 (D)
There are 199 multiples of 5 that are less than 1000.
Now we have to exclude the multiples of 4 and 7 from these 199 numbers.
Let's first find out the numbers below 1000 that are multiples of 5 as well as 4. (that is, multiples of 20). There are 49 such numbers.
Ditto for 5 as well as 7. There are 28 such numbers.
Subtracting these numbers from 199, we get, 199-49-28=122.
However, there will be some numbers which are multiples of both 4 and 7 which have been subtracted twice in the above calculation.
There are 7 such numbers (multiples of 20 as well as 35, that is multiples of 140).
Thus the required number is 122+7 = 129.
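(A quick brute-force check, added here for illustration and not part of the original thread; a few lines of Python confirm the inclusion-exclusion count.)

```python
count = sum(1 for n in range(1, 1000)
            if n % 5 == 0 and n % 4 != 0 and n % 7 != 0)
print(count)  # 129
```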
Senior Manager
Joined: 11 Feb 2007
Posts: 346
15 Apr 2007, 06:49
Javed's method seems to be straight forward and correct.
Although in a counting problem such as this, B is tempting because it's very close to C and it might make you think that you forgot to subtract 1 from somewhere...
What is the OA?
Senior Manager
Joined: 01 Jan 2007
Posts: 308
15 Apr 2007, 10:08
rakesh.id wrote:
It's 129 (D)
There are 199 multiples of 5 that are less than 1000.
Now we have to exclude the multiples of 4 and 7 from these 199 numbers.
Let's first find out the numbers below 1000 that are multiples of 5 as well as 4. (that is, multiples of 20). There are 49 such numbers.
Ditto for 5 as well as 7. There are 28 such numbers.
Subtracting these numbers from 199, we get, 199-49-28=122.
However, there will be some numbers which are multiples of both 4 and 7 which have been subtracted twice in the above calculation.
There are 7 such numbers (multiples of 20 as well as 35, that is multiples of 140).
Thus the required number is 122+7 = 129.
Javed
Cheers!
Manager
Joined: 25 Mar 2007
Posts: 81
15 Apr 2007, 12:12
multiples of 5: 5(1,2,3,4,5......, 199), 199 in total < 1000
multiples of 4 and 5: 20(1,2,3,4,5,....,49) 49 in total < 1000
multiples of 5 and 7: 35(1,2,3,4,5,....,28) 28 in total < 1000
199 - 49 - 28 = 122
The double counts include multiples of 20,35 (2*2*5, 7*5)
70,140,350,700,770,910,980
122 + 7(doubles) =129
Manager
Joined: 27 Feb 2007
Posts: 68
15 Apr 2007, 13:51
alfyG wrote:
multiples of 5: 5(1,2,3,4,5......, 199), 199 in total < 1000
multiples of 4 and 5: 20(1,2,3,4,5,....,49) 49 in total < 1000
multiples of 5 and 7: 35(1,2,3,4,5,....,28) 28 in total < 1000
199 - 49 - 28 = 122
The double counts include multiples of 20,35 (2*2*5, 7*5)
70,140,350,700,770,910,980
122 + 7(doubles) =129
I think the double counts given by you are incorrect. 70, for example, is not a multiple of 4 and 5.
The double counts are: 140, 280, 420, 560, 700, 840, 980
Intern
Joined: 01 Feb 2016
Posts: 10
15 Jun 2016, 23:50
Did I understand the concept correctly?
To find multiples of a number X less than Y, we divide Y/X. If Y/X has no remainder then the no. of multiples is Y/X - 1. If Y/X has a remainder then the no. of multiples is the quotient of Y/X. E.g.
no. of multiples of 5 less than 1000 = 1000/5 - 1 = 199. (If 1000 was included then we wouldn't have subtracted 1.)
No. of multiples of 3 less than 1000 = 1000/3 = 333
To find common multiples of two nos. U and V up to some no. W (e.g. common multiples of 20 and 35 less than 1000), we find the first common multiple (in this case 140). Now all multiples of 140 less than 1000 would be common multiples of 20 and 35 less than 1000 (140, 280, 420, 560, 700, 840, 980, total 7).
Moderator
Joined: 22 Jun 2014
Posts: 1047
Location: India
Concentration: General Management, Technology
GMAT 1: 540 Q45 V20
GPA: 2.49
WE: Information Technology (Computer Software)
18 Jun 2016, 07:15
multiple of 5 between 1 to 100 = (100-5)/5 + 1 = 20
multiple of 5&4 that is 20 between 1 to 100 = (100-20)/20 + 1 = 5
multiple of 5&7 that is 35 between 1 to 100 = (100-35)/35 + 1 = 2
Total multiple of 20 and 35 between 1 to 100 = 7
multiple of 5 between 1 to 1000 = 20*10 = 200
multiple of 20 and 35 between 1 to 1000 = 7*10 = 70
multiple of 5 BUT not of 4&7 between 1 to 1000 = 200-70 = 130
we need to subtract 1 because question asks less than 1000.
hence total multiple asked = 130-1 = 129
BSchool Forum Moderator
Joined: 12 Aug 2015
Posts: 2642
GRE 1: 323 Q169 V154
23 Aug 2016, 10:58
here is my approach ->
Multiples of 5=> 199
Multiples of 4 and 5 i.e of 20 => 49
Multiples of 7 and 5 i.e 35 => 28
Multiples of all 5,4,& i.e multiples of 140=> 7
Required value => 199-49-28+7 => 129
Smash that D
Unfortunately its taking me over 4 minutes
Any Quickies here abhimahna
Try your hand on this one
Manager
Joined: 02 Jul 2016
Posts: 111
13 Oct 2016, 03:07
I am not able to understand why 7 has been added in the end...
Can somebody explain this to me...
If possible, please explain with the help of a Venn diagram.
Manager
Joined: 17 May 2015
Posts: 232
13 Oct 2016, 04:01
suramya26 wrote:
I am not able to understand why 7 has been added in the end...
Can somebody explain this to me...
Of possible please explain with the help of Venn diagram
Hope this will help.
Thanks.
Attachment: HowManyIntegers.jpg (image not included)
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 3511
Location: India
GPA: 3.5
13 Oct 2016, 08:28
kevincan wrote:
How many positive integers less than 1000 are multiples of 5 but NOT of 4 or 7?
(A) 114
(B) 121
(C) 122
(D) 129
(E) 136
Numbers less than 1000 = { 1 , 2 , 3 , 4 ............999 }
Numbers upto 999 Divisible by 5 is $$\frac{999}{5}$$ = 199
Numbers upto 999 Divisible by 20 (Multiple of 5 & 4) is $$\frac{999}{20}$$ = 49
Numbers upto 999 Divisible by 35 (Multiple of 5 & 7) is $$\frac{999}{35}$$ = 28
Numbers upto 999 Divisible by 140 (Multiple of 5, 4 & 7) is $$\frac{999}{140}$$ = 7
So, the total number of positive integers less than 1000 that are multiples of 5 but NOT of 4 or 7 is
=> 199 - 49-28+7 = 129
Hence answer will be (D) 129...
VP
Joined: 07 Dec 2014
Posts: 1020
14 Oct 2016, 15:04
kevincan wrote:
How many positive integers less than 1000 are multiples of 5 but NOT of 4 or 7?
(A) 114
(B) 121
(C) 122
(D) 129
(E) 136
995/5=all multiples of 5=199
980/20=all multiples of 5 and 4=49
980/35=all multiples of 5 and 7=28
980/140=duplicate multiples of 5, 4 and 7=7
199-49-28+7=129
D.
Manager
Joined: 20 Jan 2017
Posts: 61
Location: United States (NY)
Schools: CBS '20 (A)
GMAT 1: 750 Q48 V44
GMAT 2: 610 Q34 V41
GPA: 3.92
20 Jan 2017, 18:37
1)Multiples of 5: (995-5)/5+1=198+1=199
2)Multiples of 5&4: 20: (980-20)/20+1=960/20+1=48+1=49
3)Multiples of 5&7: 35: (980-35)/35+1=945/35+1=27+1=28
4)Multiples of 5&4&7: 140: (980-140)/140+1=840/140+1=6+1=7
5)Multiples of 5 but not 4 and not 7: 199-(49-7)-(28-7)-7=129
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 2779
Location: United States (CA)
05 Feb 2018, 10:20
kevincan wrote:
How many positive integers less than 1000 are multiples of 5 but NOT of 4 or 7?
(A) 114
(B) 121
(C) 122
(D) 129
(E) 136
Let’s first determine the number of multiples of 5 from 5 to 995, inclusive:
(995 - 5)/5 + 1 = 199
If a number is a multiple of 5, but not a multiple of 4, then it’s not a multiple of 20. So let’s determine the number of multiples of 20 less than 1000:
(980 - 20)/20 + 1 = 49
If a number is a multiple of 5, but not a multiple of 7, then it’s not a multiple of 35. So let’s determine the number of multiples of 35 less than 1000:
(980 - 35)/35 + 1 = 28
Thus, we need to subtract 49 and 28 from 199:
199 - 49 - 28 = 122
However, we “over” subtracted from 199; for example, we subtracted 980 twice, as it is a multiple of 20 and also a multiple of 35. In other words, we “over” subtracted the number of numbers that are multiples of 5, 4 and 7, i.e., a multiple of the LCM of 5, 4, 7, which is 140. So we have to determine the number of multiples of 140 less than 1000 and add that number back to 122:
(980 - 140)/140 + 1 = 7
Thus, the number of positive integers less than 1000 which are a multiple of 5 but not 4 or 7 is 122 + 7 = 129.
http://mathhelpforum.com/algebra/112290-plus-minus-logs.html | # Math Help - plus and minus logs
1. ## plus and minus logs
I can do the first one, but the second one I can't seem to work out.
2. Originally Posted by realistic
i can do the first one but the second one i cant seem to work it out
What base are you working to? That is an important omission on your part. Note that, for a > 0, $1 = \log_{a} a$ ....
3. Originally Posted by mr fantastic
What base are you working to? That is an important omission on your part. Note that, for a > 0, $1 = \log_{a} a$ ....
that's how the question in the book is:
log 3 − log 10 + 1
wouldn't the answer just be log 3?
4. Originally Posted by realistic
thats how the question in the book is
Well, if the base is 10 then you know that $\log 10 = 1$ and so the answer to the second question is plainly $\log 3$ since the 1's will cancel.
5. Originally Posted by mr fantastic
Well, if the base is 10 then you know that $\log 10 = 1$ and so the answer to the second question is plainly $\log 3$ since the 1's will cancel.
ah yeah, I see that but was not sure. Thanks for explaining it out.
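A quick numerical check of the above (added for illustration, not part of the thread), assuming base-10 logs as discussed:

```python
import math

lhs = math.log10(3) - math.log10(10) + 1
print(math.isclose(lhs, math.log10(3)))  # True, because log10(10) = 1
```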
https://www.semanticscholar.org/author/A-Babaev/2560272 | # A. Babaev
A search for excited electron (e *) production is described in which the electroweak decays e * → eγ , e * → eZ and e * → νW are considered. The data used correspond to an integrated luminosity of 120 pb −1 taken in e ± p collisions from 1994 to 2000 with the H1 detector at HERA at centre-of-mass energies of 300 and 318 GeV. No evidence for a signal is(More)
• V. Arkadov, I. Ayyaz, A. Babaev +18 others
• 1997
Measurements of ep scattering with squared 4–momentum transfer Q 2 up to 35000 GeV 2 are compared with the expectation of the standard deep-inelastic model of lepton–nucleon scattering (DIS). For Q 2 > 15000 GeV 2 , N obs = 12 neutral current candidate events are observed where the expectation is N DIS = 4.71 ± 0.76 events. In the same Q 2 range, N obs = 4(More)
• 1999
The inclusive single and double differential cross-sections for neutral and charged current processes with four-momentum transfer squared Q^2 up to 30 000 GeV^2 and with Bjorken x down to 0.0032 are measured in e+p collisions. The data were taken with the H1 detector at HERA between 1994 and 1997, and they correspond to an integrated luminosity(More)
A new measurement of the proton structure function F 2 (x, Q 2) is reported for momentum transfers squared Q 2 between 1.5 GeV 2 and 5000 GeV 2 and for Bjorken x between 3 · 10 −5 and 0.32 using data collected by the HERA experiment H1 in 1994. The data represent an increase in statistics by a factor of ten with respect to the analysis of the 1993 data.(More)
• V. Arkadov, I. Ayyaz, A. Babaev +26 others
• 2000
Cross sections for elastic photoproduction of J/ψ and Υ mesons are presented. For J/ψ mesons the dependence on the photon-proton centre-of-mass energy W γp is analysed in an extended range with respect to previous measurements of 26 ≤ W γp ≤ 285 GeV. The measured energy dependence is parameterized as σ γp ∝ W δ γp with δ = 0.83 ± 0.07. The differential(More)
• T. Anthonis, A. Asmone +39 others
• 2005
Deep-inelastic ep scattering data taken with the H1 detector at HERA and corresponding to an integrated luminosity of 106 pb −1 are used to study the differential distributions of event shape variables. These include thrust, jet broadening, jet mass and the C-parameter. The four-momentum transfer Q is taken to be the relevant energy scale and ranges between(More)
The leptoproduction of J/ψ mesons is studied in inelastic reactions for four momentum transfers 2 < Q 2 < 100 GeV 2. The data were taken with the H1 detector at the electron proton collider HERA and correspond to an integrated luminosity of 77 pb −1. Single differential and double differential cross sections are measured with increased precision compared(More)
• C. Adloff, T. Anthonis, A. Babaev +79 others
• 2003
The inclusive e + p single and double differential cross sections for neutral and charged current processes are measured with the H1 detector at HERA. The data were taken in 1999 and 2000 at a centre-of-mass energy of √ s = 319 GeV and correspond to an integrated luminosity of 65.2 pb −1. The cross sections are measured in the range of four-momentum(More)
Results on J/ψ production in ep interactions in the H1 experiment at HERA are presented. The J/ψ mesons are produced by almost real photons (Q^2 ≈ 0) and detected via their leptonic decays. The data have been taken in 1994 and correspond to an integrated luminosity of 2.7 pb^-1. The γp cross section for elastic J/ψ production is observed to increase strongly(More)
• T. Anthonis, V. Arkadov, I. Ayyaz +28 others
• 2000
Jet production is studied in the Breit frame in deep-inelastic positron-proton scattering over a large range of four-momentum transfers 5 < Q^2 < 15 000 GeV^2 and transverse jet energies 7 < E_T < 60 GeV. The analysis is based on data corresponding to an integrated luminosity of L_int ≃ 33 pb^-1 taken in the years 1995–1997 with the H1 detector at HERA at(More)
http://mathhelpforum.com/calculus/94544-gradient-vector-help.html | # Math Help - Gradient vector help
1. ## Gradient vector help
Q: let $r=\sqrt{x^2+y^2}$. Show that...
a) $\nabla r=\frac{\vec{r}}{r}$, where $\vec{r}=\langle x,y\rangle$
b) $\nabla f(r)= f'(r)\,\nabla r= \big(\frac{f'(r)}{r}\big)\vec{r}$
I got the first part just fine, but I am stuck on part b. I'm not sure what $f(r)$ really is. If I let the derivative equal 1, then everything falls into place and I get equivalent answers, but I doubt that's how I am supposed to do it. Do I solve for $r$ and then evaluate that as my function for $r$?
Any suggestions?
Thanks,
Oh, and how do I make my parentheses proportional to what's inside them?! It's angering me....
2. chain rule
$\nabla f(r) = \nabla f(\sqrt{x^{2}+y^{2}}) = \Big( f'(\sqrt{x^{2}+y^{2}}) \cdot \frac{x}{\sqrt{x^{2}+y^{2}}},\ f'(\sqrt{x^{2}+y^{2}}) \cdot \frac{y}{\sqrt{x^{2}+y^{2}}} \Big)$
$= f'(r) \nabla r$
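A small symbolic check of the identity (added for illustration, not part of the thread). It verifies part b) for one concrete choice of $f$, here $f(u)=u^3$ with $f'(u)=3u^2$; any smooth $f$ would behave the same way.

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
r = sp.sqrt(x**2 + y**2)

f_of_r = r**3            # f(r) with f(u) = u**3
fprime_of_r = 3 * r**2   # f'(r)

lhs = sp.Matrix([sp.diff(f_of_r, x), sp.diff(f_of_r, y)])   # gradient of f(r)
rhs = (fprime_of_r / r) * sp.Matrix([x, y])                 # (f'(r)/r) * r_vec
print(sp.simplify(lhs - rhs))   # Matrix([[0], [0]])
```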
https://aas.org/archives/BAAS/v26n2/aas184/abs/S2910.html | We present a study of the radio emission from rotating, charged dust grains immersed in the ionized gas constituting the thick, H$\alpha$-emitting disk of many spiral galaxies. Grains are found to have substantial radio emission peaked at a cutoff frequency in the range 10-100~GHz, depending on the grain size distribution and on the efficiency of the radiative damping of the grain rotation. The dust radio emission dominates the free-free emission from the ionized gas component in the range 4-20~GHz. The model can be used to test the disk-halo interface environment in spiral galaxies, to determine the amount and size distribution of dust in their ionized component, and to investigate the rotation mechanisms for the dust. Numerical estimates are given for experimental purposes.
http://rosalind.info/glossary/independent-random-variables/ | # Glossary
## Independent random variables
Two random variables are independent if the outcomes of one occur with no dependence on those of the other. Formally, $X$ and $Y$ are independent if whenever $A$ and $B$ are events containing outcomes of $X$ and $Y$, respectively, $\mathrm{Pr}(A \textrm{ and } B) = \mathrm{Pr}(A) \times \mathrm{Pr}(B)$.
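As an illustration (not part of the glossary entry), a small Monte Carlo sketch: for two independent fair coin flips, the empirical frequency of "both heads" should be close to $\mathrm{Pr}(A)\times\mathrm{Pr}(B) = 0.25$.

```python
import random

random.seed(0)
trials = 100_000
both = 0
for _ in range(trials):
    a = random.random() < 0.5   # event A: first flip is heads
    b = random.random() < 0.5   # event B: second flip is heads
    both += a and b
print(both / trials)            # roughly 0.25 = 0.5 * 0.5
```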
http://clay6.com/qa/23612/a-charge-particle-of-mass-1-kg-and-charge-1-mu-c-is-projected-from-horizont
# A charged particle of mass 1 kg and charge $1 \mu C$ is projected from horizontal ground at an angle $\theta=45^{\circ}$ with speed $10 ms^{-1}$. In space a horizontal electric field towards the direction of projection $E= 10^{7}NC^{-1}$ exists. The range of the projectile is :
$(A)\;200 m \\ (B)\;20 m \\ (C)\;60 m \\ (D)\;300 m$
$a_x =\large\frac{qE}{m}=\large\frac{1 \times 10^{-6} \times 10^7}{1}$
$\qquad =10\;ms^{-2}$
Time of flight $t= \large\frac{2 u \sin \theta}{g}$
=> $t= \large\frac{2 \times 10}{10} \times \large\frac{1}{\sqrt 2}$
=> $\quad= \sqrt 2 sec$
$Range = u_x \times t +\frac{1}{2} a \times t^2 = 10 \times \frac{1}{\sqrt 2} \times \sqrt 2 +\frac{1}{2}\times 10 \times (\sqrt 2)^2$
$\qquad= 20\;m$
Hence B is the correct answer.
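A quick numerical re-check of the arithmetic above (added for illustration, not part of the original solution):

```python
import math

q, m, E, g = 1e-6, 1.0, 1e7, 10.0      # charge (C), mass (kg), field (N/C), g (m/s^2)
u, theta = 10.0, math.radians(45)

a_x = q * E / m                         # 10 m/s^2 from the electric force
t = 2 * u * math.sin(theta) / g         # time of flight, about 1.414 s
rng = u * math.cos(theta) * t + 0.5 * a_x * t**2
print(round(rng))                       # 20 (metres), option B
```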
https://math.stackexchange.com/questions/2937606/how-to-determine-the-step-size-using-eulers-method | # How to determine the step size using Euler's Method?
Consider the initial value problem $$x' = x+e^{-x}$$ , $$x(0)= 0$$. This problem can’t be solved analytically.
Using the Euler method, compute $$x$$ at $$t = 1$$, correct to three decimal places. How small does the step size need to be to obtain the desired accuracy? (Give the order of magnitude, not the exact number.)
I am not sure how to go about this, I was thinking guess and check but figured that would take too long. Is there a method for determining the stepsize given these conditions?
Any help or intuition would be greatly appreciated.
Consider a Taylor series expansion with the Lagrange remainder $$x(t+d) = x(t) + x'(t)d + \frac{x''(c)}{2}d^2,$$ where $$t \leq c \leq t+d$$. The Euler's method is basically a truncation of this expansion, so the error at each step is bound by the remainder term. Now $$x'' = \frac{df}{dx}x' = \frac{df}{dx}f(x) = (x + e^{-x})(1 - e^{-x}) < (x+1)x$$ So each step may introduce an error $$\delta = \frac{(x(c) + 1)\cdot x(c)}{2}d^2$$. To estimate it we need some bounds on $$x$$. Note that if $$y' = y + 1,\ y(0) = 0$$, then $$x < y$$, and there is an analytic solution $$y(t) = e^t - 1$$. So per step $$\delta = \frac{(x(c) + 1)\cdot x(c)}{2}d^2 < \frac{(y(c) + 1)\cdot y(c)}{2}d^2 < \frac{(y(1) + 1)\cdot y(1)}{2}d^2 = \frac{(e + 1)\cdot e}{2}d^2$$ Altogether there are $$N = 1/d$$ steps, and the full error is $$N\delta < \frac{(e + 1)\cdot e}{2}d \approx 5.054 \ d$$ You need $$5.054\ d < 0.001$$, so $$d < 0.0002$$ is enough. There might be a tighter estimate if you choose $$y(t)$$ more ingeniously.
UPDATE As LutzL pointed out in his comment, we can take $$y(t) = \sqrt 2\tan\frac{t}{\sqrt{2}}$$. Then $$y(1) \approx 1.208$$ and $$N\delta < \frac{(y(1) + 1)\cdot y(1)}{2}d \approx 1.334\ d$$ which gives for the desired precision: $$d < 0.0007$$
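A numerical sanity check of these bounds (my own sketch, not part of the answer): run Euler's method at the suggested step size and compare with a much finer run used as a stand-in for the true value.

```python
import math

def euler(n_steps):
    h = 1.0 / n_steps
    x = 0.0
    for _ in range(n_steps):
        x += h * (x + math.exp(-x))     # x' = x + e^(-x), x(0) = 0
    return x

coarse = euler(1500)            # h ~ 6.7e-4, just inside the d < 0.0007 estimate
reference = euler(200_000)      # much finer run, treated as "truth"
print(abs(coarse - reference))  # comfortably below 10^-3
```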
• You could better estimate $1-e^{-x}\le\min(1,x)\le 1$ for $x\ge 0$. -- Numerical experiments give the factor for the error as even smaller at about $0.282$ with a maximum value $x(1)=1.153638..$. Oct 1 '18 at 7:30
• To get a tighter analytical solution use $e^{-x}\le 1-x+\frac12x^2$ for $x\ge 0$. This gives $x(t)\le y(t)=\sqrt2\tan(t/\sqrt2)$ with $y(1)=1.20846..$ Oct 1 '18 at 7:40
https://www.physicsforums.com/threads/tension-question-vectors.287524/ | # Tension Question - - Vectors
1. Jan 26, 2009
### Random-Hero-
1. The problem statement, all variables and given/known data
I have a quick question about something I saw in a book for my class, and I was wondering if anyone could give me any insight on it!
A mass M is hung on a line between two supports A and B.
(Figure: sketch of the mass M hanging from the line between supports A and B; the original image link, http://img502.imageshack.us/img502/1892/lulzdt6.jpg, is broken.)
a. Which part of the line supporting the mass has the greater tension? Explain.
b. The supports A and B are not at the same level. What effect does this have on the tension in the line? Explain.
I've got no idea of how to explain it without going into physics, which im sure my teacher doesn't want. If anyone could help me figure this out that would be awesome! thanks so much!
2. Jan 26, 2009
### CompuChip
I don't see how to explain this without going into physics.
Anyway, what is the tension in the line determined by (hint: the only forces acting on the block are tension and gravity). How does this depends on the properties of the ropes (hint: there is a dependence on the angle).
3. Jan 26, 2009
### Dr.D
To the eye at least, it appears that each of these cords is at the same angle above the horizontal. If that is true, then the vertical component of tension is the same in each and they are each carrying half of the weight of the block.
Now if they are at different angles with respect to the horizontal, the cord more steeply inclined to the horizontal will be the one carrying the greater load.
The fact that the supports are at different levels has no effect at all as long as we neglect the weight of the cord itself.
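To make this quantitative (my own numerical sketch, not part of the thread): resolving forces for a point mass hanging from two massless cords at angles alpha and beta above the horizontal gives the tensions below; the more steeply inclined cord always carries the larger share.

```python
import math

def tensions(weight, alpha_deg, beta_deg):
    """Static tensions in the two cords (massless cords, point mass)."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    t_a = weight * math.cos(b) / math.sin(a + b)
    t_b = weight * math.cos(a) / math.sin(a + b)
    return t_a, t_b

print(tensions(100.0, 30, 30))   # equal angles -> equal tensions
print(tensions(100.0, 60, 30))   # the 60-degree cord carries more load
```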
http://nuit-blanche.blogspot.com/2013/06/towards-better-compressed-sensing.html | ## Friday, June 21, 2013
### Towards a better compressed sensing
I believe that after reading this paper, I am getting a better sense as to how the Donoho-Tanner phase transition is being beaten through structured sparsity.
Towards a better compressed sensing
Mihailo Stojnic
In this paper we look at a well known linear inverse problem that is one of the mathematical cornerstones of the compressed sensing field. In seminal works \cite{CRT,DOnoho06CS} $\ell_1$ optimization and its success when used for recovering sparse solutions of linear inverse problems was considered. Moreover, \cite{CRT,DOnoho06CS} established for the first time in a statistical context that an unknown vector of linear sparsity can be recovered as a known existing solution of an under-determined linear system through $\ell_1$ optimization. In \cite{DonohoPol,DonohoUnsigned} (and later in \cite{StojnicCSetam09,StojnicUpper10}) the precise values of the linear proportionality were established as well. While the typical $\ell_1$ optimization behavior has been essentially settled through the work of \cite{DonohoPol,DonohoUnsigned,StojnicCSetam09,StojnicUpper10}, we in this paper look at possible upgrades of $\ell_1$ optimization. Namely, we look at a couple of algorithms that turn out to be capable of recovering a substantially higher sparsity than the $\ell_1$. However, these algorithms assume a bit of "feedback" to be able to work at full strength. This in turn then translates the original problem of improving upon $\ell_1$ to designing algorithms that would be able to provide output needed to feed the $\ell_1$ upgrades considered in this papers.
https://indico.nikhef.nl/event/1389/ | Theory
# Proposal of new BSM searches at the LHC and at Telescopes
## by Dr. Filippo Sala
Thursday, 7 February 2019 from to (Europe/Amsterdam)
at Nikhef
Description: The LHC has pushed the scale of the needed new physics (NP) beyond the ~ TeV range. How to experimentally test NP models at and beyond those scales? A first possibility is to look for low energy remnants of such theories, like pseudo-Goldstone bosons (aka ALPs). I will show how ALP masses between a few and 100 GeV are poorly constrained, derive new strong bounds on diphoton resonances in that range, and propose new promising searches at ATLAS, CMS, LHCb and Belle-II. A second possibility is to look for DM (much) heavier than a TeV in cosmic rays, that are observed up to 100 TeV and beyond by several ongoing and near-future telescopes (HESSII, CTA, TAIGA, ANTARES, IceCube, KM3NeT, CALET, AMS-02...). I will discuss the theory and phenomenology of models that evade the so-called DM unitarity bound, and thus enrich the physics case of such telescopes. The impact of both LHC and telescope searches on motivated UV frameworks will also be emphasised.
http://math.stackexchange.com/questions/295445/proving-algebraic-sets/295481 | # Proving algebraic sets
i) Let $Z$ be an algebraic set in $\mathbb{A}^n$. Fix $c\in \mathbb{C}$. Show that $$Y=\{b=(b_1,\dots,b_{n-1})\in \mathbb{A}^{n-1}|(b_1,\dots,b_{n-1},c)\in Z\}$$ is an algebraic set in $\mathbb{A}^{n-1}$.
ii) Deduce that if $Z$ is an algebraic set in $\mathbb{A}^2$ and $c\in \mathbb{C}$ then $Y=\{a\in \mathbb{C}|(a,c)\in Z\}$ is either finite or all of $\mathbb{A}^1$. Deduce that $\{(z,w)\in \mathbb{A}^2 :|z|^2 +|w|^2 =1\}$ is not an algebraic set in $\mathbb{A}^2$.
Any thoughts of yours so far? Giving us information on what you've tried will help us giving you a better answer. If you haven't tried anything yet, then you should probably do so prior to asking here; this way you will learn much more. – Nils Matthes Feb 5 '13 at 14:48
This answer was merged from another question so it only covers part ii).
$Z$ is algebraic, and hence the simultaneous solution to a set of polynomials in two variables. If we swap one variable in all the polynomials with the number $c$, you will get a set of polynomials in one variable, with zero set being your $Y$. $Y$ is therefore an algebraic set, and closed in $\Bbb A^1$, therefore either finite or the whole affine line.
Assume for contradiction that $Y = \{ ( z,w) \in \mathbb{A}^2 : |z|^2 + |w|^2 = 1 \}$ is algebraic. Set $w = 0$ (this is our $c$). The $Y$ we get from this is the unit circle in the complex plane. That is an infinite set, but certainly not all of $\Bbb C$. Thus $Y$ is neither finite nor all of $\Bbb A^1 = \Bbb C$, and therefore $Z$ cannot be algebraic to begin with.
Now that this is merged, I can note that the procedure above the line in my answer will solve part i) with a slight modification to adjust for more variables in a general field. – Arthur Feb 6 '13 at 12:45
An idea: for $\,f\in I(Z)\,$ , look at
$$g(x_1,...,x_{n-1}):=f(x_1,...,x_{n-1},c)\in\Bbb C[x_1,...,x_{n-1}]$$
with $\,c\in\Bbb C\,$ s.t. for some $\,a_1,...,a_{n-1}\,\,,\,(a_1,...,a_{n-1},c)\in Z\,$
This answer was merged from another question so it only covers part ii).
Begin by writing $Z=V(I)$ for $I \subset \mathbb C[x,y]$. Take generators of $I$ evaluate at $c$ and consider these as elements of $\mathbb C[x,y]$ and determine what their zero sets can be to find $Y$. The next part should follow.
Let $Z=Z(f_1,\ldots,f_r)$ with certain polynomials $f_i\in\mathbb C[x_1,\ldots,x_n]$. Then, we have polynomials $g_i := f_i(x_1,\ldots,x_{n-1},c)\in\mathbb C[x_1,\ldots,x_{n-1}]$. We claim that $Y=Z(g_1,\ldots,g_r)$. Indeed, \begin{align*} Y &= \{ (b_1,\ldots,b_{n-1}) \mid (b_1,\ldots,b_{n-1},c)\in Z \} \\ &= \{ (b_1,\ldots,b_{n-1}) \mid \forall i: f_i(b_1,\ldots,b_{n-1},c)=0 \} \\ &= \{ (b_1,\ldots,b_{n-1}) \mid \forall i: g_i(b_1,\ldots,b_{n-1})=0 \} \\ &= Z(g_1,\ldots,g_r). \end{align*}
http://mathhelpforum.com/calculus/35881-convergence-divergence-question-quick-easy-d.html | # Thread: convergence divergence question quick and easy :D
1. ## convergence divergence question quick and easy :D
when taking the summation of $1/n$ from 1 to infinity,
what is the difference between saying that it diverges or converges to 0?
I am kind of puzzled by this because, for example the summation from 1 to infinity of $ln(2x+3)/x$, what does this do?
does it converge to 0 after using l'hopitals rule, or does it follow the harmonic series rule and diverge?
2. Originally Posted by p00ndawg
when taking the summation of $1/n$ from 1 to infinity,
what is the difference between saying that it diverges or converges to 0?
I am kind of puzzled by this because, for example the summation from 1 to infinity of $ln(2x+3)/x$, what does this do?
does it converge to 0 after using l'hopitals rule, or does it follow the harmonic series rule and diverge?
No...it does not ...the first one diverges because it is a divergent P-series..the second one diverges by a comparison to $\frac{\ln(x)}{x}$ which diverges by the integral test...if you need A LOT of help look here http://www.mathhelpforum.com/math-he...-tutorial.html
3. Originally Posted by Mathstud28
No...it does not ...the first one diverges because it is a divergent P-series..the second one diverges by a comparison to $\frac{\ln(x)}{x}$ which diverges by the integral test...if you need A LOT of help look here http://www.mathhelpforum.com/math-he...-tutorial.html
naw I dont need a lot of help, but I got a test tomorrow and just have a few holes i needed some filling on.
you say $1/n$ is divergent because of the p series, but isnt the series for it being greater than or less than 1? if its a big proof just reply with because I said so, ill understand.
but, so are you saying that if a question or function reduces through some kind of comparison or test, like the integral or nth term test, then if it does reduce to $1/n$ it would actually converge instead of diverge?
oh and im sorry the equation is $ln(2x^3 +1)/x$
4. Originally Posted by p00ndawg
when taking the summation of $1/n$ from 1 to infinity,
what is the difference between saying that it diverges or converges to 0?
I am kind of puzzled by this because, for example the summation from 1 to infinity of $ln(2x+3)/x$, what does this do?
does it converge to 0 after using l'hopitals rule, or does it follow the harmonic series rule and diverge?
The difference between a series diverging or converging to zero is a big difference. When a series diverges, that means the series has no sum. If a series converges to zero, that means that the sum of the series exists and is equal to zero.
The series $\sum\frac{\ln(2x+3)}{x}$ diverges; although L'Hospital's Rule can tell you that the limit of the terms is zero, this does not imply that the series converges. If the limit of the terms (as n approaches infinity) is not zero, then you can conclude the series diverges.
You show that $\sum\frac{\ln(2x+3)}{x}$ diverges by comparison with the harmonic series.
5. Originally Posted by p00ndawg
naw I dont need a lot of help, but I got a test tomorrow and just have a few holes i needed some filling on.
you say $1/n$ is divergent because of the p series, but isnt the series for it being greater than or less than 1? if its a big proof just reply with because I said so, ill understand.
but, so are you saying that if a question or function reduces through some kind of comparison or test, like the integral or nth term test, then if it does reduce to $1/n$ it would actually converge instead of diverge?
another way is this...since it decreases as n gets bigger and all the terms are positive we can apply the integral test $\int_1^{\infty}\frac{1}{n}\,dn=\ln(n)\bigg|_1^{\infty}=\infty-0$, therefore it is divergent....and if a p-series has exponent $p\le 1$ it diverges
6. Originally Posted by Mathstud28
another way is this...since it decreases as n gets bigger and all the terms are positive we can apply the integral test $\int_1^{\infty}\frac{1}{n}\,dn=\ln(n)\bigg|_1^{\infty}=\infty-0$, therefore it is divergent....and if a p-series has exponent $p\le 1$ it diverges
hey sorry about this but the equation was $ln(2x^3+1)/x$
7. Originally Posted by icemanfan
The difference between a series diverging or converging to zero is a big difference. When a series diverges, that means the series has no sum. If a series converges to zero, that means that the sum of the series exists and is equal to zero.
The series $\sum\frac{\ln(2x+3)}{x}$ diverges; although L'Hospital's Rule can tell you that the limit of the terms is zero, this does not imply that the series converges. If the limit of the terms (as n approaches infinity) is not zero, then you can conclude the series diverges.
You show that $\sum\frac{\ln(2x+3)}{x}$ diverges by comparison with the harmonic series.
does it change if the equation is $ln(2x^3+3/x)$?
8. Originally Posted by p00ndawg
hey sorry about this but the equation was $ln(2x^3+1)/x$
Originally Posted by p00ndawg
does it change if the equation is $ln(2x^3+3/x)$?
The first one still diverges by comparison to $\frac{\ln(x)}{x}$ and the second one diverges since $\lim_{x\to\infty}\ln\bigg(2x^3+\frac{3}{x}\bigg)=\infty\ne{0}$...which is the n-th term test
9. Originally Posted by p00ndawg
does it change if the equation is $ln(2x^3+3/x)$?
That series diverges even faster. And in that case, you can use the nth term test to prove it.
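(Added for illustration, not from the thread.) The partial sums of both series keep growing without levelling off, which is consistent with divergence; a numerical check like this is only suggestive, of course, not a proof.

```python
import math

def partial_sum(term, n):
    return sum(term(k) for k in range(1, n + 1))

t1 = lambda k: math.log(2 * k + 3) / k        # ln(2n+3)/n
t2 = lambda k: math.log(2 * k**3 + 1) / k     # ln(2n^3+1)/n

for n in (10**3, 10**4, 10**5):
    print(n, partial_sum(t1, n), partial_sum(t2, n))
# both columns keep increasing, roughly like (ln n)^2
```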
https://www.aimsciences.org/article/doi/10.3934/ipi.2019029 | # American Institute of Mathematical Sciences
June 2019, 13(3): 635-652. doi: 10.3934/ipi.2019029
## Inverse random source problem for biharmonic equation in two dimensions
School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
* Corresponding author: [email protected]
Received May 2018 Revised December 2018 Published March 2019
Fund Project: The authors are partly supported by NSFC grant 11471284, 11421110002, 11621101, 91630309 and the Fundamental Research Funds for the Central Universities.
The establishment of a relevant model and the solution of an inverse random source problem are among the main tools for analyzing the mechanical properties of elastic materials. In this paper, we study an inverse random source problem for the biharmonic equation in two dimensions. Under some regularity assumptions on the structure of the random source, the well-posedness of the forward problem is established. Moreover, based on the explicit solution of the forward problem, we can solve the corresponding inverse random source problem via two transformed integral equations. Numerical examples are presented to illustrate the validity and effectiveness of the proposed inversion method.
Citation: Yuxuan Gong, Xiang Xu. Inverse random source problem for biharmonic equation in two dimensions. Inverse Problems & Imaging, 2019, 13 (3) : 635-652. doi: 10.3934/ipi.2019029
Figure captions (figures not shown): The model for the two-dimensional biharmonic equation; The mesh generation under the polar coordination; The solution to the direct problem with random source; Inverse stiffness D (the exact solution is 0.05); The left subfigure is the L-curve for $g$ and the right subfigure is the inverse mean; The left subfigure is the L-curve for $h^2$ and the right subfigure is the inverse variance (in both, the dotted plots are accurate values).
2020 Impact Factor: 1.639 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8361961841583252, "perplexity": 4112.309154174837}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00086.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-5-linear-functions-5-3-slope-intercept-form-practice-and-problem-solving-exercises-page-314/63 | ## Algebra 1: Common Core (15th Edition)
$y=5x-2$
The line passes through the point (0, -2), so we know that the y-intercept is -2 (because this is where x = 0). We will write this in the form y = mx + b. The slope is 5, so we will plug 5 in for m. The y-intercept is -2, so we will plug -2 in for b. Thus, we obtain the function: $y=5x-2$.
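As a quick check, substituting $x=0$ into $y=5x-2$ gives $y=5(0)-2=-2$, so the line does pass through the given point (0, -2).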
https://mathhelpboards.com/threads/improper-integrals-comparison-test.8682/ | # Improper Integrals - Comparison Test
#### ISITIEIW
##### New member
Nov 4, 2013
17
Hey, not too sure about what function i would compare this integral from 1 to infinity of (3x^3 -2)/(x^6 +2) dx. I also have to show that it converges.
Thanks!
#### Random Variable
##### Well-known member
MHB Math Helper
Jan 31, 2012
253
For large $x$, the integrand behaves like $\displaystyle \frac{3x^{3}}{x^{6}} = \frac{3}{x^{3}}$.
Thus the integral converges by the p-test.
#### ISITIEIW
##### New member
Nov 4, 2013
17
thanks, i didn't know that you could have 3/x^p for the p test
#### Random Variable
##### Well-known member
MHB Math Helper
Jan 31, 2012
253
It's also important to note that $\displaystyle \frac{3x^{3}-2}{x^{6}+2}$ is continuous on $[1,\infty)$.
#### Krizalid
##### Active member
Feb 9, 2012
118
Hey, not too sure about what function i would compare this integral from 1 to infinity of (3x^3 -2)/(x^6 +2) dx. I also have to show that it converges.
You're taking the integral of such a function for all $x\ge1$, and within this range the function is continuous as said above, which is important to bound the integrand the way we want. So for example $\dfrac{1}{x^{6}+2}<\dfrac{1}{x^{6}}$ always holds for $x\ge1$, and besides $3x^{3}-2<3x^{3}+2$ always holds, so actually for all $x\ge1$ you have $$\displaystyle\frac{3x^{3}-2}{x^{6}+2}<\frac{3x^{3}+2}{x^{6}}= \frac{3}{x^{3}}+\frac{2}{x^{6}},$$ implying $\displaystyle\int_{1}^{\infty }\frac{3x^{3}-2}{x^{6}+2}\,dx<\infty$.
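For concreteness, the bounding integral can be evaluated directly: $$\int_{1}^{\infty }\left(\frac{3}{x^{3}}+\frac{2}{x^{6}}\right)dx=\left[-\frac{3}{2x^{2}}-\frac{2}{5x^{5}}\right]_{1}^{\infty }=\frac{3}{2}+\frac{2}{5}=\frac{19}{10},$$ so the original integral is bounded above by $\frac{19}{10}$.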
#### Deveno
##### Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
Just a side-note on a "general tip" that may prove useful:
1. If you are integrating a function $\dfrac{f(x)}{g(x)}$ it is often helpful to find a bounding function:
$\dfrac{h(x)}{k(x)}$ where:
$f(x) \leq h(x)$ for all $x > N$
$g(x) \geq k(x)$ for all $x > M$ and
$h(x),k(x)$ share some common factor so we have some nice cancellation occuring.
Close examination shows this is exactly what is happening in Krizalid's post (caveat: finding the "right" bounding functions can often take some algebraic ingenuity).
http://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-3-section-3-3-slope-exercise-set-page-242/19 | Chapter 3 - Section 3.3 - Slope - Exercise Set: 19
m=0
Work Step by Step
The graphed line is horizontal and we can observe that the y-value of the line is constant at 1. Therefore, we know that the line must have a slope of 0. m=0
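As a check with the slope formula, take any two points on the line, say (0, 1) and (2, 1): m = (1 - 1)/(2 - 0) = 0/2 = 0.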
https://www.physicsforums.com/threads/proton-acceleration-do-you-agree.76893/ | # Proton Acceleration - Do you Agree?
1. May 25, 2005
### TrippingSunwise
A proton, initially at rest, is accelerated from plate A to plate B, and acquires 1.92 * 10^-17 J of kinetic energy.
i. Which plate is positive and which is negative?
ii. What is the potential difference?
iii. Sketch the correct direction of E between the plates. Does the proton move with or against the field E? Does this agree with the gain (or loss) calculated in ii?? Explain?
My Solution:
i. Plate A is positive Plate B is negative
ii. q * delta V = 1.92 * 10^-17 J
Delta V = -1.92 * 10^-17 / 1.602 * 10^-19 C = -1.1985 * 10^2 V
The potential difference is -1.1985 * 10^2 V
This is a loss in potential
iii. The proton moves with the field and this agrees with my previous calculation, as an electron moving with a field is like a mass falling downward within Earth's gravitational field. There is no need for an external force, so there we witness a loss of potential.
Does this make sense to everyone?
Last edited: May 26, 2005
2. May 25, 2005
### whozum
That last line is a bit confusing.
In (ii) the potential between the plates isn't affected by the proton's gain in kinetic energy. The proton's final kinetic energy is due to the presence of the potential difference. The proton experienced a change in PE when flowing from A to B; this change in PE results in the change in KE.
3. May 25, 2005
### Dr.Brain
1)Ok for your first question, as the proton gains energy moving from A to B ... B should have a higher potential, that is B should be positive.
2) Potential difference can be calculated:
$qV = 1.92 \times 10^{-17}$
Calculate V
3)Electric field lines go from +ve to -ve plates and the proton moves against the field.
The proton has to do work in moving against the field ...so that much energy is stored in it which is the gain in energy.
4. May 26, 2005
### OlderDan
The OP has it right. Plate A is positive and plate B is negative. If they were reversed, the proton would not accelerate toward plate B. Plate A is at the higher potential. The proton loses potential energy as it moves from plate A to plate B, gaining kinetic energy in the process. There is a bit of a problem with the last statement.
For gravity, the force is always in the direction of the gravitational field and in the direction of lower potential energy. For charges, the electric force direction is always toward lower potential energy, but not always toward lower electric potential. I assume the word electron was supposed to be proton, though it can be correct either way if by "field" you mean the potential energy field rather than the electric potential field. For a positive charge like the proton, the force is in the direction of decreasing electric potential.
http://stnb.cat/en/seminaris/2018/xerrades/423/ | ## STNB2018 (32nd edition)
### Howard's Big Heegner points
#### Presenters
Santiago Molina Blanco
#### Abstract
In this talk, we will describe how to put the Heegner points in families, namely, we will explain Howard's construction of big Heegner points. We will define a sequence of compatible points $P_s$ in the tower of modular curves $X_1(N p^s)$, where each $P_s$ has complex multiplication by an order of conductor $p^s$. These points define classes in the cohomology of the tower of modular curves through the Kummer map, hence they provide a class in the projective limit of the cohomology of the tower. The big Heegner point is the projection of such a class to the isotypical component associated with a given Hida family. Finally, we will explain Castellà's result that relates the specialization of a big Heegner point at higher weights $2k$ with the generalized Heegner cycles introduced in the second talk.
https://www.ias.ac.in/describe/article/seca/026/06/0481-0492 | • Dynamical theory of the vibration spectra of crystals - Part I Diamond
• Fulltext
https://www.ias.ac.in/article/fulltext/seca/026/06/0481-0492
• Keywords
Normal Mode; Force Component; Valence Bond; Vibration Spectrum; Unit Displacement
• Abstract
Exact expressions have been derived for the frequencies of the nine normal modes of vibration of the diamond structure, which take account of the forces of interaction between each atom and its 28 nearest neighbours. The formulæ involve 8 independent constants together with an additional relation between them, and the constants are thus perfectly determinate
• Author Affiliations
1. Department of Physics, Indian Institute of Science, Bangalore
http://www.gradesaver.com/into-the-wild/q-and-a/chapter-1-and-2-274610 | chapter 1 and 2
Consider the last paragraph of each chapter. What is the function of each? How does each paragraph work?
https://deepai.org/publication/monte-carlo-tree-search-with-scalable-simulation-periods-for-continuously-running-tasks | # Monte Carlo Tree Search with Scalable Simulation Periods for Continuously Running Tasks
Monte Carlo Tree Search (MCTS) is particularly adapted to domains where the potential actions can be represented as a tree of sequential decisions. For an effective action selection, MCTS performs many simulations to build a reliable tree representation of the decision space. As such, a bottleneck to MCTS appears when enough simulations cannot be performed between action selections. This is particularly highlighted in continuously running tasks, for which the time available to perform simulations between actions tends to be limited due to the environment's state constantly changing. In this paper, we present an approach that takes advantage of the anytime characteristic of MCTS to increase the simulation time when allowed. Our approach is to effectively balance the prospect of selecting an action with the time that can be spared to perform MCTS simulations before the next action selection. For that, we considered the simulation time as a decision variable to be selected alongside an action. We extended the Hierarchical Optimistic Optimization applied to Tree (HOOT) method to adapt our approach to environments with a continuous decision space. We evaluated our approach for environments with a continuous decision space through OpenAI gym's Pendulum and Continuous Mountain Car environments and for environments with discrete action space through the arcade learning environment (ALE) platform. The evaluation results show that, with variable simulation times, the proposed approach outperforms the conventional MCTS in the evaluated continuous decision space tasks and improves the performance of MCTS in most of the ALE tasks.
## Introduction
Monte Carlo Tree Search (MCTS) [Coulom2006, Kocsis and Szepesvári2006] is a simulation-based planning method. It seeks the action that has the best expected outcome when applied to an environment at its current state. To select an action, MCTS builds, through simulations, a search tree that represents sequences of actions that can be taken from the current state and their expected outcome. As such, MCTS is particularly suited to domains where actions can be represented as trees of sequential decisions, such as turn-based games and sequential decision-making tasks. Since it was proposed in 2006, MCTS has become very successful in the complex domain of computer Go [Gelly and Silver2007, Silver et al.2016, Silver et al.2017b]. With the latest milestones of translating lessons learned from computer Go to master other board games such as Chess and Shogi [Silver et al.2017a]
, MCTS has been solidified as an important planning method in reinforcement learning.
Application of MCTS to many real-world environments involves selecting sequential actions for a continuous running task. In continuously running tasks, where the environment is constantly changing, MCTS is presented with a set of challenges that differ from the ones tackled in computer Go. A key challenge to MCTS in such environments is the bottleneck constituted by the number of simulations that can be performed between action selections. The number of simulations performed between action selections dictates how accurate the search tree is in its depiction of the decision space. In continuously running tasks, actions are often taken to offset the effect of the constantly changing environment. As a result, the performance of MCTS is limited due to the short periods of time available for simulations between action selections.
This paper presents an approach that takes advantage of the anytime characteristic of MCTS to introduce scalable simulation periods for continuously running tasks. The proposed approach adds the simulation time as a decision variable alongside the action selection. The idea is to effectively balance the prospect of selecting an action with the time that can be spared before an action update is required. We expand the Hierarchical Optimistic Optimization applied to Tree (HOOT) method [Mansley, Weinstein, and Littman2011] to adapt our approach to fast-changing environments with a continuous decision space. The Hierarchical Optimistic Optimization (HOO) algorithm exploits a set of promising actions that forms a general topological representation of the decision space [Bubeck et al.2011].
The rest of the paper is organized as follows. First, the background to this work is reviewed. Then, the proposed scalable search period MCTS is introduced. Then, the evaluation performance of the proposed approach are presented. Finally, some concluding remarks are made.
## Background
### Monte Carlo Tree Search
The driving idea of MCTS is to determine the best action to take in the current state by representing the decision space with an incrementally growing search tree. The search tree is updated through random simulations with new simulated states (nodes) and actions (edges) iteratively added as action paths, which go from the current state to a terminal state. A node of the search tree maintains the expected value going forward from that node’s state. The expected value is the average outcome of all simulations that went through the node.
MCTS simulations can be divided into multi-phase play-outs, namely, selection, expansion, roll-out, and back-propagation phases. Each simulation starts from the root (current state). During the selection phase, simulations go through the search tree with actions taken based on a selection policy and information maintained in the node. When the selection process reaches a node to which an immediate child can be added, the tree is expanded by attaching a new leaf to that node. The addition of the new node constitutes the expansion phase. Then, a roll-out policy is applied from the new leaf state to a terminal state. The straight-forward random action selection roll-out policy is widely used. Finally, the outcome of the simulation is back-propagated to update the information maintained by the tree from the leaf node to the root.
A key issue during the selection phase of MCTS is to balance the exploitation of promising actions and the exploration of the decision space. The commonly used Upper Confidence Bounds applied to Trees (UCT) algorithm [Kocsis and Szepesvári2006] offers a compromise to that. UCT is an extension of the Upper Confidence Bound (UCB) approach, which was developed for the multi-armed bandit problem [Auer, Cesa-Bianchi, and Fischer2002]. With UCT, a node is selected to maximize the UCB1 value given as
$$\mathrm{UCB1}=\bar{X}_j+C\times\sqrt{\frac{2\ln n}{n_j}} \qquad (1)$$

where $\bar{X}_j$ is the node's value, the average of all outcomes of all simulations that pass through that node, $n$ is the number of times the parent node has been visited, and $n_j$ is the number of times child $j$ has been visited. $C$ is a coefficient, which is usually tuned experimentally to control the exploration-exploitation trade-off.
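As an illustration, the selection step over a discrete set of children might be sketched as follows (the `visits`, `value`, and `children` attribute names are assumptions for illustration, not taken from the paper):

```python
import math

def ucb1_select(node, C=1.0):
    """Return the child maximizing Equation (1); unvisited children are tried first."""
    n = node.visits  # number of visits to the parent node
    def ucb1(child):
        if child.visits == 0:
            return float("inf")
        return child.value + C * math.sqrt(2.0 * math.log(n) / child.visits)
    return max(node.children, key=ucb1)
```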
### MCTS for continuous decision spaces
Default MCTS approaches, such as UCT, are adapted for finite-number sequential decision problems. However, as all actions from a given state are explored at least once, the look-ahead of the search tree can become very shallow when the decision space is very large. This is the case for environments with a continuous decision space where the decision space is infinite. Progressive widening and the HOOT approaches have been proposed to deal with such environments.
#### Progressive widening
The solution concurrently introduced as progressive widening by Coulom coulom2006efficient and progressive unpruning by Chaslot et al. chaslot2007progressive initially reduces the number of evaluated actions. Eventually, more actions are added based on the number of visits to a node, and thus the decision space is progressively covered. The order of adding the actions could be determined randomly or by exploiting domain knowledge. The progressive widening strategies assure that the added actions are sufficiently estimated, while UCT directs the tree growth toward the most promising part of the search tree.
#### Hierarchical optimistic optimization applied to tree
The HOOT strategy [Mansley, Weinstein, and Littman2011] integrates the HOO algorithm [Bubeck et al.2011] into the tree search planning to overcome the limitation of UCT in a continuous decision space. The HOO algorithm exploits a set of actions that forms a general topological representation of the action space as a tree. When queried for an action, the HOO algorithm follows the path of maximal B-values, which are scores computed at the nodes. At a leaf node, an action is sampled within the range of the decision space that is represented by the leaf node. Two child nodes are then added to the node, each covering a part of the decision space represented by the parent node.
As defined by Bubeck et al. bubeck2011x, the B-value for a node $i$ is computed from its reward estimate $\hat{R}_i$ and its number of visits $n_i$, which are saved at the node, and a reward bias based on the node's depth $h_i$. Let $U_i$ be the upper bound on the estimate of the reward after $n$ iterations. It is given by

$$U_i=\hat{R}_i+\sqrt{\frac{2\ln n}{n_i}}+v_1\rho^{h_i} \qquad (2)$$

for $v_1>0$ and $\rho\in(0,1)$.

The B-value of a node is defined as

$$B_i=\min\Big\{U_i,\;\max_{j\in\mathrm{children}}B_j\Big\}, \qquad (3)$$

with $B_j=+\infty$ for nodes that have not yet been sampled.
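A recursive sketch of Equations (2) and (3) could look as follows (the `r_hat`, `visits`, `depth`, and `children` attribute names are illustrative assumptions):

```python
import math

def b_value(node, n, v1, rho):
    """B_i = min(U_i, max_j B_j), with B = +infinity for nodes not yet sampled."""
    if node.visits == 0:
        return float("inf")
    u = node.r_hat + math.sqrt(2.0 * math.log(n) / node.visits) + v1 * rho ** node.depth
    best_child = max((b_value(c, n, v1, rho) for c in node.children), default=float("inf"))
    return min(u, best_child)
```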
The HOOT approach is similar to UCT, except that HOOT places a continuous action bandit algorithm, HOO, at each node of the search tree. HOO is used to sample the decision space for action selection to overcome the discrete action limitation of UCT.
## MCTS for continuously running tasks
MCTS applications are traditionally allocated fixed time for planning between action selections. However, in continuously running environments that are constantly changing, a dilemma arises between frequently updating the action taken and allowing enough time for planning. With MCTS, actions are selected according to search trees that are built and updated through sampled simulations. As a search tree depicts the decision space, the efficiency of the selected action depends on the accuracy of the search tree, and therefore on the number of performed simulations. On the other hand, with limited deliberation time between actions, the number of simulations that can be performed between action selections is limited.
### Proposed scalable search approach
To improve the performance of MCTS for continuously running tasks, we consider the possibility of running MCTS simulations with scalable deliberation periods between action selections. As conventional MCTS selects actions at regular steps, this confines the simulation period for all action selections to the time available between steps. We advocate extending the time a selected action is applied before selecting a new action in order to increase the number of MCTS simulations that are performed. Our insight is that, depending on the environment's state, an action can be selected with regard to the simulation period that is allowed before an action update is necessary. In other words, actions can be selected while considering the period of time that would be available for simulations prior to the next action selection. We consider in this work the possibility for MCTS to explore the trade-off between frequent action selections and the time that can be afforded for simulations.
The proposed approach is to incorporate into the MCTS selection process the deliberation time between action selections as a decision variable. The allowed simulation period before the next action selection is determined alongside the selected action. Figure 1 presents an illustrative example of the proposed approach. In Figure 1, suppose that selecting one action results in the car going up and selecting the other results in it going down. In that case, the time that can be afforded before selecting the next action depends on the selected action. If the first action were to be selected, a new action has to be selected as the car reaches the top of the left hill for effective control. The proposed scalable search period MCTS takes this into account to select that action together with the simulation time that will be used for simulations to select the next action (see Figure 1).
With the simulation time included as a variable, the transition process changes from $P(s' \mid s, a)$ to $P(s' \mid s, a, \tau)$, where $P(s' \mid s, a, \tau)$ is the probability of moving to state $s'$ from state $s$ if action $a$ is taken, and where $\tau$ is the period between selecting action $a$ and the next action selection. Note that our approach is different from a Semi-Markov Decision Process, where the periods $\tau$ are random variables rather than decision variables. Note also that, with the notation $\tilde{a} = (a, \tau)$, where the pair action/simulation time forms a two-dimensional action space, we revert to the default $P(s' \mid s, \tilde{a})$.
### Algorithm for scalable search period MCTS
The algorithm for the proposed approach is summarily illustrated in Fig. 2. The different phases of its implementation mostly mirror the conventional MCTS. The selection, simulation, and update phases of MCTS are mostly unchanged. The main updates are to the expansion process, where the added continuous time space introduces new constraints to be tackled. The algorithm applies UCT with UCB1 as the selection policy to navigate the search tree. It uses progressive widening with pruning to restrict the search tree's lateral expansion. At a given time, a node is considered fully expanded if its number of children is equal to the maximum number of actions allowed with regard to the node's visits. When an expansion is required, a HOO algorithm is queried to sample an action/deliberation time pair for the extension node.
Progressive widening allows MCTS to initially focus the simulations on a limited number of actions to avoid having shallow search trees. As the number of performed simulations increases, it then gradually allows more actions to be considered to cover the decision space more broadly. We use progressive widening to decide whether a node can be further expanded. As progressive widening limits the number of action/simulation time pairs considered from a node, the search tree is regularly pruned. The pruning of the search tree promotes the exploration of a vast number of action/simulation period pairs by discarding the least promising pairs in favor of trying new pairs for the search tree.
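The exact widening schedule is not spelled out here; a commonly used criterion, shown purely as an assumed placeholder with tuning constants `C` and `alpha`, compares the number of children to a slowly growing function of the visit count:

```python
import math

def fully_expanded(node, C=1.0, alpha=0.5):
    """Allow at most ceil(C * visits**alpha) children after `visits` visits (C, alpha assumed)."""
    return len(node.children) >= math.ceil(C * max(node.visits, 1) ** alpha)
```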
The HOOT approach adds a filter layer to the set of action/simulation periods that are evaluated during simulations. For tasks with discrete action space, conventional HOO is used to sample the simulation periods to be paired with the actions. As for tasks with a continuous decision space, a two-dimensional HOO is introduced to sample pairs of action/simulation time. Through HOO sampling, the selection of the action/simulation period pairs to be added to the search tree is directed toward sections of the decision space where promising action/simulation period pairs are most likely to be found. As a result, the efficiency of the action/simulation period pairs considered in the search tree is improved, in particular when the number of simulations possible is limited.
Another aspect considered in the proposed approach for continuously running tasks is the effect of running MCTS simulations for an expected state that is multiple steps ahead. In continuously running tasks, the MCTS simulations to select an action/simulation period pair are performed considering the state that is expected after applying the previously selected action for the selected simulation period. When the simulation period exceeds a single step, the expected state at the end of the simulations may drift from the task's actual state. This occurs if the task is stochastic or if the simulation model for a deterministic task is not accurate. To mitigate this effect on the accuracy of the search tree, the task's state during the MCTS simulations is monitored and the expected state for the simulation model is updated after each step.
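Putting these pieces together, the interaction loop described above might be sketched as follows, with a hypothetical `planner` object standing in for the search tree (its `search` and `sync_expected_state` methods are illustrative names, not from the paper):

```python
def run_episode(env, planner):
    """Apply each selected action for its selected simulation period tau."""
    state = env.reset()
    done = False
    while not done:
        # Select an action together with the period tau it will be applied for.
        action, tau = planner.search(state)
        # Apply the action for tau steps; the expected state used by the
        # simulation model is re-synced with the observed state after every step.
        for _ in range(tau):
            state, _, done, _ = env.step(action)
            planner.sync_expected_state(state)
            if done:
                break
```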
### 2-D HOOT for scalable search period MCTS
We adapt the HOOT method to our approach to set scalable search periods for MCTS. For tasks with a discrete action space, HOOT is applied as defined by Bubeck et al. bubeck2011x to sample the simulation periods to be paired with the actions. However, for tasks with a continuous decision space, both actions and simulation periods are to be sampled. For that, the HOO algorithm is updated to work in a two-dimensional space (See Fig. 3). Mainly, the HOO algorithm is structurally changed as the HOO tree can no longer be handled as a binary tree. Bifurcating both the decision space and the time space covered by a node leads to four regions being created, with each region represented by an additional child node.
In our implementation of the approach, each node of the MCTS contains a HOO tree from which the action/deliberation time pairs used as transition edges to its child nodes are sampled. When queried during an expansion of the search tree, the HOO algorithm follows the path of largest B-values to a leaf node, where it samples a pair of action/deliberation time from the range covered by that node.
Each node that is added to the search tree saves a pointer to its parent node’s HOO tree, parallel with initializing a new HOO tree. The pointers to the parent nodes’ HOO trees are saved for updating the HOO tree’s average values. The value of a HOO node is taken as the average reward pulled from the tree search instead of the immediate reward returned by directly applying that action. In the case of HOOT, the immediate reward does not provide information about the effectiveness of the selected action with regard to the MCTS. Using the average expected value leads the HOO selection toward a section of the decision space where actions are expected to perform well in the long run.
In the original HOOT paper [Mansley, Weinstein, and Littman2011], the action was taken by greedily following branches according to the mean rewards as opposed to the B-values. In this paper, we compute the B-value from the average values given by the MCTS. Since the MCTS values change as the simulations progress, the HOO values are updated to mirror them. Whenever the value of a tree search node is updated, the value of its corresponding HOO node is also updated. The value from the MCTS corresponds to the value of the action selected from that HOO node. However, the value of a HOO node reflects on all actions selected from nodes of its sub-tree. As such, the values of the HOO nodes are iteratively updated to reflect that. They are given by
$$\hat{R}_i=\frac{\bar{X}_i+\sum_{j\in\mathrm{children}}n_j\times\hat{R}_j}{n_i} \qquad (4)$$

where $\bar{X}_i$ is the action value from the MCTS tree and $n_i$ is the number of visits.
The $U_i$ and $B_i$ values are computed as given by Equations (2) and (3). To avoid unnecessary repetitions of the iterative updates, their updates are performed only when an action selection is required.
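A bottom-up refresh of Equation (4), run once before an action selection, could be sketched as follows (`mcts_value` stands for the value pulled from the search tree; the attribute names are assumptions):

```python
def refresh_r_hat(node):
    """Recompute Equation (4) for every node of a HOO tree, children first."""
    for child in node.children:
        refresh_r_hat(child)
    child_sum = sum(child.visits * child.r_hat for child in node.children)
    node.r_hat = (node.mcts_value + child_sum) / max(node.visits, 1)
```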
## Performance evaluation
We evaluated the performance of the proposed scalable search period MCTS (SSP-MCTS) for tasks with a continuous decision space through OpenAI gym's Pendulum and Continuous Mountain Car environments [Brockman et al.2016] and for environments with a discrete action space through the arcade learning environment (ALE) platform [Bellemare et al.2013]. The experiments were performed on servers with Intel Xeon E5-2690v4 cores.
### Classical control with continuous decision space
We assess the performance of the proposed approach in terms of average accumulated rewards per episode. Our results are compared to the performance of conventional MCTS. For that, we considered two cases of conventional MCTS implementations where the deliberation time is not a factor. In the first case only progressive widening (PW) is used, and in the second case progressive widening is paired with conventional HOOT (PW+HOOT).
Two OpenAI gym [Brockman et al.2016] environments, Pendulum and Continuous Mountain Car environments, are used for our simulation purpose. With the Pendulum environment, the goal is to keep a frictionless pendulum standing up. The pendulum starts in a random position, and continuous-value torques must be selected to swing it up so it stays upright. With the mountain car environment, a car is on a one-dimensional track, positioned between two hills. The goal is to drive up the hill on the right; however, the car is under-powered. Therefore, to build up momentum and accelerate towards the target, the opposite hill must be climbed.
We set the number of simulations that can be performed per unit of time as a simulation parameter rather than the run time. Given a uniform processing speed, the number of performed simulations is proportional to the run time. Furthermore, the performance represented as a function of the number of simulations per time step is not affected by fluctuations of the processing speed due to events unrelated to the simulation (e.g. computer load).
The proposed SSP-MCTS approach outperforms conventional MCTS in both considered environments as illustrated in Figure 4. Figure 4 presents the average accumulated rewards per episode for the different MCTS approaches with regard to the number of simulations per step. The number of simulation episodes is for the pendulum environment and for the continuous mountain car environment. We observe from the Pendulum results in Figure 4 the effect of default HOOT on conventional MCTS. Adding to the progressive widening method the HOOT approach, which prioritizes the evaluation of the most promising actions, improves the performance of conventional MCTS. Our approach achieves further improvement by including the updated HOOT method to flexibly select simulation periods during action selection.
The advantage of the SSP-MCTS over conventional MCTS is more visible in the Continuous Mountain Car results in Figure 4. This suggests that the proposed approach is more efficient with the continuous mountain car environment than it is with the pendulum environment. However, the distribution of selected simulation periods in Figure 4 suggests that this result is due to intrinsic characteristics of the environments rather than to a lack of efficiency from the proposed approach. In fact, this distribution demonstrates the versatility of the proposed approach.
Figure 4 also presents the distribution of the simulation periods selected by the SSP-MCTS approach. We observe that our approach was able to identify when applying action updates in short succession was preferable and when it was more advantageous to run extended simulations before updating the action. The Pendulum environment is a stability task where frequent updates are required to keep a steady control of the pole. That is not the case of the continuous mountain car environment, where the car is constantly moving from one side of the hill to the other until it reaches the goal. The two environments offer different types of challenge which are reflected in the distribution of the selected simulation periods. Short simulation periods are selected for the pendulum environment to ensure continuous control, while the selected simulation periods are quite distributed for the continuous mountain car.
### ALE for discrete action space evaluation
The performance of the proposed SSP-MCTS for tasks with a discrete action space are evaluated using the ALE environment with Atari games. The results are presented in comparison with UCT and IW(1). The Iterated Width (IW) algorithm has been introduced as a classical planning algorithm that takes a planning problem as an input, and computes an action sequence that solves the problem as the output [Geffner and Lipovetzky2012]. Its variant algorithms, IW(1) and 2BFS, have been implemented for Atari games and reported in Lipovetzky, Ramirez, and Geffner lipovetzky2015classical.
To follow their experimental setup, we limit the maximum number of simulated frames per step to . The number of MCTS simulations per step is and the maximum search depth is frames. The discount factor used is . Due to the wide range of scores throughout the different games, the exploration constant is selected among , and depending on the game. For the proposed approach, each presented result is the rounded average performance over at least episodes run.
Table 1 presents some preliminary results of the proposed SSP-MCTS in comparison with UCT in selected games. Both approaches were simulated under our experimental settings, with the maximum search depth limited to a frames. These results indicate that our experimental settings lead to results that are similar to previously reported ones [Bellemare et al.2013]. As such, for the whole set of games, we compare the scores obtained for the proposed approach with the ones for UCT and IW(1) as reported in Bellemare et al. bellemare2013arcade and Lipovetzky, Ramirez, and in Geffner lipovetzky2015classical, respectively (see Table 2). IW(1) is selected out of the iterative width algorithms as it has overall the best reported performance among them.
Table 2 presents the score of the proposed SSP-MCTS in comparison with UCT and IW(1). The SSP-MCTS performs better than both UCT and IW(1) on of games ( for UCT, for IW(1)). More interestingly, SSP-MCTS outperforms UCT on games ( for IW(1)). In most cases, the SSP-MCTS performs significantly better. The improvements over UCT scores have led the SSP-MCTS to outscore IW(1) on ten games for which UCT had lower scores than IW(1) (e.g. Bowling, Frostbite, VideoPinball). These results indicate that the extended simulation periods allow the proposed approach to select more effective actions. We note that repeating the same action during the extended simulation periods offers some advantage in games where the player benefits from repeating a particular action, e.g. accelerating in Enduro, or moving up in Freeway. That being said, the wide range of games where SSP-MCTS outperforms UCT, and in which there is no apparent advantage in repeating the same action (e.g. Bowling, Demon Attack, Fishing Derby, Private Eye, Space Invader), consolidates the proposed approach.
On the other hand, there are ten games where the SSP-MCTS has scored lower than UCT. In five of those ten games, namely Asterix, gopher, pacman, road runner and seaquest, a sizable drop in score is noticed. These are games where a failure to select the right action in some states will lead to the loss of a life for the player. Since the losses of lives are not accounted for until the last one, the loss of which ends the game, repeating actions by the proposed SSP-MCTS sometimes leads to early game termination. Theoretically, the SSP-MCTS could select one-step simulation periods to at least match the score of UCT. However, with the simulation time added as a decision variable and progressive widening used to limit the number of considered action/simulation time pairs, there is a probability of missing out, which is the cost of exploring different simulation times for multiple actions.
## Related work
MCTS has proven beneficial in a wide range of domains. An extensive survey of early applications of MCTS is given by Browne et al. browne2012survey. MCTS is used for real-time game environments to control the Pac-Man character [Pepels, Winands, and Lanctot2014, Guo et al.2014] and as an offline planner in an approach that combines it with DQN in the ALE [Guo et al.2014]. Silver and Veness silver2010monte introduced a Monte-Carlo algorithm for online planning in large partially observable Markov decision problems (POMDPs) and their method was extended to Bayes-Adaptive POMDPs by Katt et al. katt2017learning. MCTS was also applied for stochastic environments [Couetoux2013, Yee, Lisy, and Bowling2016]. Couetoux couetoux2013monte advocated the use of double progressive widening for stochastic and continuous sequential decision making problems. Yee et al. yee2016monte proposed a variant of MCTS based on Kernel Regression KR-UCT for continuous action spaces with execution uncertainty. They based their approach on the existence of similarities among actions that could generate a common outcome.
Dynamic Frame skip Deep Q-Network (DFDQN) [Lakshminarayanan, Sharma, and Ravindran2016] has been considered for the ALE environment. DFDQN treats the frame skip rate as a dynamic learnable parameter that defines the number of times a selected action is repeated based on the current state. The agent can select a pair of action/frame skip rate from a set of options that includes two predefined frame skip rate values for each action. In comparison, our approach proposes scalable simulation time, during which a selected action is repeated, for MCTS. The simulation periods are not predefined but selected alongside the actions through MCTS simulations.
## Conclusions
This paper has proposed a scalable search period MCTS approach that balances action selections with the simulation time that can be afforded for effective action selections in continuously running tasks. To mitigate the trade-off between action selection frequency and the time available for MCTS simulations, the proposed approach considers the simulation time available between action selections as a decision variable to be selected alongside the actions. To direct the MCTS towards the most promising area of the decision space, the implementation algorithm relies on progressive widening, pruning and HOOT. An updated HOOT is introduced for sampling action/simulation time pairs in tasks with a continuous decision space. The simulation results suggest that the proposed scalable search period MCTS approach effectively selects action/simulation time pairs with regard to the environment. The MCTS with scalable simulation periods outperforms the conventional MCTS in simulated continuous action space environments and improves its results in most of the Atari games.
## References
• [Auer, Cesa-Bianchi, and Fischer2002] Auer, P.; Cesa-Bianchi, N.; and Fischer, P. 2002. Finite-time analysis of the multiarmed bandit problem. Machine learning 47(2-3):235–256.
• [Bellemare et al.2013] Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 47:253–279.
• [Brockman et al.2016] Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. Openai gym. arXiv preprint arXiv:1606.01540.
• [Browne et al.2012] Browne, C. B.; Powley, E.; Whitehouse, D.; Lucas, S. M.; Cowling, P. I.; Rohlfshagen, P.; Tavener, S.; Perez, D.; Samothrakis, S.; and Colton, S. 2012. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games 4(1):1–43.
• [Bubeck et al.2011] Bubeck, S.; Munos, R.; Stoltz, G.; and Szepesvári, C. 2011. X-armed bandits. Journal of Machine Learning Research 12(May):1655–1695.
• [Chaslot et al.2007] Chaslot, G. M.-B.; Winands, M. H.; Uiterwijk, J. W.; HERIK, H. V. D.; and Bouzy, B. 2007. Progressive strategies for Monte-Carlo tree search. In Information Sciences 2007. World Scientific. 655–661.
• [Couetoux2013] Couetoux, A. 2013. Monte Carlo tree search for continuous and stochastic sequential decision making problems. Ph.D. Dissertation, Université Paris Sud-Paris XI.
• [Coulom2006] Coulom, R. 2006. Efficient selectivity and backup operators in Monte-Carlo tree search. In International conference on computers and games, 72–83. Springer.
• [Geffner and Lipovetzky2012] Geffner, H., and Lipovetzky, N. 2012. Width and serialization of classical planning problems.
• [Gelly and Silver2007] Gelly, S., and Silver, D. 2007. Combining online and offline knowledge in uct. In Proceedings of the 24th international conference on Machine learning, 273–280. ACM.
• [Guo et al.2014] Guo, X.; Singh, S.; Lee, H.; Lewis, R. L.; and Wang, X. 2014. Deep learning for real-time atari game play using offline Monte-Carlo tree search planning. In Advances in neural information processing systems, 3338–3346.
• [Katt, Oliehoek, and Amato2017] Katt, S.; Oliehoek, F. A.; and Amato, C. 2017. Learning in POMDPs with Monte Carlo tree search. In Precup, D., and Teh, Y. W., eds., Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, 1819–1827. International Convention Centre, Sydney, Australia: PMLR.
• [Kocsis and Szepesvári2006] Kocsis, L., and Szepesvári, C. 2006. Bandit based Monte-Carlo planning. In European conference on machine learning, 282–293. Springer.
• [Lakshminarayanan, Sharma, and Ravindran2016] Lakshminarayanan, A. S.; Sharma, S.; and Ravindran, B. 2016. Dynamic frame skip deep q network. arXiv preprint arXiv:1605.05365.
• [Lipovetzky, Ramirez, and Geffner2015] Lipovetzky, N.; Ramirez, M.; and Geffner, H. 2015. Classical planning with simulators: Results on the atari video games. In IJCAI, volume 15, 1610–1616.
• [Mansley, Weinstein, and Littman2011] Mansley, C. R.; Weinstein, A.; and Littman, M. L. 2011. Sample-based planning for continuous action markov decision processes. In Proceedings of the 21st International Conference on Automated Planning and Scheduling.
• [Pepels, Winands, and Lanctot2014] Pepels, T.; Winands, M. H.; and Lanctot, M. 2014. Real-time Monte Carlo tree search in ms pac-man. IEEE Transactions on Computational Intelligence and AI in games 6(3):245–257.
• [Silver and Veness2010] Silver, D., and Veness, J. 2010. Monte-Carlo planning in large POMDPs. In Advances in neural information processing systems, 2164–2172.
• [Silver et al.2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T.; and Hassabis, D. 2016. Mastering the game of go with deep neural networks and tree search. Nature 529:484–503.
• [Silver et al.2017a] Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. 2017a. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815.
• [Silver et al.2017b] Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. 2017b. Mastering the game of go without human knowledge. Nature 550(7676):354.
• [Yee, Lisy, and Bowling2016] Yee, T.; Lisy, V.; and Bowling, M. 2016. Monte Carlo tree search in continuous action spaces with execution uncertainty. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 690–696. AAAI Press.
https://tutoringbychristine.com/quiz/tachs-math-quiz-3-alternate-2/
### TACHS Math Quiz #3 Alternate
July 30, 2021
Math Test: TACHS Math Quiz #3 Alternate
What class are you in?*
MULTIPLE CHOICE. Choose the one alternative that answers the question.
1) What is the absolute value of -10?
2) Sean is -245 feet below sea level. What is the absolute value of the number of feet he is above sea level?
3) What is the answer to the problem |2| + |-2|?
4) What is the answer to the problem: - |-3| ?
5) |-19| - |-16| + |4|
6) Which of the following statements is true?
7) (-19) - (-20)
8) Find the product of: (-4)(3)(-10)(2)
9) Find the opposite of -16
10) -18 + 28
https://rltheory.github.io/w2021-lecture-notes/planning-in-mdps/lec15/ | In the previous lectures we attempted to reduce the complexity of planning by assuming that value functions over the large state-action spaces can be compactly represented with a few parameters. While value-functions are an indispensable component of poly-time MDP planners (see Lectures 3 and 4), it is far from clear whether they should also be given priority when working with larger MDPs.
Indeed, perhaps it is more natural to consider sets of policies with a compact description. Formally, in this problem setting the planner will be given a black-box simulation access to a (say, $\gamma$-discounted) MDP $M=(\mathcal{S},\mathcal{A},P,r)$ as before, but the interface also provides access to a parameterized family of policies over $(\mathcal{S},\mathcal{A})$, $$\pi = (\pi_\theta)_{\theta\in \mathbb{R}^d}$$, where for any fixed parameter $\theta\in \mathbb{R}^d$, $\pi_\theta$ is a memoryless stochastic policy: $\pi_\theta:\mathcal{S} \to \mathcal{M}_1(\mathcal{A})$.
For example, $\pi_\theta$ could be such that for some feature-map $\varphi: \mathcal{S}\times \mathcal{A} \to \mathbb{R}^d$,
\begin{align} \pi_\theta(a|s) = \frac{\exp( \theta^\top \varphi(s,a))}{\sum_{a'} \exp(\theta^\top \varphi(s,a'))}\,, \qquad (s,a)\in \mathcal{S}\times \mathcal{A}\,. \label{eq:boltzmannpp} \end{align}
In this case “access” to $\pi_\theta$ means access to $\varphi$, which can be either global (i.e., the planner is given the “whole” of $\varphi$ and can run any preprocessing on it), or local (i.e., $\varphi(s’,a)$ is returned by the simulator for the “next states” $s’\in \mathcal{S}$ and for all actions $a$). Of course, the exponential function can be replaced with other functions, or, one can just use a neural network to output “scores”, which are turned into probabilities in some way. Dispensing with stochastic policies, a narrower class is the class of policies that are greedy with respect to action-value functions that belong to some parametric class.
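To make this interface concrete, here is a minimal sketch (not from the original notes) of sampling from the Boltzmann parameterization \eqref{eq:boltzmannpp}; the feature map `phi`, the dimensions and all numbers below are toy stand-ins for whatever the planner is actually given:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_states, n_actions = 4, 5, 3
features = {(s, a): rng.normal(size=d)
            for s in range(n_states) for a in range(n_actions)}

def phi(s, a):
    # toy feature map; in the planning setting this comes from the interface
    return features[(s, a)]

def pi_theta(theta, s):
    """Boltzmann (softmax) action distribution pi_theta(.|s)."""
    scores = np.array([theta @ phi(s, a) for a in range(n_actions)])
    scores -= scores.max()              # subtract the max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

theta = rng.normal(size=d)
a = rng.choice(n_actions, p=pi_theta(theta, s=2))   # sample an action in state 2
```

Both the probabilities and their gradients in $\theta$ are straightforward to compute for this family.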
One special case that is worthy of attention due to its simplicity is when $\mathcal{S}$ is partitioned into $m$ (disjoint) subsets $\mathcal{S}_1,\dots,\mathcal{S}_m$ and, for each $i\in [m]$, we have $\mathrm{A}$ basis functions defined as follows:
\begin{align} \phi_{i,a'}(s,a) = \mathbb{I}( s\in \mathcal{S}_i, a= a' )\,, \qquad s\in \mathcal{S}, a,a'\in \mathcal{A}, i\in [m]\,. \label{eq:stateagg} \end{align}
Here, to minimize clutter, we allow the basis functions to be indexed by pairs and identify $\mathcal{A}$ with $\{1,\dots,\mathrm{A}\}$, as usual. Then, the policies are given by $\theta = (\theta_1,\dots,\theta_m)$, the collection of $m$ probability vectors $\theta_1,\dots,\theta_m\in \mathcal{M}_1(\mathcal{A})$:
\begin{align} \pi_\theta(a|s) = \sum_{i=1}^m \sum_{a'} \phi_{i,a'}(s,a)\,\theta_{i,a'}\,. \label{eq:directpp} \end{align}
Note that because of the special choice of $\phi$, $\pi_{\theta}(a|s) = \theta_{i,a}$ for the unique index $i\in [m]$ such that $s\in \mathcal{S}_i$. This is known as state-aggregation: states belonging to the same group give rise to the same probability distribution over the actions. We say that the feature-map $\varphi:\mathcal{S}\times \mathcal{A}\to \mathbb{R}^d$ is of the state-aggregation type if it takes the form \eqref{eq:stateagg} with an appropriate reindexing of the basis functions.
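A tiny illustration of the state-aggregation case with made-up numbers: the policy is a lookup table with one probability vector per group, so $\pi_\theta(a\vert s)=\theta_{i,a}$ for the group $i$ containing $s$:

```python
import numpy as np

m, A = 3, 4                                    # number of groups and of actions
group = {0: 0, 1: 0, 2: 1, 3: 2, 4: 2}         # the partition S_1, ..., S_m
theta = np.random.default_rng(1).dirichlet(np.ones(A), size=m)  # m probability vectors

def pi(a, s):
    return theta[group[s], a]                  # pi_theta(a|s) = theta_{i,a}

assert np.isclose(sum(pi(a, 3) for a in range(A)), 1.0)
```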
Fix now a state-aggregation type feature-map. We can consider either the direct parameterization of policies given in \eqref{eq:directpp}, or the “Boltzmann” parameterization given in \eqref{eq:boltzmannpp}. As is easy to see, the sets of policies that can be expressed with the two parameterizations are nearly identical. Letting $\Pi_{\text{direct}}$ be the set of policies that can be expressed using $\varphi$ and the direct parameterization and $\Pi_{\text{Boltzmann}}$ the set of policies that can be expressed using $\varphi$ but with the Boltzmann parameterization, first note that $$\Pi_{\text{direct}},\Pi_{\text{Boltzmann}} \subset \mathcal{M}_1(\mathcal{A})^{\mathcal{S}} \subset ([0,1]^{\mathrm{A}})^{\mathrm{S}}$$, and if we take the closure $\text{clo}(\Pi_{\text{Boltzmann}})$ of $\Pi_{\text{Boltzmann}}$ then we can notice that
$\text{clo}(\Pi_{\text{Boltzmann}}) = \Pi_{\text{direct}}\,.$
In particular, the Boltzmann policies cannot express point-mass distributions with finite parameters, but by letting the parameter vectors grow without bound, any policy that can be expressed with the direct parameterization can be approached arbitrarily closely by Boltzmann policies. There are many other possible parameterizations, as also mentioned earlier. The important point to notice is that while the parameterization is necessary so that the algorithms can work with a compressed representation, different representations may describe an identical set of policies.
A reasonable goal then is to ask for a planner that competes with the best policy within the parameterized family, or an $\varepsilon$-best policy for some positive $\varepsilon$. Since there may not be a parameter $\theta$ such that $v^{\pi_\theta}\ge v^{\pi_{\theta'}}-\varepsilon\boldsymbol{1}$ for any $\theta'\in \mathbb{R}^d$, we simplify the problem by requiring that the policy computed is nearly best when started from some initial distribution $\mu \in \mathcal{M}_1(\mathcal{S})$.
Defining $J: \text{ML} \to \mathbb{R}$ as
$J(\pi) = \mu v^{\pi} (=\sum_{s\in \mathcal{S}}\mu(s)v^{\pi}(s)),$
the policy search problem is to find a parameter $\theta\in \mathbb{R}^d$ such that
\begin{align*} J(\pi_{\theta}) = \max_{\theta'} J(\pi_{\theta'})\,. \end{align*}
The approximation version of the problem asks for finding $\theta\in \mathbb{R}^d$ such that
\begin{align*} J(\pi_{\theta}) \ge \max_{\theta'} J(\pi_{\theta'}) - \varepsilon\,. \end{align*}
The formal problem definition then is as follows: a planning algorithm is given the MDP $M$ and a policy parameterization $(\pi_\theta)_{\theta}$, and we are asking for an algorithm that returns a solution to the policy search problem in time polynomial in the number of actions $\mathrm{A}$ and the number of parameters $d$ that describe the policy. An even simpler problem is when the MDP has finitely many states, and the algorithm needs to run in polynomial time in $\mathrm{S}$, $\mathrm{A}$ and $d$. In this case, it is clearly advantageous for the algorithm if it is given the exact description of the MDP (as described in Lecture 3). Sadly, even this mild version of policy search is intractable.
Theorem (Policy search hardness): Unless $\text{P}=\text{NP}$, there is no polynomial time algorithm for the finite policy search problem even when the policy space is restricted to the constant policies and the MDPs are restricted to be deterministic with binary rewards.
The constant policies are those that assign the same probability distribution to each state. This is a special case of state aggregation when all the states are aggregated into a single class. As the policy does not depend on the state, the problem is also known as the blind policy search problem. Note that the result holds regardless of the representation used to express the set of constant policies.
Proof: Let $\mathcal{S} = \mathcal{A}=[n]$. The dynamics is deterministic: the next state is $a$ if action $a\in \mathcal{A}$ is taken, regardless of the current state. A policy is simply a probability distribution $$\pi \in \mathcal{M}_1([n])$$ over the action space, which we shall view as a column vector taking values in $[0,1]^n$. The transition matrix of $\pi$ is $P_{\pi}(s,s') = \pi(s')$, or, in matrix form, $P_\pi = \boldsymbol{1} \pi^\top$. Clearly, $P_\pi^2 = \boldsymbol{1} \pi^\top \boldsymbol{1} \pi^\top = P_\pi$ (i.e., $P_\pi$ is idempotent). Thus, $P_\pi^t = \boldsymbol{1}\pi^\top$ for any $t>0$ and hence
\begin{align*} J(\pi) & = \mu (r_\pi + \sum_{t\ge 1} \gamma^t P_\pi^t r_\pi) = \mu \left(I + \frac{\gamma}{1-\gamma} \boldsymbol{1} \pi^\top \right)r_\pi\,. \end{align*}
Defining $R_{s,a} = r_a(s)$ so that $R\in [0,1]^{n\times n}$, we have $r_\pi = R\pi$. Plugging this in into the previous displayed equation and using that $\mu \boldsymbol{1}=1$, we get
\begin{align*} J(\pi) & = \mu R \pi + \frac{\gamma}{1-\gamma} \pi^\top R \pi\,. \end{align*}
Thus we see that the policy search problem is equivalent to maximizing the quadratic expression in the previous display over the probability simplex. Since there is no restriction on $R$, one may at this point conjecture that this will be hard to do. That this is indeed the case can be shown by a reduction to the maximum independent set problem, which asks for checking whether the independence number of a graph is above a threshold and which is known to be NP-hard even for $3$-regular graphs (i.e., graphs where every vertex has exactly three neighbours).
Here, the independence number of a graph is defined as follows: We are given a simple graph $G=(V,E)$ (i.e., there are no self-loops, no double edges, and the graph is undirected). An independent set in $G$ is a neighbour-free subset of vertices. The independence number of $G$ is defined as
\begin{align*} \alpha(G) = \max \{ |V'| \,:\, V'\subset \text{ independent in } G \}\,. \end{align*}
Quadratic optimization has close ties to the maximum independent set problem:
Lemma (Motzkin–Straus '65): Let $$G\in \{0,1\}^{n\times n}$$ be the vertex-vertex adjacency matrix of a simple graph (i.e., $G_{ij}=1$ if and only if $(i,j)$ is an edge of the graph). Then, for $$I\in \{0,1\}^{n\times n}$$ the $n\times n$ identity matrix,
\begin{align*} \frac{1}{\alpha(G)} = \min_{y\in \mathcal{M}_1([n])} y^\top (G+I) y\,. \end{align*}
We now show that if there is an algorithm that solves policy search in polynomial time then it can also be used to solve the maximum independent set problem for simple, $3$-regular graphs. For this pick a $3$-regular graph $G$ with $n$ vertices. Define the MDP as above with $n$ states and actions and the rewards chosen so that $R = E-(I+G)$ where $G$ is the vertex-vertex adjacency matrix of the graph and $E$ is the all-ones matrix: $E = \boldsymbol{1} \boldsymbol{1}^\top$. We add $E$ so that the rewards are in the $[0,1]$ interval and in fact are binary as required. Choose $\mu$ as the uniform distribution over the states. Note that $\boldsymbol{1}^\top (I+G) = 4 \boldsymbol{1}^\top$ because the graph is $3$-regular. Then, for $\pi \in \mathcal{M}_1(\mathcal{A})$,
\begin{align*} J(\pi) & = \frac{1}{1-\gamma}- \mu (I+G) \pi - \frac{\gamma}{1-\gamma} \pi^\top (I+G) \pi \\ & = \frac{1}{1-\gamma}- \frac{1}{n} \boldsymbol{1}^\top (I+G) \pi - \frac{\gamma}{1-\gamma} \pi^\top (I+G) \pi \\ & = \frac{1}{1-\gamma}- \frac{4}{n} - \frac{\gamma}{1-\gamma} \pi^\top (I+G) \pi\,. \end{align*}
Hence, \begin{align*} \max_{\pi \in \mathcal{M}_1([n])} J(\pi) & = \frac{1}{1-\gamma}- \frac{4}{n} - \frac{\gamma}{1-\gamma} \frac{1}{\alpha(G)}\,, \end{align*} and so $$\max_{\pi} J(\pi) \ge \frac{1}{1-\gamma}- \frac{4}{n} - \frac{\gamma}{1-\gamma} \frac{1}{m}$$ holds if and only if $\alpha(G)\ge m$. Thus, deciding whether $\max_\pi J(\pi)\ge a$ for a given threshold $a$ is at least as hard as the maximum independent set problem. As noted, this is an NP-hard problem, hence the result follows. $$\qquad \blacksquare$$
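A quick numerical sanity check of this construction (not part of the proof; the graph, $\gamma$ and all numbers are arbitrary choices): take the $3$-regular graph $K_{3,3}$, for which $\alpha(G)=3$, build $R=E-(I+G)$, and compare $J$ at the uniform distribution over a maximum independent set against the value predicted above.

```python
import numpy as np

gamma, n = 0.5, 6
G = np.zeros((n, n))
G[:3, 3:] = 1.0
G[3:, :3] = 1.0                        # adjacency matrix of K_{3,3} (3-regular)
I, E = np.eye(n), np.ones((n, n))
R = E - (I + G)                        # binary rewards of the constructed MDP
mu = np.full(n, 1.0 / n)               # uniform initial distribution

def J(pi):
    return mu @ R @ pi + gamma / (1 - gamma) * pi @ R @ pi

alpha = 3                              # independence number of K_{3,3}
value = 1/(1 - gamma) - 4/n - gamma/(1 - gamma) / alpha
pi_star = np.array([1/3, 1/3, 1/3, 0, 0, 0])   # uniform on a maximum independent set
assert np.isclose(J(pi_star), value)           # the maximum is attained here

rng = np.random.default_rng(0)
for _ in range(1000):                   # no sampled policy exceeds this value
    assert J(rng.dirichlet(np.ones(n))) <= value + 1e-9
```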
Based on the theorem just proved it is not very likely that we can find computationally efficient planners to compete with the best policy in a restricted policy class, even if the class looks quite benign. This motivates aiming at some more modest goal, one possibility of which is to aim for computing stationary points of the map $J:\pi \mapsto \mu v^{\pi}$. Let $\Pi = \{ \pi_\theta \,:\, \theta\in \mathbb{R}^d \} \subset [0,1]^{\mathcal{S}\times\mathcal{A}}$ be the set of policies that can be represented; we view these now as “large vectors”. Then, in this approach we aim to identify $$\pi^*\in \Pi$$ (and its parameters) so that for any $\pi'\in \Pi$ and small enough $\delta>0$ such that $$\pi^*+\delta (\pi'-\pi^*)\in \Pi$$, we have $$J(\pi^*+\delta (\pi'-\pi^*))\le J(\pi^*)$$. For $\delta$ small, $$J(\pi^*+\delta (\pi'-\pi^*))\approx J(\pi^*) + \delta \langle J'(\pi^*), \pi'- \pi^* \rangle$$. Plugging this into the previous inequality, reordering and dividing by $\delta>0$ gives
\begin{align} \langle J'(\pi^*), \pi'- \pi^* \rangle \le 0\,, \qquad \pi' \in \Pi\,. \label{eq:stp} \end{align}
Here, $J'(\pi)$ denotes the derivative of $J$. What remains to be seen is whether (1) relaxing the goal to computing $$\pi^*$$ helps with the computation (and when) and (2) whether we can get some guarantees for how well a $\pi^*$ satisfying \eqref{eq:stp} will do compared to $$J^* = \max_{\pi\in \Pi} J(\pi)$$, that is, obtaining some approximation guarantees. For the latter we seek some function $\varepsilon$ of the MDP $M$ and $\Pi$ (or $\phi$, when $\Pi$ is based on some feature-map) so that
\begin{align*} J(\pi^*) \ge J^* - \varepsilon(M,\Pi) \end{align*}
As to the computational approaches, we will consider a simple approach based on (approximately) following the gradient of $\theta \mapsto J(\pi_\theta)$.
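As a preview, here is a minimal sketch of this idea for the blind-policy case of the proof above, where $J$ has the closed form $\mu R \pi + \frac{\gamma}{1-\gamma}\pi^\top R \pi$: projected gradient ascent over the simplex. Consistently with the hardness result, it can only be expected to reach a first-order stationary point in the sense of \eqref{eq:stp}; the reward matrix, step size and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, gamma = 8, 0.9
R = rng.integers(0, 2, size=(n, n)).astype(float)   # some binary reward matrix
mu = np.full(n, 1.0 / n)

def grad_J(pi):
    # gradient of J(pi) = mu^T R pi + (gamma/(1-gamma)) pi^T R pi
    return R.T @ mu + gamma / (1 - gamma) * (R + R.T) @ pi

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, n + 1) > css - 1)[0][-1]
    return np.maximum(v - (css[rho] - 1) / (rho + 1), 0.0)

pi = np.full(n, 1.0 / n)
for _ in range(5000):
    pi = project_to_simplex(pi + 1e-3 * grad_J(pi))  # ascend, then project back
# pi is now approximately first-order stationary, but not necessarily optimal
```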
## Notes
### Access models
The reader may be wondering about what is the appropriate “access model” when $\pi_\theta$ is not restricted to the form given in \eqref{eq:boltzmannpp}. There are many possibilities. One is to develop planners for specific parametric forms. A more general approach is to let the planner access $$\pi_{\theta}(\cdot\vert s)$$ and $\frac{\partial}{\partial\theta}\pi_{\theta}(\cdot \vert s)$ for any $s$ it has encountered and any value of $\theta\in \mathbb{R}^d$ it chooses. This is akin to the first-order black-box oracle model familiar from optimization theory.
### From function approximation to POMDPs
The hardness result for policy search is taken from a paper of Vlassis, Littman and Barber, who actually were interested in the computational complexity of planning in partially observable Markov Decision Problems (POMDPs). It is in fact an important observation that with function approximation, planning in MDPs becomes a special case of planning in POMDPs: In particular, if policies are restricted to depend on the states through a feature-map $\phi:\mathcal{S}\to \mathbb{R}^d$ (any two states with identical features will get the same action distribution assigned to them), then planning to achieve high reward with this restricted class is almost the same as planning to achieve high reward in a partially observable MDP where the observation function is $\phi$. Planners for the former problem could still have some advantage though if they can also access the states: In particular, a local planner which is given a feature-map to help its search but is also given access to the states is in fact not restricted to return actions whose distribution follows a policy from the feature-restricted class of policies. In machine learning, in the analogous problem of competing with the best predictor within a class, methods that use predictors that do not respect the restrictions put on the competitors are called improper, and it is known that improper learning is often more powerful than proper learning. However, when it comes to learning online or in a batch fashion, feature-restricted learning and learning in POMDPs become exact analogs. Finally, we note in passing that Vlassis et al. (2012) also add an argument that shows that it is not likely that policy search is in NP.
The result almost implies that the approximate version of policy search is also NP-hard (Theorem 11.15, Arora, Barak 2009). In particular, it is not hard to see with the same construction that if one has an efficient method to find a policy $\pi$ with $J(\pi) \ge \max_{\pi'} J(\pi') - \varepsilon$ then this gives an efficient method to find an independent set of size $\alpha(G)/c$ for the said $3$-regular graphs where
$c = 1 + \frac{1-\gamma}{\gamma} \varepsilon \alpha(G) \le 1+ \frac{1-\gamma}{\gamma} \varepsilon n \le 1+\varepsilon n \,,$
where the last inequality follows if we choose $\gamma=0.5$. Now, while there exist results that show that the maximum independent set is hard to approximate (i.e., for any fixed $c>1$ finding an independent set of size $\alpha(G)/c$ is hard), this would only imply hardness of approximate policy search if the hardness result also uses $3$-regular graphs. Also, the above bound on $c$ may be too naive: For example, to get $2$-approximations, one needs $\varepsilon\le 1/n$, which is a small range for $\varepsilon$. Getting a hardness result for a “constant” $\varepsilon$ (independent of $n$) needs significantly more work.
### Dealing with large action spaces
A common reason to consider policy search is because working with a restricted parametric family of policies holds the promise of decoupling the computational cost of learning and planning from the cardinality of the action-space. Indeed, with action-value functions, one usually needs an efficient way of computing greedy actions (with respect to some fixed action-value function). Computing $\arg\max_{a\in \mathcal{A}} q(s,a)$ in the lack of extra structure of the action-space and the function $q(s,\cdot)$ takes linear time in the size of $\mathcal{A}$, which is highly problematic unless $\mathcal{A}$ has a small cardinality. In many applications of practical interest this is not the case: The action space can be “combinatorially sized”, or even a subset of some (potentially multidimensional) continuous space.
If sampling from $\pi_{\theta}(\cdot\vert s)$ can be done efficiently, one may then potentially avoid the above expensive calculation. Thus, policy search is often proposed as a remedy to extend algorithms to work with large action spaces. Of course, this only applies if the sampling problem can indeed be efficiently implemented, which adds an extra restriction on the policy representation. Nevertheless, there are a number of options to achieve this: One can use for example an implicit representation (perhaps in conjunction with a direct one that uses probabilities/densities) for the policy.
For example, the policy may be “represented” as a map $f_\theta: \mathcal{S} \times \mathcal{R} \to \mathcal{A}$ so that sampling from $\pi_\theta(\cdot\vert s)$ is accomplished by drawing a sample $R\sim P$ from a fixed distribution over the set $\mathcal{R}$ and then returning $f(s,R)\in \mathcal{A}$. Clearly, this is efficient as long as $f_\theta$ can be efficiently evaluated at any of its inputs and the random value $R$ can be efficiently produced. If $f_\theta$ is sufficiently flexible, one can in fact choose a very simple distribution for $P$, such as the standard normal distribution, or the uniform distribution.
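A minimal sketch of such an implicit representation for a one-dimensional continuous action space (the affine-in-noise form of $f_\theta$ below is an arbitrary toy choice): sampling never enumerates $\mathcal{A}$.

```python
import numpy as np

rng = np.random.default_rng(0)
s_dim = 3
theta = {"w": rng.normal(size=s_dim), "b": 0.0, "log_std": -1.0}

def f_theta(s, noise):
    """Implicit policy: action = f_theta(s, R) with base noise R ~ N(0, 1)."""
    mean = theta["w"] @ s + theta["b"]               # state-dependent mean
    return mean + np.exp(theta["log_std"]) * noise   # scale and shift the noise

s = rng.normal(size=s_dim)
action = f_theta(s, rng.standard_normal())           # one draw from pi_theta(.|s)
```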
Note that a continuous $\mathcal{A}$ with deterministic policies is a special case: the key is still to be able to efficiently produce a sample from $\pi_\theta(\cdot\vert s)$, only in this case this amounts to a deterministic computation.
The catch is that one may also still need the derivatives of $\pi_{\theta}(\cdot\vert s)$ with respect to the parameter $\theta$, and with an implicit representation as described above it is unclear whether these derivatives can be efficiently obtained. As it turns out, this can be arranged if $f_{\theta}(s,\cdot)$ is a composition of elementary (invertible, differentiable) transformations with this property (by the chain rule). This observation is the basis of various approaches to “neural” density estimation (e.g., Tabak and Vanden-Eijnden, 2010, Rezende, Mohamed, 2015, or Jaini et al. 2019).
## References
• Vlassis, Nikos, Michael L. Littman, and David Barber. 2012. “On the Computational Complexity of Stochastic Controller Optimization in POMDPs.” ACM Trans. Comput. Theory, 12, 4 (4): 1–8.
• Esteban G. Tabak. Eric Vanden-Eijnden. “Density estimation by dual ascent of the log-likelihood.” Commun. Math. Sci. 8 (1) 217 - 233, March 2010.
• Rezende, Danilo Jimenez, and Shakir Mohamed. 2015. “Variational Inference with Normalizing Flows” link.
• Rezende, D. J., and S. Mohamed. 2014. “Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” ICML. link.
• Jaini, Priyank, Kira A. Selby, and Yaoliang Yu. 2019. “Sum-of-Squares Polynomial Flow.” In Proceedings of the 36th International Conference on Machine Learning, edited by Kamalika Chaudhuri and Ruslan Salakhutdinov, 97:3009–18. Proceedings of Machine Learning Research. PMLR.
• Arora, Sanjeev, and Boaz Barak. 2009. Computational Complexity. A Modern Approach. Cambridge: Cambridge University Press.
The hardness of the maximum independent set problem is a classic result; see, e.g., Theorem 2.15 in the book of Arora and Barak (2009) above, though this proof does not show that the hardness also applies to the case of 3-regular graphs. According to a comment by Gamow on stackexchange, a “complete NP-completeness proof for this problem is given right after Theorem 4.1 in the following paper”:
• Bojan Mohar: “Face Covers and the Genus Problem for Apex Graphs” Journal of Combinatorial Theory, Series B 82, 102-117 (2001)
On the same page, Yixin Cao notes that there is a way to remove vertices of degree larger than three (presumable without changing the independence number) and refers to another stackexchange page. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9449918866157532, "perplexity": 424.95062579924036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00107.warc.gz"} |
http://math.stackexchange.com/questions/260161/limit-of-trigonometric-functions-using-identities | # Limit of trigonometric functions using identities.
I would like to solve the following using only trig identities.
$$\lim_{x \to \pi} {\cot2(x-\pi)}{\cot(x-\frac\pi2)}$$
I have so far that the above is equal to $$\lim_{x \to \pi} \frac{\cos2(x-\pi)}{\sin2(x-\pi)}\frac{\cos(x-\frac\pi2)}{\sin(x-\frac\pi2)} = \lim_{x \to \pi} \frac{-\cos2x}{-\sin2x}\frac{-\sin x}{-\cos x} = \lim_{x \to \pi} \frac{\cos2x}{2\sin x\cos x}\frac{\sin x}{\cos x} = \frac12\lim_{x \to \pi} \frac{\cos2x}{\cos x}\frac{1}{\cos x}=\frac12$$
However, I am afraid the correct answer is $-\frac12$. Where am I going wrong? Also, if there is another method of solving this, I would be thankful for the insight.
$\cos(x)$ is an even function. – Babak S. Dec 16 '12 at 19:06
Of course, thanks. – revok Dec 16 '12 at 19:13
First of all, $\sin(2(x-\pi)) = \sin(2x)$ and $\cos(2(x-\pi)) = \cos(2x)$. Look at the second step in your displayed equality. You have $\cos(x - \pi/2) = \cos(\pi/2 - x) = \sin(x)$, but $\sin(x - \pi/2) = -\sin(\pi/2 - x) = -\cos(x).$ Apply these and you will get it.
And $\cos(2x-2\pi)=\cos(2x)$ and $\cos(x-\frac{\pi}{2})=\sin(x)$ there.
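For completeness, combining these identities with the substitution $t=x-\pi\to 0$ gives
$$\lim_{x \to \pi} \cot2(x-\pi)\,\cot\Big(x-\frac\pi2\Big) = \lim_{t \to 0}\frac{\cos 2t}{\sin 2t}\cdot\frac{-\sin t}{\cos t} = \lim_{t \to 0}\frac{-\cos 2t}{2\cos^2 t} = -\frac12.$$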
${{+1^{+}}^+}^+ \quad \ddot\smile$ – amWhy Apr 13 '13 at 0:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643586874008179, "perplexity": 172.20742569361184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826916.34/warc/CC-MAIN-20160723071026-00133-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Wave | # Wave
Surface waves in water showing water ripples
Example of biological waves expanding over the brain cortex. Spreading Depolarizations. [1]
In physics, mathematics, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities, sometimes as described by a wave equation. In physical waves, at least two field quantities in the wave medium are involved. Waves can be periodic, in which case those quantities oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction it is said to be a traveling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero.
The types of waves most commonly studied in classical physics are mechanical and electromagnetic. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves, string vibrations (standing waves), and vortices. In an electromagnetic wave (such as light) energy is interchanged between the electric and magnetic fields which sustains propagation of a wave involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, according to their frequencies (or wavelengths) have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays.
Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction-diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more.
Mechanical and electromagnetic waves transfer energy,[2] momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals.[3] On the other hand, some waves have envelopes which do not move at all, such as standing waves (which are fundamental to music) and hydraulic jumps. Some, like the probability waves of quantum mechanics, may be completely static.
A physical wave is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, that extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains.
A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal if those vectors are exactly in the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization which can be an important attribute for waves having more than one single possible polarization.
## Mathematical description
### Single waves
A wave can be described just like a field, namely as a function ${\displaystyle F(x,t)}$ where ${\displaystyle x}$ is a position and ${\displaystyle t}$ is a time.
The value of ${\displaystyle x}$ is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space ${\displaystyle \mathbb {R} ^{3}}$. However, in many cases one can ignore one dimension, and let ${\displaystyle x}$ be a point of the Cartesian plane ${\displaystyle \mathbb {R} ^{2}}$. This is the case, for example, when studying vibrations of a drum skin. One may even restrict ${\displaystyle x}$ to a point of the Cartesian line ${\displaystyle \mathbb {R} }$ — that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time ${\displaystyle t}$, on the other hand, is always assumed to be a scalar; that is, a real number.
The value of ${\displaystyle F(x,t)}$ can be any physical quantity of interest assigned to the point ${\displaystyle x}$ that may vary with time. For example, if ${\displaystyle F}$ represents the vibrations inside an elastic solid, the value of ${\displaystyle F(x,t)}$ is usually a vector that gives the current displacement from ${\displaystyle x}$ of the material particles that would be at the point ${\displaystyle x}$ in the absence of vibration. For an electromagnetic wave, the value of ${\displaystyle F}$ can be the electric field vector ${\displaystyle E}$, or the magnetic field vector ${\displaystyle H}$, or any related quantity, such as the Poynting vector ${\displaystyle E\times H}$. In fluid dynamics, the value of ${\displaystyle F(x,t)}$ could be the velocity vector of the fluid at the point ${\displaystyle x}$, or any scalar property like pressure, temperature, or density. In a chemical reaction, ${\displaystyle F(x,t)}$ could be the concentration of some substance in the neighborhood of point ${\displaystyle x}$ of the reaction medium.
For any dimension ${\displaystyle d}$ (1, 2, or 3), the wave's domain is then a subset ${\displaystyle D}$ of ${\displaystyle \mathbb {R} ^{d}}$, such that the function value ${\displaystyle F(x,t)}$ is defined for any point ${\displaystyle x}$ in ${\displaystyle D}$. For example, when describing the motion of a drum skin, one can consider ${\displaystyle D}$ to be a disk (circle) on the plane ${\displaystyle \mathbb {R} ^{2}}$ with center at the origin ${\displaystyle (0,0)}$, and let ${\displaystyle F(x,t)}$ be the vertical displacement of the skin at the point ${\displaystyle x}$ of ${\displaystyle D}$ and at time ${\displaystyle t}$.
### Wave families
Sometimes one is interested in a single specific wave. More often, however, one needs to understand large set of possible waves; like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echos one could get from an airplane that may be approaching an airport.
In some of those situations, one may describe such a family of waves by a function ${\displaystyle F(A,B,\ldots ;x,t)}$ that depends on certain parameters ${\displaystyle A,B,\ldots }$, besides ${\displaystyle x}$ and ${\displaystyle t}$. Then one can obtain different waves — that is, different functions of ${\displaystyle x}$ and ${\displaystyle t}$ — by choosing different values for those parameters.
Sound pressure standing wave in a half-open pipe playing the 7th harmonic of the fundamental (n = 4)
For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, that can be written as
${\displaystyle F(A,L,n,c;x,t)=A\,(\cos 2\pi x{\frac {2n-1}{4L}})(\cos 2\pi ct{\frac {2n-1}{4L}})}$
The parameter ${\displaystyle A}$ defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); ${\displaystyle c}$ is the speed of sound; ${\displaystyle L}$ is the length of the bore; and ${\displaystyle n}$ is a positive integer (1,2,3,...) that specifies the number of nodes in the standing wave. (The position ${\displaystyle x}$ should be measured from the mouthpiece, and the time ${\displaystyle t}$ from any moment at which the pressure at the mouthpiece is maximum. The quantity ${\displaystyle \lambda =4L/(2n-1)}$ is the wavelength of the emitted note, and ${\displaystyle f=c/\lambda }$ is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters.
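As a quick numerical illustration of these relations (the speed of sound and bore length below are assumed values), the wavelengths and frequencies of the first few modes are:

```python
c = 343.0        # speed of sound in air, m/s (approximate room-temperature value)
L = 0.30         # bore length in metres (an assumed value)
for n in range(1, 5):
    lam = 4 * L / (2 * n - 1)      # wavelength of the n-th mode
    print(n, lam, c / lam)         # mode index, wavelength (m), frequency (Hz)
```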
As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance ${\displaystyle r}$ from the center of the skin to the strike point, and on the strength ${\displaystyle s}$ of the strike. Then the vibration for all possible strikes can be described by a function ${\displaystyle F(r,s;x,t)}$.
Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function ${\displaystyle h}$ such that ${\displaystyle h(x)}$ is the initial temperature at each point ${\displaystyle x}$ of the bar. Then the temperatures at later times can be expressed by a function ${\displaystyle F}$ that depends on the function ${\displaystyle h}$ (that is, a functional operator), so that the temperature at a later time is ${\displaystyle F(h;x,t)}$
### Differential wave equations
Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of ${\displaystyle F(x,t)}$, only constrains how those values can change with time. Then the family of waves in question consists of all functions ${\displaystyle F}$ that satisfy those constraints — that is, all solutions of the equation.
This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if ${\displaystyle F(x,t)}$ is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation
${\displaystyle {\frac {\partial F}{\partial t}}(x,t)=\alpha \left({\frac {\partial ^{2}F}{\partial x_{1}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{2}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{3}^{2}}}(x,t)\right)+\beta Q(x,t)}$
where ${\displaystyle Q(x,t)}$ is the heat that is being generated per unit of volume and time in the neighborhood of ${\displaystyle x}$ at time ${\displaystyle t}$ (for example, by chemical reactions happening there); ${\displaystyle x_{1},x_{2},x_{3}}$ are the Cartesian coordinates of the point ${\displaystyle x}$; ${\displaystyle \partial F/\partial t}$ is the (first) derivative of ${\displaystyle F}$ with respect to ${\displaystyle t}$; and ${\displaystyle \partial ^{2}F/\partial x_{i}^{2}}$ is the second derivative of ${\displaystyle F}$ relative to ${\displaystyle x_{i}}$. (The symbol "${\displaystyle \partial }$" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.)
This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures.
For another example, we can describe all possible sounds echoing within a container of gas by a function ${\displaystyle F(x,t)}$ that gives the pressure at a point ${\displaystyle x}$ and time ${\displaystyle t}$ within that container. If the gas was initially at uniform temperature and composition, the evolution of ${\displaystyle F}$ is constrained by the formula
${\displaystyle {\frac {\partial ^{2}F}{\partial t^{2}}}(x,t)=\alpha \left({\frac {\partial ^{2}F}{\partial x_{1}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{2}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{3}^{2}}}(x,t)\right)+\beta P(x,t)}$
Here ${\displaystyle P(x,t)}$ is some extra compression force that is being applied to the gas near ${\displaystyle x}$ by some external process, such as a loudspeaker or piston right next to ${\displaystyle x}$.
This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is ${\displaystyle \partial ^{2}F/\partial t^{2}}$, the second derivative of ${\displaystyle F}$ with respect to time, rather than the first derivative ${\displaystyle \partial F/\partial t}$. Yet this small change makes a huge difference on the set of solutions ${\displaystyle F}$. This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of waves.
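A minimal finite-difference sketch of this wave equation in one space dimension (all values are arbitrary choices; the squared speed v² plays the role of the constant α and the forcing term is set to zero):

```python
import numpy as np

v, L, nx, nt = 1.0, 1.0, 201, 150
dx = L / (nx - 1)
dt = 0.5 * dx / v                          # time step satisfying the CFL stability condition
x = np.linspace(0.0, L, nx)
F_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pulse, initially at rest
F = F_prev.copy()
c2 = (v * dt / dx) ** 2
for _ in range(nt):
    F_next = np.zeros_like(F)              # endpoints stay zero: fixed boundaries
    F_next[1:-1] = 2*F[1:-1] - F_prev[1:-1] + c2*(F[2:] - 2*F[1:-1] + F[:-2])
    F_prev, F = F, F_next
# F now shows the pulse split into left- and right-moving halves
```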
## Wave in elastic medium
Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling
Wavelength λ, can be measured between any two corresponding points on a waveform
Animation of two waves, the green wave moves to the right while blue wave moves to the left, the net red wave amplitude at each point is the sum of the amplitudes of the individual waves. Note that f(x,t) + g(x,t) = u(x,t)
• in the ${\displaystyle x}$ direction in space. For example, let the positive ${\displaystyle x}$ direction be to the right, and the negative ${\displaystyle x}$ direction be to the left.
• with constant amplitude ${\displaystyle u}$
• with constant velocity ${\displaystyle v}$, where ${\displaystyle v}$ is independent of wavelength (no dispersion) and independent of amplitude (linear media)
• with constant waveform, or shape
This wave can then be described by the two-dimensional functions
${\displaystyle u(x,t)=F(x-v\ t)}$ (waveform ${\displaystyle F}$ traveling to the right)
${\displaystyle u(x,t)=G(x+v\ t)}$ (waveform ${\displaystyle G}$ traveling to the left)
or, more generally, by d'Alembert's formula:[6]
${\displaystyle u(x,t)=F(x-vt)+G(x+vt).\,}$
representing two component waveforms ${\displaystyle F}$ and ${\displaystyle G}$ traveling through the medium in opposite directions. A generalized representation of this wave can be obtained[7] as the partial differential equation
${\displaystyle {\frac {1}{v^{2}}}{\frac {\partial ^{2}u}{\partial t^{2}}}={\frac {\partial ^{2}u}{\partial x^{2}}}.\,}$
General solutions are based upon Duhamel's principle.[8]
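The claim can be verified symbolically; a short sketch using the sympy package, for arbitrary twice-differentiable F and G:

```python
import sympy as sp

x, t, v = sp.symbols("x t v")
F, G = sp.Function("F"), sp.Function("G")
u = F(x - v*t) + G(x + v*t)             # d'Alembert form
lhs = sp.diff(u, t, 2) / v**2           # (1/v^2) * d^2u/dt^2
rhs = sp.diff(u, x, 2)                  # d^2u/dx^2
assert sp.simplify(lhs - rhs) == 0      # the wave equation holds identically
```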
### Wave forms
Sine, square, triangle and sawtooth waveforms.
The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction).[9]
In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v.[10]
### Amplitude and modulation
Amplitude modulation can be achieved through f(x,t) = 1.00*sin(2*pi/0.10*(x-1.00*t)) and g(x,t) = 1.00*sin(2*pi/0.11*(x-1.00*t)); only the resultant is visible to improve clarity of the waveform.
Illustration of the envelope (the slowly varying red curve) of an amplitude-modulated wave. The fast varying blue curve is the carrier wave, which is being modulated.
The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form:[11][12][13]
${\displaystyle u(x,t)=A(x,t)\sin(kx-\omega t+\phi )\ ,}$
where ${\displaystyle A(x,\ t)}$ is the amplitude envelope of the wave, ${\displaystyle k}$ is the wavenumber and ${\displaystyle \phi }$ is the phase. If the group velocity ${\displaystyle v_{g}}$ (see below) is wavelength-independent, this equation can be simplified as:[14]
${\displaystyle u(x,t)=A(x-v_{g}\ t)\sin(kx-\omega t+\phi )\ ,}$
showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.[14][15]
### Phase velocity and group velocity
The red square moves with the phase velocity, while the green circles propagate with the group velocity
There are two velocities that are associated with waves, the phase velocity and the group velocity.
Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength λ (lambda) and period T as
${\displaystyle v_{\mathrm {p} }={\frac {\lambda }{T}}.}$
A wave with the group and phase velocities going in different directions
Group velocity is a property of waves that have a defined envelope, measuring propagation through space (that is, phase velocity) of the overall shape of the waves' amplitudes – modulation or envelope of the wave.
## Sine waves
Sinusoidal waves correspond to simple harmonic motion.
Mathematically, the most basic wave is the (spatially) one-dimensional sine wave (also called harmonic wave or sinusoid) with an amplitude ${\displaystyle u}$ described by the equation:
${\displaystyle u(x,t)=A\sin(kx-\omega t+\phi )\ ,}$
where
• ${\displaystyle A}$ is the maximum amplitude of the wave, maximum distance from the highest point of the disturbance in the medium (the crest) to the equilibrium point during one wave cycle. In the illustration to the right, this is the maximum vertical distance between the baseline and the wave.
• ${\displaystyle x}$ is the space coordinate
• ${\displaystyle t}$ is the time coordinate
• ${\displaystyle k}$ is the wavenumber
• ${\displaystyle \omega }$ is the angular frequency
• ${\displaystyle \phi }$ is the phase constant.
The units of the amplitude depend on the type of wave. Transverse mechanical waves (for example, a wave on a string) have an amplitude expressed as a distance (for example, meters), longitudinal mechanical waves (for example, sound waves) use units of pressure (for example, pascals), and electromagnetic waves (a form of transverse vacuum wave) express the amplitude in terms of its electric field (for example, volts/meter).
The wavelength ${\displaystyle \lambda }$ is the distance between two sequential crests or troughs (or other equivalent points), generally is measured in meters. A wavenumber ${\displaystyle k}$, the spatial frequency of the wave in radians per unit distance (typically per meter), can be associated with the wavelength by the relation
${\displaystyle k={\frac {2\pi }{\lambda }}.\,}$
The period ${\displaystyle T}$ is the time for one complete cycle of an oscillation of a wave. The frequency ${\displaystyle f}$ is the number of periods per unit time (per second) and is typically measured in hertz denoted as Hz. These are related by:
${\displaystyle f={\frac {1}{T}}.\,}$
In other words, the frequency and period of a wave are reciprocals.
The angular frequency ${\displaystyle \omega }$ represents the frequency in radians per second. It is related to the frequency or period by
${\displaystyle \omega =2\pi f={\frac {2\pi }{T}}.\,}$
The wavelength ${\displaystyle \lambda }$ of a sinusoidal waveform traveling at constant speed ${\displaystyle v}$ is given by:[16]
${\displaystyle \lambda ={\frac {v}{f}},}$
where ${\displaystyle v}$ is called the phase speed (magnitude of the phase velocity) of the wave and ${\displaystyle f}$ is the wave's frequency.
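A short worked example tying these quantities together (the numbers are arbitrary):

```python
import math

v, f = 343.0, 440.0        # phase speed (m/s) and frequency (Hz), e.g. a 440 Hz tone in air
lam = v / f                # wavelength, about 0.78 m
T = 1.0 / f                # period, about 2.3 ms
k = 2 * math.pi / lam      # wavenumber (rad/m)
omega = 2 * math.pi * f    # angular frequency (rad/s)
assert abs(omega / k - v) < 1e-9    # the phase speed is recovered as omega / k
```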
Wavelength can be a useful concept even if the wave is not periodic in space. For example, in an ocean wave approaching shore, the incoming wave undulates with a varying local wavelength that depends in part on the depth of the sea floor compared to the wave height. The analysis of the wave can be based upon comparison of the local wavelength with the local water depth.[17]
Although arbitrary wave shapes will propagate unchanged in lossless linear time-invariant systems, in the presence of dispersion the sine wave is the unique shape that will propagate unchanged but for phase and amplitude, making it easy to analyze.[18] Due to the Kramers–Kronig relations, a linear medium with dispersion also exhibits loss, so the sine wave propagating in a dispersive medium is attenuated in certain frequency ranges that depend upon the medium.[19] The sine function is periodic, so the sine wave or sinusoid has a wavelength in space and a period in time.[20][21]
The sinusoid is defined for all times and distances, whereas in physical situations we usually deal with waves that exist for a limited span in space and duration in time. An arbitrary wave shape can be decomposed into an infinite set of sinusoidal waves by the use of Fourier analysis. As a result, the simple case of a single sinusoidal wave can be applied to more general cases.[22][23] In particular, many media are linear, or nearly so, so the calculation of arbitrary wave behavior can be found by adding up responses to individual sinusoidal waves using the superposition principle to find the solution for a general waveform.[24] When a medium is nonlinear, then the response to complex waves cannot be determined from a sine-wave decomposition.
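A small sketch of this decomposition: a band-limited square wave built by superposing its odd sine harmonics (its Fourier series), illustrating how a non-sinusoidal waveform can be assembled from sinusoids:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
f0 = 3.0                                         # fundamental frequency, Hz
square = np.sign(np.sin(2 * np.pi * f0 * t))
partial = sum(4 / (np.pi * n) * np.sin(2 * np.pi * n * f0 * t)
              for n in range(1, 40, 2))          # first 20 odd harmonics
err = np.abs(square - partial)
print(err.mean())   # small on average; larger deviations remain near the jumps (Gibbs phenomenon)
```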
## Plane waves
A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length ${\displaystyle {\hat {n}}}$ indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction (${\displaystyle {\hat {n}}\cdot {\vec {x}}}$) and time (${\displaystyle t}$). Since the wave profile only depends on the position ${\displaystyle {\vec {x}}}$ in the combination ${\displaystyle {\hat {n}}\cdot {\vec {x}}}$, any displacement in directions perpendicular to ${\displaystyle {\hat {n}}}$ cannot affect the value of the field.
Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other.
## Standing waves
Standing wave. The red dots represent the wave nodes
A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions.
The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time.
## Physical properties
Light beam exhibiting reflection, refraction, transmission and dispersion when encountering a prism
Waves exhibit common behaviors under a number of standard situations, for example:
### Transmission and media
Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories:
• A bounded medium if it is finite in extent, otherwise an unbounded medium
• A linear medium if the amplitudes of different waves at any particular point in the medium can be added
• A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space
• An anisotropic medium if one or more of its physical properties differ in one or more directions
• An isotropic medium if its physical properties are the same in all directions
### Absorption
Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored.
### Reflection
When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and line normal to the surface equals the angle made by the reflected wave and the same normal line.
### Refraction
Sinusoidal traveling plane wave entering a region of lower wave velocity at an angle, illustrating the decrease in wavelength and change of direction (refraction) that results.
Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law.
### Diffraction
A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave.
### Interference
Identical waves from two sources undergoing interference. Observed at the bottom one sees 5 positions where the waves add in phase, but in between which they are out of phase and cancel.
When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one weren't present. However at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern.
### Polarization
The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal" for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter.
Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel.
### Dispersion
Schematic of light being dispersed by a prism. Click to see animation.
A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is most easily seen by letting white light pass through a prism, the result of which is to produce the spectrum of colours of the rainbow. Isaac Newton performed experiments with light and prisms, presenting his findings in the Opticks (1704) that white light consists of several colours and that these colours cannot be decomposed any further.[25]
## Mechanical waves
### Waves on strings
The speed of a transverse wave traveling along a vibrating string ( v ) is directly proportional to the square root of the tension of the string ( T ) over the linear mass density ( μ ):
${\displaystyle v={\sqrt {\frac {T}{\mu }}},\,}$
where the linear density μ is the mass per unit length of the string.
### Acoustic waves
Acoustic or sound waves travel at speed given by
${\displaystyle v={\sqrt {\frac {B}{\rho _{0}}}},\,}$
or the square root of the adiabatic bulk modulus divided by the ambient fluid density (see speed of sound).
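For example, with assumed, approximate values:

```python
import math

T, mu = 80.0, 8.0e-4     # string tension (N) and linear mass density (kg/m), assumed values
v_string = math.sqrt(T / mu)     # about 316 m/s

B, rho = 1.42e5, 1.2     # approximate adiabatic bulk modulus (Pa) and density (kg/m^3) of air
v_sound = math.sqrt(B / rho)     # about 344 m/s
print(v_string, v_sound)
```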
### Water waves
• Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths.
• Sound – a mechanical wave that propagates through gases, liquids, solids and plasmas;
• Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect;
• Ocean surface waves, which are perturbations that propagate through water.
### Seismic waves
Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy.
### Doppler effect
The Doppler effect (or the Doppler shift) is the change in frequency of a wave in relation to an observer who is moving relative to the wave source.[26] It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842.
### Shock waves
Formation of a shock wave by a plane.
A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in pressure, temperature and density of the medium.[27]
### Other
• Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves[28]
• Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions.
## Electromagnetic waves
An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields satisfy the wave equation both with speed equal to the speed of light. From this emerged the idea that light is an electromagnetic wave. Electromagnetic waves can have different frequencies (and thus wavelengths), giving rise to various types of radiation such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and Gamma rays.
## Quantum mechanical waves
### Schrödinger equation
The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle.
### Dirac equation
The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-½ particles.
A propagating wave packet; in general, the envelope of the wave packet moves at a different speed than the constituent waves.[29]
### de Broglie waves
Louis de Broglie postulated that all particles with momentum have a wavelength
$$\lambda = \frac{h}{p},$$
where h is Planck's constant, and p is the magnitude of the momentum of the particle. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about $10^{-13}$ m.
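A small sketch of the formula in code (not from the article): it computes λ = h/p for an electron whose kinetic energy is an assumed 10 keV, a value chosen purely for illustration, so the result need not match the rough figure quoted above.

```python
from math import sqrt

h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron mass, kg
q_e = 1.602e-19  # elementary charge, C

def de_broglie_wavelength(p):
    """lambda = h / p, with momentum p in kg*m/s."""
    return h / p

E_kin = 10e3 * q_e             # assumed 10 keV of kinetic energy, in joules
p = sqrt(2 * m_e * E_kin)      # non-relativistic momentum
print(f"lambda ≈ {de_broglie_wavelength(p):.2e} m")  # about 1.2e-11 m for this assumed energy
```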
A wave representing such a particle traveling in the k-direction is expressed by the wave function as follows:
$$\psi(\mathbf{r}, t=0) = A\, e^{i\mathbf{k}\cdot\mathbf{r}},$$
where the wavelength is determined by the wave vector k as:
$$\lambda = \frac{2\pi}{k},$$
and the momentum by:
$$\mathbf{p} = \hbar\mathbf{k}.$$
However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet,[30] a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value.
In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet.[31] Gaussian wave packets also are used to analyze water waves.[32]
For example, a Gaussian wavefunction ψ might take the form:[33]
$$\psi(x, t=0) = A \exp\left(-\frac{x^{2}}{2\sigma^{2}} + i k_{0} x\right),$$
at some initial time t = 0, where the central wavelength is related to the central wave vector $k_0$ as $\lambda_0 = 2\pi / k_0$. It is well known from the theory of Fourier analysis,[34] or from the Heisenberg uncertainty principle (in the case of quantum mechanics), that a spread of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian.[35] Given the Gaussian:
$$f(x) = e^{-x^{2}/(2\sigma^{2})},$$
the Fourier transform is:
$$\tilde{f}(k) = \sigma\, e^{-\sigma^{2} k^{2}/2}.$$
The Gaussian in space therefore is made up of waves:
$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{f}(k)\, e^{ikx}\, dk\,;$$
that is, a number of waves of wavelengths λ such that kλ = 2 π.
The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k.
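The transform pair quoted above can be checked numerically. The sketch below (not from the original text) approximates $\tilde f(k)=\frac{1}{\sqrt{2\pi}}\int f(x)\,e^{-ikx}\,dx$ by a Riemann sum and compares it with $\sigma e^{-\sigma^2 k^2/2}$; the value of σ and the grid are arbitrary choices made for the example.

```python
import numpy as np

sigma = 1.5
x = np.linspace(-20, 20, 4001)          # wide enough that the Gaussian has decayed to ~0
dx = x[1] - x[0]
f = np.exp(-x**2 / (2 * sigma**2))

def fourier_transform(k):
    # (1/sqrt(2*pi)) * integral of f(x) * exp(-i*k*x) dx, as a Riemann sum
    return np.sum(f * np.exp(-1j * k * x)) * dx / np.sqrt(2 * np.pi)

k_values = np.linspace(-3.0, 3.0, 13)
numeric = np.array([fourier_transform(k) for k in k_values])
analytic = sigma * np.exp(-sigma**2 * k_values**2 / 2)
print(np.max(np.abs(numeric - analytic)))  # tiny: the transform of a Gaussian is again a Gaussian
```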
## Gravity waves

Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy tries to restore equilibrium. A ripple on a pond is one example.

## Gravitational waves

Animation showing the effect of a cross-polarized gravitational wave on a ring of test particles.

Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity, that propagate through space. The first observation of gravitational waves was announced on 11 February 2016.[36]
## References
1. ^ Santos, Edgar; Schöll, Michael; Sánchez-Porras, Renán; Dahlem, Markus A.; Silos, Humberto; Unterberg, Andreas; Dickhaus, Hartmut; Sakowitz, Oliver W. (2014-10-01). "Radial, spiral and reverberating waves of spreading depolarization occur in the gyrencephalic brain". NeuroImage. 99: 244–255. doi:10.1016/j.neuroimage.2014.05.021. ISSN 1095-9572. PMID 24852458.
2. ^ (Hall 1982, p. 8)
3. ^ Pragnan Chakravorty, "What Is a Signal? [Lecture Notes]," IEEE Signal Processing Magazine, vol. 35, no. 5, pp. 175-177, Sept. 2018. doi:10.1109/MSP.2018.2832195
4. ^ Michael A. Slawinski (2003). "Wave equations". Seismic waves and rays in elastic media. Elsevier. pp. 131 ff. ISBN 978-0-08-043930-3.
5. ^ Lev A. Ostrovsky & Alexander I. Potapov (2001). Modulated waves: theory and application. Johns Hopkins University Press. ISBN 978-0-8018-7325-6.
6. ^ Karl F Graaf (1991). Wave motion in elastic solids (Reprint of Oxford 1975 ed.). Dover. pp. 13–14. ISBN 978-0-486-66745-4.
7. ^ For an example derivation, see the steps leading up to eq. (17) in Francis Redfern. "Kinematic Derivation of the Wave Equation". Physics Journal.
8. ^ Jalal M. Ihsan Shatah; Michael Struwe (2000). "The linear wave equation". Geometric wave equations. American Mathematical Society Bookstore. pp. 37ff. ISBN 978-0-8218-2749-9.
9. ^ Louis Lyons (1998). All you wanted to know about mathematics but were afraid to ask. Cambridge University Press. pp. 128 ff. ISBN 978-0-521-43601-4.
10. ^ Alexander McPherson (2009). "Waves and their properties". Introduction to Macromolecular Crystallography (2 ed.). Wiley. p. 77. ISBN 978-0-470-18590-2.
11. ^ Christian Jirauschek (2005). FEW-cycle Laser Dynamics and Carrier-envelope Phase Detection. Cuvillier Verlag. p. 9. ISBN 978-3-86537-419-6.
12. ^ Fritz Kurt Kneubühl (1997). Oscillations and waves. Springer. p. 365. ISBN 978-3-540-62001-3.
13. ^ Mark Lundstrom (2000). Fundamentals of carrier transport. Cambridge University Press. p. 33. ISBN 978-0-521-63134-1.
14. ^ a b Chin-Lin Chen (2006). "§13.7.3 Pulse envelope in nondispersive media". Foundations for guided-wave optics. Wiley. p. 363. ISBN 978-0-471-75687-3.
15. ^ Stefano Longhi; Davide Janner (2008). "Localization and Wannier wave packets in photonic crystals". In Hugo E. Hernández-Figueroa; Michel Zamboni-Rached; Erasmo Recami (eds.). Localized Waves. Wiley-Interscience. p. 329. ISBN 978-0-470-10885-7.
16. ^ David C. Cassidy; Gerald James Holton; Floyd James Rutherford (2002). Understanding physics. Birkhäuser. pp. 339ff. ISBN 978-0-387-98756-9.
17. ^ Paul R Pinet (2009). op. cit. p. 242. ISBN 978-0-7637-5993-3.
18. ^ Mischa Schwartz; William R. Bennett & Seymour Stein (1995). Communication Systems and Techniques. John Wiley and Sons. p. 208. ISBN 978-0-7803-4715-1.
19. ^ See Eq. 5.10 and discussion in A.G.G.M. Tielens (2005). The physics and chemistry of the interstellar medium. Cambridge University Press. pp. 119 ff. ISBN 978-0-521-82634-1.; Eq. 6.36 and associated discussion in Otfried Madelung (1996). Introduction to solid-state theory (3rd ed.). Springer. pp. 261 ff. ISBN 978-3-540-60443-3.; and Eq. 3.5 in F Mainardi (1996). "Transient waves in linear viscoelastic media". In Ardéshir Guran; A. Bostrom; Herbert Überall; O. Leroy (eds.). Acoustic Interactions with Submerged Elastic Structures: Nondestructive testing, acoustic wave propagation and scattering. World Scientific. p. 134. ISBN 978-981-02-4271-8.
20. ^ Aleksandr Tikhonovich Filippov (2000). The versatile soliton. Springer. p. 106. ISBN 978-0-8176-3635-7.
21. ^ Seth Stein, Michael E. Wysession (2003). An introduction to seismology, earthquakes, and earth structure. Wiley-Blackwell. p. 31. ISBN 978-0-86542-078-6.
22. ^ Seth Stein, Michael E. Wysession (2003). op. cit.. p. 32. ISBN 978-0-86542-078-6.
23. ^ Kimball A. Milton; Julian Seymour Schwinger (2006). Electromagnetic Radiation: Variational Methods, Waveguides and Accelerators. Springer. p. 16. ISBN 978-3-540-29304-0. Thus, an arbitrary function f(r, t) can be synthesized by a proper superposition of the functions exp[i (k·r−ωt)]...
24. ^ Raymond A. Serway & John W. Jewett (2005). "§14.1 The Principle of Superposition". Principles of physics (4th ed.). Cengage Learning. p. 433. ISBN 978-0-534-49143-7.
25. ^ Newton, Isaac (1704). "Prop VII Theor V". Opticks: Or, A treatise of the Reflections, Refractions, Inflexions and Colours of Light. Also Two treatises of the Species and Magnitude of Curvilinear Figures. 1. London. p. 118. All the Colours in the Universe which are made by Light... are either the Colours of homogeneal Lights, or compounded of these...
26. ^ Giordano, Nicholas (2009). College Physics: Reasoning and Relationships. Cengage Learning. pp. 421–424. ISBN 978-0534424718.
27. ^ Anderson, John D. Jr. (January 2001) [1984], Fundamentals of Aerodynamics (3rd ed.), McGraw-Hill Science/Engineering/Math, ISBN 978-0-07-237335-6
28. ^ M.J. Lighthill; G.B. Whitham (1955). "On kinematic waves. II. A theory of traffic flow on long crowded roads". Proceedings of the Royal Society of London. Series A. 229 (1178): 281–345. Bibcode:1955RSPSA.229..281L. CiteSeerX 10.1.1.205.4573. doi:10.1098/rspa.1955.0088. And: P.I. Richards (1956). "Shockwaves on the highway". Operations Research. 4 (1): 42–51. doi:10.1287/opre.4.1.42.
29. ^ A.T. Fromhold (1991). "Wave packet solutions". Quantum Mechanics for Applied Physics and Engineering (Reprint of Academic Press 1981 ed.). Courier Dover Publications. pp. 59 ff. ISBN 978-0-486-66741-6. (p. 61) ...the individual waves move more slowly than the packet and therefore pass back through the packet as it advances
30. ^ Ming Chiang Li (1980). "Electron Interference". In L. Marton; Claire Marton (eds.). Advances in Electronics and Electron Physics. 53. Academic Press. p. 271. ISBN 978-0-12-014653-6.
31. ^ See for example Walter Greiner; D. Allan Bromley (2007). Quantum Mechanics (2 ed.). Springer. p. 60. ISBN 978-3-540-67458-0. and John Joseph Gilman (2003). Electronic basis of the strength of materials. Cambridge University Press. p. 57. ISBN 978-0-521-62005-5.,Donald D. Fitts (1999). Principles of quantum mechanics. Cambridge University Press. p. 17. ISBN 978-0-521-65841-6..
32. ^ Chiang C. Mei (1989). The applied dynamics of ocean surface waves (2nd ed.). World Scientific. p. 47. ISBN 978-9971-5-0789-3.
33. ^ Walter Greiner; D. Allan Bromley (2007). Quantum Mechanics (2nd ed.). Springer. p. 60. ISBN 978-3-540-67458-0.
34. ^ Siegmund Brandt; Hans Dieter Dahmen (2001). The picture book of quantum mechanics (3rd ed.). Springer. p. 23. ISBN 978-0-387-95141-6.
35. ^ Cyrus D. Cantrell (2000). Modern mathematical methods for physicists and engineers. Cambridge University Press. p. 677. ISBN 978-0-521-59827-9.
36. ^ "Gravitational waves detected for 1st time, 'opens a brand new window on the universe'". CBC. 11 February 2016.
## Sources
• Fleisch, D.; Kinnaman, L. (2015). A student's guide to waves. Cambridge: Cambridge University Press. Bibcode:2015sgw..book.....F. ISBN 978-1107643260.
• Campbell, Murray; Greated, Clive (2001). The musician's guide to acoustics (Repr. ed.). Oxford: Oxford University Press. ISBN 978-0198165057.
• French, A.P. (1971). Vibrations and Waves (M.I.T. Introductory physics series). Nelson Thornes. ISBN 978-0-393-09936-2. OCLC 163810889.
• Hall, D.E. (1980). Musical Acoustics: An Introduction. Belmont, CA: Wadsworth Publishing Company. ISBN 978-0-534-00758-4.
• Hunt, Frederick Vinton (1978). Origins in acoustics. Woodbury, NY: Published for the Acoustical Society of America through the American Institute of Physics. ISBN 978-0300022209.
• Ostrovsky, L.A.; Potapov, A.S. (1999). Modulated Waves, Theory and Applications. Baltimore: The Johns Hopkins University Press. ISBN 978-0-8018-5870-3.
• Griffiths, G.; Schiesser, W.E. (2010). Traveling Wave Analysis of Partial Differential Equations: Numerical and Analytical Methods with Matlab and Maple. Academic Press. ISBN 9780123846532. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 147, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9511215686798096, "perplexity": 677.5160320972658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876307.21/warc/CC-MAIN-20201021093214-20201021123214-00065.warc.gz"} |
https://subwiki.org/w/index.php?title=Relation_theory&oldid=763 | # Relation theory
This article treats relations from the perspective of combinatorics, in other words, as a subject matter in discrete mathematics, with special attention to finite structures and concrete set-theoretic constructions, many of which arise quite naturally in applications. This approach to relation theory, or the theory of relations, is distinguished from, though closely related to, its study from the perspectives of abstract algebra on the one hand and formal logic on the other.
## Preliminaries
Two definitions of the relation concept are common in the literature. Although it is usually clear in context which definition is being used at a given time, it tends to become less clear as contexts collide, or as discussion moves from one context to another.
The same sort of ambiguity arose in the development of the function concept and it may save some effort to follow the pattern of resolution that worked itself out there.
When we speak of a function $f : X \to Y,$ we are thinking of a mathematical object whose articulation requires three pieces of data, specifying the set $X,$ the set $Y,$ and a particular subset of their cartesian product $X \times Y.$ So far so good.

Let us write $f : X \to Y$ to express what has been said so far.

When it comes to parsing the notation $f : X \to Y,$ everyone takes the part $X \to Y$ to specify the type of the function, that is, the pair $(X, Y),$ but $f$ is used equivocally to denote both the triple and the subset of $X \times Y$ that forms one part of it. One way to resolve the ambiguity is to formalize a distinction between a function and its graph, letting $f = (X, Y, \operatorname{graph}(f))$ with $\operatorname{graph}(f) \subseteq X \times Y.$

Another tactic treats the whole notation $f : X \to Y$ as sufficient denotation for the triple, letting the bare symbol $f$ denote $\operatorname{graph}(f).$
In categorical and computational contexts, at least initially, the type is regarded as an essential attribute or an integral part of the function itself. In other contexts it may be desirable to use a more abstract concept of function, treating a function as a mathematical object that appears in connection with many different types.
Following the pattern of the functional case, let the notation $L \subseteq X \times Y$ bring to mind a mathematical object that is specified by three pieces of data, the set $X,$ the set $Y,$ and a particular subset of their cartesian product $X \times Y.$ As before we have two choices, either let $L = (X, Y, \operatorname{graph}(L))$ or let $L$ denote $\operatorname{graph}(L)$ and choose another name for the triple.
## Definition
It is convenient to begin with the definition of a $k$-place relation, where $k$ is a positive integer.

Definition. A $k$-place relation $L \subseteq X_1 \times \ldots \times X_k$ over the nonempty sets $X_1, \ldots, X_k$ is a $(k+1)$-tuple $(X_1, \ldots, X_k, L)$ where $L$ is a subset of the cartesian product $X_1 \times \ldots \times X_k.$
## Remarks
Though usage varies as usage will, there are several bits of optional language that are frequently useful in discussing relations. The sets $X_1, \ldots, X_k$ are called the domains of the relation, with $X_j$ being the $j$-th domain. If all of the $X_j$ are the same set $X,$ then the relation is more simply described as a $k$-place relation over $X.$ The set $L$ is called the graph of the relation, on analogy with the graph of a function. If the sequence of sets is constant throughout a given discussion or is otherwise determinate in context, then the relation is determined by its graph, making it acceptable to denote the relation by referring to its graph. Other synonyms for the adjective $k$-place are $k$-adic and $k$-ary, all of which leads to the integer $k$ being called the dimension, adicity, or arity of the relation.
## Local incidence properties
A local incidence property (LIP) of a relation $L$ is a property that depends in turn on the properties of special subsets of $L$ that are known as its local flags. The local flags of a relation are defined in the following way:

Let $L$ be a $k$-place relation $L \subseteq X_1 \times \ldots \times X_k.$

Select a relational domain $X_j$ and one of its elements $x.$ Then $L_{x\,@\,j}$ is a subset of $L$ that is referred to as the flag of $L$ with $x$ at $j,$ or the $x\,@\,j$-flag of $L,$ an object that has the following definition:

$L_{x\,@\,j} = \{ (x_1, \ldots, x_j, \ldots, x_k) \in L : x_j = x \}.$

Any property $C$ of the local flag $L_{x\,@\,j}$ is said to be a local incidence property of $L$ with respect to the locus $x\,@\,j.$

A $k$-adic relation $L$ is said to be $C$-regular at $j$ if and only if every flag of $L$ with $x$ at $j$ has the property $C,$ where $x$ is taken to vary over the theme of the fixed domain $X_j.$

Expressed in symbols, $L$ is $C$-regular at $j$ if and only if $C(L_{x\,@\,j})$ is true for all $x$ in $X_j.$
## Regional incidence properties
The definition of a local flag can be broadened from a point $x$ in $X_j$ to a subset $M$ of $X_j,$ arriving at the definition of a regional flag in the following way:

Suppose that $L \subseteq X_1 \times \ldots \times X_k$ and choose a subset $M \subseteq X_j.$ Then $L_{M\,@\,j}$ is a subset of $L$ that is said to be the flag of $L$ with $M$ at $j,$ or the $M\,@\,j$-flag of $L,$ an object which has the following definition:

$L_{M\,@\,j} = \{ (x_1, \ldots, x_j, \ldots, x_k) \in L : x_j \in M \}.$
## Numerical incidence properties
A numerical incidence property (NIP) of a relation is a local incidence property that depends on the cardinalities of its local flags.
For example, $L$ is said to be $c$-regular at $j$ if and only if the cardinality of the local flag $L_{x\,@\,j}$ is $c$ for all $x$ in $X_j,$ or, to write it in symbols, if and only if $|L_{x\,@\,j}| = c$ for all $x \in X_j.$

In a similar fashion, one can define the NIPs, $(< c)$-regular at $j,$ $(> c)$-regular at $j,$ and so on. For ease of reference, a few of these definitions are recorded here: $L$ is $c$-regular at $j$ if and only if $|L_{x\,@\,j}| = c$ for all $x \in X_j;$ it is $(< c)$-regular at $j$ if and only if $|L_{x\,@\,j}| < c$ for all $x \in X_j;$ it is $(> c)$-regular at $j$ if and only if $|L_{x\,@\,j}| > c$ for all $x \in X_j.$

Returning to 2-adic relations, it is useful to describe some familiar classes of objects in terms of their local and numerical incidence properties. Let $L \subseteq S \times T$ be an arbitrary 2-adic relation. The following properties of $L$ can be defined: $L$ is total at $S$ if and only if it is $(\geq 1)$-regular at $S;$ total at $T$ if and only if $(\geq 1)$-regular at $T;$ tubular at $S$ if and only if $(\leq 1)$-regular at $S;$ tubular at $T$ if and only if $(\leq 1)$-regular at $T.$
If $L$ is tubular at $S,$ then $L$ is called a partial function or a prefunction from $S$ to $T.$ This is sometimes indicated by giving $L$ an alternate name, say $p,$ and writing $L = p : S \rightharpoonup T.$

Just by way of formalizing the definition: $L = p : S \rightharpoonup T$ if and only if $L$ is tubular at $S.$

If $L$ is a prefunction $p : S \rightharpoonup T$ that happens to be total at $S,$ then $L$ is called a function from $S$ to $T,$ indicated by writing $L = f : S \to T.$ To say that a relation $L \subseteq S \times T$ is totally tubular at $S$ is to say that it is $1$-regular at $S.$ Thus, we may formalize the following definition: $L = f : S \to T$ if and only if $L$ is $1$-regular at $S.$

In the case of a function $f : S \to T,$ one has the following additional definitions: $f$ is surjective if and only if $f$ is total at $T;$ $f$ is injective if and only if $f$ is tubular at $T;$ $f$ is bijective if and only if $f$ is $1$-regular at $T.$
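To make the 2-adic case concrete, here is a small Python sketch (not part of the original article) that stores a relation as a set of ordered pairs and tests the tubular/total properties that characterize prefunctions and functions. The helper names, sets, and example relations are all invented for the illustration.

```python
def flags(L, S):
    """Local flags of a 2-adic relation L over S x T: for each x in S, the pairs whose first place is x."""
    return {x: {pair for pair in L if pair[0] == x} for x in S}

def is_tubular_at_S(L, S):
    """(<= 1)-regular at S: at most one pair per x, i.e. a partial function (prefunction)."""
    return all(len(flag) <= 1 for flag in flags(L, S).values())

def is_total_at_S(L, S):
    """(>= 1)-regular at S: at least one pair per x."""
    return all(len(flag) >= 1 for flag in flags(L, S).values())

def is_function(L, S):
    """1-regular at S: exactly one pair per x."""
    return is_tubular_at_S(L, S) and is_total_at_S(L, S)

S, T = {1, 2, 3}, {"a", "b"}
L1 = {(1, "a"), (2, "b")}                        # tubular but not total: a prefunction only
L2 = {(1, "a"), (2, "b"), (3, "a")}              # a function from S to T
L3 = {(1, "a"), (1, "b"), (2, "a"), (3, "b")}    # not tubular: 1 occurs in two pairs
print(is_function(L1, S), is_function(L2, S), is_function(L3, S))  # False True False
```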
## Variations
Because the concept of a relation has been developed quite literally from the beginnings of logic and mathematics, and because it has incorporated contributions from a diversity of thinkers from many different times and intellectual climes, there is a wide variety of terminology that the reader may run across in connection with the subject.
One dimension of variation is reflected in the names that are given to $k$-place relations, for $k = 0, 1, 2, 3, \ldots,$ with some writers using the Greek forms, medadic, monadic, dyadic, triadic, $k$-adic, and other writers using the Latin forms, nullary, unary, binary, ternary, $k$-ary.

The number of relational domains may be referred to as the adicity, arity, or dimension of the relation. Accordingly, one finds a relation on a finite number of domains described as a polyadic relation or a finitary relation, but others count infinitary relations among the polyadic. If the number of domains is finite, say equal to $k,$ then the relation may be described as a $k$-adic relation, a $k$-ary relation, or a $k$-dimensional relation, respectively.
A more conceptual than nominal variation depends on whether one uses terms like predicate, relation, and even term to refer to the formal object proper or else to the allied syntactic items that are used to denote them. Compounded with this variation is still another, frequently associated with philosophical differences over the status in reality accorded formal objects. Among those who speak of numbers, functions, properties, relations, and sets as being real, that is to say, as having objective properties, there are divergences as to whether some things are more real than others, especially whether particulars or properties are equally real or else one is derivative in relationship to the other. Historically speaking, just about every combination of modalities has been used by one school of thought or another, but it suffices here merely to indicate how the options are generated.
## Examples
See the articles on relations, relation composition, relation reduction, sign relations, and triadic relations for concrete examples of relations.
Many relations of the greatest interest in mathematics are triadic relations, but this fact is somewhat disguised by the circumstance that many of them are referred to as binary operations, and because the most familiar of these have very specific properties that are dictated by their axioms. This makes it practical to study these operations for quite some time by focusing on their dyadic aspects before being forced to consider their proper characters as triadic relations.
## Document history

Portions of the above article were adapted from other sources under the GNU Free Documentation License, under other applicable licenses, or by permission of the copyright holders.
http://en.wikibooks.org/wiki/Measure_Theory/L%5Ep_spaces | # Measure Theory/L^p spaces
Recall that an $\mathcal{L}^p$ space is defined as $\mathcal{L}^p(X)=\{f:X\to\mathbb{C}:f\text{ is measurable,}\int_X|f|^pd\mu<\infty\}$
## Jensen's inequality
Let $(X,\Sigma,\mu)$ be a probability measure space.
Let $f:X\to\mathbb{R}$, $f\in\mathcal{L}^1$ be such that there exist $a,b\in\mathbb{R}$ with $a<f(x)<b$ for all $x\in X$.
If $\phi$ is a convex function on $(a,b)$ then,
$\displaystyle\phi\left(\int_Xfd\mu\right)\leq\int_X\phi\circ fd\mu$
Proof
Let $t=\displaystyle\int_Xfd\mu$. As $\mu$ is a probability measure, $a<t<b$.
Let $\beta=\sup\left\{\frac{\phi(t)-\phi(s)}{t-s}:a<s<t\right\}$
Let $t<u<b$; then $\beta\leq\displaystyle\frac{\phi(u)-\phi(t)}{u-t}$
Thus, $\displaystyle\frac{\phi(t)-\phi(s)}{t-s}\leq\frac{\phi(u)-\phi(t)}{u-t}$, that is $\phi(t)-\phi(s)\leq\beta(t-s)$
Put $s=f(x)$
$\phi\left(\int_Xfd\mu\right)-\phi(f(x))+\beta\left(f(x)-\int_Xfd\mu\right)\leq 0$ for every $x\in X$. Integrating over $X$ with respect to $\mu$ and using $\mu(X)=1$ gives $\displaystyle\phi\left(\int_Xfd\mu\right)\leq\int_X\phi\circ f\,d\mu$, which completes the proof.
### Corollary
1. Putting $\phi(x)=e^x$,
$\displaystyle e^{(\int_Xfd\mu)}\leq\int_Xe^fd\mu$
2. If $X$ is finite with $n$ points, $\mu$ is the normalized counting measure (each point has measure $1/n$), and $f(x_i)=p_i$, then
$\displaystyle e^{\left(\frac{p_1+\ldots+p_n}{n}\right)}\leq\frac{1}{n}\left(e^{p_1}+e^{p_2}+\ldots+e^{p_n}\right)$
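A quick numerical sanity check of this corollary (illustrative only; the sample values $p_i$ are random numbers, not from the text):

```python
import math, random

random.seed(0)
p = [random.uniform(-2.0, 2.0) for _ in range(5)]   # arbitrary p_1, ..., p_n
lhs = math.exp(sum(p) / len(p))                      # e^{(p_1 + ... + p_n)/n}
rhs = sum(math.exp(x) for x in p) / len(p)           # (e^{p_1} + ... + e^{p_n})/n
print(lhs <= rhs, lhs, rhs)                          # True, by convexity of exp
```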
For every $f\in\mathcal{L}^p$, define $\|f\|_p=\left(\int_X|f|^pd\mu\right)^{\frac{1}{p}}$
## Holder's inequality
Let $1<p,q<\infty$ be such that $\displaystyle\frac{1}{p}+\frac{1}{q}=1$. Let $f\in\mathcal{L}^p$ and $g\in\mathcal{L}^q$.
Then, $fg\in\mathcal{L}^1$ and
$\|fg\|_1\leq\|f\|_p\|g\|_q$
Proof
We know that $\log$ is a concave function
Let $0\leq t\leq 1$ and $a,b>0$. Then $t\log a+(1-t)\log b\leq \log(ta+(1-t)b)$
That is, $a^tb^{1-t}\leq ta+(1-t)b$
Let $t=\frac{1}{p}$, $a=\left(\frac{|f|}{\|f\|_p}\right)^p$, $b=\left(\frac{|g|}{\|g\|_q}\right)^q$
$\displaystyle\frac{|f|}{\|f\|_p}\frac{|g|}{\|g\|_q}\leq\frac{1}{p}\frac{|f|^p}{\|f\|^p_p}+\frac{1}{q}\frac{|g|^q}{\|g\|^q_q}$
Then, $\displaystyle\frac{1}{\|f\|_p\|g\|_q}\int_X|f||g|d\mu\leq\frac{1}{p\|f\|_p^p}\int_X|f|^pd\mu+\frac{1}{q\|g\|_q^q}\int_X|g|^qd\mu=1$,
which proves the result
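As an illustrative aside (not part of the original proof), the discrete case of Hölder's inequality — counting measure on finitely many points — can be checked numerically; the vectors and the exponent p below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=10)          # arbitrary "function" values on a 10-point space
g = rng.normal(size=10)
p = 3.0
q = p / (p - 1)                  # conjugate exponent, 1/p + 1/q = 1

lhs = np.sum(np.abs(f * g))                                                   # ||fg||_1
rhs = np.sum(np.abs(f) ** p) ** (1 / p) * np.sum(np.abs(g) ** q) ** (1 / q)   # ||f||_p ||g||_q
print(lhs <= rhs, float(lhs), float(rhs))                                     # True, by Holder's inequality
```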
### Corollary
If $\mu(X)<\infty$ and $1\leq s<r<\infty$, then $\mathcal{L}^r\subset \mathcal{L}^s$
Proof
Let $\phi\in\mathcal{L}^r$, $p=\frac{r}{s}\geq 1$, $g\equiv 1$

Then $f=|\phi|^s\in\mathcal{L}^p$, and Hölder's inequality gives $\displaystyle\int_X|\phi|^sd\mu\leq\left(\int_X\left(|\phi|^s\right)^{\frac{r}{s}}d\mu\right)^{\frac{s}{r}}\mu(X)^{1-\frac{s}{r}}<\infty$, so $\phi\in\mathcal{L}^s$
If $f,g:X\to\mathbb{C}$, we say that $f=g$ almost everywhere on $X$ if $\mu(\{x\in X:f(x)\neq g(x)\})=0$. Observe that this defines an equivalence relation on $\mathcal{L}^p$
If $(X,\Sigma,\mu)$ is a measure space, define the space $L^p$ to be the set of all equivalence classes of functions in $\mathcal{L}^p$
## Theorem
The $L^p$ space with the $\|\cdot\|_p$ norm is a normed linear space, that is,
1. $\|f\|_p\geq0$ for every $f\in L^p$, further, $\|f\|_p=0\iff f=0$
2. $\|\lambda f\|_p=|\lambda|\,\|f\|_p$ for every scalar $\lambda\in\mathbb{C}$
3. $\|f+g\|_p\leq\|f\|_p+\|g\|_p$ . . . (Minkowski's inequality)
Proof
1. and 2. are clear, so we prove only 3. The cases $p=1$ and $p=\infty$ (see below) are obvious, so assume that $1<p<\infty$ and let $f,g\in L^p$ be given. Hölder's inequality yields the following, where $q$ is chosen such that $1/q+1/p = 1$ so that $p/q = p-1$:
$\displaystyle\int_X|f+g|^pd\mu=\int_X|f+g|^{p-1}|f+g|d\mu\leq\int_X|f+g|^{p-1}(|f|+|g|)d\mu$
$\leq\displaystyle\left(\int_X|f+g|^{(p-1)q}d\mu\right)^\frac{1}{q}\|f\|_p+\left(\int_X|f+g|^{(p-1)q}d\mu\right)^{\frac{1}{q}}\|g\|_p=\|f+g\|_p^{\frac{p}{q}}\|f\|_p+\|f+g\|_p^{\frac{p}{q}}\|g\|_p.$
Moreover, as $t\mapsto t^p$ is convex for $p>1$,
$\displaystyle \frac{|f+g|^p}{2^p} = \left|\frac{f}{2}+\frac{g}{2}\right|^p\leq \left(\frac{|f|}{2}+\frac{|g|}{2}\right)^p \leq \frac{1}{2}|f|^p + \frac{1}{2}|g|^p.$
This shows that $\|f+g\|_p<\infty$ so that we may divide by it in the previous calculation to obtain $\|f+g\|_p\leq\|f\|_p+\|g\|_p$.
Define the space $L^{\infty}=\{f:X\to\mathbb{C}\mid f\text{ is measurable and bounded almost everywhere}\}$. Further, for $f\in L^{\infty}$ define $\|f\|_{\infty}=\inf\{\sup\{|f(x)|:x\notin E\}:E\in\Sigma,\ \mu(E)=0\}$, the essential supremum of $|f|$.
https://ai.stackexchange.com/questions/12639/understanding-the-equation-of-td0-in-the-paper-learning-to-predict-by-the-met/12641 | # Understanding the equation of TD(0) in the paper “Learning to predict by the methods of temporal differences”
In the paper Learning to predict by the methods of temporal differences (p. 15), the weights in the temporal difference learning are updated as given by the equation $$\Delta w_t = \alpha \left(P_{t+1} - P_t\right) \sum_{k=1}^{t}{\lambda^{t-k} \nabla_w P_k} \tag{4} \,.$$ When $$\lambda = 0$$, as in TD(0), how does the method learn? As it appears, with $$\lambda = 0$$, there will never be a change in weight and hence no learning.
Am I missing anything?
I think the detail that you're missing is that one of the terms in the sum (the final "iteration" of the sum, the case where $$k = t$$) has $$\lambda$$ raised to the power $$0$$, and anything raised to the power $$0$$ (even $$0$$) is equal to $$1$$. So, for $$\lambda = 0$$, your update equation becomes
$$\Delta w_t = \alpha \left( P_{t+1} - P_t \right) \nabla_w P_t,$$ which uses only the gradient of the current prediction, so the weights still change and learning still occurs.
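A minimal sketch (not from the thread) of equation (4) for a linear predictor $P_t = w \cdot x_t$ makes the same point in code: the eligibility trace accumulates $\sum_{k} \lambda^{t-k}\nabla_w P_k$, and with $\lambda = 0$ only the current gradient survives, yet the weights still change. The feature vectors, outcome $z$, and step size are made up, and the increments are applied online here rather than accumulated over the whole sequence as in the paper.

```python
import numpy as np

def td_lambda_episode(xs, z, w, alpha=0.1, lam=0.0):
    """TD(lambda)-style updates for a linear predictor P_t = w . x_t over one
    observation sequence x_1, ..., x_m with final outcome z (P_{m+1} := z)."""
    w = np.array(w, dtype=float)
    e = np.zeros_like(w)             # eligibility trace: sum_k lam^(t-k) * grad_w P_k = sum_k lam^(t-k) * x_k
    for t in range(len(xs)):
        P_t = w @ xs[t]
        P_next = w @ xs[t + 1] if t + 1 < len(xs) else z
        e = lam * e + xs[t]          # with lam = 0 this is just x_t: the TD(0) case
        w = w + alpha * (P_next - P_t) * e
    return w

rng = np.random.default_rng(0)
xs = [rng.normal(size=4) for _ in range(5)]                  # made-up feature vectors
print(td_lambda_episode(xs, z=1.0, w=np.zeros(4), lam=0.0))  # nonzero weights: TD(0) does learn
```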
• At page $16$ of the same paper Learning to Predict by the Methods of Temporal Differences (1988), Sutton actually states that $\Delta w_t = \alpha \left( P_{t+1} - P_t \right) \nabla_w P_t$ is the learning rule when $\lambda = 0$. – nbro Jun 1 '19 at 16:51
• He starts with the supervised setting and then derives the Widrow-Hoff (or delta) rule. The TD rule is then a special case of the delta rule, where the errors $z - P_t$ are replaced with a summation of the successive temporal-difference predictions. However, how is that specific 1-step TD learning rule exactly related to the usual learning rules of (tabular) temporal difference methods, where apparently no gradient is needed? – nbro Jun 1 '19 at 16:58
• @nbro You can view tabular methods as methods using linear function "approximation", where there is a single binary feature for every possible state-action pair. Then there would be a gradient needed, but the gradient would simply be $1$ for the "binary feature" corresponding to the state-action pair, and $0$ everywhere else. – Dennis Soemers Jun 1 '19 at 17:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9173184633255005, "perplexity": 552.0568137832621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365186.46/warc/CC-MAIN-20210303012222-20210303042222-00197.warc.gz"} |
http://mathhelpforum.com/trigonometry/212293-distance-formula.html | # Math Help - Distance formula
1. ## Distance formula
I have to find the distance for the point (-1, 1) at 135 degrees. (Find the distance d from the origin to that point.) Using the distance formula, I am getting 0. Is this correct?
2. ## Re: Distance formula
It should be obvious that the distance between the point (-1,1) and (0,0) is not 0. The formula for distance between points A and B is:
$d = \sqrt{(x_A - x_B)^2 + (y_A - y_B)^2}$
Here point A is $(x_A,y_A) = (-1,1)$ and point B is $(x_B,y_B)=(0,0)$.
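(For the record, a one-line check of the value under discussion, using Python's standard library:)

```python
from math import dist, sqrt
print(dist((-1, 1), (0, 0)), sqrt(2))   # both print 1.4142135623730951
```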
3. ## Re: Distance formula
so 2 squared then?
4. ## Re: Distance formula
Not 2 squared, but square root of 2.
5. ## Re: Distance formula
Originally Posted by goldbug78
so 2 squared then?
Please tell us what "I have to find the distance for the the points -1,1 at 135 degrees." means.
What does $135^o$ have to do with any of this?
6. ## Re: Distance formula
Originally Posted by Plato
What does $135^o$ have to do with any of this?
I think the OP is just saying that a line from the origin to the point (-1,1) is at an angle of 135 degrees from the +x axis. I considered it extraneous information. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573270082473755, "perplexity": 917.8286844693514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007832.1/warc/CC-MAIN-20141125155647-00218-ip-10-235-23-156.ec2.internal.warc.gz"} |