url stringlengths 15-1.13k | text stringlengths 100-1.04M | metadata stringlengths 1.06k-1.1k
---|---|---|
https://www.yacora.de/help/help-page-for-he-model |
# Help page for He model
On this page, some useful information concerning the Yacora model for helium and the required input parameters is reported. At the end of the page, you can find some references for further and more detailed explanations.
### Description of the He model
Helium is a two electron system, thus according to the possible orientations of the electron spins, the energy levels split up into a singlet system (antiparallel configuration), which is called parahelium, and a triplet system (parallel configuration), which is called orthohelium. In the figure below, an energy-level diagram for He [1] is shown. As reported in this diagram, the Yacora model for helium includes all the states with principal quantum number $p\leq 4$ and the singly ionized positive ion.
Since for the helium model only dipole transitions are considered, there are two metastable states: $2\ ^1S$ in the singlet system and $2\ ^3S$ in the triplet system. The "Diffusion of the metastable states" section explains how these states are managed by Yacora and what the possible choices for the user are.
### Cross section and rate coefficients
Yacora considers a comprehensive set of reactions in order to determine the population densities of the excited states with $p\leq 4$ by solving the following set of differential equations:
$$\frac{\mathrm{d}n_p}{\mathrm{d}t}= \sum_{q\ne p} \Bigl(X_{q\rightarrow p}n_q - X_{p \rightarrow q}n_p\Bigr)$$
where $X_{q\rightarrow p}$ is the total rate coefficient which includes all the processes from the state $q$ to the state $p$ and $X_{p \rightarrow q}$ is the total rate coefficient which includes all the processes from the state $p$ to the state $q$. The reactions considered by Yacora for the He model are the electron excitation processes [2,3], with the inverse processes (calculated using the detailed balance principle [4]), the spontaneous emissions [5] and the ionization process [6]. The self-absorption due to the optical thickness [7] is neglected.
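To make the structure of this system concrete, here is a minimal sketch (not part of Yacora) of how such a rate-equation system can be assembled and solved for its steady state. The rate matrix entries and the ground-state density below are arbitrary placeholder numbers, not Yacora rate coefficients.

```python
import numpy as np

# Toy collisional-radiative system with 3 levels; X[q, p] is the total rate
# (per unit density of level q) for transitions q -> p. Placeholder numbers only.
X = np.array([[0.0,    2.0e-1, 5.0e-2],
              [8.0e0,  0.0,    1.0e-1],
              [3.0e0,  6.0e0,  0.0   ]])

# dn_p/dt = sum_q (X[q, p] n_q - X[p, q] n_p)  ->  dn/dt = M n
M = X.T - np.diag(X.sum(axis=1))

# Steady state of the excited levels (1 and 2) for a fixed ground-state density n_0,
# mirroring how Yacora keeps the ground state (and optionally the metastables) fixed:
n0 = 1.0e19                         # m^-3
A = M[1:, 1:]                       # couplings among excited states
b = -M[1:, 0] * n0                  # feeding from the ground state
n_excited = np.linalg.solve(A, b)
print(n_excited)                    # steady-state densities of levels 1 and 2
```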
### Diffusion of the metastable states
As already mentioned, the He atom has two metastable states. For high electron densities, the dominant depopulating processes for these states are excitation and de-excitation by electron collisions, but in plasmas with low electron density, the transport of particles in the metastable states can take over. For this reason, Yacora provides two possible mechanisms: fix the density of such states, i.e. treat them in the same way as the ground state, or include the diffusion. You can choose between the two possibilities by selecting "off" or "on" in the field called "Diffusion". Depending on your choice, either the densities of $2\ ^1S$ and $2\ ^3S$ or the normal diffusion length and the molecular diffusion length [8] are required (turbulent diffusion is not included in the model). At high neutral densities, quenching (excitation due to collisions with heavy particles) can also play an important role, but it is not considered in the He model.
Before explaining what the mean diffusion lengths are, it is necessary to give a simple introduction to the diffusion theory of neutral particles, in particular for He. Considering a gas of He in a vessel, the "wall" confinement time, which is approximately the time that a particle takes to reach the wall, is
$$\tau_w = \frac{\int n\;\mathrm{d}V}{\oint \vec{j_w}\cdot\mathrm{d}\vec{A}}$$
where $\vec{j_w}$ denotes the net flux to the wall element $\mathrm{d}\vec{A}$. Now, the mean free path of the He atoms is given by
$$\lambda_n=\frac{1}{n\sigma_n}$$
where $n$ denotes the helium density. The collisional cross section $\sigma_n = 1.3\times 10^{-19}\ \mathrm{m}^2$ considered in Yacora is taken from [9] for collisions of helium atoms in a helium background. For $\lambda_n$ small compared to the vessel dimensions (fluid regime), the transport to the walls is governed by Fick's law
$$\vec{j_w}= -D\nabla n$$
where $D$ is the diffusion coefficient [8] given by
$$D= \frac{3\sqrt{\pi}}{8} \lambda_n \sqrt{\frac{k_B T_g}{m}}$$
where $m$ is the He mass and $T_g$ its temperature. For simple geometry, the confinement time can be determined analytically and the solution can be written as
$$\tau_d = \frac{\Lambda^2}{D}$$
where $\Lambda$ is called the mean diffusion length and is one of the two parameters required by Yacora. As an example, for a cylindrical vessel with radius $\rho$ and length $2l$, and assuming perfectly sticking walls, $\Lambda$ is given by
$$\Lambda = \Bigl(\frac{8}{\rho^2}+\frac{3}{l^2}\Bigr)^{-1/2}$$
In [8], some other examples are reported and the user can find (or calculate) the $\Lambda$ value that best suits his or her requirements.
If the mean free path is large compared to the vessel dimensions (free fall situation), the mean confinement time is given by
$$\tau_f=\frac{\bar{\Lambda}}{v_{th}}$$
where $v_{th}$ is the thermal velocity
$$v_{th}=\sqrt{\frac{8 k_B T_g}{\pi m}}$$
and $\bar{\Lambda}$ denotes an average connection length from the locus of production to the wall; it is the second parameter required by Yacora. Assuming perfectly sticking walls, you can set this parameter as
$$\bar{\Lambda}=2 d$$
where
$$d=\frac{V}{A}$$
is the characteristic linear dimension of the vessel. Again, the user is invited to see [8] in order to determine the $\bar{\Lambda}$ that best suits his or her requirements.
Since, depending on the plasma regime, the transport can be either diffusive (fluid regime) or collisionless (free-fall regime), Yacora implements a smooth transition between the two conditions by summing the two confinement times:
$$\tau_w=\tau_d+\tau_f$$
Note that the previous expression is valid only if pumping is not considered, as is the case in Yacora on the Web.
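The following short script strings the above formulas together: mean free path, diffusion coefficient, the two confinement times and their sum. It is an illustration only; the vessel dimensions and gas conditions are made-up example values, and the mean diffusion length is taken with an exponent of $-1/2$ so that it carries units of length, consistent with $\tau_d = \Lambda^2/D$.

```python
import numpy as np

kB   = 1.380649e-23          # J/K
m_He = 6.6464731e-27         # kg
sigma_n = 1.3e-19            # m^2, He-He cross section quoted above

# Example conditions (assumptions for illustration only)
n_He = 1.0e21                # m^-3
T_g  = 600.0                 # K
rho, l = 0.2, 0.5            # cylindrical vessel: radius and half-length in m

lam  = 1.0 / (n_He * sigma_n)                          # mean free path
D    = 3.0 * np.sqrt(np.pi) / 8.0 * lam * np.sqrt(kB * T_g / m_He)
Lam  = (8.0 / rho**2 + 3.0 / l**2) ** -0.5             # mean diffusion length (cylinder)
tau_d = Lam**2 / D                                     # fluid-regime confinement time

v_th  = np.sqrt(8.0 * kB * T_g / (np.pi * m_He))       # thermal velocity
V     = np.pi * rho**2 * 2 * l                         # vessel volume
A     = 2 * np.pi * rho * 2 * l + 2 * np.pi * rho**2   # vessel surface area
Lbar  = 2.0 * V / A                                    # average connection length, 2*V/A
tau_f = Lbar / v_th                                    # free-fall confinement time

tau_w = tau_d + tau_f                                  # smooth transition used by Yacora
print(lam, D, tau_d, tau_f, tau_w)
```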
### Input parameters
For almost all the input parameters, there are three possible choices:
1. Fixed: The related parameter is kept fixed during the calculation.
2. Range: The related parameter varies in the specified range during the calculation. The maximum admissible number of points is 100.
3. Values: The related parameter assumes the given values and Yacora performs a calculation for each of them. The values must be separated by a semicolon.
NB: The total maximum number of calculations must be less than 10000 (100*100).
The following table shows the allowed values for each input parameter.
#### Admissible values for each input parameter
| Parameter | Range | Points | Notes |
|---|---|---|---|
| Te | [1, 50] eV | max. 100 | Used to determine the rate coefficients for electron collision excitation and de-excitation. |
| ne | [1e14, 1e22] m⁻³ | max. 100 | Used to determine the reaction rate for electron collision excitation and de-excitation. |
| T(He) | [300, 57971] K | max. 100 | Used in order to calculate the rate coefficient for reactions for which the cross section is available. |
| n(He) | [1e14, 1e22] m⁻³ | max. 100 | You can also set this value to 1 if you are interested only in the population coefficients. You should not fix this value to 1 if you consider the diffusion, because it depends on the helium density. |
### Output quantities
There are three possible output quantities: population density, population coefficient and density balance (which is, strictly speaking, not a quantity, as explained below). The user can choose more than one quantity in the same submission. For every chosen quantity, a file is generated. At the end of the calculation, all the files are automatically uploaded to the user folder and a notification email is sent to the user.
#### Population density
The population density of the considered species is obtained by integrating the system of differential equations reported above, which takes into account the processes that populate or depopulate each state.
#### Population coefficient
To better understand the meaning of the population coefficients, we invite the user to see [10,11]. The population coefficients are useful quantities introduced to express the solution of the above system of differential equations when the steady-state solution (no time dependence) is considered. In particular, they are defined as
$$R_{0p}=\frac{n_p}{n_0 n_e}$$
where $n_0$ denotes the ground state density of He, $n_e$ is the electron density and $n_p$ is the density of the excited state $p$.
#### Density balance
The density balance option shows the rates of all the considered reactions that populate or depopulate the given state. The rate is positive if the reaction populates the state and negative if it depopulates it. In the output files, you can find a comment for each reaction that gives you more information about the corresponding process.
NB: Creating the density balance requires a lot of calculation time and disk space. Please use it only for a very small number of calculations.
### Line emission intensity
Starting from the population densities $n_p$, it is possible to determine the absolute line emission intensities (in units of $\mathrm{m}^{-3}\mathrm{s}^{-1}$):
$$I_{pq}= n_p A_{pq}=n_e n_0 R_{0p} A_{pq}=n_e n_0 X^\text{eff}_{pq}$$
where $A_{pq}$ is the Einstein coefficient [5] from the state $p$ to the state $q$ and $X^\text{eff}_{pq}$ is the effective emission rate coefficient:
$$X^\text{eff}_{pq}\equiv R_{0p} A_{pq}\ .$$
### Final notes
#### Time trace
In the output box you can choose whether or not the time trace is calculated. The time trace is reported in a file of the same name, which contains the evolution in time of the population density of all the considered species and excited states. The time trace is available only for the last used set of parameters; i.e. if you select a range of values for the electron temperature, then the time trace refers to the maximum value of the temperature in that range. Generally, the user does not need this information and the default choice is not to upload this file.
#### Comment
Another note concerns the comment box: you should always add some comments about the calculation that you are doing. This is a very good habit (not only in this case), because in a few months you will not be able to remember what you have done, especially if there are a lot of calculations in your folder.
### References
[1] D. Wünderlich and U. Fantz, Evaluation of State-Resolved Reaction Probabilities and Their Application in Population Models for He, H and H2 , Atoms 2016, 4, 26, doi:10.3390/atoms4040026.
[2] F. J. De Heer, Critically Assessed Electron-Impact Excitation Cross Sections for He (11S), IAEA Nuclear Data Section Report, INDC(NDS)-385, IAEA: Vienna, Austria, 1998, www-nds.iaea.org/publications/indc/indc-nds-0385/.
[3] Y. V. Ralchenko, R. K. Janev, T. Kato, D. V. Fursa, I. Bray and F. J. de Heer, Cross Section Database for Collision Processes of Helium Atom with Charged Particles, Research Reports NIFS DATA, NIFS-DATA-59, NIFS: Toki, Japan, 2000.
[4] R. H. Fowler, Statistical equilibrium with special reference to the mechanism of ionization by electronic impacts, Philos. Mag. 1926, 47, 257-277.
[5] G. W. F. Drake (Ed.), Springer Handbook of Atomic, Molecular, and Optical Physics, Springer Science+Business Media, Inc.: New York, NY, USA, 2006; pp. 199-216.
[6] T. Fujimoto, A collisional-radiative model for helium and its application to a discharge plasma, J. Quant. Spectrosc. Radiat. Transfer 21, 439 (1979), doi:10.1016/0022-4073(79)90004-9.
[7] K. Behringer and U. Fantz, The influence of opacity on hydrogen excited-state population and applications to low-temperature plasmas, New Journal of Physics 2 (2000), doi:10.1088/1367-2630/2/1/323.
[8] W. Möller, Plasma and Surface Modelling of the Deposition of Hydrogenated Carbon Films from Low-Pressure Methane Plasmas, Appl. Phys. A 56, 527-546 (1993), doi:10.1007/BF00331402.
[9] B. M. Smirnov, Reference Data on Atomic Physics and Atomic Processes, Springer Series on Atomic, Optical and Plasma Physics 51, 2008; p.102, doi:10.1007/978-3-540-79363-2.
[10] T. Fujimoto, Plasma Spectroscopy, Springer Berlin Heidelberg, Series on Atomic, Optical and Plasma Physics 44, 2008, pp 29-49, doi:10.1007/978-3-540-73587-8_3.
[11] D. Wünderlich, S. Dietrich and U. Fantz, Application of a collisional radiative model to atomic hydrogen for diagnostic purposes, J. Quant. Spectrosc. Radiat. Transfer 110 (2009), doi:10.1016/j.jqsrt.2008.09.015. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731849193572998, "perplexity": 847.4575344358564}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00339.warc.gz"} |
http://mathoverflow.net/questions/123814/reflection-principles/123877 | # Reflection principles
Let con(ZFC) be a sentence in ZFC asserting that ZFC has an omega-model M. Let $A_{M}$ be a wff over M. Let S be the theory ZFC+con(ZFC). Is the reflection principle for S, $Bew_{S}(A_{M}) \implies A_{M}$, satisfied? I am also asking for an explanation of the paradox in the link
http://cs.nyu.edu/pipermail/fom/2007-October/012035.html
for the case when ZFC is replaced by S=ZFC+(ZFC has an omega-model)?
Could you explain what does $Bew_S(A_M)$ mean? Also, perhaps you could re-word your final question somehow; I don't really understand it as it is written. – Joel David Hamkins Mar 6 at 22:20
I don't know what Bew_S(A) means here. Are you asking for an explanation of the paradox in the link you mention? – Joel David Hamkins Mar 6 at 22:35
Joel, I think Bew_S(A_M) is supposed to be (a formalization of) the statement that A_M is provable in S. ("Bew" was, I believe, used by Gödel to abbreviate "beweisbar".) – Andreas Blass Mar 6 at 22:41
Of course Bew_S(X)--->X is true for any X, because all the axioms of S are true. But that argument uses information that goes beyond ZFC, so presumably the question should be whether Bew_S(A_M)--->A_M is provable in some (yet to be specified) formal system. It should also be explained what is meant by a wff being "over M" and in particular why such a wff is in the language of S so that Bew_S(A_M) makes sense. – Andreas Blass Mar 6 at 22:45
I suppose the "paradox" you're asking about is the passage marked with >> at the link you gave, but with "$\omega$-model" in place of "model" and with "has an $\omega$-model" in place of "is consistent". But then there is no longer any justification for the statement (on lines 9 & 10) that there's a proof in ZFC of the negation of con(ZFC) (which now becomes the negation of "ZFC has an $\omega$-model"). What you have is rather that this negation holds in all $\omega$-models of ZFC, but that doesn't immediately translate into a syntactic fact about existence of a proof, which you could then translate into English. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679599761962891, "perplexity": 891.5981940680491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758566/warc/CC-MAIN-20131218054918-00063-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://openstudy.com/updates/50cd2f5fe4b0031882dc397d |
## Callisto: If $$\lim_{x \rightarrow 0^{+}} f(x)=A$$ and $$\lim_{x \rightarrow 0^{-}} f(x)=B$$, find $$\lim_{x \rightarrow 0^{+}} f(x^3-x)$$. How to start?
1. satellite73
hmmm maybe find out whether $$x^3-x$$ is approaching 0 from the right or the left as x approaches 0 from the right
2. Callisto
How....? FYI, I have the answer :\
4. slaaibak
When in doubt, go with DNE, lol
5. satellite73
you can reason as follows: for small positive values of $$x$$ we have $$x^3<x$$ and so $$x^3-x<0$$
6. satellite73
this is my best guess at any rate i am trying to think up a counter example, one where you wouldn't know the limit, but off the top of my head i cannot, so perhaps what i wrote is correct
7. satellite73
in english, as $$x\to 0^+$$ we have $$x^3-x\to 0^-$$
8. satellite73
so my guess is $$B$$ although i have a 50% chance of being right even if my reasoning is faulty
9. Callisto
Nice *guess* :\
10. satellite73
thnx
11. Callisto
Assuming your way to do this question is correct. Similarly, for the question (in part b) $$\lim_{x \rightarrow 0^{-}} f(x^3-x)$$: here $$x^3-x>0$$, so as $$x \rightarrow 0^{-}$$, $$x^3-x \rightarrow 0^{+}$$. And it is A. Hmmm...
12. Callisto
*Assume
13. UnkleRhaukus
is the limit of the function of a sum, equal to the sum of the limits of the function?
14. Callisto
As for part c, $\lim_{x \rightarrow 0^{+}} f(x^2-x^4)$ $x^2-x^4>0$ As $$x \rightarrow 0^{+}$$, $$x^2-x^4\rightarrow 0^{+}$$, so it is A. Seems this trick works, but I don't know why...
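A quick numeric check of the sign arguments above (added for illustration; not part of the original thread):

```python
# sign of x^3 - x and x^2 - x^4 for small x approaching 0 from each side
for x in [1e-1, 1e-2, 1e-3, -1e-1, -1e-2, -1e-3]:
    print(x, x**3 - x, x**2 - x**4)
# for small x > 0:  x^3 - x < 0, so f(x^3 - x) -> B as x -> 0+
# for small x < 0:  x^3 - x > 0, so f(x^3 - x) -> A as x -> 0-
# x^2 - x^4 > 0 on both sides, so f(x^2 - x^4) -> A
```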
15. HELP!!!!
teach me how to do this
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988920092582703, "perplexity": 4525.156602826749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648438.26/warc/CC-MAIN-20141024030048-00135-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://economics.stackexchange.com/questions/47512/dynamic-programming-in-infinite-horizon-model | # Dynamic programming in infinite horizon model
Using an infinite horizon model, a dynamic programming approach uses a fixed point to solve the model: $$V = \Gamma(V)$$.
1. How do I interpret the meaning of $$V$$? For example, when we decide an investment level for the next period $$k_{t+1}$$ given the existing value $$k_t$$, we can calculate a value $$v(k_t)$$. Is this value equal to the value obtained by solving the maximization problem from $$t$$ onward ($$t+1, t+2, \cdots$$)?
2. If the interpretation is correct, why does a fixed-point procedure yield such a maximized function?
Although I have checked several lecture notes, I could not find an intuitive explanation of these points in words.
• My (almost forgotten) understanding of functional analysis and dynamic programming is tempted to say that your interpretation of (1) is correct. V is the maximised function of the state. Yet in infinite horizon (2), the max is not guaranteed to exist. But nevertheless, our model requires some type of convergent solution - and this is the fixed point, expressed in terms of something like Banach's fixed point theorem/contraction mapping theorem and Blackwell's sufficiency conditions etc. Sep 11 '21 at 20:36
There are two interrelated maximisation problems. The first is the infinite horizon maximisation problem: \begin{align*} v(k) = &\max_{a_0, a_1, \ldots} \sum_{t = 0}^\infty \delta^t F(k_t, a_t),\\ \text{ subject to } & k_{t+1} = g(k_t, a_t),\\ & a_t \in \Gamma(k_t),\\ & k_0 = k \end{align*} Here we call $$a_t$$ the decision variables and $$k_t$$ the state variables. This problem maximises an infinite sum of discounted values $$F(k_t, a_t)$$, subject to a law of motion that determines the next period's state depending on the action and state today. $$\Gamma(k)$$ gives the set of feasible actions $$a$$ that can be taken, and finally the initial state $$k_0$$ is set equal to $$k$$.
As such, $$v(k)$$ is the value of this optimisation problem when the initial state is $$k$$.
The second problem is the Bellman equation: $$v(k) = \max_{a \in \Gamma(k)}\left\{F(k,a) + \delta v(g(k,a))\right\}.$$ You should interpret this as an identity involving the function $$v$$, which appears both on the left hand right hand side of the equation, so the function $$v(.)$$ is the unknown in this equation (which has to hold for all $$k$$).
Under some conditions it can be shown that, for every $$k$$, the value $$v(k)$$ of the first problem is equal to the value at $$k$$ of the function $$v$$ that satisfies the second equation.
In order to find this function $$v(.)$$ one can define the Bellman operator $$T$$: $$(Tv)(k) = \max_{a \in \Gamma(k)}\left\{F(k,a) + \delta v(g(k,a))\right\}.$$ The operator $$T$$ takes a function $$v$$ (on the right hand side) and produces a new function $$Tv$$.
Again under suitable conditions, one can show that this operator is a contraction mapping. So by iterating this function over and over again we will converge to the fixed point of this operator, which then also gives the solution to the Bellman equation.
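To see this iteration in action, here is a minimal value-function-iteration sketch for a toy problem. The functional forms are illustrative assumptions, not part of the question: log period payoff, Cobb-Douglas production $k^\alpha$, full depreciation, and a discretized capital grid.

```python
import numpy as np

beta, alpha = 0.95, 0.3                       # discount factor, capital share (assumed)
k_grid = np.linspace(0.05, 0.5, 200)          # state grid for capital

def bellman_operator(v):
    """(Tv)(k) = max_{k'} { log(k^alpha - k') + beta * v(k') } over the grid."""
    c = k_grid[:, None] ** alpha - k_grid[None, :]        # consumption for each (k, k')
    util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    return np.max(util + beta * v[None, :], axis=1)

v = np.zeros_like(k_grid)
for it in range(2000):
    v_new = bellman_operator(v)
    err = np.max(np.abs(v_new - v))           # sup-norm distance shrinks by factor beta
    v = v_new
    if err < 1e-8:                            # contraction mapping => convergence
        break
print(it, v[:3])                              # iterations used and a few values of v(k)
```

Because the Bellman operator is a contraction with modulus $\delta$ (here `beta`), the iterates converge to the unique fixed point regardless of the initial guess for $v$.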
1. How do I interpret the meaning of $$V$$? For example, when we decide an investment level for the next period $$k_{t+1}$$ given the existing value $$k_t$$, we can calculate a value $$v(k_t)$$. Is this value equal to the value obtained by solving the maximization problem from $$t$$ onward ($$t+1, t+2, \cdots$$)?
Yes by definition of the first problem, $$v(k_t)$$ gives the value of the infinite horizon maximisation problem when $$k_t$$ is the initial level of capital.
1. If the interpretation is correct, why does a fixed-point procedure yield such a maximized function?
This results from the equivalence between the first optimisation problem and the Bellman equation. The solution $$v$$ of the Bellman equation is a fixed point of the Bellman operator. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 37, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954750657081604, "perplexity": 233.11922009160602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00181.warc.gz"} |
https://www.physicsforums.com/threads/gyroscopic-action-on-earth-surface.138697/ | # Gyroscopic action on earth surface
1. Oct 16, 2006
### RandallB
What will gyro alignment be if ?
Establish a free-spinning spherical gyro similar to a GP-B gyro, but designed to run on the Earth's surface at the equator, supported by jets of air or something similar to minimize friction and not influence the spin once established.
Establish a spin axis aligned with the Sun at high noon and the center of the Earth.
Let run free for 6 hours.
Does the spin axis stay in line with the Sun, as it would were it in true orbital freefall?
Or does whatever we do to provide the required support (since it moves too slowly to be in orbit) cause the local influence of gravity to demand that the gyroscope use the center of the Earth as a reference, keep the axis pointed there, and turn it 90° out of alignment with the Sun?
Last edited: Oct 17, 2006
2. Oct 16, 2006
### empirical
Perhaps a gyrocompass might be relevant to your query.
3. Oct 16, 2006
### cesiumfrog
The effect GP-B aims to measure is negligibly small; no matter how you support your gyro, it should maintain its orientation with respect to the distant stars. Over six hours it should stay aligned to the Sun, not the Earth. (Hence, like the stars themselves, it could be used for navigation.)
4. Oct 17, 2006
### Garth
The gyroscope should experience the Geodetic Precession, which is a GR effect due to the curvature of space-time caused by the mass of the Earth, and Thomas Precession, which is a SR effect due to the Earth's surface supporting the gyro and therefore accelerating it relative to the freely falling inertial frame of reference.
There would also be a much smaller E-W Lense-Thirring or frame-dragging Precession, a GR effect caused by the spinning mass of the Earth dragging space-time and inertial compasses round with it.
Of course the gyro itself would have to be supported exactly through its centre of gravity otherwise it would suffer a much larger gyroscopic torque precession as the Earth's gravity tried to rotate it in the vertical direction.
The Thomas Precession named after Llewellyn Thomas, and discovered on Earth in 1988, is a correction to the spin-orbit interaction in Quantum Mechanics.
The GR effects can only be practically measured under the very sensitive conditions of the Gravity Probe B experiment in free fall and the results will be published April 2007. (we hope )
Garth
Last edited: Oct 17, 2006
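(An illustrative aside, not part of the original thread.) For a rough sense of the magnitudes Garth describes, the standard circular-orbit expressions — geodetic precession $\tfrac{3}{2}(GM)^{3/2}/(c^{2}r^{5/2})$ and the polar-orbit-averaged frame dragging $GJ/(2c^{2}r^{3})$ — land close to the Gravity Probe B predictions; the Earth parameters and orbital altitude below are rounded values:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
M, R_E = 5.972e24, 6.371e6
J = 0.3307 * M * R_E**2 * 7.292e-5         # Earth's spin angular momentum (kg m^2/s)
r = R_E + 642e3                             # GP-B orbital radius (~642 km altitude)

geodetic   = 1.5 * (G * M) ** 1.5 / (c**2 * r**2.5)   # rad/s, circular orbit
frame_drag = G * J / (2 * c**2 * r**3)                # rad/s, polar-orbit average

to_mas_per_yr = 3.156e7 * 180 / np.pi * 3600e3        # rad/s -> milliarcsec per year
print(geodetic * to_mas_per_yr)     # ~6.6e3 mas/yr (GP-B prediction: 6606 mas/yr)
print(frame_drag * to_mas_per_yr)   # ~40 mas/yr    (GP-B prediction: ~39 mas/yr)
```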
5. Oct 17, 2006
### HallsofIvy
Staff Emeritus
Sounds like it would behave very much like a Foucault pendulum.
6. Oct 17, 2006
### RandallB
It does ??
I read the responses as saying the gyro would turn, losing its alignment with the Earth-bound lab, as the gyro in our 6-hour experiment remains in line with the Sun and stars (continue the test for 6 months and it would lose alignment with the Sun to hold with the stars).
BUT a Foucault pendulum at the equator shows no such turn or movement. At least not in my view of a Foucault pendulum; maybe someone has a reference that says differently, but I doubt it.
Last edited: Oct 17, 2006
7. Oct 17, 2006
### RandallB
Garth
Although you did not say so directly, you are in agreement that the gyro would act as a three-dimensional gyrocompass, and as long as my OP requirement that the support not influence the spin significantly is met, it should easily show an ability to hold alignment with the Sun. Sounds like a fun little demonstration to actually build (need to work in a trip to the equator somewhere, somehow, a grant maybe :-)
Of course this is a very coarse measurement; comparing a result between 0° and 90° would not call for a great deal of precision to confirm a 3D gyrocompass working as expected.
But a few questions on the other levels of measurement you refer to:
Any Earth-bound lab would not be able to isolate the experiment well enough and long enough to actually measure geodetic precession or Lense-Thirring (frame-dragging) precession, hence the need for the Gravity Probe B experiment.
But is Thomas precession, as “discovered on Earth in 1988”, larger than the GR effects and actually measurable on Earth, or is it a QM calculation discovery too small to be measured in our environment? (I'll try to do some searches on it, but my guess is too small.)
Also isn’t the “much smaller E-W Lense-Thirring” actually a West to East Precession in the same direction as earth's rotation and the Geodetic Precession here?
And finally a question on the calculation of the expected Lense-Thirring effect being measured by GP-B: do you know if the GR formula for it requires taking into account the mass density distribution of the rotating mass (Earth)?
That is, would the expected Lense-Thirring on GP-B be different (smaller) if earth’s mass was concentrated in one-tenth the diameter or (larger) if earth was hollow with all the mass located in a thick dense surface shell?
Last edited: Oct 17, 2006 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751552104949951, "perplexity": 1123.0131033157988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00069-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://www.lmfdb.org/Variety/Abelian/Fq/1/5/a | # Properties
Label: 1.5.a
Base field: $\F_{5}$
Dimension: $1$
Ordinary: No
$p$-rank: $0$
Principally polarizable: Yes
Contains a Jacobian: Yes
## Invariants
Base field: $\F_{5}$
Dimension: $1$
L-polynomial: $1 + 5 x^{2}$
Frobenius angles: $\pm0.5$
Angle rank: $0$ (numerical)
Number field: $$\Q(\sqrt{-5})$$
Galois group: $C_2$
Jacobians: 2
This isogeny class is simple and geometrically simple.
## Newton polygon
This isogeny class is supersingular.
$p$-rank: $0$
Slopes: $[1/2, 1/2]$
## Point counts
This isogeny class contains the Jacobians of 2 curves, and hence is principally polarizable:
| $r$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| $A(\F_{q^r})$ | 6 | 36 | 126 | 576 | 3126 | 15876 | 78126 | 389376 | 1953126 | 9771876 |
| $r$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| $C(\F_{q^r})$ | 6 | 36 | 126 | 576 | 3126 | 15876 | 78126 | 389376 | 1953126 | 9771876 |
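As a quick consistency check (not part of the LMFDB page itself): the point counts follow from the L-polynomial $1 + 5x^2$, whose inverse roots are $\alpha, \bar\alpha = \pm i\sqrt{5}$, via $\#C(\F_{q^r}) = q^r + 1 - (\alpha^r + \bar\alpha^r)$; for a dimension-1 isogeny class this equals $\#A(\F_{q^r})$.

```python
# point counts from the Frobenius eigenvalues of 1 + 5x^2 over F_5
q = 5
alpha = complex(0, q ** 0.5)            # inverse roots of 1 + 5x^2 are +/- i*sqrt(5)
counts = [round(q**r + 1 - 2 * (alpha**r).real) for r in range(1, 11)]
print(counts)   # [6, 36, 126, 576, 3126, 15876, 78126, 389376, 1953126, 9771876]
```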
## Decomposition and endomorphism algebra
Endomorphism algebra over $\F_{5}$
The endomorphism algebra of this simple isogeny class is $$\Q(\sqrt{-5})$$.
Endomorphism algebra over $\overline{\F}_{5}$
The base change of $A$ to $\F_{5^{2}}$ is the simple isogeny class 1.25.k and its endomorphism algebra is the quaternion algebra over $$\Q$$ ramified at $5$ and $\infty$.
All geometric endomorphisms are defined over $\F_{5^{2}}$.
## Base change
This is a primitive isogeny class.
## Twists
This isogeny class has no twists. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.975500762462616, "perplexity": 1077.2320230968694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887046.62/warc/CC-MAIN-20200705055259-20200705085259-00074.warc.gz"} |
https://www.alibris.com/Dynamical-Zeta-Functions-for-Piecewise-Monotone-Maps-of-the-Interval-David-Ruelle/book/1846331 | # Dynamical Zeta Functions for Piecewise Monotone Maps of the Interval
## by David Ruelle
Consider a space $M$, a map $f:M\to M$, and a function $g:M \to {\mathbb C}$. The formal power series $\zeta (z) = \exp \sum ^\infty_{m=1} \frac {z^m}{m} \sum_{x \in \mathrm {Fix}\,f^m} \prod ^{m-1}_{k=0} g (f^kx)$ yields an example of a dynamical zeta function. Such functions have unexpected analytic properties and interesting relations to the theory of dynamical systems, statistical mechanics, and the spectral theory of certain operators (transfer operators). The first part of this monograph presents a general ...
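As a small illustration of the definition (not from the book): for the doubling map $f(x)=2x \bmod 1$ with weight $g\equiv 1$, the map $f^m$ has $2^m-1$ fixed points, so the series sums to $\zeta(z)=(1-z)/(1-2z)$. A few lines confirm this numerically:

```python
import math

# dynamical zeta function of the doubling map f(x) = 2x mod 1 with g = 1:
# #Fix(f^m) = 2^m - 1, so zeta(z) = exp(sum_m z^m (2^m - 1)/m) = (1 - z)/(1 - 2z)
z = 0.1
series = sum(z**m * (2**m - 1) / m for m in range(1, 60))
print(math.exp(series))          # 1.1249999...
print((1 - z) / (1 - 2 * z))     # 1.125 exactly
```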
2004, American Mathematical Society, Providence ISBN-13: 9780821836019 New edition Paperback 1994, American Mathematical Society(RI) ISBN-13: 9780821869918 Hardcover | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8141590356826782, "perplexity": 4601.019501799829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688966.39/warc/CC-MAIN-20170922130934-20170922150934-00521.warc.gz"} |
http://mathhelpforum.com/calculus/136559-numerical-analysis-method-undetermined-co-efficients.html | # Thread: Numerical Analysis - Method of Undetermined Co-efficients
1. ## Numerical Analysis - Method of Undetermined Co-efficients
Having a lot of trouble with this one, not even sure where to start:
Q3: Use the method of undetermined coefficients to prove that, for two given (distinct) points $\displaystyle x_0, x_1$ and any given values $\displaystyle y_0, y_1, z_0$ and $\displaystyle z_1$, there exists a unique polynomial p(x) of degree 3 such that:
$\displaystyle p(x_0) = y_0; \quad p(x_1) = y_1; \quad p'(x_0) = z_0; \quad p'(x_1) = z_1.$ [25 marks]
My lecturer has not specified whether or not MATLAB may be used. Is this question possible without it? If so, how do I go about it?
P.S - This is the correct forum for this, isn't it?
2. Originally Posted by MickQ
Having a lot of trouble with this one, not even sure where to start:
Q3: Use the method of undetermined coefficients to prove that, for two given (distinct) points $\displaystyle x_0, x_1$ and any given values $\displaystyle y_0, y_1, z_0$ and $\displaystyle z_1$, there exists a unique polynomial p(x) of degree 3 such that:
$\displaystyle p(x_0) = y_0; \quad p(x_1) = y_1; \quad p'(x_0) = z_0; \quad p'(x_1) = z_1.$ [25 marks]
My lecturer has not specified whether or not MATLAB may be used. Is this question possible without it? If so, how do I go about it?
P.S - This is the correct forum for this, isn't it?
The problem says "polynomial of degree 3" and "use the method of undetermined coefficients"- have you tried that?
Any polynomial of degree 3 is of the form $\displaystyle p(x)= ax^3+ bx^2+ cx+ d$ which has four "undetermined coefficients" and you are given four conditions. That will give you four linear equations to solve for a, b, c, and d. Since the problem only says "prove there exist", all you need to do is show that the four equations have a unique solution.
For example, "$\displaystyle p(x_0)= y_0$" means that $\displaystyle y_0= ax_0^3+ bx_0^2+ cx_0+ d$, "$\displaystyle p(x_1)= y_1$" means that $\displaystyle y_1= ax_1^3+ bx_1^2+ cx_1+ d$, "$\displaystyle p'(x_0)= z_0$" means that $\displaystyle z_0= 3ax_0^2+ 2bx_0+ c$, and "$\displaystyle p'(x_1)= z_1$" means that $\displaystyle z_1= 3ax_1+ 2bx_1+ c$.
Those are your four linear equations for a, b, c, and d. There are many different ways to show that such a set of equations has a unique solution and I don't know which you have learned. One might be to just go ahead and find the solution, in terms of $\displaystyle x_0, x_1, y_0, y_1, z_0$, and $\displaystyle z_1$, of course. Another, more sophisticated and probably simpler, would be to show that the coefficient matrix could be row reduce to the identity matrix. Still another would be to show that the determinant of the coefficient matrix is not 0.
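Not required by the question (which only asks for a proof), but here is a quick numerical illustration of the undetermined-coefficients system; the node and data values are arbitrary examples:

```python
import numpy as np

x0, x1 = 0.0, 2.0                          # arbitrary distinct nodes (example values)
y0, y1, z0, z1 = 1.0, -3.0, 0.5, 2.0       # arbitrary prescribed data

# rows: p(x0)=y0, p(x1)=y1, p'(x0)=z0, p'(x1)=z1 for p(x) = a x^3 + b x^2 + c x + d
M = np.array([[x0**3,   x0**2, x0,  1.0],
              [x1**3,   x1**2, x1,  1.0],
              [3*x0**2, 2*x0,  1.0, 0.0],
              [3*x1**2, 2*x1,  1.0, 0.0]])
a, b, c, d = np.linalg.solve(M, [y0, y1, z0, z1])

p  = lambda x: a*x**3 + b*x**2 + c*x + d
dp = lambda x: 3*a*x**2 + 2*b*x + c
print(p(x0), p(x1), dp(x0), dp(x1))        # recovers y0, y1, z0, z1
print(np.linalg.det(M))                    # nonzero whenever x0 != x1, so the solution is unique
```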
3. Just my opinion , i am not going to answer your question
Given that $\displaystyle p(x)$ is a polynomial of degree $\displaystyle 3$ so its derivative is a polynomial of degree $\displaystyle 2$
By Lagrange's Interpolation formula ,
but wait! Don't we need three points to construct a quadratic function? We only have two points, $\displaystyle p'(x_1) = z_1$ and $\displaystyle p'(x_2) = z_2$. Where is the third point?
Now , i let $\displaystyle x_3 =0 , p'(x_3) = p'(0) = t$ (undetermined)
and hope that neither $\displaystyle x_1$ nor $\displaystyle x_2$ is also zero . haha
Now , we have
$\displaystyle p'(x ) = \frac{x(x-x_2)}{x_1 (x_1 - x_2)} z_1 + \frac{x(x-x_1)}{x_2 (x_2 - x_1)} z_2 + \frac{(x-x_1)(x-x_2)}{x_1 x_2} t$
followed by integration and substitution $\displaystyle p(x_1) = y_1 ~,~ p(x_2) = y_2$ , we obtain two linear equations with two unknowns .
Therefore , we can confirm the two unknowns $\displaystyle t , C$ from them .
I guess even though one of $\displaystyle (x_1,x_2)$ is zero , it is still ok because i believe we could finally eliminate the denominators $\displaystyle x_1 , x_2$ .
4. I thought of another method in my dream last night , i'm still using Lagrange's formula .
$\displaystyle p(x_1) = y_1$
$\displaystyle p(x_2) = y_2$
$\displaystyle p'(x_1) = z_1$
$\displaystyle p'(x_2) = z_2$
Consider $\displaystyle p'(x) = \lim_{a\to 0 } \frac{ p(x+a) - p(x) }{a}$
so $\displaystyle p'(x_1) = z_1 = \lim_{a\to 0 } \frac{ p(x_1+a) - p(x_1) }{a}$
$\displaystyle y_1 + a z_1 = p(x_1 + a)$ and
$\displaystyle y_2 + a z_2 = p(x_2 + a)$ and the above
$\displaystyle y_1 = p(x_1)$
$\displaystyle y_2 = p(x_2 )$
Now , we have four points , so that a cubic polynomial could be easily obtained .
Consider
$\displaystyle \frac{ (x-x_2)(x-x_1-a) (x-x_2 -b)}{(x_1-x_2) (x_1-x_1-a) (x_1-x_2-b) } y_1 +$ $\displaystyle \frac{ (x-x_1)(x-x_2) (x-x_2 - b)}{(x_1 + a -x_1) (x_1 + a-x_2) (x_1 + a -x_2 - b ) } (y_1 + a z_1 )$ $\displaystyle a,b \to 0$
$\displaystyle = z_1 \frac{ (x-x_1)(x-x_2)^2 }{(x_1 - x_2)^2}$ $\displaystyle + (x-x_2-b)(x-x_2)(x-x_1)(x-x_1-a) \frac{y_1}{a}$ $\displaystyle ( \frac{1}{ [ (x_1 + a ) -x_2 ] [ (x_1 + a) - x_2 - b ] [ x - (x_1 + a) ] }$ $\displaystyle - \frac{1}{ [ (x_1 ) -x_2 ] [ (x_1 ) - x_2 - b ] [ x - (x_1 ) ] } )$
$\displaystyle = z_1 \frac{ (x-x_1)(x-x_2)^2 }{(x_1 - x_2)^2}$ $\displaystyle + y_1 (x-x_2-b)(x-x_2)(x-x_1)(x-x_1-a) \frac{ \partial }{ \partial t } \left [ \frac{1}{ (t-x_2)(t - x_2 - b)(x-t) } \right ] |_{t=x_1}$
$\displaystyle = z_1 \frac{ (x-x_1)(x-x_2)^2 }{(x_1 - x_2)^2}$ $\displaystyle + y_1 \frac{(x-x_2-b)(x-x_2)(x-x_1)(x-x_1-a) }{ (t-x_2)(t - x_2 - b)(x-t) } \left( \frac{1}{x-t} - \frac{1}{t - x_2} - \frac{1}{t - x_2 - b} \right ) |_{t=x_1}$
$\displaystyle = z_1 \frac{ (x-x_1)(x-x_2)^2 }{(x_1 - x_2)^2}$ $\displaystyle + y_1 \frac{ ( x-x_2)^2 (x-x_1)}{(x_1 - x_2)^2} \left( \frac{1}{ x - x_1} - \frac{2}{ x_1 - x_2 } \right )$
$\displaystyle = z_1 \frac{ (x-x_1)(x-x_2)^2 }{(x_1 - x_2)^2}$ $\displaystyle + y_1 \frac{ ( x-x_2)^2 }{(x_1 - x_2)^2} - 2 y_1 \frac{ ( x-x_2)^2 (x-x_1)}{(x_1 - x_2)^3}$
This is a part of $\displaystyle p(x)$ , the other part is similar to it but we have to exchange between $\displaystyle 1$ and $\displaystyle 2$.
$\displaystyle p(x) = z_1 \frac{ (x-x_1)(x-x_2)^2 }{(x_1 - x_2)^2}$ $\displaystyle + y_1 \frac{ ( x-x_2)^2 }{(x_1 - x_2)^2} - 2 y_1 \frac{ ( x-x_2)^2 (x-x_1)}{(x_1 - x_2)^3}$ $\displaystyle + z_2 \frac{ (x-x_2)(x-x_1)^2 }{(x_2 - x_1)^2}$ $\displaystyle + y_2 \frac{ ( x-x_1)^2 }{(x_2 - x_1)^2} - 2 y_2 \frac{ ( x-x_1)^2 (x-x_2)}{(x_2 - x_1)^3}$
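(A quick numerical check, not part of the original post.) Evaluating this closed form confirms that it reproduces the four prescribed values and slopes:

```python
# numerical check of the closed-form cubic above, with arbitrary example data
x1, x2 = 0.3, 1.7
y1, y2, z1, z2 = 2.0, -1.0, 0.5, 3.0

def p(x):
    d12, d21 = x1 - x2, x2 - x1
    return (z1*(x-x1)*(x-x2)**2/d12**2 + y1*(x-x2)**2/d12**2 - 2*y1*(x-x2)**2*(x-x1)/d12**3
          + z2*(x-x2)*(x-x1)**2/d21**2 + y2*(x-x1)**2/d21**2 - 2*y2*(x-x1)**2*(x-x2)/d21**3)

h = 1e-6
print(p(x1), p(x2))                                        # ~ y1, y2
print((p(x1+h)-p(x1-h))/(2*h), (p(x2+h)-p(x2-h))/(2*h))    # ~ z1, z2
```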
This method looks so complicated but it is not , the dizzy thing is just the typing ,but the idea is simple indeed ! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879843354225159, "perplexity": 175.0491475783298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863411.67/warc/CC-MAIN-20180620031000-20180620051000-00257.warc.gz"} |
https://quantumcomputing.stackexchange.com/questions/2140/simons-algorithm-probability-of-independence/2141 | # Simon's Algorithm Probability of Independence
In this pdf for the Simon's algorithm we need $$n-1$$ independent $$\mathbf y$$ such that: $$\mathbf y \cdot \mathbf s=0$$ to find $$\mathbf s$$. On page 6 of the pdf the author writes that the probability of getting $$n-1$$ independent values of $$\mathbf y$$ in $$n-1$$ attempts is: $$P_{\text{ind}}=(1-1/N(\mathbf s))(1-2/N(\mathbf s))\cdots (1-2^{n-1}/N(\mathbf s))\tag{1}$$ where $$N(\mathbf s)=2^{n-1}$$ if $$\mathbf s \ne 0$$ and $$2^n$$ if $$\mathbf s=0$$. Clearly then $$P_{\text{ind}}=0$$ for $$\mathbf{s}\ne 0$$ - which I believe to be wrong.
My question is, therefore: Is formula (1) wrong and if so what is the correct version. If it is not wrong how do we interpret $$P_{\text{ind}}=0$$ .
At first glance, the formula looks slightly wrong: the last term in the product should only be $(1-2^{n-2}/N(s))$, giving $$P_{ind}=\prod_{k=1}^{n-1}\left(1-\frac{2^{k-1}}{N(s)}\right)$$ overall. Thus every term is one half or larger.
My reasoning is as follows: you perform a measurement and get a random outcome. The first time you do this, it can be any outcome except the all 0 string. This happens with probability $1-1/N(s)$.
The second time, you want any string except the all zeros, or the answer you got last time, $y_1$. Thus, the term $1-2/N(s)$.
The third time, you want any string except the all zeros, $y_1$, $y_2$ or $y_1\oplus y_2$. Thus, the term $1-4/N(s)$.
Once you have $k-1$ linearly independent strings $y_1$ to $y_{k-1}$ and you're trying to find the $k^{th}$, there are $2^{k-1}$ answers you don't want to get: the $2^{k-1}$ answers that are linearly dependent on the strings you already have (note that this counting includes the all zeros string). You keep going until the last term, $k=n-1$, because you're trying to find $n-1$ linearly independent cases.
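For anyone who wants to sanity-check the counting (not part of the original answer), a short simulation reproduces the corrected product formula. It models each measurement as a uniform draw from the $2^{n-1}$ strings orthogonal to a fixed nonzero $\mathbf{s}$; the value of $n$ and the choice of $\mathbf{s}$ are arbitrary examples.

```python
import random

n = 6
s = 0b101001                       # any fixed nonzero secret string (example)
ys = [y for y in range(1 << n) if bin(y & s).count("1") % 2 == 0]   # y . s = 0 (mod 2)

def gf2_rank(rows):
    """Rank over GF(2) of a list of bit-vectors stored as ints."""
    rank, rows = 0, list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

trials = 100_000
hits = sum(gf2_rank(random.choices(ys, k=n - 1)) == n - 1 for _ in range(trials))

formula = 1.0
for k in range(1, n):
    formula *= 1 - 2 ** (k - 1) / 2 ** (n - 1)

print(hits / trials, formula)      # the two numbers should agree closely (~0.298 for n = 6)
```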
Incidentally, this is not the way that I would ever make the argument. Who cares about the probability of needing exactly $n-1$ calls? You can just keep repeating as many times as you need to in order to find $n-1$ linearly independent strings. Since we've already argued that the worst-case probability of finding a new linearly independent string is 1/2, this means that, on average, no more than $2(n-1)$ trials would be required (and actually somewhat less, because early on you're far more likely to get a hit). You could also apply a Chernoff bound to prove that the probability of needing significantly more runs than that is vanishingly small. OK, that's essentially where the solution gets to, it just feels a little excessive (to me). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258525967597961, "perplexity": 199.883354155895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00019.warc.gz"} |
https://testbook.com/question-answer/consider-an-lti-system-subjected-to-a-wide-sense-s--5f969157137d6198bcbcaf52 | # Consider an LTI system subjected to a wide-sense stationary input {x(n)}, which is a white noise sequence. The cross-correlation ϕx[m] between the input x(n), and output y(n) is:Where $${{\rm{\Phi }}_{xx}}\left[ m \right] = \sigma _x^2\delta \left[ m \right]$$ and h[⋅] is the impulse response.
This question was previously asked in
ESE Electronics 2015 Paper 1: Official Paper
1. $$\sigma _x^2h\left[ m \right]$$
2. $${\sigma _x}h\left[ m \right]$$
3. $$\frac{{\sigma _x^2}}{2}h\left[ m \right]$$
4. $$\frac{{{\sigma _x}}}{2}h\left[ m \right]$$
Option 1 : $$\sigma _x^2h\left[ m \right]$$
## Detailed Solution
Derivation:
The cross-correlation function between the input and output processes is given by:
Rxy (t1, t2) = E{X(t1) Y*(t2)}
$$= E\left\{ {X\left( {{t_1}} \right)\mathop \smallint \limits_{ - \infty }^\infty {X^*}\left( {{t_2} - \alpha } \right){h^*}\left( \alpha \right)d\alpha } \right\}$$
$$= \mathop \smallint \limits_{ - \infty }^\infty {R_{xx}}\left( {{t_1},{t_2} - \alpha } \right){h^*}\left( \alpha \right)d\alpha$$
= Rxx (t1, t2) * (h* (t2))
∴ The output cross-correlation between the input and the output is the convolution of the autocorrelation function of the input with the conjugate of the impulse response of the system.
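A quick numerical sanity check of this relation (a sketch, not part of the original solution; the impulse response used is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
sigma_x = 1.5
x = rng.normal(0.0, sigma_x, N)          # white noise input with variance sigma_x^2
h = np.array([0.5, 1.0, -0.3, 0.2])      # arbitrary example impulse response
y = np.convolve(x, h)[:N]                # LTI system output

# estimate phi_xy[m] = E{ x[n] y[n+m] } for m = 0 .. len(h)-1
est = [np.mean(x[:N - m] * y[m:N]) for m in range(len(h))]
print(np.round(est, 3))
print(np.round(sigma_x**2 * h, 3))       # should match: sigma_x^2 * h[m]
```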
Analysis:
Given the autocorrelation of the input wide-sense stationary signal as:
$${\phi _{xx}}\left[ m \right] = \sigma _x^2\delta \left( m \right)$$
The cross-correlation will be:
ϕx[m] = ϕxx [m] * h[⋅]
$$= \sigma _x^2\delta \left( m \right) \otimes h\left[ \cdot \right]$$
Since x(n) * δ(n) = x(n)
$${\phi _x}\left( m \right) = \sigma _x^2h\left( m \right)$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202340006828308, "perplexity": 4058.1606524111717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358118.13/warc/CC-MAIN-20211127043716-20211127073716-00224.warc.gz"} |
https://link.springer.com/chapter/10.1007%2F978-3-319-19264-2_20?error=cookies_not_supported&code=8922762a-6a8e-40f1-ab5e-b4b661cf67ef | MCPR 2015: Pattern Recognition pp 203-213
Sampled Weighted Min-Hashing for Large-Scale Topic Mining
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9116)
Abstract
We present Sampled Weighted Min-Hashing (SWMH), a randomized approach to automatically mine topics from large-scale corpora. SWMH generates multiple random partitions of the corpus vocabulary based on term co-occurrence and agglomerates highly overlapping inter-partition cells to produce the mined topics. While other approaches define a topic as a probabilistic distribution over a vocabulary, SWMH topics are ordered subsets of such vocabulary. Interestingly, the topics mined by SWMH underlie themes from the corpus at different levels of granularity. We extensively evaluate the meaningfulness of the mined topics both qualitatively and quantitatively on the NIPS (1.7 K documents), 20 Newsgroups (20 K), Reuters (800 K) and Wikipedia (4 M) corpora. Additionally, we compare the quality of SWMH with Online LDA topics for document representation in classification.
Keywords
Large-scale topic mining · Min-Hashing · Co-occurring terms
1 Introduction
The automatic extraction of topics has become very important in recent years since they provide a meaningful way to organize, browse and represent large-scale collections of documents. Among the most successful approaches to topic discovery are directed topic models such as Latent Dirichlet Allocation (LDA) [1] and Hierarchical Dirichlet Processes (HDP) [15] which are Directed Graphical Models with latent topic variables. More recently, undirected graphical models have been also applied to topic modeling, (e.g., Boltzmann Machines [12, 13] and Neural Autoregressive Distribution Estimators [9]). The topics generated by both directed and undirected models have been shown to underlie the thematic structure of a text corpus. These topics are defined as distributions over terms of a vocabulary and documents in turn as distributions over topics. Traditionally, inference in topic models has not scale well to large corpora, however, more efficient strategies have been proposed to overcome this problem (e.g., Online LDA [8] and stochastic variational inference [10]). Undirected Topic Models can be also trained efficient using approximate strategies such as Contrastive Divergence [7].
In this work, we explore the mining of topics based on term co-occurrence. The underlying intuition is that terms consistently co-occurring in the same documents are likely to belong to the same topic. The resulting topics correspond to ordered subsets of the vocabulary rather than distributions over such a vocabulary. Since finding co-occurring terms is a combinatorial problem that lies in a large search space, we propose Sampled Weighted Min-Hashing (SWMH), an extended version of Sampled Min-Hashing (SMH) [6]. SMH partitions the vocabulary into sets of highly co-occurring terms by applying Min-Hashing [2] to the inverted file entries of the corpus. The basic idea of Min-Hashing is to generate random partitions of the space so that sets with high Jaccard similarity are more likely to lie in the same partition cell.
One limitation of SMH is that the generated random partitions are drawn from uniform distributions. This setting is not ideal for information retrieval applications where weighting have a positive impact on the quality of the retrieved documents [3, 14]. For this reason, we extend SMH by allowing weights in the mining process which effectively extends the uniform distribution to a distribution based on weights. We demonstrate the validity and scalability of the proposed approach by mining topics in the NIPS, 20 Newsgroups, Reuters and Wikipedia corpora which range from small (a thousand of documents) to large scale (millions of documents). Table 1 presents some examples of mined topics and their sizes. Interestingly, SWMH can mine meaningful topics of different levels of granularity.
Table 1.
SWMH topic examples.
| Corpus | Example topic (size) |
|---|---|
| NIPS | introduction, references, shown, figure, abstract, shows, back, left, process, $$\ldots$$ (51) |
| NIPS | chip, fabricated, cmos, vlsi, chips, voltage, capacitor, digital, inherent, $$\ldots$$ (42) |
| NIPS | spiking, spikes, spike, firing, cell, neuron, reproduces, episodes, cellular, $$\ldots$$ (17) |
| 20 Newsgroups | algorithm communications clipper encryption chip key |
| 20 Newsgroups | lakers, athletics, alphabetical, pdp, rams, pct, mariners, clippers, $$\ldots$$ (37) |
| 20 Newsgroups | embryo, embryos, infertility, ivfet, safetybelt, gonorrhea, dhhs, $$\ldots$$ (37) |
| Reuters | prior, quarterly, record, pay, amount, latest, oct |
| Reuters | precious, platinum, ounce, silver, metals, gold |
| Reuters | udinese, reggiana, piacenza, verona, cagliari, atalanta, perugia, $$\ldots$$ (64) |
| Wikipedia | median, householder, capita, couples, racial, makeup, residing, $$\ldots$$ (54) |
| Wikipedia | decepticons’, galvatron’s, autobots’, botcon, starscream’s, rodimus, galvatron |
| Wikipedia | avg, strikeouts, pitchers, rbi, batters, pos, starters, pitched, hr, batting, $$\ldots$$ (21) |
The remainder of the paper is organized as follows. Section 2 reviews the Min-Hashing scheme for pairwise set similarity search. The proposed approach for topic mining by SWMH is described in Sect. 3. Section 4 reports the experimental evaluation of SWMH as well as a comparison against Online LDA. Finally, Sect. 5 concludes the paper with some discussion and future work.
2 Min-Hashing for Pairwise Similarity Search
Min-Hashing is a randomized algorithm for efficient pairwise set similarity search (see Algorithm 1). The basic idea is to define MinHash functions h with the property that the probability of any two sets $$A_1, A_2$$ having the same MinHash value is equal to their Jaccard Similarity, i.e.,
\begin{aligned} P[h(A_1) = h(A_2)] = \frac{\mid A_1 \cap A_2 \mid }{\mid A_1 \cup A_2 \mid } \in [0, 1]. \end{aligned}
(1)
Each MinHash function h is realized by generating a random permutation $$\pi$$ of all the elements and assigning the first element of a set on the permutation as its MinHash value. The rationale behind Min-Hashing is that similar sets will have a high probability of taking the same MinHash value whereas dissimilar sets will have a low probability. To cope with random fluctuations, multiple MinHash values are computed for each set from independent random permutations. Remarkably, it has been shown that the portion of identical MinHash values between two sets is an unbiased estimator of their Jaccard similarity [2].
Taking into account the above properties, in Min-Hashing similar sets are retrieved by grouping l tuples $$g_1, \ldots , g_l$$ of r different MinHash values as follows
\begin{aligned} \begin{array}{l} g_1(A_1) = (h_1(A_1), h_2(A_1), \ldots , h_r(A_1))\\ g_2(A_1) = (h_{r+1}(A_1), h_{r+2}(A_1), \ldots , h_{2\cdot r}(A_1))\\ \cdots \\ g_l(A_1) = (h_{(l-1)\cdot r+1}(A_1), h_{(l-1)\cdot r+2}(A_1), \ldots , h_{l\cdot r}(A_1)) \end{array}, \end{aligned}
where $$h_j(A_1)$$ is the j-th MinHash value. Thus, l different hash tables are constructed and two sets $$A_1, A_2$$ are stored in the same hash bucket on the k-th hash table if $$g_k(A_1) = g_k(A_2), k = 1, \ldots , l$$. Because similar sets are expected to agree in several MinHash values, they will be stored in the same hash bucket with high probability. In contrast, dissimilar sets will seldom have the same MinHash value and therefore the probability that they have an identical tuple will be low. More precisely, the probability that two sets $$A_1,A_2$$ agree in the r MinHash values of a given tuple $$g_k$$ is $$P[g_k(A_1) = g_k(A_2)] = sim(A_1, A_2)^r$$. Therefore, the probability that two sets $$A_1, A_2$$ have at least one identical tuple is $$P_{collision}[A_1, A_2] = 1-(1-sim(A_1, A_2)^r)^l$$.
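To make the grouping step concrete, here is a minimal Julia sketch of how the l hash tables could be built (our illustration, not the authors' implementation; the helper names are made up):

```julia
using Random

# One MinHash value per permutation, l*r values in total.
minhash_signature(A::Set{Int}, perms) = [minimum(p[a] for a in A) for p in perms]

function lsh_tables(sets::Vector{Set{Int}}, U::Int; r::Int = 3, l::Int = 10)
    perms  = [randperm(U) for _ in 1:l*r]
    tables = [Dict{Vector{Int}, Vector{Int}}() for _ in 1:l]
    for (i, A) in enumerate(sets)
        sig = minhash_signature(A, perms)
        for k in 1:l
            key = sig[(k - 1) * r + 1 : k * r]      # the tuple g_k(A)
            push!(get!(tables[k], key, Int[]), i)   # bucket set i under g_k
        end
    end
    return tables   # sets sharing a bucket in any table are candidate similar pairs
end
```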
The original Min-Hashing scheme was extended by Chum et al. [5] to weighted set similarity, defined as
\begin{aligned} sim_{hist}(H_1, H_2) = \frac{\sum _i w_i \min (H_1^i, H_2^i)}{\sum _i w_i \max (H_1^i, H_2^i)} \in [0, 1], \end{aligned}
(2)
where $$H_1^i, H_2^i$$ are the frequencies of the i-th element in the histograms $$H_1$$ and $$H_2$$ respectively and $$w_i$$ is the weight of the element. In this scheme, instead of generating random permutations drawn from a uniform distribution, the permutations are drawn from a distribution based on element weights. This extension allows the use of popular document representations based on weighting schemes such as tf-idf and has been applied to image retrieval [5] and clustering [4].
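As a toy illustration of Eq. 2 (added here; `weighted_similarity` is our own helper, not part of [5]), consider two term-frequency histograms over a four-term vocabulary with arbitrary term weights:

```julia
# Weighted set (histogram) similarity of Eq. 2.
function weighted_similarity(H1, H2, w)
    num = sum(w .* min.(H1, H2))
    den = sum(w .* max.(H1, H2))
    return den == 0 ? 0.0 : num / den
end

H1 = [3, 0, 2, 1]            # term frequencies in histogram H1
H2 = [1, 1, 2, 0]            # term frequencies in histogram H2
w  = [0.5, 2.0, 1.0, 1.5]    # per-term weights (e.g. idf-like)
weighted_similarity(H1, H2, w)    # = 2.5 / 7.0 ≈ 0.36
```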
3 Sampled Min-Hashing for Topic Mining
Min-Hashing has been used in document and image retrieval and classification, where documents and images are represented as bags of words. Recently, it was also successfully applied to retrieving co-occurring terms by hashing the inverted file lists instead of the documents [5, 6]. In particular, Fuentes-Pineda et al. [6] proposed Sampled Min-Hashing (SMH), a simple strategy based on Min-Hashing to discover objects from large-scale image collections. In the following, we briefly describe SMH using the notation of terms, topics and documents, although it can be generalized to any type of dyadic data. The underlying idea of SMH is to mine groups of terms with high Jaccard Co-occurrence Coefficient (JCC), i.e.,
\begin{aligned} JCC(T_1, \ldots , T_k) = \frac{\vert T_1 \cap T_2 \cap \cdots \cap T_k \vert }{\vert T_1 \cup T_2 \cup \cdots \cup T_k \vert }, \end{aligned}
(3)
where the numerator corresponds to the number of documents in which terms $$T_1, \ldots , T_k$$ co-occur and the denominator is the number of documents with at least one of the k terms. Thus, Eq. 1 can be extended to multiple co-occurring terms as
\begin{aligned} P[h(T_1) = h(T_2) \ldots = h(T_k)] = JCC(T_1, \ldots , T_k). \end{aligned}
(4)
From Eqs. 3 and 4, it is clear that the probability that all terms $$T_1, \ldots , T_k$$ have the same MinHash value depends on how correlated their occurrences are: the more correlated they are, the higher the probability of taking the same MinHash value. This implies that terms consistently co-occurring in many documents will have a high probability of taking the same MinHash value.
In the same way as in pairwise Min-Hashing, l tuples of r MinHash values are computed, and each group of terms with an identical tuple becomes a co-occurring term set. By choosing r and l properly, the probability that a group of k terms has an identical tuple approximates a unit step function such that
\begin{aligned} P_{collision}[T_1, \ldots , T_k] \approx {\left\{ \begin{array}{ll} 1 &{} \text{ if } JCC(T_1, \ldots , T_k) \ge s* \\ 0 &{} \text{ if } JCC(T_1, \ldots , T_k) < s* \end{array}\right. }, \end{aligned}
Here, the selection of r and l is a trade-off between precision and recall. Given $$s*$$ and r, we can determine l by setting $$P_{collision}[T_1, \ldots ,T_k]$$ to 0.5, which gives
\begin{aligned} l = \frac{\log (0.5)}{\log (1 - s*^r)}. \end{aligned}
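As a quick numerical check (added here for illustration), plugging into this expression the parameter values used later in Sect. 4.1 reproduces the reported table sizes:

```julia
# l = log(0.5) / log(1 - s*^r), rounded to the nearest integer.
number_of_tables(s, r) = round(Int, log(0.5) / log(1 - s^r))

for (s, r) in [(0.15, 3), (0.13, 3), (0.10, 3), (0.15, 4), (0.13, 4), (0.10, 4)]
    println("s* = $s, r = $r  ->  l = ", number_of_tables(s, r))
end
# -> 205, 315, 693, 1369, 2427 and 6931, the table sizes quoted in Sect. 4.1.
```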
In SMH, each hash table can be seen as a random partitioning of the vocabulary into disjoint groups of highly co-occurring terms, as illustrated in Fig. 1. Different partitions are generated and groups of discriminative and stable terms belonging to the same topic are expected to lie on overlapping inter-partition cells. Therefore, we cluster co-occurring term sets that share many terms in an agglomerative manner. We measure the proportion of terms shared between two co-occurring term sets $$C_1$$ and $$C_2$$ by their overlap coefficient, namely
\begin{aligned} ovr(C_1, C_2) = \frac{\mid C_1 \cap C_2 \mid }{\min (\mid C_1 \mid , \mid C_2\mid )} \in [0, 1]. \end{aligned}
Since a pair of co-occurring term sets with high Jaccard similarity will also have a large overlap coefficient, finding pairs of co-occurring term sets can be speeded up by using Min-Hashing, thus avoiding the overhead of computing the overlap coefficient between all the pairs of co-occurring term sets.
The clustering stage merges chains of co-occurring term sets with high overlap coefficient into the same topic. As a result, co-occurring term sets associated with the same topic can belong to the same cluster even if they do not share terms with one another, as long as they are members of the same chain. In general, the generated clusters have the property that for any co-occurring term set, there exists at least one co-occurring term set in the same cluster with which it has an overlap coefficient greater than a given threshold $$\epsilon$$.
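One simple way to implement this merging step is sketched below (our illustration; the actual SMH implementation may differ, for instance by using Min-Hashing to find candidate pairs as noted above):

```julia
# Overlap coefficient between two co-occurring term sets.
overlap(C1::Set, C2::Set) = length(intersect(C1, C2)) / min(length(C1), length(C2))

# Chain-style agglomeration: a term set joins (and merges) every cluster that
# already contains a term set it overlaps with by more than eps.
function cluster_term_sets(term_sets::Vector{<:Set}; eps = 0.7)
    clusters = Vector{Vector{Int}}()          # each cluster stores indices into term_sets
    for (i, C) in enumerate(term_sets)
        hits = findall(cl -> any(overlap(C, term_sets[j]) > eps for j in cl), clusters)
        if isempty(hits)
            push!(clusters, [i])
        else
            merged = vcat(clusters[hits]..., [i])
            deleteat!(clusters, hits)
            push!(clusters, merged)
        end
    end
    return clusters
end
```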
We explore the use of SMH to mine topics from documents but we judge term co-occurrence by the Weighted Co-occurrence Coefficient (WCC), defined as
\begin{aligned} WCC (T_1, \ldots , T_k) = \frac{\sum _i w_i \min {(T_1^i, \cdots , T_k^i)}}{\sum _i w_i \max {(T_1^i, \cdots , T_k^i )}} \in [0, 1], \end{aligned}
(5)
where $$T_1^i, \cdots , T_k^i$$ are the frequencies with which terms $$T_1, \ldots , T_k$$ occur in the i-th document and the weight $$w_i$$ is given by the inverse of the size of the i-th document. We exploit the extended Min-Hashing scheme by Chum et al. [5] to efficiently find such co-occurring terms. We call this topic mining strategy Sampled Weighted Min-Hashing (SWMH) and summarize it in Algorithm 2.
4 Experimental Results
In this section, we evaluate different aspects of the mined topics. First, we present a comparison between the topics mined by SWMH and SMH. Second, we evaluate the scalability of the proposed approach. Third, we use the mined topics to perform document classification. Finally, we compare SWMH topics with Online LDA topics.
The corpora used in our experiments were: NIPS, 20 Newsgroups, Reuters and Wikipedia (see footnote 1). NIPS is a small collection of articles (3,649 documents), 20 Newsgroups is a larger collection of mail newsgroups (34,891 documents), Reuters is a medium-size collection of news (137,589 documents) and Wikipedia is a large-scale collection of encyclopedia articles (1,265,756 documents) (see footnote 2).
All the experiments presented in this work were performed on an Intel(R) Xeon(R) 2.66 GHz workstation with 8 GB of memory and with 8 processors. However, we would like to point out that the current version of the code is not parallelized, so we did not take advantage of the multiple processors.
4.1 Comparison Between SMH and SWMH
For these experiments, we used the NIPS and Reuters corpora and different values of the parameters $$s*$$ and r, which define the number of MinHash tables. We set the similarity parameter ($$s*$$) to 0.15, 0.13 and 0.10 and the tuple size (r) to 3 and 4. These parameters rendered the following table sizes: 205, 315, 693, 1369, 2427 and 6931. Figure 2 shows the effect of weighting on the number of mined topics. First, notice the breaking point in both figures when passing from 1369 to 2427 tables. This effect corresponds to resetting $$s*$$ to 0.10 when changing r from 3 to 4. Lower values of $$s*$$ are stricter, and therefore fewer topics are mined. Figure 2 also shows that the number of mined topics is significantly reduced by SWMH, since colliding terms must not only appear in similar documents but also in similar proportions. The effect of using SWMH is also noticeable in the number of terms that compose a topic. The maximum reduction reached in NIPS was $$73\,\%$$, while in Reuters it was $$45\,\%$$.
4.2 Scalability Evaluation
To test the scalability of SWMH, we measured the time and memory required to mine topics in the Reuters corpus while increasing the number of documents to be analyzed. In particular, we performed 10 experiments with SWMH, each increasing the number of documents by 10% (see footnote 3). Figure 3 illustrates the time taken to mine topics as we increase the number of documents and as we increase an index of complexity given by a combination of the size of the vocabulary and the average number of times a term appears in a document. As can be noticed, in both cases the time grows almost linearly and is in the thousands of seconds.
The mining times for the corpora were: NIPS, 43 s; 20 Newsgroups, 70 s; Reuters, 4,446 s; and Wikipedia, 45,834 s. These times contrast with the time required by Online LDA to model 100 topics (see footnote 4): NIPS, 60 s; 20 Newsgroups, 154 s; and Reuters, 25,997 s. Additionally, we set Online LDA to model 400 topics with the Reuters corpus, and it took 3 days. Memory figures follow a similar behavior to the time figures. Maximum memory: NIPS, 141 MB; 20 Newsgroups, 164 MB; Reuters, 530 MB; and Wikipedia, 1,500 MB.
Table 2.
Document classification for 20 Newsgroups corpus.
Model      | Topics | Accuracy | Avg. score
205        | 3394   | 59.9     | 60.6
319        | 4427   | 61.2     | 64.3
693        | 6090   | 68.9     | 70.7
1693       | 2868   | 53.1     | 55.8
2427       | 3687   | 56.2     | 60.0
6963       | 5510   | 64.1     | 66.4
Online LDA | 100    | 59.2     | 60.0
Online LDA | 400    | 65.4     | 65.9
4.3 Document Classification
In this evaluation we used the mined topics to create a document representation based on the similarity between topics and documents. This representation was used to train an SVM classifier to predict the class of the document. In particular, we focused on the 20 Newsgroups corpus for this experiment. We used the typical setting of this corpus for document classification ($$60\,\%$$ training, $$40\,\%$$ testing). Table 2 shows the performance for different variants of topics mined by SWMH and for Online LDA topics. The results illustrate that the number of topics is relevant for the task: Online LDA with 400 topics is better than with 100 topics. A similar behavior can be noticed for SWMH; however, the parameter r also has an effect on the content of the topics and therefore on the performance.
4.4 Comparison Between Mined and Modeled Topics
In this evaluation we compare the quality of the topics mined by SWMH against Online LDA topics for the 20 Newsgroups and Reuters corpora. For this we measure topic coherence, which is defined as
$$C(t) = \sum \limits _{m=2}^{M} \sum \limits _{l=1}^{m-1} \log \frac{D(v_m, v_l)}{D(v_l)},$$
where $$D(v_l)$$ is the document frequency of the term $$v_l$$, and $$D(v_m, v_l)$$ is the co-document frequency of the terms $$v_m$$ and $$v_l$$ [11]. This metric depends on the first M elements of the topics. For our evaluations we fixed M to 10. However, we remark that the comparison is not direct since both the SWMH and Online LDA topics are different in nature: SWMH topics are subsets of the vocabulary with uniform distributions while Online LDA topics are distributions over the complete vocabulary. In addition, Online LDA generates a fixed number of topics which is in the hundreds while SWMH produces thousands of topics. For the comparison we chose the n-best mined topics by ranking them using an ad hoc metric involving the co-occurrence of the first element of the topic. For the purpose of the evaluation we limited the SWMH to the 500 best ranked topics. Figure 4 shows the coherence for each corpus. In general, we can see a difference in the shape and quality of the coherence box plots. However, we notice that SWMH produces a considerable amount of outliers, which calls for further research in the ranking of the mined topics and their relation with the coherence.
5 Discussion and Future Work
In this work we presented a large-scale approach to automatically mine topics in a given corpus based on Sampled Weighted Min-Hashing. The mined topics consist of subsets of highly correlated terms from the vocabulary. The proposed approach is able to mine topics in corpora which go from the thousands of documents (1 min approx.) to the millions of documents (7 h approx.), including topics similar to the ones produced by Online LDA. We found that the mined topics can be used to represent a document for classification. We also showed that the complexity of the proposed approach grows linearly with the amount of documents. Interestingly, some of the topics mined by SWMH are related to the structure of the documents (e.g., in NIPS the words in the first topic correspond to parts of an article) and others to specific groups (e.g., team sports in 20 Newsgroups and Reuters, or the Transformers universe in Wikipedia). These examples suggest that SWMH is able to generate topics at different levels of granularity.
Further work has to be done to make sense of overly specific topics or to filter them out. In this direction, we found that weighting the terms has the effect of discarding several irrelevant topics and producing more compact ones. Another alternative is to restrict the vocabulary to the most frequent terms, as done by other approaches. Other interesting future work includes exploring other weighting schemes, finding a better representation of documents from the mined topics, and parallelizing SWMH.
Footnotes
1. Wikipedia dump from 2013-09-04.
2. All corpora were preprocessed to cut off terms that appeared less than 6 times in the whole corpus.
3. The parameters were fixed to $$s*=0.1$$, $$r=3$$, and overlap threshold of 0.7.
4. https://github.com/qpleple/online-lda-vb was adapted to use our file formats.
References
1. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
2. Broder, A.Z.: On the resemblance and containment of documents. Comput. 33(11), 46–53 (2000)
3. Buckley, C.: The importance of proper weighting methods. In: Proceedings of the Workshop on Human Language Technology, pp. 349–352 (1993)
4. Chum, O., Matas, J.: Large-scale discovery of spatially related images. IEEE Trans. Pattern Anal. Mach. Intell. 32, 371–377 (2010)
5. Chum, O., Philbin, J., Zisserman, A.: Near duplicate image detection: min-hash and tf-idf weighting. In: Proceedings of the British Machine Vision Conference (2008)
6. Fuentes Pineda, G., Koga, H., Watanabe, T.: Scalable object discovery: a hash-based approach to clustering co-occurring visual words. IEICE Trans. Inf. Syst. E94-D(10), 2024–2035 (2011)
7. Hinton, G.E.: Training products of experts by minimizing contrastive divergence. Neural Comput. 14(8), 1771–1800 (2002)
8. Hoffman, M.D., Blei, D.M., Bach, F.: Online learning for latent Dirichlet allocation. In: Advances in Neural Information Processing Systems 23 (2010)
9. Larochelle, H., Stanislas, L.: A neural autoregressive topic model. In: Advances in Neural Information Processing Systems 25, pp. 2717–2725 (2012)
10. Mimno, D., Hoffman, M.D., Blei, D.M.: Sparse stochastic inference for latent Dirichlet allocation. In: International Conference on Machine Learning (2012)
11. Mimno, D., Wallach, H.M., Talley, E., Leenders, M., McCallum, A.: Optimizing semantic coherence in topic models. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 262–272. ACL (2011)
12. Salakhutdinov, R., Srivastava, N., Hinton, G.: Modeling documents with a deep Boltzmann machine. In: Proceedings of the Conference on Uncertainty in Artificial Intelligence (2013)
13. Salakhutdinov, R., Hinton, G.E.: Replicated softmax: an undirected topic model. In: Advances in Neural Information Processing Systems 22, pp. 1607–1614 (2009)
14. Salton, G., Buckley, C.: Term-weighting approaches in automatic text retrieval. Inf. Process. Manage. 24(5), 512–523 (1988)
15. Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Hierarchical Dirichlet processes. J. Am. Stat. Assoc. 101, 1566–1581 (2004)
https://www.physicsforums.com/threads/help-me-please.54640/
1. Nov 29, 2004
ziddy83
Hi, I am having trouble figuring out how to tackle this problem, it would be great if anyone can help!
Here is the problem...
The three containers shown below (sorry, i can only describe them and not draw them here, i'll describe em at the bottom) are all initially empty. Water is simultaneously poured into the containers at the constant rate of 20 cm^3/sec. Water leaks out of a hole in the bottom of the second container at a constant rate. At some point in time, the water levels in the three containers are all rising at the same rate. At what rate is the water leaking out of the second container?
Container 1: A cylinder with radius of 10cm.
Container 2: Cone shape with 60 degrees point
Container 3: Another cone with a 90 degrees point.
It would be awesome if i could get any help!! THANKS!!
2. Nov 29, 2004
Gokul43201
Staff Emeritus
Hint : Write V = f(h), (where V : volume, h : height) for all three containers. Find the time at which the levels rise at the same rate by comparing 1 and 3. From this determine the leak rate of 2.
3. Nov 29, 2004
ziddy83
can you give me another hint?
4. Nov 30, 2004
ziddy83
Ok, do I set this up by using the volume equation, V = (1/3)*pi*r^2*h......
this is also given in the problem..
$$\frac{dV_3}{dt} = 20, \quad V_3 = 20t, \quad \text{and since } \frac{dV_2}{dt} = r_{in} - r_{out} = 20 - r_{out}, \quad V_2 = (20 - r_{out})\,t$$
sorry i didnt include this before...but yeah, another push in the right direction would be great, thanks.
5. Dec 4, 2004
ziddy83
can anyone help me on this problem?
6. Dec 6, 2004
Gokul43201
Staff Emeritus
$$V_1 = \pi R^2 h_1 => \frac{dV_1}{dt} = \pi R^2 \frac{dh_1}{dt} = 20$$
$$=> \frac{dh_1}{dt} = \frac {20}{100 \pi} = \frac{1}{5 \pi}$$
$$V_3 = \frac{1}{3} \pi r_3^2 h_3$$
$$\text{But } \frac{r_3}{h_3} = \tan 45^\circ = 1 \;\Rightarrow\; V_3 = \frac{1}{3} \pi h_3^3$$
$$\text{So } \frac{dV_3}{dt} = \pi h_3^2 \frac{dh_3}{dt} = 20$$
$$\Rightarrow \frac{dh_3}{dt} = \frac{20}{\pi h_3^2}$$
$$But~at~some~t,~ \frac{dh_3}{dt} = \frac{dh_1}{dt} = \frac{1}{5 \pi}$$
From this, you can get h3, and from that, V3. Dividing V3 by 20 gives you the time in seconds. Can you take it from there?
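One way to finish from these hints (this step is not part of the original thread; it takes the quoted angles to be the full apex angles of the cones, so that tan 45° = 1 for container 3 and tan 30° = 1/√3 for container 2):

$$\frac{20}{\pi h_3^2} = \frac{1}{5\pi} \Rightarrow h_3 = 10~\mathrm{cm}, \qquad V_3 = \frac{\pi}{3} h_3^3 \approx 1047~\mathrm{cm^3}, \qquad t = \frac{V_3}{20} = \frac{50\pi}{3} \approx 52.4~\mathrm{s}$$

For container 2, $$V_2 = \frac{\pi}{9} h_2^3$$, so $$\frac{dV_2}{dt} = \frac{\pi}{3} h_2^2 \frac{dh_2}{dt} = 20 - r_{out}$$, which is constant. At the common time, $$\frac{dh_2}{dt} = \frac{1}{5\pi}$$, giving $$20 - r_{out} = \frac{h_2^2}{15}$$; dividing $$V_2 = (20 - r_{out})\,t$$ by this rate gives $$h_2 = \frac{3t}{5\pi} = 10~\mathrm{cm}$$, hence $$r_{out} = 20 - \frac{100}{15} = \frac{40}{3} \approx 13.3~\mathrm{cm^3/s}$$.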
https://infoscience.epfl.ch/record/190426
Journal article
# From electrostatics to almost optimal nodal sets for polynomial interpolation in a simplex
The electrostatic interpretation of the Jacobi-Gauss quadrature points is exploited to obtain interpolation points suitable for approximation of smooth functions defined on a simplex. Moreover, several new estimates, based on extensive numerical studies, for approximation along the line using Jacobi-Gauss-Lobatto quadrature points as the nodal sets are presented. The electrostatic analogy is extended to the two-dimensional case, with the emphasis being on nodal sets inside a triangle for which two very good matrices of nodal sets are presented. The matrices are evaluated by computing the Lebesgue constants and they share the property that the nodes along the edges of the simplex are the Gauss-Lobatto quadrature points of the Chebyshev and Legendre polynomials, respectively. This makes the resulting nodal sets particularly well suited for integration with conventional spectral methods and supplies a new nodal basis for h - p finite element methods.
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=99-486
99-486 J.Bricmont, A.Kupiainen, R.Lefevere
Probabilistic estimates for the Two Dimensional Stochastic Navier-Stokes Equations (36K, LATeX 2e) Dec 22, 99
Abstract. We consider the Navier-Stokes equation on a two dimensional torus with a random force, white noise in time and analytic in space, for arbitrary Reynolds number $R$. We prove probabilistic estimates for the long time behaviour of the solutions that imply bounds for the dissipation scale and energy spectrum as $R\to\infty$.
Files: 99-486.tex
https://www.sarthaks.com/103212/a-vessel-is-in-the-form-of-a-hollow-hemisphere-mounted-by-a-hollow-cylinder
# A vessel is in the form of a hollow hemisphere mounted by a hollow cylinder.
A vessel is in the form of a hollow hemisphere mounted by a hollow cylinder. The diameter of the hemisphere is 14 cm and the total height of the vessel is 13 cm. Find the inner surface area of the vessel.[Use π = 22/7]
It can be observed that radius (r) of the cylindrical part and the hemispherical part is the same (i.e., 7 cm).
Height of hemispherical part = Radius = 7 cm
Height of cylindrical part (h) = 13 − 7 = 6 cm
Inner surface area of the vessel = CSA of cylindrical part + CSA of hemispherical part
= 2πrh + 2πr²
Inner surface area of vessel
= 2 × 22/7 × 7 × 6 + 2 × 22/7 × 7 × 7
= 44(6 + 7)
= 44 × 13
= 572 cm²
https://science.sciencemag.org/content/308/5723/838?ijkey=ceb2c66912fcfda97ce2b72fbd5c7c3914efd49f&keytype2=tf_ipsecsha
Report
# The Optical Resonances in Carbon Nanotubes Arise from Excitons
Science 06 May 2005:
Vol. 308, Issue 5723, pp. 838-841
DOI: 10.1126/science.1110265
## Abstract
Optical transitions in carbon nanotubes are of central importance for nanotube characterization. They also provide insight into the nature of excited states in these one-dimensional systems. Recent work suggests that light absorption produces strongly correlated electron-hole states in the form of excitons. However, it has been difficult to rule out a simpler model in which resonances arise from the van Hove singularities associated with the one-dimensional band structure of the nanotubes. Here, two-photon excitation spectroscopy bolsters the exciton picture. We found binding energies of ∼400 millielectron volts for semiconducting single-walled nanotubes with 0.8-nanometer diameters. The results demonstrate the dominant role of many-body interactions in the excited-state properties of one-dimensional systems.
Coulomb interactions are markedly enhanced in one-dimensional (1D) systems. Single-walled carbon nanotubes (SWNTs) provide an ideal model system for studying these effects. Strong electron-electron interactions are associated with many phenomena in the charge transport of SWNTs, including Coulomb blockade (1, 2), Kondo effects (3, 4), and Luttinger liquid behavior (5, 6). The effect of Coulomb interactions on nanotube optical properties has remained unclear, in spite of its central importance both for a fundamental understanding of these model 1D systems (7-9) and for applications (7, 10, 11). Theoretical studies suggest that optically produced electron-hole pairs should, under their mutual Coulomb interaction, form strongly correlated entities known as excitons (12-18). Although some evidence of excitons has emerged from studies of nanotube optical spectra (7, 19) and excited-state dynamics (20), it is difficult to rule out an alternative and widely used picture that attributes the optical resonances to van Hove singularities in the 1D density of states (21-23). Here, we demonstrate experimentally that the optically excited states of SWNTs are excitonic in nature. We measured exciton binding energies that represent a large fraction of the semiconducting SWNT band gap. As such, excitonic interactions are not a minor perturbation as in comparable bulk semiconductors, but actually define the optical properties of SWNTs. The importance of many-body effects in nanotubes derives from their 1D character; similar excitonic behavior is also seen in organic polymers with 1D conjugated backbones (24).
We identified excitons in carbon nanotubes using two-photon excitation spectroscopy. Two-photon transitions obey selection rules distinct from those governing linear excitation processes and thereby provide complementary insights into the electronic structure of excited states, as has been demonstrated in studies of molecular systems (25) and bulk solids (26). In 1D materials like SWNTs, the exciton states show defined symmetry with respect to reflection through a plane perpendicular to the nanotube axis. A Rydberg series of exciton states describing the relative motion of the electron and hole, analogous to the hydrogenic states, is then formed with definite parity with respect to this reflection plane. The even states are denoted as 1s, 2s, 3s, and so on, and the odd wave functions are labeled as 2p, 3p, and so on (27). Because of the weak spin-orbit coupling in SWNTs, all optically active excitons are singlet states, with the allowed transitions being governed by electric-dipole selection rules. For the dominant transitions polarized along the nanotube axis, one-photon (linear) excitation requires the final and initial states to exhibit opposite symmetry. In contrast, a two-photon transition is allowed only when the final state has the same parity as the initial state. Given the symmetry of the underlying atomic-scale wave functions, one-photon excitation produces only excitons of s-symmetry, whereas two-photon excitation leads only to excitons of p-symmetry (28). Thus, one-photon transitions access the lowest lying 1s exciton; two-photon transitions access only the excited states of the exciton.
An experimental method to determine the energies of the ground and excited exciton states follows immediately from these symmetry arguments: We measured the energies needed for one-photon and two-photon transitions in semiconducting nanotubes (Fig. 1A). A comparison of these energies yields the energy difference between the ground and excited exciton states and thereby directly indicates the exciton binding strength. When the excitonic interactions were negligible, we reverted to a simple band picture in which the onset of two-photon absorption coincides with the energy of one-photon absorption (Fig. 1B). The two-photon excitation spectra reflect the qualitative difference between these two pictures in an unambiguous fashion. In contrast, conventional linear optical measurements, such as absorption and fluorescence spectroscopy, access only one-photon transitions, for which a van Hove singularity and a broadened excitonic resonance exhibit qualitatively similar features. Because the one-photon absorption and emission arise from the same electronic transition in SWNTs, there is no Stokes shift between the two, as apparent in comparison of absorption and fluorescence spectra (8).
In our experiment, we used isolated SWNTs in a poly(maleic acid/octyl vinyl ether) (PMAOVE) matrix. SWNTs grown by high-pressure CO synthesis were dispersed in an aqueous solution of PMAOVE by a sonication method (29). In order to minimize infrared absorption of water, we formed a film of SWNTs imbedded in polymer matrix by slowly drying a drop of the solution. The SWNT samples obtained by this procedure showed fluorescence emission comparable to that of the SWNTs in aqueous solution.
Two-photon excitation is a nonlinear optical effect that requires the simultaneous absorption of a pair of photons. Femtosecond laser pulses provided the high intensities of light necessary to drive this process. The light source, a commercial optical parametrical amplifier (Spectra Physics OPA-800C), pumped by an amplified mode-locked Ti:sapphire laser, produced infrared pulses of 130-fs duration at a 1-kHz repetition rate. Peak powers exceeding 10^8 W were obtained over a photon energy range from 0.6 to 1.0 eV. Because these photon energies were well below the 1-photon absorption threshold (>1.2 eV) of the relevant SWNTs, no linear excitation occurred. A laser fluence of 5 J/m² was typically chosen for the measurements. At this fluence, we explicitly verified the expected quadratic dependence of the excitation process on laser intensity.
To detect the two-photon excitation process in the SWNTs, we did not directly measure the depletion of the pump beam. Rather, we used the more sensitive approach of monitoring the induced light emission. The scheme can thus be described as two-photon-induced fluorescence excitation spectroscopy. Prior studies have shown that rapid excited-state relaxation processes in SWNTs (20) lead to fluorescence emission exclusively from the 1s-exciton state. Measurement of the two-photon-induced fluorescence thus yielded (Fig. 1A) both two-photon absorption spectra (from the fluorescence strength as a function of the laser excitation wavelength) and the one-photon 1s-exciton spectra (from the fluorescence emission wavelength). Further, because the fluorescence peaks reflect the physical structure of the emitting nanotubes, we obtained structure-specific excitation spectroscopy even when probing an ensemble sample. We detected the fluorescence emission in a backscattering geometry, using a spectrometer with 8-nm spectral resolution and a 2D array charge-coupled device (CCD) detector. Our data sampled the infrared excitation range in 10-meV steps.
The measured two-photon excitation spectra (Fig. 2) show the strength of fluorescence emission as a function of both the (two-photon) excitation energy and the (one-photon) emission energy. From the 2D contour plot, distinct fluorescence emission features emerge at emission energies of 1.21, 1.26, 1.30, and 1.36 eV (Fig. 2, circles). These emission peaks have been assigned, respectively, to SWNTs with chiral indices of (7,5), (6,5), (8,3), and (9,1) (7). It is apparent that none of the nanotubes were excited when the two-photon excitation energy was the same as the emission energy (Fig. 2, solid line). Only when the excitation energy was substantially greater than the emission energy did two-photon absorption occur. This behavior is a signature of the presence of excitons with significant binding energy and is incompatible with a simple band picture of the optical transitions.
The two-photon excitation spectra for nanotubes of given chiral index can be obtained as a horizontal cut in the contour plot of Fig. 2, taken at an energy corresponding to 1s-exciton emission of the relevant SWNT. To enhance the quality of the data, we applied a fitting procedure (30) to eliminate background contributions from the emission of other nanotube species. The resulting two-photon excitation spectra are shown for the (7,5), (6,5), and (8,3) SWNTs in Fig. 3. For each of the SWNT structures, the energy of the 1s fluorescence emission is indicated by an arrow.
The peaks in the two-photon excitation spectra can be assigned to the energy for creation of the 2p exciton, the lowest lying symmetry-allowed state for the nonlinear excitation process. From a comparison of this energy with that of the 1s-exciton emission feature, we obtained directly the relevant energy differences for the ground and excited exciton states: E2p - E1s = 280, 310, and 300 meV, respectively, for the (7,5), (6,5), and (8,3) SWNTs.
To determine the exciton binding energy and understand the nature of the two-photon spectra more fully, we considered the two-photon excitation process in greater detail. In addition to two-photon transitions to the 2p state, higher lying bound excitons are also accessible (such as 3p and 4p). The strength of these transitions was relatively small, and they do not account for the main features of the spectrum. We also, however, have transitions to the continuum or unbound exciton states. Including the influence of electron-hole interactions on the continuum transitions, we found that the expected shape of this contribution to the two-photon excitation spectrum could be approximated by a step function near the band edge (31). The experimental two-photon excitation spectra can be fit quite satisfactorily to the sum of a Lorentzian 2p exciton resonance and the continuum transitions with a broadened onset.
A more quantitative description of the two-photon excitation spectra can be achieved with a specific model of the effective electron-hole interaction within a SWNT. In the model, we consider a truncated 1D Coulomb interaction given by the potential V(z) = -e2/[ϵ(|z| + z0)] for electron-hole separation z. The value of z0 = 0.30d is fixed to approximate the Coulomb interaction between two charges distributed as rings at a separation z on a cylindrical surface of diameter d (27); the effective dielectric screening ϵ is the only adjustable parameter in the analysis. This simple model provides a good fit to the experimental data for the different nanotube species examined when we use an effective dielectric constant of 2.5 (Fig. 3, solid line). The features predicted in the model have been broadened by 80 meV (full width at half maximum). This broadening is in part experimental, reflecting the spectral width of the short laser excitation pulses (30 meV). The main contribution, however, is the width of the excitonic transition itself. This width is ascribed to lifetime broadening associated with the rapid relaxation of the excited states to the 1s exciton state (20). From this analysis, we determined the energy of 2p for the three SWNT species in Fig. 3 to be E2p≈ -120 meV with respect to the onset of the continuum states at the band gap energy Eg.
Combining the previously determined E2p - E1s energy difference with the position of the 2p exciton relative to the continuum, we obtained an overall binding energy for the ground-state (1s) exciton of Eex = (Eg - E1s)≈ 420 meV for the investigated SWNTs. This value is comparable to recent theoretical predictions of large exciton binding energies (13, 14). The exciton binding energy thus constitutes a substantial fraction of the gap energy Eg≈ 1.3 eV for our 0.8-nm SWNTs. To put this result in context, the exciton binding energies in bulk semiconductors typically lie in the range of several meV and represent a slight correction to the band gap. Furthermore, because thermal energies at room temperature exceed typical bulk exciton binding energies, excitonic effects in bulk materials can be largely neglected under ambient conditions. This situation clearly does not prevail for SWNTs.
We can understand the strong increase in excitonic effects in the SWNTs as the consequence of two factors. The first arises from a general property of reduced dimensionality: In three dimensions, the probability of having an electron and hole separated by a displacement of r includes a phase space factor of r2, favoring larger separations over smaller ones. In one dimension, no such factor exists. Short separations are thus of greater relative importance, and the role of the Coulomb interactions is enhanced. The second factor relates to the decreased dielectric screening for a quasi-1D SWNT system. This effect arises because the electric field lines generated by the separated electron-hole pair travel largely outside of the nanotube, where dielectric screening is decreased. Because these effects are general features arsing from the 1D character, they should be widely present in 1D systems. Indeed, similar excitonic effects have been extensively studied in a large family of 1D structures of conjugated polymers (24).
To help visualize the strongly bound excitons in SWNTs, we estimated the exciton's spatial extent, i.e., the typical separation between the electron and the hole in the correlated exciton state. Assuming an exciton kinetic energy comparable to its binding energy Eex, which applies precisely for 3D excitons, we obtain the relation $E_{ex} \approx \hbar^2 / (2 m R^2)$, where $\hbar$ is Planck's constant h divided by 2π, m is the reduced electron-hole mass, and R is the exciton radius. For m = 0.05 m0 (21), we deduced from our experimental binding energy a ground-state exciton radius of R = 1.2 nm. This value is similar to that obtained by calculation within the truncated Coulomb model specified above. Figure 4 provides a representation of the calculated density distribution of the exciton envelope wave function. The result is a highly localized entity, with a spatial extent along the nanotube axis only slightly exceeding the nanotube diameter of 0.8 nm.
The importance of excitonic effects is clear for the interpretation and assignment of the observed optical spectra, as discussed in the literature on the relation of the E11 and E22 transition energies in SWNTs (7, 15, 17). The excitonic character of the optically excited state also has immediate implications for optoelectronic devices and phenomena. For example, photo-conductivity in SWNTs should have a strong dependence on the applied electric field, because charge transport requires spatial separation of the electron-hole pair. The excitonic character of optically excited SWNTs also raises the possibility of modifying the SWNT transitions through external perturbations, thus facilitating new electro-optical modulators and sensors. More broadly, the strong electron-hole interaction demonstrated in our study highlights the central role of many-body effects in 1D materials.
https://nrich.maths.org/2046/note
### Poly Fibs
A sequence of polynomials starts 0, 1 and each poly is given by combining the two polys in the sequence just before it. Investigate and prove results about the roots of the polys.
### Fixing It
A and B are two fixed points on a circle and RS is a variable diameter. What is the locus of the intersection P of AR and BS?
### OK! Now Prove It
Make a conjecture about the sum of the squares of the odd positive integers. Can you prove it?
# Fibonacci Factors
##### Age 16 to 18 Challenge Level:
The Fibonacci sequence occurs so frequently because it is the solution of the simplest of all difference relations. It is instructive to view it in this way and perhaps to introduce the idea of difference equations with this familiar example.
Proving these results calls for considering whether or not other terms in the sequences, apart from those in the recognized patterns, can also be multiples of 2 or 3 respectively in the two cases. Are the conditions necessary as well as sufficient?
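A quick numerical illustration (added here; a few lines of Julia, not part of the original notes) makes the two patterns visible before attempting a proof:

```julia
# Every third Fibonacci number is even; every fourth is divisible by 3.
function fib(n)
    a, b = 1, 1
    for _ in 3:n
        a, b = b, a + b
    end
    return b
end

println([n for n in 1:24 if fib(n) % 2 == 0])   # 3, 6, 9, 12, ...
println([n for n in 1:24 if fib(n) % 3 == 0])   # 4, 8, 12, 16, ...
```

The even Fibonacci numbers appear at every third index and the multiples of 3 at every fourth; the proof should establish these patterns and show that no other terms qualify.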
http://httpa.academickids.com/encyclopedia/index.php/Differential_geometry_and_topology
# Differential geometry and topology
In mathematics, differential topology is the field dealing with differentiable functions on differentiable manifolds. It arises naturally from the study of the theory of differential equations. Differential geometry is the study of geometry using calculus. These fields are adjacent, and have many applications in physics, notably in the theory of relativity. Together they make up the geometric theory of differentiable manifolds - which can also be studied directly from the point of view of dynamical systems.
## Intrinsic versus extrinsic
Initially and up to the middle of the nineteenth century, differential geometry was studied from the extrinsic point of view: curves and surfaces were considered as lying in a Euclidean space of higher dimension (for example a surface in an ambient space of three dimensions). The simplest results are those in the differential geometry of curves. Starting with the work of Riemann, the intrinsic point of view was developed, in which one cannot speak of moving 'outside' the geometric object because it is considered as given in a free-standing way.
The intrinsic point of view is more flexible, for example it is useful in relativity where space-time cannot naturally be taken as extrinsic. With the intrinsic point of view it is harder to define curvature and other structures such as connection, so there is a price to pay.
These two points of view can be reconciled, i.e. the extrinsic geometry can be considered as a structure additional to the intrinsic one (see the Nash embedding theorem).
## Technical requirements
The apparatus of differential geometry is that of calculus on manifolds: this includes the study of manifolds, tangent bundles, cotangent bundles, differential forms, exterior derivatives, integrals of p-forms over p-dimensional submanifolds and Stokes' theorem, wedge products, and Lie derivatives. These all relate to multivariate calculus, but for geometric applications they must be developed in a way that makes good sense without a preferred coordinate system. The distinctive concepts of differential geometry can be said to be those that embody the geometric nature of the second derivative: the many aspects of curvature.
A differential manifold is a topological space with a collection of homeomorphisms from open sets to the open unit ball in Rn such that the open sets cover the space, and if f, g are homeomorphisms then the function f-1 o g from an open subset of the open unit ball to the open unit ball is infinitely differentiable. We say a function from the manifold to R is infinitely differentiable if its composition with every homeomorphism results in an infinitely differentiable function from the open unit ball to R.
At every point of the manifold, there is the tangent space at that point, which consists of every possible velocity (direction and magnitude) with which it is possible to travel away from this point. For an n-dimensional manifold, the tangent space at any point is an n-dimensional vector space, or in other words a copy of Rn. The tangent space has many definitions. One definition of the tangent space is as the dual space to the linear space of all functions which are zero at that point, divided by the space of functions which are zero and have a first derivative of zero at that point. Having a zero derivative can be defined by "composition by every differentiable function to the reals has a zero derivative", so it is defined just by differentiability.
A vector field is a function from a manifold to the disjoint union of its tangent spaces (this union is itself a manifold known as the tangent bundle), such that at each point, the value is an element of the tangent space at that point. Such a mapping is called a section of a bundle. A vector field is differentiable if for every differentiable function, applying the vector field to the function at each point yields a differentiable function. Vector fields can be thought of as time-independent differential equations. A differentiable function from the reals to the manifold is a curve on the manifold. This defines a function from the reals to the tangent spaces: the velocity of the curve at each point it passes through. A curve will be said to be a solution of the vector field if, at every point, the velocity of the curve is equal to the vector field at that point.
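For example (an illustration added here, not in the original article), on the manifold $M = \mathbb{R}$ the vector field $X(x) = x \, \frac{d}{dx}$ corresponds to the differential equation $\dot{x} = x$; its solution curves $\gamma(t) = \gamma(0) e^{t}$ are precisely the curves whose velocity at each point equals the value of the vector field at that point.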
An alternating k-dimensional linear form is an element of the antisymmetric k'th tensor power of the dual V* of some vector space V. A differential k-form on a manifold is a choice, at each point of the manifold, of such an alternating k-form -- where V is the tangent space at that point. This will be called differentiable if whenever it operates on k differentiable vector fields, the result is a differentiable function from the manifold to the reals. A volume form is an alternating form whose degree equals the dimension of the manifold.
## Branches of differential geometry/topology
### Contact geometry
This is an analog of symplectic geometry which works for manifolds of odd dimension. Roughly, a contact structure on a (2n+1)-dimensional manifold is a choice of a 1-form $\alpha$ such that $\alpha \wedge (d\alpha)^n$ does not vanish anywhere.
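A standard example (added here for illustration) is $\mathbb{R}^3$ with coordinates $(x, y, z)$ and $\alpha = dz - y \, dx$: then $d\alpha = dx \wedge dy$ and $\alpha \wedge d\alpha = dz \wedge dx \wedge dy$, which vanishes nowhere, so $\alpha$ is a contact form (the case $n = 1$).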
### Finsler geometry
Finsler geometry has the Finsler manifold as the main object of study — this is a differential manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space. A Finsler metric is a much more general structure than a Riemannian metric.
### Riemannian geometry
Riemannian geometry has Riemannian manifolds as the main object of study — smooth manifolds with additional structure which makes them look infinitesimally like Euclidean space. These allow one to generalise notions from Euclidean geometry and analysis, such as the gradient of a function, divergence, and the length of curves, without assuming that the space is globally symmetric.
### Symplectic topology
This is the study of symplectic manifolds. A symplectic manifold is a differentiable manifold equipped with a symplectic form (that is, a closed non-degenerate 2-form).
A Modern Course on Curves and Surface, Richard S Palais, 2003 (http://rsp.math.brandeis.edu/3D-XplorMath/Surface/a/bk/curves_surfaces_palais.pdf)
Richard Palais's 3DXM Surfaces Gallery (http://rsp.math.brandeis.edu/3D-XplorMath/Surface/gallery.html)
https://juliaeconomics.com/2014/06/16/bootstrapping-and-hypothesis-tests-in-julia/
Bootstrapping and Non-parametric p-values in Julia
* The script to reproduce the results of this tutorial in Julia is located here.
Suppose that we wish to test whether the parameter estimates of $\beta$ are statistically different from zero and whether the estimate of $\sigma^2$ is different from one for the OLS parameters defined in a previous post. Suppose further that we do not know how to compute analytically the standard errors of the MLE parameter estimates; the MLE estimates were presented in the previous post.
We decide to bootstrap by resampling cases in order to estimate the standard errors. This means that we treat the sample of $N$ individuals as if it were a population from which we randomly draw $B$ samples, each of size $N$. This produces a sample of MLEs of size $B$, that is, it provides an empirical approximation to the distribution of the MLE. From the empirical approximation, we can compare the full-sample point MLE to the MLE distribution under the null hypothesis.
To perform bootstrapping, we rely on the sample function provided by the StatsBase package.
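For instance (a minimal sketch, not from the original post, and assuming the StatsBase package is available), drawing a bootstrap index looks like this:
using StatsBase
# 10 draws from 1:10, with replacement (StatsBase's default), so indices may repeat
theIndex = sample(1:10, 10)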
Wrapper Functions and the Likelihood of Interest
Now that we have a bootstrap index, we define the log-likelihood as a function of x and y, which are any subsets of X and Y, respectively.
function loglike(rho, y, x)
    # rho[1:4] holds the regression coefficients; rho[5] holds log(sigma^2),
    # stored in log-units so the optimizer can search over the whole real line.
    beta = rho[1:4]
    sigma2 = exp(rho[5])
    residual = y - x*beta
    dist = Normal(0, sqrt(sigma2))
    # On recent versions of Distributions this needs broadcasting: logpdf.(dist, residual)
    contributions = logpdf(dist, residual)
    loglikelihood = sum(contributions)
    # Return the negative log-likelihood, because the optimizer minimizes.
    return -loglikelihood
end
Then, if we wish to evaluate loglike across various subsets of x and y, we use what is called a wrapper: a version of loglike that has the values of x and y already fixed. For example, the following function will evaluate loglike when x=X and y=Y:
function wrapLoglike(rho)
return loglike(rho,Y,X)
end
We do this because we want the optimizer to find the optimal $\rho$, holding x and y fixed, but we also want to be able to adjust x and y to suit our purposes. The wrapper function allows the user to modify x and y, but tells the optimizer not to bother them.
Tip: Use wrapper functions to manage arguments of your objective function that are not supposed to be accessed by the optimizer. Give the optimizer functions with only one argument — the parameters over which it is supposed to optimize your objective function.
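As a side note (my addition, assuming the Optim package is loaded as in the rest of the tutorial), the same effect can be obtained with an anonymous function, which closes over Y and X without defining a named wrapper:
# Only rho is exposed to the optimizer; Y, X and params0 are captured from the surrounding scope.
result = optimize(rho -> loglike(rho, Y, X), params0)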
Bootstrapping the OLS MLE
Now, we will use a random index, which is drawn for each b using the sample function, to take a random sample of individuals from the data, feed them into the likelihood using a wrapper, and then have the optimizer minimize the wrapped negative log-likelihood over the parameters. We repeat this process in a loop, so that we obtain the MLE for each bootstrap sample. The following loop stores the MLE in each row of the matrix samples, using 1,000 bootstrap samples, each of size N drawn with replacement from the N available observations:
B = 1000
samples = zeros(B, 5)
for b = 1:B
    # Resample N individuals with replacement.
    theIndex = sample(1:N, N)
    x = X[theIndex, :]
    y = Y[theIndex, :]
    # Wrapper closing over this bootstrap sample.
    function wrapLoglike(rho)
        return loglike(rho, y, x)
    end
    # params0 holds the starting values from the previous post.
    # In older Optim versions the field .minimum held the minimizer;
    # newer versions use Optim.minimizer(optimize(...)).
    samples[b, :] = optimize(wrapLoglike, params0, method = :cg).minimum
end
samples[:,5] = exp(samples[:,5])
The resulting matrix contains 1,000 samples of the MLE. As always, we must remember to exponentiate the variance estimates, because they were stored in log-units.
Bootstrapping for Non-parametric p-values
Estimates of the standard errors of the MLE estimates can be obtained by computing the standard deviation of each column,
bootstrapSE = std(samples,1)
where the 1 indicates that the standard deviation is taken along the first dimension, i.e. down each column, giving one bootstrap standard error per parameter (on recent Julia versions this is written std(samples, dims=1)).
Standard errors like these can be used directly for hypothesis testing under parametric assumptions. For example, if we assume an MLE is normally distributed, then we reject the null hypothesis that the parameter is equal to some point if the parameter estimate differs from the point by at least 1.96 standard errors (using that the sample is large). However, we can make fewer assumptions using non-parametric p-values. The following code creates the distribution implied by the null hypothesis that $\beta_0 =0, \beta_1=0, \beta_2=0, \beta_3=0, \sigma^2=1$ by subtracting the mean from each distribution (thus imposing a zero mean) and then adding 1 to the distribution of $\sigma^2$ (thus imposing a mean of one); this is called nullDistribution.
nullDistribution = copy(samples)   # copy, so shifting the null distribution does not overwrite samples
pvalues = ones(5)
for i = 1:5
    # Impose the null hypothesis of a zero mean on each parameter.
    nullDistribution[:, i] = nullDistribution[:, i] - mean(nullDistribution[:, i])
end
# Impose a mean of one on the sigma^2 column.
nullDistribution[:, 5] = 1 + nullDistribution[:, 5]
The non-parametric p-value (for two-sided hypothesis testing) is the fraction of draws from the null distribution whose absolute value exceeds the absolute value of the MLE.
pvalues = [mean(abs(MLE[i]).<abs(nullDistribution[:,i])) for i=1:5]
If we are interested in one-sided hypothesis testing, the following code would test the null hypothesis $\beta_0 =0$ against the alternative that $\beta_0>0$:
pvalues = [mean(MLE[i].<nullDistribution[:,i]) for i=1:5]
Conversely, the following code would test the null hypothesis $\beta_0 =0$ against the alternative that $\beta_0<0$:
pvalues = [mean(MLE[i].>nullDistribution[:,i]) for i=1:5]
Thus, two-sided testing uses the absolute value (abs), and one-sided testing only requires that we choose the right comparison operator (.> or .<).
Results
Let the true parameters be,
julia> trueParams = [0.01,0.05,0.05,0.07]
The resulting bootstrap standard errors are,
julia> bootstrapSE = std(samples,1)
1x5 Array{Float64,2}:
0.0308347 0.0311432 0.0313685 0.0305757 0.0208229
and the non-parametric two-sided p-value estimates are,
julia> pvalues = [mean(abs(MLE[i]).<abs(nullDistribution[:,i])) for i=1:5]
5-element Array{Any,1}:
0.486
0.383
0.06
0.009
0.289
Thus, at the 10% significance level we reject the null hypotheses for the third and fourth parameters only and conclude that $\beta_2 \neq 0, \beta_3 \neq 0$, but find insufficient evidence to reject the null hypotheses that $\beta_0 =0, \beta_1 =0$ and $\sigma^2=1$.
http://mathoverflow.net/questions/38529/uniform-quotient-vs-universal-quotient | MathOverflow will be down for maintenance for approximately 3 hours, starting Monday evening (06/24/2013) at approximately 9:00 PM Eastern time (UTC-4).
## Uniform Quotient vs Universal Quotient
What is a quotient of an affine scheme that is not a universal quotient? Let's recall some terminology.
Suppose that $k$ is an algebraically closed field and $G$ is a reductive group acting on an affine scheme $X$. Theorem 1.1 of Geometric Invariant Theory states that the uniform categorical quotient $X//G$ of $X$ exists.
In other words, $X \to X//G$ is universal with respect to $G$-invariant morphisms out of $X$ and this property persists under base change by a flat morphism $T \to X//G$.
When $\text{char}(k)=0$, the theorem states that $X \to X//G$ is a universal categorical quotient, so that the universal property persists under base change by an arbitrary morphism $T \to X//G$.
What is an example where $X \to X//G$ is not a universal quotient?
I'd be particularly interested in the case where the stabilizers of the action on $X$ are all linearly reductive.
Here is an example, which is in some sense the simplest one. Suppose that $k$ has characteristic $p > 0$; set $X := \mathop{\rm Spec} k[x,y]$. Let $G$ be a cyclic group of order $p$ acting via $(x,y) \mapsto (x, x+y)$. The ring of invariants is $k[u,v] := k[x, y^p - x^{p-1}y]$. Consider the point $\mathop{\rm Spec} k = \mathop{\rm Spec} k[u,v]/(u,v)$ of $X/G = \mathop{\rm Spec} k[u,v]$; the inverse image $Y$ in $X$ is $\mathop{\rm Spec} k[x,y]/(x, y^p)$; it is immediate to check that the action of $G$ on $Y$ is trivial, so $Y/G = Y \neq \mathop{\rm Spec} k$.
If you want an example with a connected group, embed $G$ into $\mathrm{GL}_n$ and take the induced action.
I don't know any example with linearly reductive stabilizers, and I suspect that they don't exist.
Angelo's suspicion is right. By thm of Nagata (Ch.IV, Thm. 3.6 in Demazure-Gabriel), in char. $p > 0$ a smooth affine $k$-gp is lin. red. iff its comp. group has order not divisible by $p$ and id. component is torus. Hence, enough to treat tori and finite gps of order prime to $p$. The latter is universal for affine $X$ in char. $p$ via averaging. For tori $S$, can restrict to noetherian base change and then by noetherian induction $S$-invariants in coordinate rings upstairs and downstairs are $S[n]$-invariants for sufficiently divisible $n$ coprime to $p$. So reduced back to the first case. – BCnrd Sep 13 2010 at 19:05
Brian, this is for the case of actions of linearly reductive stabilizers, but jlk was asking for the case when the stabilizers are linearly reductive, and I think this is much harder – Angelo Sep 13 2010 at 20:28
@Angelo: Thanks. Your interpretation is correct: I would particularly like a proof/example in the case where the stabilizers are all linearly reductive, but the group $G$ is not. – jlk Sep 13 2010 at 21:16
Dear Torsten, of course the approach would be the one you mention. There is a formal slice theorem for group actions with linearly reductive stabilizers, but it is very hard to get consequences from it. This is discussed in a paper of Jarod Alper, On the local quotient structure of Artin stacks <arxiv.org/abs/0904.2050>. – Angelo Sep 14 2010 at 6:13
Consider the action of $\mathrm{PGL}_2$ on the space of unordered 4-tuples of points of $\mathbb P^1$ (a.k.a. $\mathbb P^4$). The generic stabilizer is isomorphic to the product of two cyclic groups of order 2 (the Klein group); but the stabilizer of a point corresponding to a double point and two distinct points of $\mathbb P^1$ is cyclic of order 2. Clearly, this means that you can't have a slice around this point, since the generic stabilizer is not conjugate to a subgroup of the special stabilizer. – Angelo Sep 15 2010 at 18:26
http://hal.in2p3.fr/in2p3-01196068 | # Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in $pp$ collision data at $\sqrt{s}=8$ TeV with the ATLAS detector
Abstract : This paper reports inclusive and differential measurements of the $t\bar{t}$ charge asymmetry $A_{\textrm{C}}$ in 20.3 fb$^{-1}$ of $\sqrt{s} = 8$ TeV $pp$ collisions recorded by the ATLAS experiment at the Large Hadron Collider at CERN. Three differential measurements are performed as a function of the invariant mass, transverse momentum and longitudinal boost of the $t\bar{t}$ system. The $t\bar{t}$ pairs are selected in the single-lepton channels ($e$ or $\mu$) with at least four jets, and a likelihood fit is used to reconstruct the $t\bar{t}$ event kinematics. A Bayesian unfolding procedure is performed to infer the asymmetry at parton level from the observed data distribution. The inclusive $t\bar{t}$ charge asymmetry is measured to be $A_{\textrm{C}} = 0.009 \pm 0.005$ (stat.$+$syst.). The inclusive and differential measurements are compatible with the values predicted by the Standard Model.
### Citation
G. Aad, M.K. Ayoub, A. Bassalat, C. Bécot, S. Binet, et al.. Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in $pp$ collision data at $\sqrt{s}=8$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, Springer Verlag (Germany), 2016, 76, pp.87. ⟨10.1140/epjc/s10052-016-3910-6⟩. ⟨in2p3-01196068⟩
https://www.physicsforums.com/threads/drop-a-ball-where-does-the-energy-go.735744/ | # Drop a ball - where does the energy go?
1. Jan 30, 2014
### oneamp
Hi -
If I lift a bowling ball in the air, so that it has potential energy, then drop it on the dirt, it makes a sound and a dent. Where did all the energy go? It did not all go to sound, since there's a dent, right? A small bit went to heat... what else?
Thanks
2. Jan 30, 2014
### A.T.
All of it goes to heat, after a while.
3. Jan 30, 2014
### dauto
The bit that went to heat ain't small. Do the following experiment. Ask someone to hold a piece of paper up in the air for you and bang two pieces of steel together, making sure the paper is caught in between them. There should be a burnt spot on the paper afterwards.
4. Jan 30, 2014
### oneamp
Thanks
5. Jan 30, 2014
### Oldfart
That's interesting, dauto. If I'm lost in Alaska and about to freeze, and find two hammers and some paper, is it possible that I could start a fire to save my life?
6. Jan 30, 2014
### HallsofIvy
Staff Emeritus
Initially some energy goes into the kinetic energy of the little pieces of dirt that are moved away to make that 'dent'. Of course, all of those eventually (very quickly) stop because of friction, which is the same as saying the energy becomes heat energy. (On a long enough time scale, 'eventually' all energy becomes heat.)
7. Jan 30, 2014
### dauto
Unlikely. The paper burns but you don't get a flame that way. The heat dissipates too quickly.
8. Jan 30, 2014
### Oldfart
Hmmm... What if you lightly soaked the paper in alcohol first? Anyone want to give that a try?
9. Jan 30, 2014
### phinds
Nah, you do it ... I'm drinking my alcohol
10. Jan 30, 2014
### j824h
As mentioned by dauto, the heat dissipates very fast; I don't think alcohol will be effective, since the temperature will drop below the flash point even before oxygen is supplied.
11. Jan 30, 2014
### Oldfart
Well, we won't really know until some idiot tries it, right?
Maybe I'll give it a shot tomorrow...
12. Feb 1, 2014
### Lok
Heat is not the only energy transformation. The dent means plastic deformation, i.e. breaking or straining of the material. So energy gets stored in stresses induced in the material and gets absorbed by the breaking of chemical or physical bonds. The plastic deformation energy is not all transformable to heat and is very material-dependent.
https://byjus.com/question-answer/can-two-numbers-have-16-as-their-hcf-and-380-lcm-give-reasons/ | Question
# Can two numbers have 16 as their HCF and 380 as their LCM? Give reasons
Solution
The HCF of two numbers is always a factor of their LCM. Here, dividing 380 by 16 gives 380 = 16 × 23 + 12, i.e. a remainder of 12, so 16 is not a factor of 380. Hence, we cannot have two numbers whose HCF is 16 and LCM is 380.
https://www.physicsforums.com/threads/inverse-function-of-a-parabola.198476/ | # Inverse function of a parabola
1. Nov 15, 2007
### fatou123
From the graph of the function (I obtained the graph by doing a translation and a y-scaling) g(x) = 1/3(x-2)^2 - 3, with x in [2;5], I can see that g is increasing and so it is a one-one function, and the image set is [-3;0]. Therefore the function g has an inverse function g^-1.
So I can find the rule of g^-1 by solving this equation:
y=g(x)=1/3 (x-2)^2 -3
to obtain x in terms of y.
I have y=1/3(x-2)^2 -3, that is x= +or-3sqrt-1/9 - 1/3y. Is this right?
Last edited: Nov 15, 2007
2. Nov 15, 2007
### HallsofIvy
Staff Emeritus
No, the whole point of being "one-to-one" is that you have one value, not two.
Unfortunately, you didn't show HOW you solve for x so I can't comment but that surely does not look right! (It's hard to be sure since you don't use parentheses to show what you really mean.) You are correct that when x= 2, y= -3. If you take y= -3 in the formula you give, do you get x= 0? When you take x= 5, y= g(5)= 0. When you take y= 0 in your equation, do you get x= 5?
If y= (1/3)(x-2)^2- 3 with x between 2 and 5, then y+ 3= (1/3)(x-2)^2, (x-2)^2= 3(y+3), x-2= sqrt(3(y+3)), and finally x= 2+ sqrt(3(y+3)). The PLUS is used rather than the MINUS because x must be larger than 2. Finally, don't forget to write the solution itself:
g-1(x)= 2+ sqrt(3(x+3)).
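As a quick numerical check (my addition, not part of the original reply), composing the two functions in Julia returns the inputs, confirming the formula on the stated domain:
g(x) = (1/3)*(x - 2)^2 - 3      # original function on [2, 5]
ginv(y) = 2 + sqrt(3*(y + 3))   # proposed inverse on [-3, 0]
g(ginv(-1.5))                   # ≈ -1.5
ginv(g(4.0))                    # ≈ 4.0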
3. Nov 16, 2007
### fatou123
Thanks HallsofIvy, you have just confirmed what I thought: that there was something wrong in my result.
I tackled this equation by completing the square of a quadratic equation, and I think this is where I got confused by the two results I was looking for! lol
The domain of g(x) is given to us as x in [2;5], so I can't really argue with this.
I solved for g^-1 by rearranging g(x):
y=1/3(x-2)^2 -3 , 1/3(x-2)^2=-3-y , (x-2)^2= -1/9-1/3y, x-3= SQRT-1/9-1/3y hence
x=3 SQRT-1/3-1/3y
Thank you again, I understood where I went wrong in this, it was quite obvious really lol
4. Nov 16, 2007
### HallsofIvy
Staff Emeritus
This first step is incorrect.
https://depositonce.tu-berlin.de/items/4f6eafb6-272f-41a0-bcc1-cc505096d473 | # Variational Tensor Approach for Approximating the Rare-Event Kinetics of Macromolecular Systems
## Inst. Mathematik
Essential information about the stationary and slow kinetic properties of macromolecules is contained in the eigenvalues and eigenfunctions of the dynamical operator of the molecular dynamics. A recent variational formulation allows to optimally approximate these eigenvalues and eigenfunctions when a basis set for the eigenfunctions is provided. In this study, we propose that a suitable choice of basis functions is given by products of one-coordinate basis functions, which describe changes along internal molecular coordinates, such as dihedral angles or distances. A sparse tensor product approach is employed in order to avoid a combinatorial explosion of products, i.e. of the basis-set size. Our results suggest that the high-dimensional eigenfunctions can be well approximated with relatively small basis set sizes.
https://www.aimsciences.org/journal/1941-4889/2018/10/1 | # American Institute of Mathematical Sciences
ISSN: 1941-4889, eISSN: 1941-4897
## Journal of Geometric Mechanics
March 2018, Volume 10, Issue 1
2018, 10(1): 1-41. doi: 10.3934/jgm.2018001
Abstract:
The aim of this paper is to write explicit expression in terms of a given principal connection of the Lagrange-d'Alembert-Poincaré equations by several stages. This is obtained by using a reduced Lagrange-d'Alembert's Principle by several stages, extending methods known for the case of one stage in the previous literature. The case of Euler's disk is described as an illustrative example.
2018, 10(1): 43-68. doi: 10.3934/jgm.2018002
Abstract:
In this paper we explore the discretization of Euler-Poincaré-Suslov equations on SO(3), i.e. of the Suslov problem. We show that the consistency order corresponding to the unreduced and reduced setups, when the discrete reconstruction equation is given by a Cayley retraction map, are related to each other in a nontrivial way. We give precise conditions under which general and variational integrators generate a discrete flow preserving the constraint distribution. We establish general consistency bounds and illustrate the performance of several discretizations by some plots. Moreover, along the lines of [15] we show that any constraints-preserving discretization may be understood as being generated by the exact evolution map of a time-periodic non-autonomous perturbation of the original continuous-time nonholonomic system.
2018, 10(1): 69-92. doi: 10.3934/jgm.2018003
Abstract:
We show that the Helmholtz conditions characterizing differential equations arising from variational problems can be expressed in terms of invariants of curves in a suitable Grassmann manifold.
2018, 10(1): 93-138. doi: 10.3934/jgm.2018004
Abstract:
The jet formalism for Classical Field theories is extended to the setting of Lie algebroids. We define the analog of the concept of jet of a section of a bundle and we study some of the geometric structures of the jet manifold. When a Lagrangian function is given, we find the equations of motion in terms of a Cartan form canonically associated to the Lagrangian. The Hamiltonian formalism is also extended to this setting and we find the relation between the solutions of both formalism. When the first Lie algebroid is a tangent bundle we give a variational description of the equations of motion. In addition to the standard case, our formalism includes as particular examples the case of systems with symmetry (covariant Euler-Poincaré and Lagrange Poincaré cases), variational problems for holomorphic maps, Sigma models or Chern-Simons theories. One of the advantages of our theory is that it is based in the existence of a multisymplectic form on a Lie algebroid.
https://gamma-opt.github.io/DecisionProgramming.jl/dev/decision-programming/influence-diagram/ | # Influence Diagram
## Introduction
Decision programming uses influence diagrams, a generalization of Bayesian networks, to model multi-stage decision problems under uncertainty. This section defines the influence diagrams and discusses their properties. It is based on the definitions in [1], [2], and [3].
## Definition
We define the influence diagram as a directed, acyclic graph $G=(C,D,V,I,S).$ We describe the nodes $N=C∪D∪V$ with $C∪D=\{1,...,n\}$ and $n=|C|+|D|$ as follows:
1. Chance nodes $C⊆\{1,...,n\}$ (circles) represent uncertain events associated with random variables.
2. Decision nodes $D⊆\{1,...,n\}$ (squares) correspond to decisions among discrete alternatives.
3. Value nodes $V=\{n+1,...,n+|V|\}$ (diamonds) represent consequences that result from the realizations of random variables at chance nodes and the decisions made at decision nodes.
We define the information set $I$ of node $j∈N$ as
$$$I(j)⊆\{i∈C∪D∣i<j\}.$$$
Practically, the information set is the collection of incoming arcs of node $j$ in the graph. The conditions enforce that the graph is acyclic and that there are no arcs from value nodes to other nodes.
In an influence diagram, each chance and decision node $j∈C∪D$ is associated with a finite number of states $S_j$ that we encode using integers $S_j=\{1,...,|S_j|\}$ from one to the number of states $|S_j|≥1.$ A node $j$ is trivial if it has only one state, $|S_j|=1.$ We refer to the collection of all states $S=\{S_1,...,S_n\}$ as the state space.
## Root and Leaf Nodes
A chance or decision node is a root node if it is not affected by other chance or decision nodes. Formally, node $j∈C∪D$ is a root node if $I(j)=∅.$
A chance or decision node is a leaf node if it does not affect other chance or decision nodes. Formally, node $j∈C∪D$ is a leaf node if $j∉I(i)$ for all $i∈C∪D.$
## Drawing Nodes and Arcs
We use a circle to represent chance nodes, a square to represent decision nodes, and a diamond to represent value nodes. The symbol $i$ represents the node's index and the symbol $S_i$ the states of the chance or decision node. We use the following colors and styling:
• Chance nodes: Fill color F5F5F5 and line color 666666.
• Decision nodes: Fill color D5E8D4 and line color 82B366.
• Value nodes: Fill color FFE6CC and line color D79B00.
• Linewidth 2pt and perimeter 2pt (padding around the node).
We represent directed arcs using arrows from a source node to a target node, colored with the target node's line color. We recommend diagrams.net for drawing graphs.
## Drawing Layered Graph
We showed the influence diagram as a linear graph in the Definition section. We can also draw a more concise layered graph, which is better at displaying the influence relationship structure: only nodes at smaller depth influence nodes at greater depth. Also, root and leaf nodes are visible from the layered form. We define the depth of a node $j∈N$ as follows. Root nodes have a depth of one
$$$\operatorname{depth}(j)=1,\quad I(j)=∅.$$$
Other nodes have a depth of one greater than the maximum depth of their predecessors
$$$\operatorname{depth}(j)=\max_{i∈I(j)} \operatorname{depth}(i) + 1,\quad I(j)≠∅.$$$
We can then draw the layered graph by grouping the nodes by their depth, ordering the groups by increasing depth and increasing indices order within each group.
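To make the recursion concrete, here is a small illustrative sketch in plain Julia (hypothetical node data, not the DecisionProgramming.jl API):
# Information sets: node 1 is a root, node 2 depends on 1, node 3 on 1 and 2.
I_sets = Dict(1 => Int[], 2 => [1], 3 => [1, 2])
depth(j) = isempty(I_sets[j]) ? 1 : maximum(depth(i) for i in I_sets[j]) + 1
[depth(j) for j in 1:3]   # [1, 2, 3]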
## Paths
In influence diagrams, paths represent realizations of states for chance and decision nodes. For example, a tree diagram (shown as a figure in the original documentation) represents generating all paths with states $S_1=\{1,2\}$ and $S_2=\{1,2,3\}.$
Formally, a path is a sequence of states
$$$𝐬=(s_1, s_2, ...,s_n)∈𝐒,$$$
where each state $s_i∈S_i$ for all chance and decision nodes $i∈C∪D.$ We denote the set of paths as
$$$𝐒=∏_{j∈C∪D} S_j=S_1×S_2×...×S_n.$$$
We define a subpath of $𝐬$ with $A⊆C∪D$ is a subsequence
$$$𝐬_A=(𝐬_{i}∣i∈A)∈𝐒_A.$$$
We denote the set of subpaths as
$$$𝐒_A=∏_{i∈A} S_i.$$$
We define the number of paths as
$$$|𝐒_A|=∏_{i∈A}|S_i|.$$$
We refer to subpath $𝐬_{I(j)}$ as an information path and subpaths $𝐒_{I(j)}$ as information paths for a node $j∈N.$
Also note that $𝐒=𝐒_{C∪D},$ and $𝐒_{i}=S_i$ and $𝐬_i=s_i$ where $i∈C∪D$ is an individual node.
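For illustration (a sketch of the definition only, not the package API), the path set $𝐒$ can be enumerated in plain Julia from the numbers of states:
# Hypothetical state space: |S_1| = 2 and |S_2| = 3.
S = [2, 3]
# Each path is a tuple (s_1, ..., s_n); collect them all.
paths = vec(collect(Iterators.product((1:k for k in S)...)))
length(paths)   # 6 = |S_1| * |S_2|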
## Probabilities
Each chance node is associated with a discrete probability distribution over its states for every information path. Formally, for each chance node $j∈C$, we denote the probability of state $s_j$ given information path $𝐬_{I(j)}$ as
$$$ℙ(X_j=s_j∣X_{I(j)}=𝐬_{I(j)})∈[0, 1],$$$
with
$$$∑_{s_j∈S_j} ℙ(X_j=s_j∣X_{I(j)}=𝐬_{I(j)}) = 1.$$$
We refer to chance state with given information path as active if its probability is nonzero
$$$ℙ(X_j=s_j∣X_{I(j)}=𝐬_{I(j)})>0.$$$
Otherwise, it is inactive.
## Decision Strategies
Each decision strategy models how the decision maker chooses a state $s_j∈S_j$ given an information path $𝐬_{I(j)}$ at decision node $j∈D.$ A decision node can be seen as a special type of chance node, in which the probability of the chosen state given an information path is fixed to one
$$$ℙ(X_j=s_j∣X_{I(j)}=𝐬_{I(j)})=1.$$$
By definition, the probabilities for other states are zero.
Formally, for each decision node $j∈D,$ a local decision strategy is a function that maps an information path $𝐬_{I(j)}$ to a state $s_j$
$$$Z_j:𝐒_{I(j)}↦S_j.$$$
A decision strategy contains one local decision strategy for each decision node
$$$Z=\{Z_j∣j∈D\}.$$$
The set of all decision strategies is denoted $ℤ.$
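As an illustration (hypothetical data, not the package API), a local decision strategy for a decision node $j$ with $I(j)=\{1\}$ can be written as a lookup from information paths to chosen states:
# Z_2 chooses state 1 of node 2 when s_1 = 1, and state 3 when s_1 = 2.
Z2 = Dict((1,) => 1, (2,) => 3)
Z2[(2,)]   # 3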
## Path Probability
The probability distributions at chance and decision nodes define the probability distribution over all paths $𝐬∈𝐒,$ which depends on the decision strategy $Z∈ℤ.$ We refer to it as the path probability
$$$ℙ(X=𝐬∣Z) = ∏_{j∈C∪D} ℙ(X_j=𝐬_j∣X_{I(j)}=𝐬_{I(j)}).$$$
We can decompose the path probability into two parts
$$$ℙ(X=𝐬∣Z) = p(𝐬) q(𝐬∣Z).$$$
The first part consists of the probability contributed by the chance nodes. We refer to it as the upper bound of path probability
$$$p(𝐬) = ∏_{j∈C} ℙ(X_j=𝐬_j∣X_{I(j)}=𝐬_{I(j)}).$$$
The second part consists of the probability contributed by the decision nodes.
$$$q(𝐬∣Z) = ∏_{j∈D} ℙ(X_j=𝐬_j∣X_{I(j)}=𝐬_{I(j)}).$$$
Because the probabilities of decision nodes are defined as one or zero depending on the decision strategy, we can simplify the second part to an indicator function
$$$q(𝐬∣Z)=\begin{cases} 1, & Z(𝐬) \\ 0, & \text{otherwise} \end{cases}.$$$
The expression $Z(𝐬)$ indicates whether a decision strategy is compatible with the path $𝐬,$ that is, whether each local decision strategy chooses a state on the path. Formally, we have
$$$Z(𝐬) ↔ ⋀_{j∈D} (Z_j(𝐬_{I(j)})=𝐬_j).$$$
Now the path probability equals the upper bound if the path is compatible with given decision strategy. Otherwise, the path probability is zero. Formally, we have
$$$ℙ(X=𝐬∣Z)= \begin{cases} p(𝐬), & Z(𝐬) \\ 0, & \text{otherwise} \end{cases}.$$$
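As a small numerical sketch (hypothetical probabilities, not the package API), the upper bound $p(𝐬)$ for a diagram with a root chance node 1 and a chance node 2 with $I(2)=\{1\}$ is just a product of conditional probabilities:
P1 = [0.4, 0.6]             # P(X_1 = s_1)
P2 = [0.2 0.8; 0.7 0.3]     # P(X_2 = s_2 | X_1 = s_1); rows indexed by s_1
p(s) = P1[s[1]] * P2[s[1], s[2]]
p((1, 2))                   # 0.4 * 0.8 = 0.32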
## Consequences
For each value node $j∈V$, we define the consequence given information path $𝐬_{I(j)}$ as
$$$Y_j:𝐒_{I(j)}↦ℂ,$$$
where $ℂ$ is the set of real-valued consequences.
## Path Utility
The utility function is a function that maps consequences to real-valued utility
$$$U:ℂ^{|V|}↦ℝ.$$$
The path utility is defined as the utility function acting on the consequences of value nodes given their information paths
$$$\mathcal{U}(𝐬) = U(\{Y_j(𝐬_{I(j)}) ∣ j∈V\}).$$$
The default path utility is the sum of consequences
$$$\mathcal{U}(𝐬) = ∑_{j∈V} Y_j(𝐬_{I(j)}).$$$
The utility function, in this case, corresponds to the sum of the elements.
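Continuing the sketch above (hypothetical consequences, not the package API), the default path utility sums the consequences of the value nodes, here nodes 3 and 4 with $I(3)=\{1,2\}$ and $I(4)=\{2\}$:
Y3 = [1.0 -1.0 0.5; 2.0 0.0 -0.5]   # Y_3(s_1, s_2)
Y4 = [0.0, 3.0, -1.0]               # Y_4(s_2)
U(s) = Y3[s[1], s[2]] + Y4[s[2]]    # default path utility: sum of consequences
U((2, 3))                           # -0.5 + (-1.0) = -1.5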
The utility function affects the objectives discussed on the Decision Model page. We can choose the utility function such that the path utility function returns either:
• a numerical value, which leads to a mixed-integer linear programming (MILP) formulation, or
• a linear function with real and integer-valued variables, which leads to a mixed-integer quadratic programming (MIQP) formulation.
Different formulations require a solver capable of solving them.
## Path Distribution
A path distribution is a pair
$$$(ℙ(X=𝐬∣Z), \mathcal{U}(𝐬))$$$
that comprises the path probability function and the path utility function over paths $𝐬∈𝐒,$ conditional on the decision strategy $Z.$
## References
• [1] Salo, A., Andelmin, J., & Oliveira, F. (2019). Decision Programming for Multi-Stage Optimization under Uncertainty, 1–35. Retrieved from http://arxiv.org/abs/1910.09196
• [2] Howard, R. A., & Matheson, J. E. (2005). Influence diagrams. Decision Analysis, 2(3), 127-143. https://doi.org/10.1287/deca.1050.0020
• [3] Shachter, R. D. (1986). Evaluating influence diagrams. Operations Research, 34(6), 871-882. https://doi.org/10.1287/opre.34.6.871
http://mathhelpforum.com/calculus/97261-help-integral.html | # Thread: help with integral
1. ## help with integral
hey guys,
I need to prove that the following integral converges
int ( sqrt(x) * cos(x^2) dx , x = 0 to infinity )
Using Wolfram I found that the solution can be expressed in the closed form
(1/4) * sqrt ( 2 - sqrt(2) ) * Gamma ( 3 / 4 )
Whilst I don't have to show the exact result, I am completely lost with showing that it converges.
Any starting points/ hints would be greatly appreciated,
Thanks,
Tim
2. Originally Posted by Mathisfun3
I need to prove that the following integral converges
$\int_0^\infty\!\!\! \sqrt{x}\cos(x^2)\,dx$
Here are a few hints. The function is continuous and bounded on any finite interval, so the only trouble can occur "at infinity". It's convenient to change the lower limit of integration away from 0 to say 1, so that we don't have to worry about what happens at 0. Replace the upper limit of integration by X. Then we want to show that $\int_1^X\!\!\! \sqrt{x}\cos(x^2)\,dx$ converges as $X\to\infty$.
Make the substitution $y=x^2$, and the integral becomes $\int_1^Y\!\!\! \tfrac12y^{-1/4}\cos y\,dy$, where $Y=X^2$. Now integrate by parts, integrating the factor $\cos y$ and differentiating $y^{-1/4}$. That will give you a new integral, which you should be able to estimate to see that it converges as $Y\to\infty$.
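To spell out that last step (my addition, not part of the original reply): one integration by parts gives $\int_1^Y \tfrac12y^{-1/4}\cos y\,dy = \Bigl[\tfrac12y^{-1/4}\sin y\Bigr]_1^Y + \tfrac18\int_1^Y y^{-5/4}\sin y\,dy$. The boundary term stays bounded as $Y\to\infty$, and the remaining integral converges absolutely since $|y^{-5/4}\sin y|\le y^{-5/4}$ is integrable on $[1,\infty)$, so the original integral converges.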
http://math.stackexchange.com/questions/10744/correct-method-to-differentiate-a-first-order-vector-matrix/10752 | # Correct method to differentiate a first order vector/matrix
After asking a few questions earlier, I think I've been able to describe my main issue. Could someone show me why this identity is true:
$$\frac{\partial \textbf{x}^{T} B \textbf{x}}{\partial \textbf{x}} = (B + B^{T})\textbf{x}$$
I've gone through things like Matrixcookbook which seem to just show this identity without any derivation of it. I have tried to unsuccessfully derive the formula below:
Via product rule:
$\frac{\partial \textbf{x}^{T} B \textbf{x}}{\partial \textbf{x}} = \frac{\partial \textbf{x}^{T}}{\partial \textbf{x}}B\textbf{x} + \textbf{x}^{T}\frac{\partial B \textbf{x}}{\partial \textbf{x}}$
I realize product rule for scalars is not the same for matrices, but it still holds as long as order is preserved.
At this point, I'm stuck because I'm not sure if $\frac{\partial \textbf{x}^{T}}{\partial \textbf{x}}$ is identity $I$.
I assume this definition of the Jacobian:
$\begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_m}{\partial x_1} & \cdots & \dfrac{\partial y_m}{\partial x_n} \end{bmatrix}$
And if I write $\frac{\partial B \textbf{x}}{\partial \textbf{x}}$ component by component with the definition above, it becomes simply $B$.
Thus,
$\frac{\partial \textbf{x}^{T} B \textbf{x}}{\partial \textbf{x}} = B\textbf{x} + \textbf{x}^{T}B$
which is not correct: the left side should be an $n\times 1$ vector, but the right side adds an $n\times 1$ vector ($B\textbf{x}$) to a $1\times n$ vector ($\textbf{x}^{T}B$), so the dimensions do not even match.
I'm not sure where my faulty assumption is. I've gone through various sources which don't explain the situation I showed above. Am I missing some very simple point that those books assume?
Here is a somewhat simplified view of things to get you started (one can avoid all of the un-necessary elementwise notation down there, but it seems that to help you start, it might still prove to be useful):
Suppose $x \in \mathbb{R}^n$. Further assume the convention that $x$ is a column vector so that
$$x = \begin{bmatrix} x_1\\\\ x_2\\\\ \vdots\\\\ x_n \end{bmatrix}$$
Now suppose that we have some function $f: \mathbb{R}^n \to \mathbb{R}$ (i.e., a scalar valued function of the vector $x$). We define the partial derivatives of $f(x)$ as the vector $$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} \frac{\partial f(x)}{\partial x_1}\\\\ \frac{\partial f(x)}{\partial x_2}\\\\ \vdots\\\\ \frac{\partial f(x)}{\partial x_n} \end{bmatrix}$$
Now let us look at your function $f(x) = x^TBx$. To simplify things for our brute-force attack, we rewrite this is: $$f(x) = \sum_{ij} x_iB_{ij}x_j$$
Now let us try to compute the partial derivative wrt the $p$-th coordinate of $x$, i.e., $\partial f(x) / \partial x_p$. This is given by $$\frac{\partial f(x)}{\partial x_p} = \frac{\partial}{\partial x_p}\sum_{ij} x_i B_{ij} x_j$$
$$= \sum_{j} B_{pj}x_j + \sum_i x_iB_{ip}$$ using the product rule for differentiation. But recognize that $$\sum_i B_{ip}x_i = \sum_i B^T_{pi}x_i = (B^Tx)_p$$ while $$\sum_j B_{pj}x_j = (Bx)_p$$
Thus, we conclude that $$\frac{\partial f(x)}{\partial x_p} = (Bx)_p + (B^Tx)_p$$
Collecting derivatives for $p=1,\ldots,n$ yields the desired identity.
Now you can follow this approach to obtain derivatives for other functions.
Another point that might help you is this: If $f$ is a function as defined above, i.e., $f$ is a function of a column-vector $x$, then $\partial f / \partial x$ is a column-vector, while $\partial f / \partial x^T$ is a row-vector
Please have a look at the matrix-calculus book by Magnus that I have cited in this answer:here
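As a quick numerical sanity check (my addition, not part of the original answer; plain Julia with the standard LinearAlgebra library), a finite-difference gradient of $f(x)=x^TBx$ matches $(B+B^T)x$:
using LinearAlgebra
n = 4
B = randn(n, n)
x = randn(n)
f(v) = dot(v, B * v)                        # f(v) = v' B v
g_closed = (B + B') * x                     # claimed gradient at x
h = 1e-6
E = Matrix{Float64}(I, n, n)                # standard basis vectors as columns
g_fd = [(f(x + h*E[:, i]) - f(x - h*E[:, i])) / (2h) for i in 1:n]
maximum(abs.(g_closed - g_fd))              # near zero (finite-difference error only)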
Thank you very much, this was the explanation I needed. I guess my biggest misstep was assuming a static definition of differentiation with the Jacobian. – Christopher Dorian Nov 17 '10 at 22:07
$\frac{\partial}{\partial x}$ is just the gradient.
$x^TBx$ is scalar, namely $\sum_i x_i \sum_j B_{ij} x_j = \sum_i \sum_j B_{ij} x_i x_j$. This can be simplified to
$\sum_i B_{ii} x_i^2 + \sum_{i<j} x_i x_j ( B_{ij} + B_{ji})$.
For any index $i$, the derivative with respect to $x_i$ becomes
$2 B_{ii} x_i + \sum_{j \neq i} x_j ( B_{ij} + B_{ji})$.
But for any $i$, the above term is just the i-th component of the vector
$(B + B^T)x$
You're right about suspecting that $\frac{ \partial \mathbf x^T }{\partial \mathbf x} \neq I$ because the dimensions no longer match up. I think you might be wading into tensor territory with such operations.
https://socratic.org/questions/how-do-you-use-the-squeeze-theorem-to-find-lim-sin-2x-x-2-as-x-approaches-infini | Calculus
Topics
# How do you use the Squeeze Theorem to find lim (sin^2x/x^2) as x approaches infinity?
As
$0 \le \sin^2 x \le 1 \implies \frac{0}{x^2} \le \frac{\sin^2 x}{x^2} \le \frac{1}{x^2}$
Now, using the squeeze theorem and taking the limit $x \to \infty$ in the last inequality, we have that
$\lim_{x\to\infty}\frac{0}{x^2} \le \lim_{x\to\infty}\frac{\sin^2 x}{x^2} \le \lim_{x\to\infty}\frac{1}{x^2}=0 \implies \lim_{x\to\infty}\frac{\sin^2 x}{x^2}=0$
https://math.stackexchange.com/questions/2164282/is-it-true-that-if-h-circ-r-tau-r-tau-circ-h-then-h-is-a-rotation/2164335 | Is it true that if $h\circ R_{\tau}=R_{\tau}\circ h$, then $h$ is a rotation?
Note: Every map mentioned in this problem is a homeomorphism $S^1\to S^1$.
I'm working on a problem from Katok/Hasselblatt, which is
Show that if $f$ is topologically conjugate to an irrational rotation $R_{\tau}$, then the conjugating homeomorphism is unique up to a rotation. That is, if $h_i\circ f=R_{\tau}\circ h_i$ for $i=1,2$, then $h_1\circ h_2^{-1}$ is a rotation.
Now, in the back of the book there is the following hint:
Hint: Show that $h_1\circ h_2^{-1}\circ R_{\tau}=R_{\tau}\circ h_1\circ h_2^{-1}$.
Now, this isn't difficult, because we can write by assumption
$$h_1^{-1}\circ R_{\tau}\circ h_1=f=h_2^{-1}\circ R_{\tau}\circ h_2,$$
from which rearranging gives us $h_1\circ h_2^{-1}\circ R_{\tau}=R_{\tau}\circ h_1\circ h_2^{-1}$.
Now, I have no idea how to see that $h_1\circ h_2^{-1}$ is a rotation from this fact. There doesn't seem to be anything special about $h_1\circ h_2^{-1}$ here, so I guess the problem boils down to
If $h\circ R_{\tau}=R_{\tau}\circ h$, then $h$ is a rotation.
However, I don't know how to show this. I haven't used at all the fact that $\tau$ is irrational, so that must be important. But I don't know how to use that fact. Can somebody help?
Since $\tau$ is irrational, all orbits of $R_{\tau}$ are dense. So, if you start with one orbit and then with another one, the images, say $p$ and $q$, of the two initial points of the two orbits are sufficient to determine the two conjugacies (you take the image of an orbit and the rest is obtained using denseness).
Now, $p-q$ (seen in the line) is how much you rotate from one orbit to the next, that is, $$h_1\circ h_2^{-1}=R_{p-q}.$$
https://www.math.nyu.edu/dynamic/calendars/seminars/mathematics-colloquium/393/ | # Mathematics Colloquium
#### Conformal Metrics of Prescribed Gauss Curvature
Speaker: Michael Struwe, ETH Zurich
Location: Warren Weaver Hall 1302
Date: Monday, October 8, 2012, 3:45 p.m.
Synopsis:
Given a Riemann surface $$\left ( M,g_0 \right )$$, viewed as a two-dimensional Riemannian manifold with background metric $$g_0$$, a classical problem in differential geometry is to determine what smooth functions $$f$$ on $$M$$ arise as the Gauss curvature of a conformal metric on $$M$$. When $$M = S^2$$ this is the famous Nirenberg problem. In fact, even when $$\left ( M,g_0 \right )$$ is closed and has genus greater than 1, this question so far has not been completely settled. In my talk I will present some new results for this problem.
http://eprints.iisc.ernet.in/7063/ | # Microstructural evolution in elastically inhomogeneous systems: I. A phase field formulation
Gururajan, MP and Abinandanan, TA (2006) Microstructural evolution in elastically inhomogeneous systems: I. A phase field formulation. [Preprint]
## Abstract
We present a phase field model suitable for studying phenomena, such as rafting, in which the elastic modulus mismatch (elasticinhomogeneity) plays a central role during microstructural evolution. This model requires a numerical technique for solving for the elastic stress fields in such inhomogeneous systems with, or without an applied stress. We present a technique, adapted from the literature on homogenisation, in which a periodic displacement field ($\mathbf{u}^{\star}$) and a homogeneous strain ($\mathbf {E}$) may be calculated consistent with a given macroscopic applied stress. We also describe an efficient Fourier transform based iterative technique for solving for $\mathbf {u}^{\star}$ and $\mathbf{E}$. We characterise this technique by comparing its results against known analytical results in a variety of settings.
Keywords: Phase field modeling; Microstructure; Homogenisation; Thin films; Elastic stress effects
https://math.stackexchange.com/questions/759032/proof-of-eckart-young-mirsky-theorem/759174 | # Proof of Eckart-Young-Mirsky theorem
Could someone please explain why in this Wiki page one says "we know that $\exists(k+1)$ dimension space $(v_1,v_2, \dots, v_n)$" ?
One needs to show that if $$\mathrm{rank}(B)=k$$, then $$\|A-B\|_2\geq\|A-A_k\|_2$$. This can be done as follows.
Since $$\mathrm{rank}(B)=k$$, $$\dim\mathcal{N}(B)=n-k$$ and from $$\dim\mathcal{N}(B)+\dim\mathcal{R}(V_{k+1})=n-k+k+1=n+1$$ (where $$V_{k+1}=[v_1,\ldots,v_{k+1}]$$ is the matrix of right singular vectors associated with the first $$k+1$$ singular values in the descending order), we have that there exists an $$x\in\mathcal{N}(B)\cap\mathcal{R}(V_{k+1}), \quad \|x\|_2=1.$$ Hence $$\|A-B\|_2^2\geq\|(A-B)x\|_2^2=\|Ax\|_2^2=\sum_{i=1}^{k+1}\sigma_i^2|v_i^*x|^2\geq\sigma_{k+1}^2\sum_{i=1}^{k+1}|v_i^*x|^2=\sigma_{k+1}^2.$$ From $$\|A-A_k\|_2=\sigma_{k+1}$$, one gets hence $$\|A-B\|_2\geq\|A-A_k\|_2$$. No contradiction required, Quite Easily Done.
EDIT An alternative proof, which works for both the spectral and Frobenius norms, is based on Weyl's theorem for eigenvalues (or more precisely, its analogue for singular values): if $$X$$ and $$Y$$ are $$m\times n$$ ($$m\geq n$$) and (as above) the singular values are ordered in decreasing order, we have $$\tag{1} \sigma_{i+j-1}(X+Y)\leq\sigma_i(X)+\sigma_j(Y) \quad\text{for }1\leq i,j\leq n, \; i+j-1\leq n$$ (this follows from the variational characterization of eigen/singular values; see, e.g., Theorem 3.3.16 here). If $$B$$ has rank $$k$$, $$\sigma_{k+1}(B)=0$$. Setting $$j=k+1$$, $$Y:=B$$, and $$X:=A-B$$ in (1) gives $$\sigma_{i+k}(A)\leq\sigma_i(A-B) \quad \text{for }1\leq i\leq n-k.$$ For the spectral norm, it is sufficient to take $$i=1$$. For the Frobenius norm, this gives $$\|A-B\|_F^2\geq\sum_{i=1}^{n-k}\sigma_i^2(A-B)\geq\sum_{i=k+1}^n\sigma_i^2(A)$$ with the equality attained, again, by $B=A_k$.
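As a numerical illustration (my addition, not part of the original answer; plain Julia with LinearAlgebra), the truncated SVD is never beaten by another rank-$k$ matrix in either norm:
using LinearAlgebra
m, n, k = 8, 6, 2
A = randn(m, n)
F = svd(A)
Ak = F.U[:, 1:k] * Diagonal(F.S[1:k]) * F.Vt[1:k, :]   # truncated SVD, rank k
B  = randn(m, k) * randn(k, n)                          # some other rank-k matrix
opnorm(A - Ak), opnorm(A - B)   # spectral norms: the first equals sigma_{k+1} and is never larger
norm(A - Ak), norm(A - B)       # Frobenius norms: the first is never larger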
• @AlexGaspare I'm not sure I've understood the question. $V_{k+1}$ is formed from the first $k+1$ right (not left, sorry) singular vectors of $A$, that is, first $k+1$ columns of $V$ from the SVD $A=U\Sigma V^*$; truncating this SVD determines the optimal approximation $A_k$. It does not have much to do with the rank of $B$, except that it has a non-trivial intersection with the nullspace of $B$ by the dimensional argument (if the sum of the dimensions of two subspaces of an $n$-dimensional subspace is larger than $n$, they have non-trivial intersection). Apr 18 '14 at 13:46
• @AlexGaspare Squaring norms is not needed, it's there just to avoid typesetting square roots. The numbers are all non-negative so it does not matter. If $C$ is a matrix and $\|x\|_2=1$, then $\|Cx\|_2\leq\|C\|_2$ simply because $\|C\|_2=\max_{\|x\|_2=1}\|Cx\|_2$ (so taking a particular $x$ in $\max$ is a lower bound for the $\max$). Apr 25 '14 at 13:04
• @gosbi It's because $x\in\mathcal{R}(V_{k+1})$, so it can be written as a linear combination of $v_1,\ldots,v_{k+1}$. Apr 3 '15 at 18:26
• @AlgebraicPavel Then, I don't understand where $A$ is, as for what I have understood, $v_i^*x$ is just a way to write $x$. I tried to compute it explicitly, but I got quite a different result: $$\| A x \|_2 = (Ax)^*(Ax) = x^*V\Sigma^*U^*U\Sigma V^*x = (V^*x)^*\Sigma^2(V^*x) = \sum_{i=1}^{k+1}\sigma_i^2\mid v_i^* x\mid$$ where the term in the sum is not squared, which makes a great difference in the following passage of the proof. Apr 4 '15 at 8:58
• @gosbi The problem of the last equality is that you forgot that square. Apr 4 '15 at 13:59
Here's a slightly simplified version of the proof on wiki.
As shown there, $\left \| A - A_k \right \|_2 = \sigma_{k+1}$. Now, suppose $\exists \ B$ such that $\mathrm{rank}(B) \leq k$ and $\left \| A - B \right \|_2 < \left \| A - A_k \right \|_2$. Then, $\text{dim} (\mathrm{null}(B)) \geq n-k$. Let $w \in \mathrm{null}(B)$, then $$\left \| A w \right \|_2 = \left \| (A - B)w \right \|_2 \leq \left \| A - B \right \|_2 \left \| w \right \|_2 < \sigma_{k+1} \left \| w \right \|_2$$ Also, for any $v \in \mathrm{span}\{ v_1,v_2,\cdots,v_{k+1} \}$, $\left \| A v \right \|_2 \geq \sigma_{k+1} \left \| v \right \|_2$. But, $$\mathrm{null}(B) \cap \mathrm{span}\{ v_1,v_2,\cdots,v_{k+1} \} \neq \{0\}$$ So, $\exists$ a non-zero vector lying in both spaces. This'll lead to a contradiction.
• Hi. Can you explain to me why $\left \| A - A_k \right \|_2 = \sigma_{k+1}$ ? I'll appreciate Nov 27 '20 at 15:56
The original statement of the Eckart-Young-Mirsky theorem on wiki is based on the Frobenius norm, but the proof is based on the 2-norm. Although the Eckart-Young-Mirsky theorem holds for all norms invariant under orthogonal transforms, I think it is necessary to add a proof based purely on the Frobenius norm, since it is even easier to prove than the one based on the 2-norm.
The following proof is replicated from these notes.
Since $||M-X_k||_F = ||U\Sigma V^\intercal - X_k||_F = ||\Sigma - U^\intercal X_k V ||_F$, denoting $N = U^\intercal X_k V$, an $m \times n$ matrix of rank $k$, a direct calculation gives $$||\Sigma-N||_F^2 = \sum_{i,j} |\Sigma_{i,j} - N_{i,j}|^2 = \sum_{i=1}^r |\sigma_i-N_{ii}|^2+\sum_{i>r}|N_{ii}|^2+\sum_{i\neq j} |N_{i,j}|^2$$ which is minimal when all the off-diagonal terms of $N$ are zero, as are all diagonal terms with $i > r$. Obviously, the minimum of the remaining terms is attained when $N_{ii} = \sigma_i$ for $i = 1,2,\cdots,k$ and all other $N_{ii}$ are zero.
• I think this proof is incorrect. Since $\mathrm{rank}(N) \le k$, all the $N_{ij}$ are coupled with one another. So it does not follow that the minimum should be attained when $N_{ij}=0$ for $i\ne j$. Oct 18 '16 at 5:43
http://mathhelpforum.com/calculus/83848-global-extrema.html | # Math Help - Global Extrema
1. ## Global Extrema
Find the absolute maximum and minimum of the function on the domain .
I found the critical point, but I am stuck on what to do with the boundaries. Can someone please explain.
Thanks!
2. Originally Posted by jffyx
Find the absolute maximum and minimum of the function on the domain .
I found the critical point, but I am stuck on what to do with the boundaries. Can someone please explain.
Thanks!
Plug in at the boundaries, then compare all the outputs: the value at the critical number and the values at the boundaries. Whichever is the largest will be your absolute max, and the smallest will be your absolute min.
Good luck!
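For illustration, here is a small sketch of that recipe. The original function and domain were posted as images and are not shown above, so a made-up example, f(x) = x^3 - 3x on [0, 2], is used instead.

```python
import numpy as np

# Hypothetical example standing in for the missing problem statement:
# f(x) = x^3 - 3x on [0, 2].  f'(x) = 3x^2 - 3 = 0 gives the critical point x = 1.
f = lambda x: x**3 - 3*x

candidates = [0.0, 1.0, 2.0]            # the two boundaries plus the critical point
values = {x: f(x) for x in candidates}

x_max = max(values, key=values.get)     # absolute maximum on the closed interval
x_min = min(values, key=values.get)     # absolute minimum
print(x_max, values[x_max])             # 2.0, 2.0
print(x_min, values[x_min])             # 1.0, -2.0
```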
http://www.win-vector.com/blog/2013/10/estimating-rates-from-a-single-occurrence-of-a-rare-event/ | # Estimating rates from a single occurrence of a rare event
Categories: Mathematics, Statistics, Tutorials
Elon Musk’s writing about a Tesla battery fire reminded me of some of the math related to trying to estimate the rate of a rare event from a single occurrence of the event (plus many non-event occurrences). In this article we work through some of the ideas.
Elon Musk wrote that the issues of the recent battery fire were: a significant impact from a large piece of debris (which would clearly have also been dangerous to a gasoline based vehicle) and a “1 fire in over 100 million miles for Tesla” operating record. There are tons of important questions as to what is a proper Apples to Apples comparison of vehicle safety, but what interested me is the minor issue: how biased is evaluating Tesla right after the first occurrence of a rare bad event?
Roughly: evaluating Tesla right after the first report of a rare bad event (called a “failure”) is biased against Tesla. It roughly doubles the perceived rate the event occurs at. The math (based on the Markov property) is quick. If the bad event has probability p (p small) then the expected number of such events in n trials is n*p. However, the expected number of events in n trials where we picked n such that the n-th trial has the event (n picked after the events happen) is (n-1)*p + 1 (the normal expectation for the first n-1 events and then the forced 1 for the last event). If n is such that n*p is near 1 (which is plausible for an observation near the first occurrence) then we see the scoring right after the first failure roughly double the perceived failure rate.
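A quick Monte Carlo sketch of that argument (not from the original post; p = 0.01 and n = 100 are made-up numbers chosen so that n*p is near 1):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, trials = 0.01, 100, 200_000

counts = []
for _ in range(trials):
    events = rng.random(n) < p
    if events[-1]:                    # keep only runs whose n-th trial is a failure
        counts.append(events.sum())   # total failures seen in those n trials

print(np.mean(counts))                # close to (n-1)*p + 1 = 1.99, about double n*p
```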
Now the correct way to work with duration in this sort of problem is using survival analysis, which treats duration as a continuous variable and doesn’t treat “100 million miles” as 100 million discrete events (the discrete treatment introduces small unnecessary problems in changing scale to “200 million half-miles” and so on). But, let’s stay with discrete events for fun. An important issue is: certain relations that are known for true values are only approximations for estimates, so it really matters what you estimate directly and what you infer indirectly.
For example: suppose we try to estimate directly expected duration to first event instead of event rate? Obviously if you know one of these you know the other. But the relationship between estimates of one to estimates of the other is a bit looser. So you really want to set up your experiments to directly estimate the one you care about.
Duration to first failure, estimated by watching for the first failure, can be computed as follows. The probability of seeing the first failure on the k-th observation is exactly (1-p)^(k-1) * p (you see k-1 successes followed by one failure). And if we see the first failure at the k-th step our natural estimate of the duration to first failure is k. The expected value of this sort of estimator summed over all possibilities is sum_{k>=1} k * (1-p)^(k-1) * p = 1/p.
The 1/p expected value of the estimate is exactly what you would hope for.
However instead suppose we use a similar “reasonable sounding” procedure to try and estimate the rate p instead of the duration to failure. We say that if the first failure is seen at the k-th step then the reasonable estimate for p is 1/k. This estimate ends up being sum_{k>=1} (1/k) * (1-p)^(k-1) * p = -p*ln(p)/(1-p).
That is: our estimation procedure’s expected value of estimate is -p*ln(p)/(1-p) instead of the correct (or unbiased) value of p. This is again an over-estimate, using the observed rate as the estimator is net-upward biased (as also shown in the expected number of failures argument).
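Both expectations are easy to confirm by simulation (a hedged sketch, not from the original post; the value p = 0.01 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
p, trials = 0.01, 1_000_000

# k = index of the first failure; numpy's geometric() has support 1, 2, 3, ...
k = rng.geometric(p, size=trials)

print(k.mean(), 1 / p)                              # duration estimate: unbiased, ~100
print((1.0 / k).mean(), -p * np.log(p) / (1 - p))   # rate estimate: biased well above p
```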
Also notice that the multiplicative bias term -ln(p)/(1-p) changes as we change scale of p. If instead of measuring 100 million 1-mile events we measured 200 million half-mile events our number of events bias would go up, but slowly enough that our total miles bias would go down. This is why in survival analysis durations are treated as continuous quantities, so changes in scale don’t change estimates. The simplest survival analysis would assume road debris is a constant hazard (one that doesn’t systematically go up or down with the age of the car) and therefore the survival function of the car would be the exponential distribution (note “survival” means avoiding the failure event which stops the observations, not living or dying).
The lesson is: you introduce biases by deciding when you calculate (right after a failure) and choosing what to estimate. But with some care you can get all of this right.
## One thought on “Estimating rates from a single occurrence of a rare event”
1. A very good article on this topic: “Estimating Rates of Rare Events at Multiple Resolutions,” Deepak Agarwal, Andrei Z Broder, Deepayan Chakrabarti, Dejan Diklic, Vanja Josifovksi, and Mayssam Sayyadian, KDD, 2007.
https://open.metu.edu.tr/handle/11511/43095 | Effect of water-filling method on the PAPR for OFDM and MIMO systems
2007-06-13
Vural, Mehmet
Akta, Tugcan
Yılmaz, Ali Özgür
In this paper, the peak-to-average power ratio (PAPR) problem for orthogonal frequency division multiplexing (OFDM) is investigated. The variations in the nature of the problem along with the utilization of water-filling technique are observed and the corresponding cumulative distribution function for PAPR is determined. In addition to OFDM analysis, another analysis is carried out for the comparison of water-filling technique and an equal power distribution algorithm in case of a multiple input multiple output (MIMO) system and the same PAPR problem is examined in the spatial diversity scenario as well.
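As a rough illustration of the quantity studied in the paper (this sketch is not taken from the paper; the subcarrier count, constellation and scaling below are arbitrary choices), the PAPR of a single OFDM symbol can be computed from an IFFT of random subcarrier symbols:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256                                    # number of subcarriers (illustrative)

# Random QPSK symbols on each subcarrier; OFDM time-domain symbol via IFFT.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)            # unit average power after scaling

power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(papr_db)                             # typically on the order of 8-11 dB here
```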
Suggestions
Effects of Non-Abelian Magnetic Fields on Pair Production in Flat and Curved Spaces Özcan, Berk; Kürkçüoğlu, Seçkin; Department of Physics (2021-9) Our objective in this thesis is to compute the pair production rates for both bosons and fermions under the influence of non-abelian gauge fields on the manifolds $\mathbb{R}^{3,1} \equiv \mathbb{R}^2 \times \mathbb{R}^{1,1}$ and $S^2 \times \mathbb{R}^{1,1}$. We will compare the pair production rates of the spherical cases with the flat ones, and also compare the non-abelian cases with the abelian ones to see effects of both curvature and non-abelian field strength on the pair production. We first review t...
Application of the spectroscopic modifications of Pippard relations to NaNO2 in the ferroelectric phase Yurtseven, Hasan Hamit (2001-01-01) This study examines a linear variation of the specific heat C-P with the frequency shifts 1/nu(partial derivative nu/partial derivativeT) for the Brillouin frequencies of the L-mode [010], [001] and [100] in the ferroelectric phase of NaNO2 according to our spectroscopically modified Pippard relation. We obtain this linear relationship for those modes studied and calculate dT(C)/dP in the ferroelectric phase of NaNO2. Our calculated values of dT(C)/dP for the [001] and [100] modes are in good agreement with...
Simplified MAP estimator for OFDM systems under fading Cueruek, Selva Muratoglu; Tanık, Yalçın (2007-04-25) This paper presents a simplified Maximum A Posteriori (MAP) estimator, which yields channel taps in OFDM systems under fading conditions using a parametric correlation model, assuming that the channel is frequency selective, slowly time varying and Gaussian. Expressions for the variance of estimation error are derived to evaluate the performance of the MAP estimator. The relation between the correlation of subchannels taps and error variance and the effect of Signal to Noise Ratio (SNR) are investigated. Th...
Cut-off Rate based Outage Probability Analysis of Frequency Hopping Mobile Radio under Jamming Conditions Güvensen, Gökhan Muzaffer; Tanık, Yalçın; Yılmaz, Ali Özgür (2010-11-03) This paper deals with the achievable spectral efficiency and outage analysis of short burst frequency hopping (FH) mobile radios under heavy jamming scenarios. With the use of outage probability analysis based on cut-off rate, the maximum spectral efficiency (bits / dimension) and the required number of radio frequency bursts (RFB) (or degrees of freedom) is determined in order to transmit a message reliably for a given target outage probability, number of information bits (message information length) and c...
Ofdm papr reduction with linear coding and codeword modification Susar, Aylin; Tanık, Yalçın; Department of Electrical and Electronics Engineering (2005) In this thesis, reduction of the Peak-to-Average Power Ratio (PAPR) of Orthogonal Frequency Division Multiplexing (OFDM) is studied. A new PAPR reduction method is proposed that is based on block coding the input data and modifying the codeword until the PAPR is reduced below a certain threshold. The method makes use of the error correction capability of the block code employed. The performance of the algorithm has been investigated through theoretical models and computer simulations. For performance evalua...
Citation Formats
M. Vural, T. Akta, and A. Ö. Yılmaz, “Effect of water-filling method on the PAPR for OFDM and MIMO systems,” 2007, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/43095.
https://jbuzzi.wordpress.com/tag/surface-dynamics/
## “Bowen factors” at Quimper
On the occasion of the Dynamical systems, probability and statistics day in Quimper, I presented the notion of a Bowen factor and its usefulness for the study of surface diffeomorphisms. In this way one can make the finite extensions constructed by Omri Sarig injective, losing only a weakly wandering part (a countable union of wandering sets).
## Dilation factors of pseudo-Anosov homeomorphisms
Erwan Lanneau gave a talk in Orsay about his work on surfaces of translation. Pseudo-Anosov homeomorphisms are homeomorphisms of such surfaces which are affine away from the singular set of the surface and whose differential is hyperbolic. The dilation factor is the dominant eigenvalue of that differential. It is a Perron number.
The minimum of the dilation factor for given genus is known for genera 1 and 2 only (the techniques behind them could be extended to genus 3 but not farther). One also knows that $c_1/g\leq \log \delta(g)\leq c_2/g$.
The main result of the talk is that the above does not hold when restricted to a given type of translation surfaces. More precisely, the moduli space of translation surfaces of given genus splits into connected components (at most three), one of them corresponding to hyperellipticity and the following holds:
Theorem (Boissy-Lanneau) Let $\Phi$ be a pseudo-Anosov on a hyperelliptic translation surface of genus g (i.e. admitting an involution with $2g+2$ fixed points). Assume that $\Phi$ has a unique singularity. Then its dilation factor is strictly greater than $\sqrt{2}$ (but approaches this value as $g\to\infty$).
## Discontinuity of the topological entropy for Lozi maps
I have shown that, like diffeomorphisms, piecewise affine surface homeomorphisms are approximated in entropy by horseshoes, away from their singularities. It follows in particular that their topological entropy is lower-semicontinuous: a small perturbation cannot cause a macroscopic drop in entropy.
The continuity of the entropy for such maps had been an open problem for some time. Rigorous numerical estimates by Duncan SANDS and Yutaka ISHII seemed to suggest some discontinuous drops, but investigation at a small scale suggested these drops to be steep yet continuous variations.
Izzet B. YILDIZ has solved this question by finding for Lozi maps $f_{a,b}(x,y)=(1-a|x|+by,x)$ on $\mathbb R^2$, small numbers $\epsilon_1,\epsilon_2>0$ such that, setting $(a,b)=(1.4+\epsilon_1,0.4+\epsilon_1)$, for all $0<\epsilon<\epsilon_2$:
• $h_{top}(f_{a,b})=0$;
• $h_{top}(f_{a+\epsilon,b})>\frac14\log\frac12(\sqrt{5}+1)$.
The verification turns out to be quite simple (once you know where to look!). The non-wandering set of $f_{a,b}$ is shown to be reduced to the fixed points of its fourth iterate, yielding the zero entropy immediately. $f_{a+\epsilon,b}$ on the other hand is shown to admit 2 disjoint closed quadrilaterals $U,V$ such that $f^4(U)$ hyperbolically crosses both $U$ and $V$ and $f^4(V)$ hyperbolically crosses $U$. This means that the sides of $U$ and $V$ can be labelled alternately s and u with the following property. The image of a u side crosses each of $U,V$ it meets, intersecting both their s sides and none of their u sides. This again yields the entropy estimate.
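For illustration only (this sketch is not from the post and does not verify the entropy claims; the starting point and iteration counts are arbitrary), one can iterate the Lozi map at the nearby parameter values $(a,b)=(1.4,0.4)$ and watch the orbit settle onto a short cycle:

```python
def lozi(x, y, a=1.4, b=0.4):
    """One step of the Lozi map f_{a,b}(x, y) = (1 - a|x| + b*y, x)."""
    return 1.0 - a * abs(x) + b * y, x

x, y = 0.1, 0.1                  # arbitrary starting point
for _ in range(2000):            # discard a transient
    x, y = lozi(x, y)

cycle = []
for _ in range(8):               # record a few further iterates
    x, y = lozi(x, y)
    cycle.append((round(x, 4), round(y, 4)))
print(cycle)                     # for these parameters the printed points repeat with period 4
```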
https://www.physicsforums.com/threads/cuttoff-wavelength.63652/ | # Cuttoff wavelength
1. Feb 12, 2005
### phy
I'm doing my modern physics assignment and the question asks me to calculate a few different things - one of them being the cutoff wavelength. My question is WHAT IS IT? I looked in my textbook and didn't really see anything about it so any help would be appreciated. Thanks
2. Feb 12, 2005
### dextercioby
It can be either one of these two things: either connected with the photoelectric effect, or with X-ray bremsstrahlung.
Tell us in what context it appears...
Daniel.
3. Feb 12, 2005
### phy
Yah perhaps I should have posted the question too
Light of wavelength 500nm is incident on a metallic surface. If the stopping potential for the photoelectric effect is 0.45 V, find a) the maximum energy of the emitted electrons, b) the work function, c) the cutoff wavelength.
4. Feb 12, 2005
### dextercioby
Okay, then. You should have known this issue from the textbook. It's the maximum wavelength the incident photon may have so as to still produce the photoelectric effect...
Daniel.
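For what it's worth, here is a small sketch (not from the thread) of the standard photoelectric relations applied to the numbers in the question; the constants are rounded values:

```python
# Standard photoelectric-effect relations for the numbers quoted in post #3.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # elementary charge, C

lam = 500e-9         # incident wavelength, m
V_stop = 0.45        # stopping potential, V

K_max = e * V_stop                 # (a) maximum kinetic energy of the electrons, J
phi = h * c / lam - K_max          # (b) work function, J
lam_cutoff = h * c / phi           # (c) cutoff wavelength, m

print(K_max / e, "eV")             # 0.45 eV
print(phi / e, "eV")               # about 2.03 eV
print(lam_cutoff * 1e9, "nm")      # about 610 nm
```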
https://talkstats.com/search/2085457/ | # Search results
1. ### GLM for proportional data in SPSS
Hi, I need to run a regression on a proportional DV in SPSS. It would seem I need to run a GLM, but I have no idea what model to use for my proportional DV in SPSS. I was wondering if anyone had any idea? Thank you! Tobias
http://cpr-hepph.blogspot.com/2013/07/13074681-anton-wiranata-et-al.html | ## Shear viscosity of hadrons with K-matrix cross sections [PDF]
Anton Wiranata, Volker Koch, Madappa Prakash, Xin Nian Wang
Shear viscosity $\eta$ and entropy density $s$ of a hadronic resonance gas are calculated using the Chapman-Enskog and virial expansion methods using the K-matrix parametrization of hadronic cross sections which preserves the unitarity of the $T$-matrix. In the $\pi$-$K$-$N$-$\eta$ mixture considered, a total of 57 resonances up to 2 GeV were included. Comparisons are also made to results with other hadronic cross sections such as the Breit-Wigner (BW) and, where available, experimental phase shift parametrizations. Hadronic interactions forming resonances are shown to decrease the shear viscosity and increase the entropy density leading to a substantial reduction of $\eta/s$ as the QCD phase transition temperature is approached.
View original: http://arxiv.org/abs/1307.4681
https://rd.springer.com/article/10.1007%2Fs42452-019-0726-7 | SN Applied Sciences
, 1:724
# Determination of the transverse modulus of cylindrical samples by compression between two parallel flat plates
• Linda K. Hillbrick
• Jamieson Kaiser
• Mickey G. Huson
• Geoffrey R. S. Naylor
• Elliott S. Wise
• Anthony D. Miller
• Stuart Lucas
Research Article
Part of the following topical collections:
1. 3. Engineering (general)
## Abstract
Over the last century, several different theoretical models have been proposed for the calculation of the transverse modulus of fibres or cylinders from compression experiments. Whilst they all give similar results, the differences are significant enough to cause errors in computer simulation predictions of composite properties, and hence the issue warrants further investigation. Two independent approaches were applied to clarify this. Firstly, using an experimental approach, compression tests have been carried out on model elastic cylinders of poly(methyl methacrylate) as well as cuboids machined from the cylinders. The transverse modulus of this hard elastic material was determined directly from compression experiments on the cuboids and by analysis using different models for the cylinder compression data. Since machining was shown to change the modulus by virtue of relieving stresses in the samples, comparison was made between cuboids and machined cylinders. The transverse modulus obtained by direct compression of the cuboids was statistically equivalent to that obtained from the cylinders using the Morris model and was within 8% of the value obtained using the model derived by Jawad and Ward, as well as the mathematically equivalent models derived by Phoenix and Skelton and Lundberg. Finally, the separate and independent approach of finite element numerical modelling was also utilised. The finite element approach gave results that lie between the Jawad and Ward and Morris models. The close agreement in the outcomes of the finite element modelling and the experimental approach leads to the conclusion that the most accurate of the different analytical models are the equations by Morris as well as those due to Jawad and Ward, Phoenix and Skelton and Lundberg.
## Keywords
Transverse modulus; Fibres; Elastic cylinders; Compression; Finite element analysis; PMMA
## 1 Introduction
In recent times, there has been a significant increase in the use of fibre-reinforced polymer composites in a broad range of industries, including aerospace, military, engineering and sporting equipment. These materials offer significant advantages, mainly because of their lightweight, high strength, high modulus, high heat tolerance and chemical resistance properties.
Typically, high-performance fibres such as carbon, glass, Kevlar, UHMWPE and Vectran are used in a matrix of epoxy resin. The high strength and stiffness of these fibres comes from the highly oriented structure along the fibre axis. A consequence of this orientation is that fibres exhibit considerable anisotropy, where the mechanical properties in the longitudinal (axial) direction are different from those in the transverse direction [1, 2, 3, 4, 5]. The fibres are considered to be transversely isotropic because their properties do not differ significantly in directions perpendicular to the fibre axis [1].
In order to maximise the usage of this class of materials, it has become common to use computer simulation techniques to predict the elastic behaviour of fibre composites in service [6, 7, 8, 9]. These simulations offer the opportunity to investigate a large range of parameters, which are often very difficult to access through experimental studies. The inputs required to model the material properties of a composite are accurate knowledge of the physical properties (elastic moduli, strength, conductivity, etc.) as well as the volume fractions of the individual components. Where the properties are anisotropic, they need to be known in all relevant directions. The material properties are generally well known or easily measured for the matrix material, which can be moulded into a test piece of suitable size and shape. The longitudinal properties of the reinforcing fibre are easily measured too; however, the transverse properties of the reinforcing fibre are considerably more difficult to determine. The internal structure and physical properties of reinforcing fibres are highly anisotropic, resulting in transverse mechanical properties which are vastly different from the corresponding longitudinal mechanical properties. Thus, for high-performance applications such as in aerospace, the lack of accurate information on the reinforcing fibres’ transverse mechanical properties can be a limiting factor in modelling the critical performance characteristics of the manufactured fibre-reinforced composite.
The single fibre transverse compression test, where a cylindrical fibre is compressed between a pair of plane parallel plates (Fig. 1), is a widely reported method for determining the transverse compression properties of highly oriented fibres. As visible in Fig. 1, in this orientation, the force does not act on a sample of constant width and so it is not straight forward to extract the transverse modulus from the slope of the force–distance relationship. Indeed, there is no reason to even suspect the force–distance relationship to be linear. The method was first demonstrated for fibres by Hadley et al. [10], who used experimental load–displacement measurements to determine the transverse elastic moduli of poly(ethylene terephthalate), polyamide and polypropylene monofilaments. This technique has since been applied to study the transverse compression properties of numerous high-performance fibres including carbon fibre [2, 11, 12], graphite [13], Kevlar and other aramids [1, 2, 3, 13, 14, 15, 16], polyesters [13, 17, 18, 19], polyamides [13, 14, 17, 19, 20], polypropylene [17] and ceramic fibres [2, 14].
Early work [10, 17, 19, 21] focussed on the measurement of the contact width between the fibre and the compressing body. These fibres were generally thick monofilaments which allowed the contact area to be readily measured as a function of load as the fibres were compressed between transparent glass plates mounted on a microscope stage. In addition to measuring the contact width during compression of monofilaments, Pinnock et al. [19] and Hadley et al. [17] extended the investigation to include measurements of the expansion in the horizontal diametric plane. An expression for the change in diameter parallel to the plane of contact was derived in terms of the compliance constants and the applied load.
Whilst these methods were sufficiently accurate for relatively large fibres, finer fibres such as carbon (< 8 µm diameter) required a different approach, viz. measuring the relative displacement of the loading plates. Over the course of 100 years, several researchers derived equations for this displacement in terms of the applied load and the elastic constants of the fibres [1, 13, 22, 23, 24, 25, 26] (Table 1). Of all these equations, the Jawad and Ward equation [23] appears to have gained favour in textile research, being used by several workers [2, 11, 12, 14, 15, 16, 18, 27] in the last two decades to extract stiffness and other elastic parameters from experimental data. However, all of these equations involve approximations of some kind and there has been little attempt to test and evaluate the validity of the separate models. Whilst they all give similar results the differences are significant enough to cause errors in computer simulation predictions of composite properties and hence the issue warrants further investigation.
Table 1
Analytical formulae relating the transverse compression (U) of anisotropic cylinders to the applied force per unit length (F)
| Year | Equation | Ref. | Eqn. # |
|---|---|---|---|
| 1907 | $U = \frac{4F}{\pi}\left(\frac{1}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)\left(\ln\left(\frac{2R}{b}\right) + \frac{1}{3}\right)$ | Foppl [24] | 2 |
| 1949 | $U = \frac{4F}{\pi}\left(\frac{1}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)\left(\ln\left(\frac{4R}{b}\right) - \frac{1}{2}\right)$ | Lundberg [25] | 3 |
| 1968 | $U = \frac{4F}{\pi}\left(\frac{1}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)\sinh^{-1}\left(\frac{R}{b}\right)$ | Morris [22] | 4 |
| 1974 | $U = \frac{4F}{\pi}\left(\frac{1}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)\left(\sinh^{-1}\left(\frac{2R}{b}\right) - \frac{1}{2}\right)$ | Phoenix and Skelton [13] | 5 |
| 1976 | $U = \frac{4F}{\pi}\left(\frac{1}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)\left(\ln\left(\frac{2R}{b}\right) + \frac{1}{2}\right)$ | Sherif et al. [26] | 6 |
| 1978 | $U = \frac{4F}{\pi}\left(\frac{1}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)\left(\sinh^{-1}\left(\frac{R}{b}\right) + 0.19\right)$ | Jawad and Ward [23] | 7 |
| 2004 | $U = \frac{4F}{\pi b^{2}}\left[\left(-\frac{\nu_{\text{T}}}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)\left(\sqrt{b^{2} + R^{2}} - R\right)R + \left(\frac{1}{E_{\text{T}}} - \frac{\nu_{\text{L}}^{2}}{E_{\text{L}}}\right)b^{2}\ln\frac{\sqrt{b^{2} + R^{2}} + R}{b}\right]$ | Cheng et al. [1] | 8 |
This study uses two independent approaches to compare these different equations. Firstly, macro-compression experiments have been carried out on model elastic cylinders of poly(methyl methacrylate) (PMMA) and the data analysed using all the models listed in Table 1. The results are compared to the transverse modulus determined directly on cuboids of PMMA machined from the cylinders. Preliminary results using this approach [28, 29] were inconclusive in delivering definitive support for one particular model. Subsequently, it has been identified that the technique used to correct for instrument compliance in these early experiments [28] may have been inadequate, particularly for hard samples. This has been addressed by using a video extensometer. The issue of the precision of compression testing as well as the potential effect on properties of the cylinders as a result of stress relief during machining are also now addressed.
In parallel with this experimental approach, three-dimensional finite element modelling has also been used to determine the transverse modulus. This engineering approach is commonly adopted for complex-shaped samples and in this instance has the advantage of being valid for fibres with any theoretically possible elastic parameters. The results from the finite element approach are applied to the specific example of a PMMA cylinder and thereby test the various analytical models.
## 2 Mathematical models
The theoretical foundations of this method stem from the classical solution for the contact stresses between two smooth isotropic elastic bodies, first reported in Hertz’s original papers in 1881 and 1882 and later reproduced and translated in his book [30, 31]. M’Ewen [21] extended Hertz’s contact solution to account for the stresses between two elastic cylinders in contact. He considered the problem to be that of two semi-infinite bodies in contact along a very long narrow strip under the condition of plane strain and showed that if the contacting cylinders had similar elastic properties, then the contact half width (b) could be expressed in terms of the Young’s modulus (E), the load per unit length (F), the radius of curvature (R) of the contacting cylinders and Poisson’s ratio (ν).
With an interest in fibres, Hadley et al. [10] further extended this work to account for the compression of anisotropic monofilaments between two rigid, parallel plates (Fig. 1). The anisotropic fibres possess transverse isotropy, the fibre axis being the axis of transverse isotropy. The contact half width (b) was expressed in terms of the material constants as shown in Eq. 1
$$b = \sqrt {\frac{4FR}{\pi }\left( {\frac{1}{{E_{\text{T}} }} - \frac{{\nu_{\text{L}}^{2} }}{{E_{\text{L}} }}} \right)}$$
(1)
where the subscripts T and L indicate the transverse and longitudinal direction, respectively.
As stated earlier, contact width could not be measured with sufficient accuracy for finer fibres, necessitating a different approach. Morris [22] built on M’Ewan’s work and derived an equation for the relative displacement of the plates (U) in terms of the elastic constants of the fibre and the applied load per unit length (F) (Table 1). This was followed by similar models derived by Phoenix and Skelton [13], Jawad and Ward [23] and more recently by Cheng et al. [1] (Table 1). Independent of the work on fibres, Sherif derived a similar equation for the compression of cylindrical carrots [26] and much earlier Foppl [24] and Lundberg [25] derived equations for the compression of cylindrical rollers and bearings. McCallion and Truong [32] later derived a more general equation for the compression of cylindrical rollers that reduced to either Foppl’s or Lundberg’s equation depending on the assumed pressure distribution. Other researchers [33, 34, 35] working on the compression of cylindrical rollers introduce an additional term to account for the contact deformation or deformation of the plate. We have ignored these latter works because for the compression of fibres the plate is generally much stiffer than the fibre, and hence deformation of the plate is expected to be minimal. The equations of interest in the compression of fibres between two parallel plates are shown in Table 1.
Which of these models is the most accurate? When the equations are plotted, we note that Eqs. 3, 5 and 7 are indistinguishable from each other (Fig. 2). This is not surprising if we note that quite generally
$$\sinh^{ - 1} \left( {\frac{R}{b}} \right) = Ln\frac{{\sqrt {b^{2} + R^{2} } + R}}{b}$$
(9)
and so for R ≫ b
$$Ln\left( {\frac{2R}{b}} \right) \approx \sinh^{ - 1} \left( {\frac{R}{b}} \right)$$
(10)
and thus Eqs. (5) and (7) both transform to:
$$U = \frac{4F}{\pi }\left( {\frac{1}{{E_{\text{T}} }} - \frac{{\nu_{\text{L}}^{2} }}{{E_{\text{L}} }}} \right)\left( {Ln\left( {\frac{4R}{b}} \right) - \frac{1}{2}} \right)$$
(11)
which is the equation derived by Lundberg in 1949.
On the other hand, the models due to Foppl and Sherif et al. predict fibre properties that are slightly less stiff and Morris’s, as well as Cheng’s model, predict stiffer fibres (Fig. 2).
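A short numerical check of this near-equivalence (not part of the original paper; the R/b values are arbitrary): the bracketed geometric factors of Eqs. 3, 5 and 7 agree closely once R ≫ b, whilst the Morris bracket (Eq. 4) is smaller by roughly 0.19, which is why Eq. 4 returns a stiffer fibre from the same data.

```python
import numpy as np

Rb = np.array([10.0, 50.0, 200.0])      # R/b, large when the contact strip is narrow

lundberg = np.log(4 * Rb) - 0.5         # bracket in Eq. 3
phoenix  = np.arcsinh(2 * Rb) - 0.5     # bracket in Eq. 5
jawad    = np.arcsinh(Rb) + 0.19        # bracket in Eq. 7
morris   = np.arcsinh(Rb)               # bracket in Eq. 4, for comparison

# The first three columns agree to about three decimal places; the Morris
# bracket is smaller by ~0.19 (ln 2 - 1/2), hence a higher fitted modulus.
print(np.c_[lundberg, phoenix, jawad, morris])
```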
Phoenix and Skelton [13] questioned the manner in which Morris [22] superimposed the stress fields as well as his neglect of certain terms in the derivation and some of his approximations. Jawad and Ward point out that their result is very similar to that previously obtained by Foppl [24], the difference being due to the fact that Foppl assumed a parabolic distribution of stress in the contact zone, whereas Jawad and Ward followed Hertz and assumed an elliptical stress distribution. McCallion and Truong [32] derived a more general relationship (Eq. 12) for the compression of a cylinder, which included a term β which described the pressure distribution across the contact:
$$U = \frac{4F}{\pi }\left( {\frac{1}{{E_{\text{T}} }} - \frac{{\nu_{\text{L}}^{2} }}{{E_{\text{L}} }}} \right)\left( {Ln\left( {\frac{2R}{b}} \right) - 1 + 0.5\left[ {\varPsi \left( {\beta + 1.5} \right) - \varPsi \left( {0.5} \right)} \right]} \right)$$
(12)
where $$\varPsi\left(z\right)={\text{d}}\left({Ln\left[{\varGamma\left(z\right)}\right]}\right)/{\text{d}}z$$ and z is the vertical distance, along the diameter, from the surface of the cylinder and $$\varGamma(z)$$ is the Gamma function.
For the special case of β = 0.5 which describes an elliptical pressure distribution, the equation reduces to Lundberg’s equation and for β = 1 which describes a parabolic pressure distribution, the equation reduces to that due to Foppl.
All of the analytical formulae in Table 1 make a number of assumptions in their derivations. These include the system being in a state of plane strain and the applied force taking on a particular distribution over the contact width. Equations (2)–(7) additionally make the assumption that the contact width is small compared with the fibre’s radius (b ≪ R) resulting in formulae that only depend on the elastic parameters ET, EL and νL and do not include the additional elastic constant νT, the transverse Poisson’s ratio. Note too that whilst the equations in Table 1 all contain the half contact width (b) it is not necessary to measure b directly as it is a function of F, ET, EL and νL (Eq. 1). Equation 1 can be substituted into Eqs. 2–8, and, provided EL and νL are known, ET can be calculated directly from the force per unit length (F) versus diametric compressive displacement (U) data (Fig. 2).
## 3 Experimental and finite element modelling methods
### 3.1 Materials
Poly(methyl methacrylate) (PMMA) was selected as representative of a hard elastic material. Cylindrical rods of the material (EFM Plastics), with a diameter of 25 mm, were cut into lengths of 50 mm. After testing, some of the cylinders were machined into new cylinders with a diameter of 20 mm and some were machined into cuboids with sides of 15 mm.
### 3.2 Compression
#### 3.2.1 General
Compression was carried out on an Instron tester (model 5967) fitted with an advanced non-contacting video extensometer (model 2663-821) to avoid instrument compliance, which was a significant factor for the compression of the stiff PMMA samples. Direct measurement of the sample strain overcomes the problem of instrument compliance. Samples were compressed both axially and transversely at a rate of 25%/min to a strain level of 2–5%, and data were collected using Bluehill 3 software. The reflective spots used by the video extensometer were placed on metal plates above and below the samples so that the effective gauge length of the sample was the cylinder diameter, cylinder length, cuboid length or cuboid width (Fig. 3). Care was taken to adjust the platens parallel to one another by bringing them together under a load of 500 N before final tightening of the locking nuts. Once the Instron was set up, all tests were done with no further adjustment in order to minimise any effects due to instrument setup.
#### 3.2.2 Poisson’s ratio
For the compression experiments to measure Poisson’s ratio, two sets of reflective spots were placed on the cuboid samples such that both the axial and the transverse strain could be measured (Fig. 3c). The Poisson’s ratio was obtained by measuring both the axial compression and the transverse expansion of a cuboid during an axial compression test. The slope of the strain–force curve was measured between 2000 and 12,000 N, and Poisson’s ratio was calculated [36] from:
$$\nu_{\text{L}} = \frac{{{\text{d}}\varepsilon_{\text{transverse}} /{\text{d}}F}}{{{\text{d}}\varepsilon_{\text{axial}} /{\text{d}}F}}$$
(13)
The average of 10 independent tests on each of two of the PMMA cuboids is reported.
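As an illustrative sketch only (the force and strain numbers below are invented, not the measured PMMA data), Eq. 13 amounts to taking the ratio of two fitted slopes:

```python
import numpy as np

# Hypothetical force-strain records from an axial compression test (Sect. 3.2.2),
# with both strains taken as positive magnitudes as in Eq. 13.
force = np.linspace(2000, 12000, 20)        # N
eps_axial = 4.0e-8 * force                  # axial compressive strain (magnitude)
eps_trans = 1.6e-8 * force                  # transverse expansion strain

slope_axial = np.polyfit(force, eps_axial, 1)[0]
slope_trans = np.polyfit(force, eps_trans, 1)[0]

nu_L = slope_trans / slope_axial            # Eq. 13
print(nu_L)                                 # 0.4 for these made-up numbers
```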
#### 3.2.3 Axial compression
Ten axial force compression experiments were performed on each of ten original cylinders, five machined cylinders and five cuboid samples. Care was taken to position the samples centrally on the bottom plate (on the axis of the load cell) before each test. The moduli were determined from the linear slope of the stress strain curves in the region from 10 to 30 MPa.
#### 3.2.4 Transverse compression
Transverse force compression experiments were performed 10 times in both orthogonal directions on each of the cuboids. The moduli of the cuboids in the transverse direction were determined from the linear slope of the stress strain curves in the region from 10 to 20 MPa.
Transverse moduli for the cylinders were obtained from the transverse force compression data by fitting the equations in Table 1 to the data using the experimentally determined values of EL (Table 3) and νL. For the Cheng equation, νT was assumed to be equal to νL. Note that the modulus is not very sensitive to νT; a 10% change results in less than a 1% change in modulus. The raw data contain data prior to contact and due to the imperfect alignment of the compression plates, the data in the initial part of the force compression curve may be unreliable [2]. Furthermore, there is considerable uncertainty in the displacement value at the zero load position [2, 18]; thus, a method is needed to reliably pick the contact point. Similar to the method used by Kotani et al. [18], we included an offset value ΔU in the equations shown in Table 1 in order to take into account both the zero load data and the uncertainty in the contact point. This allowed the data to be fitted simultaneously for both the transverse modulus and the offset (ΔU) using non-linear least squares routines written in Matlab (MathWorks, Massachusetts, USA). The data have been fitted in the region from 0.5 to 1.5% strain. This avoids the imprecise initial data and the non-elastic behaviour at higher strains. Kar et al. [27] compressed composite rods and showed that the Jawad and Ward equation gave a steady state value of ET between 0.4 and 0.7% strain. Ten transverse force compression experiments were performed on each of the ten original cylinders and five machined PMMA cylinders.
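The fitting step can be sketched as follows (a Python/SciPy analogue of the Matlab routines described above, not the authors' code; the radius, moduli and noise level are illustrative, and synthetic data stand in for a measured force-compression curve):

```python
import numpy as np
from scipy.optimize import curve_fit

R, E_L, nu_L = 0.010, 3.7e9, 0.40              # fixed: radius (m), E_L (Pa), nu_L

def U_jw(F, E_T):
    """Diametric compression U(F): Eq. 7 with the contact half-width b from Eq. 1."""
    k = 1 / E_T - nu_L**2 / E_L
    b = np.sqrt(4 * F * R / np.pi * k)
    return 4 * F / np.pi * k * (np.arcsinh(R / b) + 0.19)

def model(F, E_T_gpa, dU_um):
    """Fit function: transverse modulus in GPa plus a displacement offset in micrometres."""
    return U_jw(F, E_T_gpa * 1e9) + dU_um * 1e-6

# Synthetic "measured" curve (true E_T = 3 GPa, offset 5 um, 1 um of noise).
F_data = np.linspace(2e4, 2e5, 50)             # line load, N per metre of length
U_data = model(F_data, 3.0, 5.0) + np.random.default_rng(4).normal(0, 1e-6, 50)

(E_T_fit, dU_fit), _ = curve_fit(model, F_data, U_data, p0=(1.0, 0.0))
print(E_T_fit, dU_fit)                         # close to 3.0 (GPa) and 5 (um)
```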
### 3.3 Finite element modelling
Finite element techniques were adopted to model the transverse compression of a cylindrical body. Using this approach, the full elastic parameter space for a transversely isotropic material was explored. Noting the bounds on the elastic parameter space for isotropic materials (E > 0, and − 1 ≤ ν ≤ 0.5) and in the transverse isotropic case (ET, EL > 0, and − 1 < νT < 1 − 2νL2/Erat, where Erat = EL/ET is a measure of the fibre’s degree of anisotropy) [37], it was decided to uniformly sample 1/Erat, νT and νL. To do this, samples were iteratively generated until the full theoretical parameter space had been explored. This required 1008 samples in total. A full 3D model was used, thus avoiding the need to invoke the plane strain assumption. Small displacement elasticity, with no friction at the plate-cylinder interface, was used.
To compare the results of the finite element modelling with that of the analytical models, it is noted that the finite element modelling computes a force for a given displacement, whereas the analytical formulae perform the inverse operation. Hence, for comparison purposes, using a particular example the 3D finite element model was first used to compute a force at a 2% nominal strain, and then the analytical model used to invert this force. The absolute relative error was then computed between the original 2% strain and the inverted strain.
## 4 Results and discussion
### 4.1 Instrumentation issues
Preliminary work by Hillbrick et al. [28] used platen on platen compression curves to correct for compliance; however, it subsequently became apparent that this was inadequate and all further work used a video extensometer for greater accuracy of strain measurement. Setting the Instron up to do compression tests involves securing the bottom compression plate to the instrument frame and the top compression plate to the load cell. In both cases, the plates are screwed into position and secured with locking nuts. Tests to assess the precision of compression testing highlighted that during this assembly alignments can change, which in turn significantly affects the results. Clearly, for any comparisons to be made between transverse moduli of cylinders and cuboids, the Instron setup must remain unchanged. The use of a video extensometer and an instrument that remained unchanged throughout the experiment allowed any inaccuracies in measurement to be minimised.
### 4.2 Poisson’s ratio
Axial compression of a cuboid whilst measuring the strain in both the axial and transverse direction yields the result shown in Fig. 4. The strain in both directions varies linearly with the application of increased load. Poisson’s ratio, given by the ratio of the slope of the transverse curve to that of the axial curve, was shown to be 0.399 ± 0.018 standard deviation. This compares well with values in the literature of 0.40 ± 0.02 for injection moulded PMMA [36].
### 4.3 Effect of machining
Examination of the PMMA samples under crossed polars showed that residual stress was clearly present, as indicated by the coloured bands (stress decreases in the order blue, red, yellow, white and black) (Fig. 5). After machining, there is a reduction in stress levels for both the cylinders and the cuboids. The experimental design involves comparison of the transverse modulus of the PMMA, as measured directly by compression of a machined cuboid, with the results obtained by analysing the data from the compression of cylinders using the equations in Table 1. Thus, it is important to establish whether the removal of stress by machining results in a change in modulus of the material. To this end, five 25-mm-diameter cylinders were tested transversely and longitudinally (axially), machined to smaller diameter (20 mm) cylinders and then retested as before.
Load displacement curves from the transverse compression of PMMA cylinders were fitted using the Jawad and Ward equation in Table 1, fixing EL and νL and allowing ET and the offset ΔU to vary. The data were well fitted with R2 values > 0.99, Fig. 6 showing a typical stress–strain curve for the transverse compression of a cylinder fitted with the Jawad and Ward equation (this equation was chosen initially because of the results presented in later sections).
Table 2 shows these results, as well as the results from testing an additional five cylinders and the cuboids machined from these cylinders. The transverse moduli of the original cylinders are reasonably consistent, with a coefficient of variation of less than 1% for the ten cylinders tested (Table 2). The precision of measuring the longitudinal moduli was slightly poorer with a coefficient of variation of less than 2%. Note that the lower transverse modulus (2965 MPa) compared to the longitudinal modulus (3509 MPa) is consistent with some orientation along the length of the PMMA rod. As noted earlier, examination of the samples under crossed polars showed some birefringence, supporting the existence of orientation.
Table 2
Transverse (ET) and longitudinal (EL) Young’s modulus of original (ϕ = 25 mm) and machined (ϕ = 20 mm) cylinders as well as cuboids machined from cylinders 6–10
| Cylinder | Original ET (MPa) | SD | Original EL (MPa) | SD | Machined cylinder ET (MPa) | SD | Machined cylinder EL (MPa) | SD | Machined cuboid ET (MPa) | SD | Machined cuboid EL (MPa) | SD | Difference ET (%) | Difference EL (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 2992 | 31 | 3459 | 76 | 3080 | 68 | 3693 | 35 | | | | | 3 | 7 |
| 2 | 2951 | 51 | 3450 | 89 | 3083 | 68 | 3727 | 35 | | | | | 4 | 8 |
| 3 | 2984 | 47 | 3473 | 45 | 3107 | 75 | 3724 | 37 | | | | | 4 | 7 |
| 4 | 2951 | 48 | 3514 | 88 | 3158 | 22 | 3742 | 40 | | | | | 7 | 6 |
| 5 | 2983 | 33 | 3445 | 68 | 3089 | 143 | 5240 | 152 | | | | | 4 | 52 |
| 6 | 2946 | 49 | 3596 | 237 | | | | | 3135 | 320 | 3702 | 40 | 6 | 3 |
| 7 | 2944 | 28 | 3554 | 73 | | | | | 2870 | 95 | 3763 | 39 | −3 | 6 |
| 8 | 2956 | 21 | 3521 | 74 | | | | | 2849 | 134 | 3761 | 22 | −4 | 7 |
| 9 | 2957 | 39 | 3519 | 53 | | | | | 2782 | 85 | 3723 | 59 | −6 | 6 |
| 10 | 2984 | 29 | 3555 | 88 | | | | | 2700 | 247 | 3730 | 30 | −10 | 5 |
| Average | 2965 | 38 | 3509 | 89 | 3103 | 75 | 3722* | 37* | 2867 | 176 | 3736 | 38 | | |
| SD | 19 | | 51 | | 32 | | 21* | | 164 | | 26 | | | |
| CV (%) | 0.6 | | 1.5 | | 1.0 | | 0.6* | | 5.7 | | 0.7 | | | |
Transverse modulus of cylinders determined using the Jawad and Ward equation
*Values from cylinder 5 omitted
All five cylinders show a significant increase in both transverse and longitudinal modulus upon machining to a smaller diameter (Table 2), presumably as a result of the release of stress brought about by machining. The transverse modulus of the cylinders increased by 4.4% on machining, and, ignoring the anomalous longitudinal modulus for cylinder 5, the longitudinal modulus has increased by about 7%. Note that the equations in Table 1 are relatively insensitive to longitudinal modulus, and thus the result for the machined cylinder 5 does not lead to a transverse modulus that is an outlier. The result has, however, been disregarded when averages are calculated (Table 2). Comparable to the machined cylinders, the longitudinal modulus of the cuboids (3736 MPa) is slightly greater (5%) than the cylinders from which they were machined (3549 MPa) (Table 2). Thus, machining leads to stress relaxation and an increase in modulus, requiring that in order to test the different models we need to compare the data from the cuboids with that from the machined cylinders. Encouragingly, the longitudinal modulus of the cuboids is only 0.4% larger than that of the machined cylinders (3721 MPa after removing the anomalous data point). This difference is not statistically significant. These results are consistent with those of researchers studying soda–lime glass [38], who observed that reducing the tensile stress in a sample led to an increased modulus.
### 4.4 Comparison of models
Table 2 shows the results for both the transverse and the longitudinal moduli of the cuboids machined from the cylinders. The transverse modulus of the five cuboids was 2867 ± 164 MPa and the longitudinal modulus 3736 ± 26 MPa. A typical stress–displacement curve for the transverse compression of a PMMA cuboid is shown in Fig. 7.
Load–displacement curves from the transverse compression of the machined cylinders were also fitted using the rest of the equations in Table 1 (all R² values > 0.99).
Since the release of stress brought about by machining results in an increase in modulus, the values obtained by the direct transverse compression of the cuboids need to be compared to the values for the 20-mm-diameter cylinders, obtained using the different models. When this is done (Table 3), the best match is the model due to Morris, which gives a result that is within approximately 1% of the value obtained directly. A Student's t test gives a p value of 0.654, showing that statistically there is no difference between the value obtained using Morris's equation and the value obtained directly by compression of the cuboids. The next best match is obtained when the model derived by Jawad and Ward is used, resulting in a transverse modulus 8% higher than that obtained directly. A Student's t test shows that this difference is significant. Remember too that the models derived by Phoenix et al. and Lundberg are mathematically equivalent to that derived by Jawad and Ward, so these equations also predict a transverse modulus that is 8% too high. The Cheng equation predicts a modulus that is 10% too low whilst Foppl's and Sherif et al.'s equations predict values that are too high by 13% and 19%, respectively.
Table 3
Transverse Young’s modulus (ET) of machined (ϕ = 20 mm) cylinders calculated using different models and compared to the transverse modulus of the cuboid (2867 ± 164 MPa)
| Model | n | ET (MPa), ϕ = 20 mm | SD | Difference from cuboid (%) | p | Significant |
|---|---|---|---|---|---|---|
| Cheng | 5 | 2594 | 26 | −9.5 | 0.020 | y |
| Morris | 5 | 2903 | 32 | 1.3 | 0.654 | n |
| Jawad and Ward | 5 | 3103 | 32 | 8.2 | 0.031 | y |
| Foppl | 5 | 3248 | 33 | 13.3 | 0.006 | y |
| Sherif | 5 | 3415 | 34 | 19.1 | 0.001 | y |
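The p values in Table 3 can be reproduced approximately from the summary statistics alone, as in the short sketch below. The paper does not state which t-test variant was used; the Welch form is assumed here, and the Morris row is used as the example.

```python
from scipy.stats import ttest_ind_from_stats

# Cuboid reference value and the Morris-model estimate (mean, SD, n) from Table 3
cuboid = dict(mean1=2867.0, std1=164.0, nobs1=5)
morris = dict(mean2=2903.0, std2=32.0,  nobs2=5)

t, p = ttest_ind_from_stats(**cuboid, **morris, equal_var=False)  # Welch's t test
print(t, p)   # p comes out close to the 0.654 quoted for Morris
```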
Sherif et al. [26] also compressed PMMA cylinders and compared the results obtained using their equation with those obtained using the equations of Foppl [24] and also Lundberg [25] (which they ascribe to Dorr [39]). Their results show the Sherif equation to be the best predictor, followed by that of Foppl and finally Lundberg. It should be noted, however, that Sherif et al. [26] do not mention compliance issues, nor instrument setup problems, which we found to be significant. Furthermore, they assumed the PMMA cylinders to be isotropic, measuring the modulus by an axial compression of the cylinders. PMMA cylinders are almost always anisotropic with EL > ET (see Table 2). Using an experimental modulus that is too high will automatically favour the Sherif model, which predicts the highest modulus (Table 3). The results from the preliminary work of Hillbrick et al. [28] favoured the Cheng et al. equation; however, these data were corrected for compliance using platen-on-platen compression curves which showed more than 300 micrometres of compliance at the loads needed to compress the PMMA cylinders. The compression of the PMMA cylinders was approximately 400 micrometres; hence, a correction that is almost as large as the actual measurement is unlikely to result in very accurate data. Later work using a video extensometer [29] recommended either the Jawad and Ward equation or the Foppl equation, although in this work attention was not paid to the effects of instrument setup.
### 4.5 Finite element modelling
Figure 8 shows the absolute relative errors in displacement between the results obtained using the three-dimensional finite element modelling and that obtained using two of the analytical models, namely (a) Jawad and Ward and (b) Cheng et al. Recall that the finite element calculations relate to a nominal 2% state of strain.
Figure 8a illustrates that in the case of the Jawad and Ward model, the relative difference from the finite element modelling can be determined fully based on the non-dimensional fraction νL/√Erat. The differences are smallest when νL/√Erat is close to zero and increase as νL/√Erat increases or decreases. In the practical example of the PMMA cylinders used in the experimental section of the work, νL, EL, ET were measured experimentally to be 0.399, 3736 MPa and 2867 MPa, respectively. This gives a value of 0.35 for the x axis (i.e. νL/√Erat) in Fig. 8a. At a 2% nominal strain, the finite element model computes an absolute difference of 5% relative to the Jawad and Ward analytical model (Fig. 8a). Thus, for the same force, or stress, when the finite element model gives 2% strain the Jawad and Ward model will predict either 2.1% (5% higher) or 1.9% (5% lower). Considering the stresses at these strains, we note that the stress predicted by the finite element model, at 2% strain, will be 6% higher or lower than that predicted by the Jawad and Ward model.
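The 0.35 quoted above can be checked directly, assuming Erat denotes the modulus ratio EL/ET (the only reading consistent with the quoted value):

```python
# Quick check of the abscissa value quoted above for the PMMA example
nu_L, E_L, E_T = 0.399, 3736.0, 2867.0
E_rat = E_L / E_T
x = nu_L / E_rat**0.5
print(round(x, 2))   # 0.35, the value read off the horizontal axis of Fig. 8a
```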
Using the graphical representation of the different analytical models in Fig. 2, the two possible options for positioning of the finite element model would place it either slightly below the curve corresponding to the Morris equation (approximately 8% higher stress at 2% strain than Jawad and Ward) or close to the curve corresponding to the Foppl equation (5% lower stress at 2% strain than Jawad and Ward).
Turning to the relationship between the absolute relative difference between the Cheng et al. model and the finite element modelling shown in Fig. 8b, the difference is now a function of both νL/√Erat and νT, i.e. forming a surface in νL/√Erat and νT space. This surface has been projected and averaged into the hexagon-bin plot in Fig. 8b. Compared to Fig. 8a, it can be seen that in this case the relative errors follow a more complex pattern with the minimal difference (schematically shown as the hexagons shaded in dark blue in Fig. 8b) forming an approximate parabola passing through the points (± 0.5, − 1.0) and (0.0, − 0.5) in (νL/√Erat, νT) space. Again, in the practical example of the PMMA cylinders used in the experimental section of the work, the relevant value for the x axis (i.e. νL/√Erat) in Fig. 8b is also 0.35. The transverse Poisson’s ratio, νT, i.e. the value for the vertical axis in Fig. 8b, has not been independently determined. However, it is expected to be in the range 0.2–0.5 [40, 41] and a reasonable first-order approximation is that νT = νL ≈ 0.4. Using this value, Fig. 8b indicates that in this example the absolute relative difference between the strain computed by the Cheng et al. model and the finite element modelling is approximately 15%. This translates to a finite element model curve that at 2% strain would have a stress either 3% or 48% above the Jawad and Ward equation.
Considering both sets of results, the most probable positioning of the finite element model curve is 3% above (from comparison with the Cheng et al. model) or 6% above (from comparison with the Jawad and Ward equation model) the Jawad and Ward equation. In other words, we must conclude that the finite element modelling result lies between the Jawad and Ward and Morris models. Encouragingly, this independently determined result is consistent with the outcome of the experimental approach.
## 5 Conclusions
Two independent approaches were undertaken to compare the different models, one being an experimental approach and the other using finite element modelling. In the case of the experimental approach, the difficulties in carrying out accurate compression tests have been highlighted. The major issues are correcting for instrument compliance and setting up the instrument with platens that are parallel to each other and at right angles to the applied force. These issues were addressed by using a video extensometer and keeping the instrument setup constant for all measurements.
PMMA cylinders were machined into smaller diameter cylinders as well as cuboid shapes, and both cylinders and cuboids were compressed both axially and transversely. Machining of the initial cylinders to a smaller diameter showed that machining resulted in an increase in the transverse modulus of 4% and an increase in longitudinal modulus of 7%. These increases were ascribed to a release of stress in the cylinders upon machining. The release of stress was confirmed by viewing both the initial and the machined samples under crossed polars.
Transverse compression of a new set of cylinders yielded force–compression curves after which the cylinders were machined into cuboids and tested. The data from the transverse compression of the cuboids were used to calculate a transverse modulus of 2867 ± 164 MPa for the machined PMMA. Data from the axial compression of the cuboids were used to calculate a Poisson’s ratio of 0.399.
The transverse force–compression curves of the machined cylinders were analysed using several different theoretical models which allow the calculation of the transverse modulus of fibres/cylinders from compression experiments. These values were compared to the value obtained from the transverse compression of the cuboids. The model developed by Morris was shown to best fit the experimental transverse compression data giving a corrected transverse modulus of 2888 MPa which was 1% higher, but statistically identical (p = 0.654) to the value obtained by direct compression of the cuboid. The Jawad and Ward model, as well as the mathematically equivalent models derived by Phoenix et al. and Lundberg, gave results that were within 8% of the value obtained from the transverse compression of the cuboids although these results were no longer statistically identical (p = 0.031).
A separate and independent approach using finite element numerical modelling was adopted, and the results were compared with two of the analytical models. The finite element approach gave results that lie between the Jawad and Ward and Morris models. The close agreement in the outcomes of the finite element modelling and the experimental approach is indeed most encouraging. Considering the results from both approaches, and given the difficulties of carrying out compression tests and the uncertainties involved in several of the measurements, we conclude that the most accurate of the different analytical models are the equations by Morris as well as those due to Jawad and Ward, Phoenix et al. and Lundberg.
## Notes
### Acknowledgements
We would like to acknowledge CSIRO for funding this work and Niall Finn for critical reading of the manuscript and helpful suggestions.
### Conflict of interest
The authors declare that they have no conflict of interest.
## References
1. Cheng M, Chen WM, Weerasooriya T (2004) Experimental investigation of the transverse mechanical properties of a single Kevlar KM2 fiber. Int J Solids Struct 41(22–23):6215–6232
2. Kawabata S (1990) Measurement of the transverse mechanical properties of high-performance fibers. J Text Inst 81(4):432–447
3. Lim J et al (2010) Mechanical behavior of A265 single fibers. J Mater Sci 45(3):652–661
4. Morton WE, Hearle JWS (1993) Physical properties of textile fibres, 3rd edn. The Textile Institute, Manchester
5. Pinnock PR, Ward IM (1963) Dynamic mechanical measurements on polyethylene terephthalate. Proc Phys Soc 81(2):260–275
6. Coleman JN et al (2006) Small but strong: a review of the mechanical properties of carbon nanotube–polymer composites. Carbon 44:1624–1652
7. Mittal V (ed) (2013) Modeling and prediction of polymer nanocomposite properties. Wiley, New York
8. Tan P, Tong L, Steven GP (1997) Modelling for predicting the mechanical properties of textile composites—a review. Compos A 28A:903–922
9. Torquato S (2000) Modeling of physical properties of composite materials. Int J Solids Struct 37:411–422
10. Hadley DW, Ward IM, Ward J (1965) The transverse compression of anisotropic fibre monofilaments. Proc R Soc Lond Ser A 285:275–286
11. Fujita K, Sawada Y, Nakanishi Y (2001) Effect of cross-sectional textures on transverse compressive properties of pitch-based carbon fibers. J Soc Mater Sci Jpn 7(2):116–121
12. Naito K, Tanaka Y, Yang J-M (2017) Transverse compressive properties of polyacrylonitrile (PAN)-based and pitch-based single carbon fibers. Carbon 118:168–183
13. Phoenix SL, Skelton J (1974) Transverse compressive moduli and yield behavior of some orthotropic, high-modulus filaments. Text Res J 44(12):934–940
14. Jones MCG et al (1997) The lateral deformation of cross-linkable PPXTA fibres. J Mater Sci 32(11):2855–2871
15. Singletary J et al (2000) The transverse compression of PPTA fibers—part I—single fibre transverse compression testing. J Mater Sci 35(3):583–592
16. Sockalingam S et al (2016) Transverse compression behavior of Kevlar KM2 single fiber. Compos A Appl Sci Manuf 81:271–281
17. Hadley DW, Pinnock PR, Ward IM (1969) Anisotropy in oriented fibres from synthetic polymers. J Mater Sci 4(2):152–165
18. Kotani T, Sweeney J, Ward IM (1994) The measurement of transverse mechanical properties of polymer fibres. J Mater Sci 29(21):5551–5558
19. Pinnock PR, Ward IM, Wolfe JM (1966) The compression of anisotropic fibre monofilaments. II. Proc R Soc Lond A 291(1425):267–278
20. Stamoulis G, Wagner-Kocher C, Renner M (2007) Experimental study of the transverse mechanical properties of polyamide 6.6 monofilaments. J Mater Sci 42(12):4441–4450
21. M’Ewen E, Li X (1949) Stresses in elastic cylinders in contact along a generatrix (including the effect of tangential friction). Philos Mag Ser 40(7):454–459
22. Morris S (1968) The determination of the lateral-compression modulus of fibres. J Text Inst 59(11):536–547
23. Jawad SA, Ward IM (1978) The transverse compression of oriented nylon and polyethylene extrudates. J Mater Sci 13(7):1381–1387
24. Foppl A (1907) Die wichtigsten Lehren der höheren Elastizitätstheorie. In: Vorlesungen über Technische Mechanik. B. G. Teubner, Leipzig, pp 311–372
25. Lundberg G (1949) Cylinder compressed between two plane bodies. As cited by H. McCallion and N. Truong, The deformation of rough cylinders compressed between smooth flat surfaces of hard blocks. Wear 79(3):347–361
26. Sherif SM, Segerlind LJ, Frame JS (1976) An equation for the modulus of elasticity of a radially compressed cylinder. Trans Am Soc Agric Eng 19(4):782–791
27. Kar NK et al (2012) Diametral compression of pultruded composite rods. Compos Sci Technol 72:1283–1290
28. Hillbrick L et al (2013) Transverse modulus of carbon fibres. In: Proceedings of the Fiber Society spring meeting and technical conference, Geelong, Australia
29. Hillbrick L et al (2015) Determination of the transverse modulus of cylindrical samples by compression between two parallel flat plates. In: Carbon fibre - future directions conference, Geelong, Australia
30. Hertz H (1896) On the contact of elastic solids. Miscellaneous papers, chap 5. MacMillan, London
31. Hertz H (1896) On the contact of rigid elastic solids and on hardness. Miscellaneous papers, chap 6. MacMillan, London
32. McCallion H, Truong N (1982) The deformation of rough cylinders compressed between smooth flat surfaces of hard blocks. Wear 79(2):347–361
33. Zantopulos H (1988) An alternate solution of the deformation of a cylinder between two flat plates. J Tribol 110:727–729
34. Hoeprich MR, Zantopulos H (1981) Line contact deformation: a cylinder between two flat plates. J Tribol 103(1):21–25
35. Kosarev OI (2010) Contact deformation of a cylinder under its compression by two flat plates. J Mach Manuf Reliab 39(4):359–366
36. David OB et al (2013) Evaluation of the mechanical properties of PMMA reinforced with carbon nanotubes—experiments and modeling. Exp Mech 54:175–186
37. Musgrave M (1990) On the constraints of positive-definite strain energy in anisotropic elastic media. Q J Mech Appl Mech 43:605–621
38. Kese KO, Li ZC, Bergman B (2004) Influence of residual stress on elastic modulus and hardness of soda-lime glass measured by nanoindentation. J Mater Res 19(10):3109–3119
39. Dörr VJ (1955) Oberflächenverformungen und Randkräfte bei runden Rollen und Bohrungen. Der Stahlbau 24:202–206
40. Mott PH, Roland CM (2009) Limits to Poisson’s ratio in isotropic materials. Phys Rev B 80:132104
41. Mott PH, Roland CM (2013) Limits to Poisson’s ratio in isotropic materials. General result for arbitrary deformation. Phys Scr 87(5):055404
© Springer Nature Switzerland AG 2019
## Authors and Affiliations
• Linda K. Hillbrick (1)
• Jamieson Kaiser (1)
• Mickey G. Huson (1)
• Geoffrey R. S. Naylor (1)
• Elliott S. Wise (2)
• Anthony D. Miller (3)
• Stuart Lucas (1)

1. CSIRO Manufacturing, Waurn Ponds, Australia
2. CSIRO Computational and Simulation Science, Clayton, Australia
3. CSIRO Computational and Simulation Science, Urrbrae, Australia
https://collegephysicsanswers.com/openstax-solutions/if-dark-matter-milky-way-were-composed-entirely-machos-evidence-shows-it-not

Question
If the dark matter in the Milky Way were composed entirely of MACHOs (evidence shows it is not), approximately how many would there have to be? Assume the average mass of a MACHO is 1/1000 that of the Sun, and that dark matter has a mass 10 times that of the luminous Milky Way galaxy with its $10^{11}$ stars of average mass 1.5 times the Sun’s mass.
$1.5\times 10^{15}\textrm{ MACHO'S}$
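The quoted figure follows directly from the mass ratios in the problem statement; a quick check (all masses in solar masses, values taken from the question):

```python
N_stars     = 1e11          # stars in the luminous Milky Way
m_star      = 1.5           # average stellar mass, in solar masses
dark_to_lum = 10            # dark matter mass / luminous mass
m_macho     = 1.0 / 1000.0  # assumed MACHO mass, in solar masses

n_machos = dark_to_lum * N_stars * m_star / m_macho
print(f"{n_machos:.1e}")    # 1.5e+15 MACHOs
```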
Solution Video
http://talks.cam.ac.uk/talk/index/167768

# Domains of generators of Levy-type processes
FD2W01 - Deterministic and stochastic fractional differential equations and jump processes
We study the domain of the generator of stable processes, stable-like processes and more general pseudo- and integro-differential operators which naturally arise both in analysis and as infinitesimal generators of Lévy- and Lévy-type (Feller) processes. In particular we obtain conditions on the symbol of the operator ensuring that certain (variable order) Hölder and Hölder–Zygmund spaces are in the domain. We use toolsfrom probability theory to investigate the small-time asymptotics of the generalized moments of a Lévy or Lévy-type process, $$\lim_{t\to 0} (E^xf(X_t)-f(x))/t$$ for functions $f$ which are not necessarily bounded or differentiable. The pointwise limit exists for fixed if $f$ satisfies a Hölder condition at x. Moreover, we give sufficient conditions which ensure that the limit exists uniformly in the space of continuous functions vanishing at infinity. Our results apply, in particular, to stable-like processes, relativistic stable-like processes, solutions of Lévy-driven SDEs and Lévy processes.
This talk is part of the Isaac Newton Institute Seminar Series.
https://math.stackexchange.com/questions/1046321/approximating-log-x-with-roots

# Approximating $\log x$ with roots
The following is a surprisingly good (and simple!) approximation for $\log(x+1)$ in the region $(-1,1)$: $$\log (x+1) \approx \frac{x}{\sqrt{x+1}}$$
Three questions:
• Is there a good reason why this would be the case?
• How does one go about constructing the "next term"?
• Are the any papers on "generalized Pade approximations" that involve radicals?
• I suggest you write down the Taylor series of $\log(x+1)$ and $(x+1)^{-1/2}$ and see the few first terms agree. – LinAlgMan Dec 1 '14 at 10:42
• @LinAlgMan - That is true, but it is not the reason. This approximation works much better than the Taylor series, even to high orders, probably because it accounts for the pole. – nbubis Dec 1 '14 at 10:43
• May I ask how you got this interesting approximation ? – Claude Leibovici Dec 1 '14 at 10:49
• @ClaudeLeibovici - the function came up in a physics calculation, and upon plotting, I noticed it looked oddly familiar. – nbubis Dec 1 '14 at 10:51
• @Lucian - I'm not sure I see the connection. – nbubis Dec 8 '14 at 20:12
Let's rewrite both sides in terms of $y = x + 1$: we get
$$\log y \approx \sqrt{y} - \frac{1}{\sqrt{y}}$$
on, let's say, the interval $\left( \frac{1}{2}, 2 \right)$ (I hesitate to discuss the entire interval $(0, 2)$; it seems to me that the approximation is not all that good near $0$). The RHS should look sort of familiar: let's perform a second substitution $y = e^{2z}$ to get
$$2z \approx e^z - e^{-z} = 2 \sinh z$$
on the interval $\left( - \varepsilon, \varepsilon \right)$ where $\varepsilon = \frac{\log 2}{2} \approx 0.346 \dots$. Of course now we see that the LHS is just the first term in the Taylor series of the RHS, and on a smaller interval than originally. Furthermore, the Taylor coefficients of $2 \sinh z$, unlike the Taylor coefficients of our original functions, decrease quite rapidly. The next term is $\frac{z^3}{3}$, which on this interval is at most
$$\frac{\varepsilon^3}{3} \approx 0.0138 \dots$$
and this is more or less the size of the error in the approximation between $\log 2$ and $\frac{1}{\sqrt{2}}$ obtained by setting $y = 2$, or equivalently $x = 1$.
With the further substitution $t = \sinh z$, the RHS is just the first term in the Taylor series of the LHS. To get the "next term" we could look at the rest of the Taylor series of $\sinh^{-1} t$. The next term is $- \frac{t^3}{6}$, which gives
$$z \approx \frac{e^z - e^{-z}}{2} - \frac{(e^z - e^{-z})^3}{48}$$
or
$$\log y \approx \left( \sqrt{y} - \frac{1}{\sqrt{y}} \right) - \frac{1}{24} \left( \sqrt{y} - \frac{1}{\sqrt{y}} \right)^3.$$
I don't know if this is useful for anything. The series to all orders just expresses the identity
$$\log y = 2 \sinh^{-1} \frac{\left( \sqrt{y} - \frac{1}{\sqrt{y}} \right)}{2}.$$
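A quick numerical check of this answer (the identity, and how much the extra $-u^3/24$ term helps) is sketched below; the sample points are arbitrary.

```python
import numpy as np

x = np.array([-0.5, -0.25, 0.25, 0.5, 1.0])
y = 1.0 + x
u = np.sqrt(y) - 1.0 / np.sqrt(y)          # equals x / sqrt(1 + x)

exact      = np.log(y)
first_term = u                              # the approximation from the question
corrected  = u - u**3 / 24.0                # adding the next term of the asinh series
identity   = 2.0 * np.arcsinh(u / 2.0)

print(np.max(np.abs(identity - exact)))     # ~1e-16: the identity is exact
print(np.abs(first_term - exact))           # error of x / sqrt(1 + x)
print(np.abs(corrected - exact))            # noticeably smaller away from y -> 0
```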
• Very nicely done!! As you noticed though, near $y\to 0$, higher orders seem to actually ruin the approximation, which I find curious. – nbubis Dec 1 '14 at 11:26
• @nbubis: well, the radius of convergence of the Taylor series of $\sinh^{-1} t$ is $1$, and as $y \to 0$, $\sqrt{y} - \frac{1}{\sqrt{y}}$ gets arbitrarily large (in absolute value)... – Qiaochu Yuan Dec 1 '14 at 11:59
The simplest Pade approximant we could build seems to be $$\log(1+x)\approx\frac{x}{1+\frac{x}{2}}$$ and we can notice the similarity of denominators close to $x=0$.
However, the approximation given in the post seems to be significantly better for $x<\frac 12$.
• Indeed. It is probably better because it has a pole in the right place. – nbubis Dec 1 '14 at 11:00
I am being rather late, yet still there is some interesting information I can add.
The Pade approximant is of the form $\log(1+x)\approx \frac{x}{1+x/2}$ as noted by other posters. This partially explains why $\frac{x}{\sqrt{1+x}}$ is a good approximation; what it does not explain is why the square-root approximation is better than a supposedly "great" Pade approximation. @nbubis had an idea that it works better because it has a pole in the correct spot, but it seems that it is actually a red herring.
Let's take a look on more general Pade $(1,n)$ approximation, it equals $\log(1+x)\approx\frac{x}{1+x/2-x^2/12+x^4/24+...}$.
Now the reason $\frac{x}{\sqrt{1+x}}$ approximation performs better can be explained by $\sqrt{1+x} \approx 1+x/2 -x^2/8$ and noting that $1+x/2-x^2/8$ is closer to the "true value" of the denominator than the first Pade approximant $1+x/2$.
To see that it is indeed the case consider the approximation
$\log(1+x)=\frac{x}{(1+5x/6)^{3/5}}$.
Now, as you can quickly check, $(1+5x/6)^{3/5}$ has the Taylor expansion $\approx 1+x/2 -x^2/12$, which agrees with the first 3 terms of the $(1,n)$ Pade approximant. Now if you plot it, it will turn out that it is even better than the originally suggested $\frac{x}{\sqrt{1+x}}$, despite having its pole in the wrong place.
Regarding method by @QiaochuYuan, you can perform the same "trick" to get better approximations which will be performing better in the neighbourhood of $x=0$ but worse when $x$ is large, for example
$\log (1+x) \approx \frac{x}{(1+5x/6)^{3/5}} + \frac{x^4}{108 (1+5x/6)^{12/5}}$
But in disguise what you are actually making is finding better approximations to some Pade approximant.
Some of the other approximations you can find in the same way are
$\log(1+x)\approx \frac{x}{\sqrt{1+x+x^2/12}}$ and $\log(1+x)\approx \frac{x}{(1+3x/2+x^2/2)^{1/3}}$, which are good simply because they coincide with the Pade approximant up to terms of high order. I guess the first among those two is another reason why $\frac{x}{\sqrt{1+x}}$ worked so well.
Short version: It's actually a coincidence; $\sqrt{1+x}$ happens to have the Taylor expansion $\sqrt{1+x}\approx 1+x/2-x^2/8$, which coincides with the expansion of $x/\log(1+x)\approx 1+x/2 -x^2/12$ up to 2 terms, and the third term is not different enough to mess up the approximation.
• I like this! That makes a lot of sense. – nbubis Dec 1 '17 at 6:00
A different approach.
The function $f(x)=x-\log(1+x)\sqrt{1+x}$ is continuous and increasing on $[-1,1]$ (I have not proved it, but a graph of $f'$ is sufficiently convincing.) For any $a\in(0,1)$ $$f(-a)\le f(x)\le f(1)=0.0197419,\quad-a\le x\le1.$$ Take for instance $a=0.8$ we obtain $$-\frac{0.0802375}{\sqrt{1+x}}\le\frac{x}{\sqrt{1+x}}-\log(1+x)\le\frac{0.0197419}{\sqrt{1+x}},\quad -.8\le x\le1.$$
This shows that $\log(1+x)\sqrt{1+x}$ is a good approximation of $x$. $$f(x)=-\frac{x^3}{24}+\dots$$ is an alternate series. This explains the vey good approximation for $x>0$, and the not so good for $x<0$.
• I'm not sure how that helps - this is just an evaluation of the approximation. – nbubis Dec 1 '14 at 11:56
One reason may be that their Taylor series around $x=0$ start the same: $$\log (x+1) \approx x-x^2/2+x^3/3+\cdots$$ $$\frac{x}{\sqrt{x+1}} \approx x-x^2/2+3x^3/8+\cdots$$ So they agree to order 2 for $|x|<1$. They almost agree to order 3 because $1/3 \approx 3/8$ roughly.
However, this is an a posteriori reason. I don't know why this approximation should be good a priori.
• That doesn't necessarily tell you much about how good of an approximation to expect on an interval as large as $(-1, 1)$. – Qiaochu Yuan Dec 1 '14 at 10:46
• This was what I was just typing ! The question is interesting. – Claude Leibovici Dec 1 '14 at 10:46
• For example, when $x = 1$ the LHS is $\log 2 \approx 0.693 \dots$ while the RHS is $\frac{1}{\sqrt{2}} \approx 0.707 \dots$. These agree substantially better that can be accounted for by the first two terms of the Taylor series, which give $0.5$. – Qiaochu Yuan Dec 1 '14 at 10:48
• This is clearly not the reason. The approximation does much better than the Taylor series way past second order.(probably because it has a pole) – nbubis Dec 1 '14 at 10:49
$$\log(x+1)=\lim_{n\to\infty}n(\sqrt[n]{x+1}-1).$$
In the case $n=2$, $$2(\sqrt{x+1}-1)=2\frac{x}{\sqrt{x+1}+1}\approx\frac x{\sqrt{x+1}}$$ for small $x$.
The approximation works better as it has a vertical asymptote at $x=-1$.
• Nice :) Though the approximation ends up being better than the derivation... – nbubis Dec 1 '14 at 11:14
• This still doesn't explain most of the agreement. Again taking $x = 1$ we have $2 (\sqrt{2} - 1) \approx 0.828 \dots$, which is maybe 15% bigger than either the LHS or the RHS, which agree to maybe within 2%. – Qiaochu Yuan Dec 1 '14 at 11:15
• Anyone who's still trying to answer the question should actually plot the functions (I did it in WolframAlpha) to see how close the agreement actually is. I think the pole is a red herring: the agreement is really not very good close to the pole. – Qiaochu Yuan Dec 1 '14 at 11:17
• I don't thank the downvoters. @QiaochuYuan: the question is not a contest to the best approximation. It is about why $\log (x+1) \approx \dfrac{x}{\sqrt{x+1}}$. – Yves Daoust Oct 31 '17 at 7:45
https://www.physicsforums.com/threads/cancelling-the-earths-magnetic-field.651655/

# Cancelling the Earths magnetic field
1. Nov 12, 2012
### deborahcurrie
1. The problem statement, all variables and given/known data
A researcher would like to perform an experiment in a zero magnetic field, which means that the field of the earth must be cancelled. Suppose the experiment is done inside a solenoid of diameter 1.0{\rm m} , length 5.0{\rm m} , with a total of 9000 turns of wire. The solenoid is oriented to produce a field that opposes and exactly cancels the field of the earth.
What current is needed in the solenoid's wires?
2. Relevant equations
We use this for B=μ0*I*(N/L) for solenoids.
Bearth=5E-5T
μ0=1.257E-6
N=9000
L=5.0 m
3. The attempt at a solution
I=(B*L)/μu0*N)=(5E-5*5)/(9000*5) = 0.02209=2.2E-2
Mastering Physics says this is incorrect. What did I miss?
2. Nov 12, 2012
### Delphi51
The formula for I looks good. Trouble with the substitution, I think. Looks like the number for u0 didn't go in and there is an extra 5.
3. Nov 15, 2012
### deborahcurrie
That was my bad. My actual equation looks like:
(5E-5*5)/(1.257E-6*9000)=0.02209 which is wrong according to Mastering physics. Doesn't like 2.2E-2 either.
4. Nov 15, 2012
### haruspex
Earth's field varies greatly from place to place. Is this the value you were told to use?
5. Nov 15, 2012
### deborahcurrie
I talked with the instructor and found my error -- I just needed to adjust for units the question required. I usually catch this, but not this time. Thanks for your replies anyways. It's always good to know there is a safety line!
6. Nov 15, 2012
### haruspex
OK, I did wonder about the units. In your post it says e.g. "diameter 1.0{\rm m}". LaTex problem?
PS - I guess you mean the units the answer was required in, like mA?
7. Nov 16, 2012
### deborahcurrie
Not sure what a La Tex problem is -- but you are correct on the PS
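For reference, the arithmetic in the thread checks out; the issue appears to have been only the requested units (milliamperes), as posts 5 to 7 suggest:

```python
# Reproducing the poster's numbers with the values given in the problem
B_earth = 5e-5        # T
length  = 5.0         # m
N       = 9000        # turns
mu0     = 1.257e-6    # T*m/A

I = B_earth * length / (mu0 * N)
print(I, I * 1e3)     # ~0.0221 A, i.e. ~22.1 mA
```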
https://www.computer.org/csdl/trans/tc/1970/09/01671647-abs.html

Issue No. 09 - September (1970 vol. 19)
ISSN: 0018-9340
pp: 850-851
K.W. Henderson, Stanford Linear Accelerator Center, Stanford University
ABSTRACT
The matrix form of the Walsh functions as defined in the above-mentioned short note [1] can be generated by the modulo-2 product of two generating matrices: the natural binary code, and the transpose of the bit-reversed form of the first. As a result, the coefficients of the Walsh transform occur in bit-reversed order. By simply reordering the Walsh functions themselves to correspond to generation by the product of two such code matrices, neither or both in bit-reversed form, the Walsh coefficients occur in their natural order.
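The ordering issue described in the abstract can be illustrated with a small sketch. The code below builds the natural (Hadamard)-ordered Walsh matrix and applies a bit-reversal permutation to its rows; it only shows the indexing bookkeeping and is not the note's modulo-2 generating-matrix construction.

```python
import numpy as np

def hadamard_walsh(n_bits):
    """Natural (Hadamard)-ordered Walsh matrix: W[i, j] = (-1)**popcount(i & j)."""
    n = 1 << n_bits
    idx = np.arange(n)
    bits = ((idx[:, None] & idx[None, :])[..., None] >> np.arange(n_bits)) & 1
    return (-1) ** bits.sum(axis=-1)

def bit_reversed(i, n_bits):
    return int(format(i, f"0{n_bits}b")[::-1], 2)

n_bits = 3
W = hadamard_walsh(n_bits)
perm = [bit_reversed(i, n_bits) for i in range(1 << n_bits)]
W_rows_reordered = W[perm, :]   # rows re-indexed by bit reversal
print(W_rows_reordered)
```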
INDEX TERMS
Code matrix, Walsh-Fourier transform, Walsh functions, Walsh matrix.
CITATION
K. Henderson, "Comment on "Computation of the Fast Walsh-Fourier Transform"," in IEEE Transactions on Computers, vol. 19, no. , pp. 850-851, 1970.
doi:10.1109/T-C.1970.223054
https://www.physicsforums.com/threads/work-is-not-done-by-static-friction-when-accelerating-a-car.983675/

# Work is not done by static friction when accelerating a car
• #1
Homework Helper
Gold Member
## Summary:
A derivation is provided to support the statement that work is not done by static friction when accelerating a car.
## Main Question or Discussion Point
A recent thread posed the question whether work is done by static friction in the case of an accelerating car. Before I had a chance to reply, the thread was closed on the grounds that the subject was "beaten to death". Undaunted, I am determined to deliver the coup de grâce here with a simple derivation.
We assume rolling without slipping. Let
##I_{cm}=qmR^2~~~(0 \leq q\leq 1)## = the moment of inertia of the wheel, radius ##R## and mass ##m##, about its CM.
##F## = the force acting on the wheel at the CM; it is the torque on the wheel divided by the radius ##R##.
##f##= the force of static friction on the wheel.
From the FBD shown above, we get
##F-f=ma_{cm}##
##fR=I_{cm}\alpha=qmR^2(a_{cm}/R)##
These equations give
##a_{cm}=\dfrac{F}{(q+1)m}~\rightarrow~\alpha=\dfrac{F}{(q+1)mR};~~~~f=\dfrac{qF}{q+1}##
We invoke the SUVAT equation sans time in the linear and rotational forms to find the changes in translational and rotational kinetic energy, after the CM of the wheel has advanced by ##\Delta s_{cm}##.
$$\Delta K_{trans}=\frac{1}{2}m(2a_{cm}\Delta s_{cm})=\frac{1}{2}m\left[2\frac{F}{(q+1)m}\Delta s_{cm}\right]=\frac{F\Delta s_{cm}}{(q+1)}$$
$$\Delta K_{rot}=\frac{1}{2}I_{cm}(2\alpha\Delta \theta)=\frac{1}{2}qmR^2\left[2\frac{F}{(q+1)mR}\frac{\Delta s_{cm}}{R}\right]=\frac{qF\Delta s_{cm}}{(q+1)}=f\Delta s_{cm}$$
Interpretation
The input work crossing the system boundary is ##F\Delta s_{cm}## and is equal to ##\Delta K=\Delta K_{trans}+\Delta K_{rot}## in agreement with the work-energy theorem. Static friction does no work on the system but partitions the input work between two internal degrees of freedom, translational and rotational, according to the size of parameter ##q##. Specifically, ##f\Delta s_{cm}## is the amount of input work diverted into change in rotational energy. When ##q=0## (point mass) all the input work goes into translational internal energy; when ##q=1## (a ring with all the mass at ##R##) we have equipartition of energy. R.I.P.
Note: It often helps to view energy transformations in terms of the first law of thermodynamics. This idea is presented among others in a trilogy of insight contributions currently under preparation.
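The algebra in this post can be checked symbolically; the short sympy sketch below (not part of the original post) solves the two equations and confirms both the energy balance and the claim that ##f\Delta s_{cm}## equals the rotational share of the input work.

```python
import sympy as sp

F, f, m, R, q, a, s = sp.symbols('F f m R q a s', positive=True)

# Newton's second law (translation) and torque about the CM, rolling without slipping
eq1 = sp.Eq(F - f, m*a)
eq2 = sp.Eq(f*R, q*m*R**2 * (a/R))
sol = sp.solve([eq1, eq2], [a, f], dict=True)[0]
print(sol[a], sol[f])   # F/(m*(q + 1)) and F*q/(q + 1)

# Energy partition after the CM advances by s (SUVAT, starting from rest)
K_trans = sp.Rational(1, 2)*m*(2*sol[a]*s)
K_rot   = sp.Rational(1, 2)*(q*m*R**2)*(2*(sol[a]/R)*(s/R))
print(sp.simplify(K_trans + K_rot - F*s))   # 0: all of the input work F*s is accounted for
print(sp.simplify(K_rot - sol[f]*s))        # 0: f*s equals the rotational share
```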
• #2
rcgldr
Homework Helper
I don't understand the FBD. I assume these are the forces exerted onto the wheel. The static friction force at the contact point should be greater than the opposing force at the axle, corresponding to forwards acceleration of the wheel. I assume the opposing force at the axle is the reaction (to acceleration) force from the rest of the vehicle.
I'm not sure how torque should be shown in a FBD.
• #3
Homework Helper
Gold Member
I don't understand the FBD. I assume these are the forces exerted onto the wheel. The static friction force at the contact point should be greater than the opposing force at the axle, corresponding to forwards acceleration of the wheel. I assume the opposing force at the axle is the reaction (to acceleration) force from the rest of the vehicle.
I'm not sure how torque should be shown in a FBD.
##F## is the net external horizontal force acting at the CM of the wheel. Yes, torque cannot be easily shown in a FBD, that is why it has been replaced by a force divided by the radius of the wheel. Its composition is irrelevant. The net horizontal force must be in the forward direction and greater than static friction because ##a_{cm}## is directed forward. The force of static friction ##f## is whatever is necessary to provide the observed forward acceleration.
The point of this exercise is to show that if a wheel is accelerated by a horizontal force ##F##, then (a) the work input is ##F\Delta s_{cm}## and (b) ##f\Delta s_{cm}## is not the work done by friction but the amount of input work that is converted to rotational energy. The balance of the input work is converted to translational energy.
• #4
Dale
Mentor
With that, I think we will close this topic. Again.
• #5
davenn
Gold Member
With that, I think we will close this topic. Again.
but you didn't
• #6
rcgldr
Homework Helper
Assuming that "forward" in the FBD is to the right, then it appears that the FBD is a image of a wheel being pulled by a string at the axle with force ##F##, and that static friction force is an opposing force ##f##, causing the wheel to roll as it moves to the right.
For the accelerating car case, a FBD should have a static friction force at the bottom of the wheel, pointed to the right, which is the direction of acceleration, and a reactive (the reaction of the rest of the car to acceleration) force acting on the axle of the wheel, pointed to the left. The static friction force would be greater than the reactive force, corresponding to a net forwards (right) force, resulting in a forwards (right) acceleration of wheel (and car).
• #7
vanhees71
Gold Member
Well, fbd's are not the simplest way to understand things. I'd use the Hamilton formalism. It's clear that forces due to constraints don't do work, which is easily shown using the Lagrange-multiplier concept to impose the constraints, which leads to a simple derivation of the forces due to the constraints and the fact that they don't "do work".
It's of course different as soon as you take into account dissipation. There mechanical work is transferred to heat.
• #8
Dale
Mentor
but you didn't
Oops! Done
https://www.physicsforums.com/threads/metal-pole-falls.903546/

# Metal Pole Falls
1. Feb 11, 2017
### Arman777
1. The problem statement, all variables and given/known data
A uniform metal pole of height $30.0m$ and a mass $100kg$ is initially standing upright but then falls over one side without its lower end sliding or losing contact with the ground.What is the linear speed of the pole's upper end just before impact ?
2. Relevant equations
$\tau = Fr\sin\theta$
$\tau = I\alpha$
3. The attempt at a solution
I think there must be some force $F$ so that the pole will start to fall. And there's also a force $F'$ between the pole and the ground.
So the total torque of the object will be (I take the touching point between the pole and the ground as the rotation axis):
So If total length is $L$ then I can write for this instance $(t=0)$
$FL=τ(t=0)$
In between hitting the ground (lets call that time $T$).The torque will be
$FL+mgsinθ\frac L 2=τ(0,T)$, but here sinθ will change every moment.
And in the impact it will be $FL+mg\frac L 2=τ(T)$
then from here I tried to take a some time interval like when $sinθ=\frac {\sqrt 2} {2}$ and substract these values so that $Fr$ will cancel out but I dont know I stucked.
Then I thought I can just use the motion of the pole's center of mass. It will make a parabola. And if I can calculate its speed when it hits the ground, I can easily calculate the speed of the end point of the pole.
Here I used normal projectile motion equations to find speed.but came out wrong.Maybe I am forgetting the centripetal force.Or Writing some equation wrong.
Thanks
2. Feb 11, 2017
### BvU
As a function of what ? Why ?
3. Feb 11, 2017
### Arman777
Cause its the center of mass and the acceleration in that point will be g.A function of time ?
4. Feb 11, 2017
### haruspex
Any conservation laws that might be relevant?
5. Feb 12, 2017
### Arman777
Yeah I can apply that maybe or I apllied $mgL=\frac 1 2mv^2$ but I get wrong result
6. Feb 12, 2017
### kuruman
That's because in writing the above equation, you are assuming that the entire mass of the rod is concentrated at the tip of the rod at distance L from the pivot. Is it?
7. Feb 12, 2017
### Arman777
yeah thats not true..then I find the energy change of the center of mass.which its $mg\frac L 2=\frac 1 2mv^2$
from that the velocity of the top end should be $2v$ ,which is not correct answer
8. Feb 12, 2017
### kuruman
You cannot assume that the entire mass is concentrated at the center of mass either. The rod is not in free fall. It is rotating about its end. You need to consider the rotational kinetic energy of the rod just before it hits the ground. It is not $\frac{1}{2}mv_{cm}^2$. What is it?
9. Feb 12, 2017
### Arman777
It says uniform but ok
$E=\frac 1 2Iω^2$ and $I=\frac 1 6mL^2$
10. Feb 12, 2017
### kuruman
The moment of inertia of a uniform rod about its end is $I = \frac{1}{3}mL^2$, otherwise it's OK. Now conserve mechanical energy remembering that the change in potential energy of the CM is what's relevant.
11. Feb 12, 2017
### Arman777
how can we calcualte it?
12. Feb 12, 2017
### kuruman
Calculate what?
13. Feb 12, 2017
### Arman777
Moment of inertia
14. Feb 12, 2017
### kuruman
I just gave you the correct formula for it. Doesn't the statement of the problem give you the numbers that go in it?
15. Feb 12, 2017
### Arman777
I am asking the formula the equation..
16. Feb 12, 2017
### Arman777
OK I found it never mind
17. Feb 12, 2017
### Arman777
Ok I solved thanks a lot
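For reference, a quick numerical check of the energy-conservation approach discussed above (uniform rod pivoting about its resting end, $g = 9.8\ \mathrm{m/s^2}$ assumed):

```python
from math import sqrt

g, L = 9.8, 30.0          # m/s^2, m
# m g (L/2) = (1/2) (1/3 m L^2) w^2   ->   w = sqrt(3 g / L)
w = sqrt(3.0 * g / L)
v_tip = w * L             # linear speed of the upper end just before impact
print(v_tip)              # ~29.7 m/s
```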
https://hal.inria.fr/hal-01248792

# Nonlinear Unknown Input Observability: Analytical expression of the observable codistribution in the case of a single unknown input
1 CHROMA - Robots coopératifs et adaptés à la présence humaine en environnements dynamiques
CITI - CITI Centre of Innovation in Telecommunications and Integration of services, Inria Grenoble - Rhône-Alpes
Abstract : This paper investigates the unknown input observability problem in the nonlinear case. Specifically, the systems here analyzed are characterized by dynamics that are nonlinear in the state and linear in the inputs and characterized by a single unknown input and multiple known inputs. Additionally, it is assumed that the unknown input is a differentiable function of time (up to a given order). The goal of the paper is not to design new observers but to provide a simple analytic condition in order to check the weak local observability of the state. In other words, the goal is to extend the well known observability rank condition to these systems. Specifically, the paper provides a simple algorithm to directly obtain the entire observable codistribution. As in the standard case of only known inputs, the observable codistribution is obtained by recursively computing the Lie derivatives along the vector fields that characterize the dynamics. However, in correspondence of the unknown input, the corresponding vector field must be suitably rescaled. Additionally, the Lie derivatives must be computed also along a new set of vector fields that are obtained by recursively performing suitable Lie bracketing of the vector fields that define the dynamics. In practice, the entire observable codistribution is obtained by a very simple recursive algorithm. However, the analytic derivations required to prove that this codistribution fully characterizes the weak local observability of the state are complex and, for the sake of brevity, are provided in a separate technical report. The proposed analytic approach is illustrated by checking the weak local observability of several nonlinear systems driven by known and unknown inputs.
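To make the notion of an observable codistribution concrete, the sketch below computes the standard (known-input) observability rank condition for a toy system with symbolic Lie derivatives. The system, the output and all names are invented for illustration; the paper's unknown-input extension (rescaled vector fields and the additional Lie brackets) is not reproduced here.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# Toy drift and input vector fields and a scalar output (pendulum-like, invented)
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1

def lie(phi, vf):
    """Lie derivative of the scalar phi along the vector field vf."""
    return (sp.Matrix([phi]).jacobian(x) * vf)[0]

# Span of the differentials of h and its iterated Lie derivatives
lies = [h, lie(h, f), lie(lie(h, f), f), lie(h, g)]
codistribution = sp.Matrix([sp.Matrix([s]).jacobian(x) for s in lies])
print(codistribution.rank())   # 2 = dim(x): the observability rank condition holds here
```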
Document type: Conference paper
SIAM - CT15, Jul 2015, Paris, France. 2015, 〈10.1137/1.9781611974072.2〉
Cited literature: [22 references]
https://hal.inria.fr/hal-01248792
Contributor: Agostino Martinelli
Submitted on: Monday, 28 December 2015 - 18:28:15
Last modified on: Wednesday, 11 April 2018 - 01:56:27
### File
ltexpprt.pdf
File produced by the author(s)
### Citation
Agostino Martinelli. Nonlinear Unknown Input Observability: Analytical expression of the observable codistribution in the case of a single unknown input. SIAM - CT15, Jul 2015, Paris, France. 2015, 〈10.1137/1.9781611974072.2〉. 〈hal-01248792〉
https://www.physicsforums.com/threads/calculating-angular-acceleration-and-angular-velocity.558243/

# Calculating angular acceleration and angular velocity
• #1
## Homework Statement
Calculate the angular acceleration and angular velocity of a 2kg object rotating in a
circle of 1.5m radius in a time of 3s.
w=v/r
2*pi*r
## The Attempt at a Solution
Hi all, every attempt so far has been a failure, I just can't work out how to calculate this, hours spent, Please help.
Thanks
Dave.
• #2
Doc Al
Mentor
Is that a complete statement of the problem? Is it moving at constant speed? Or does it start from rest, by any chance?
What have you done so far?
• #3
Correct me if I am wrong, but cant this be calculated pretty much directly assuming the body is at a constant speed? Really dont wanna overhelp here.
• #4
Hi, Yes that is the complete question so I guess it will start from rest if they are asking for the acceleration?
My course information provides very short explanations and no examples of angular acceleration and angular velocity. (ICS)
As for what I have done so far - mainly just looking at it. I know for angular acceleration I need to use w=v/r. If I'm honest I'm not sure how to get v?
I'm doing quite well apart from this question, so I'm sure I'm missing something simple?
• #5
Doc Al
Mentor
Hi, Yes that is the complete question so I guess it will start from rest if they are asking for the acceleration?
If that's the word for word exact problem statement, it's very poorly worded. You shouldn't have to guess what they mean. (Please double check!)
As for what I have done so far - mainly just looking at it. I know for angular accelaration I need to use w=v/r If i'm honest Im not sure how to get v?
That formula relates angular speed (ω) to tangential speed (v).
Here's how I interpret the problem, although this is just a guess: It starts from rest and travels, with constant acceleration, one complete circle in the given time.
What kinematic formulas for accelerated motion might you apply?
• #6
Nah, it wont be at rest to start, its going to be a constant velocity. The term angular acceleration refers to the fact that its constantly changing direction, and hence velocity (which is a vector). As for finding this velocity...
Average speed = distance over time. You now need to consider how to find the distance the object travels in one rotation... :)
• #7
Doc Al
Mentor
44,987
1,259
The term angular acceleration refers to the fact that its constantly changing direction, and hence velocity (which is a vector).
Not usually. Angular acceleration refers to the rate at which the angular velocity is changing. (And if the angular velocity is constant...)
• #8
Yea agreed, I didnt word my explanation very well at all... :)
• #9
13
0
Hi yes that is the question word for word, it has definitely confused me.
I could use s = ut + 1/2at if it starts at rest that will give me the acceleration in ms-2
2 pi r= 9.42m
9.42 / 3 = 3.14ms-1
Thats about as far as I can go.
• #10
Doc Al
Mentor
44,987
1,259
I could use s = ut + 1/2at if it starts at rest that will give me the acceleration in ms-2
I think you mean s = ut + 1/2at2 (you left out the square). You can let s, u, and a represent angular quantities.
Angular speed has units of rad/s; angular acceleration has units of rad/s2.
2 pi r= 9.42m
9.42 / 3 = 3.14ms-1
That would be a calculation of average speed. (What would be the final speed?)
Use the formula above (modified for rotational quantities) to find the acceleration.
There are several ways to solve this. Just dive in.
• #11
Hi, this is the last question of my assessment and I have spent over 10 hours looking at it and I really don't understand it. (I have had no problem with the other 25)
I'm sure there is an easy and simple explanation as to where I'm going wrong? I can remember doing this in school and I'm sure it was not this hard.
Thanks
Dave.
https://publications.iitm.ac.in/publication/far-ir-reflectance-study-on-b-site-disordered-ba-zn13ta23o3
Far-IR reflectance study on B-site disordered Ba (Zn1/3Ta2/3)O3 dielectric resonator
V. R. K. Murthy
Published in Elsevier Science Ltd, Exeter
2000
Volume: 35
Issue: 8
Pages: 1325 - 1332
Abstract
A far-IR reflectance spectrum of a disordered Ba(Zn1/3Ta2/3)O3 dielectric resonator has been studied. The spectrum was analyzed with the Generalized Oscillator model. The real part of the dielectric constant (εr) and loss tangent (tanδ) calculated from the parameters obtained from the fit is comparable to the values measured at microwave frequencies. The possible mode identification and their individual mode contribution to the real and imaginary parts of the complex dielectric constant are also analyzed.
http://mathhelpforum.com/calculus/63326-finding-derivatives.html | 1. ## Finding derivatives
y= (x-2)/(x^2-x+1)
Find y' and y''
For the problem I get to
y' = (-x^2+4x-1)/(x^2-x+1)^2 by the u/v method, but the answer to this derivative problem (given in the book) is
-3(x^2-3x+1)/(x^2-x+1)^3
How can i Get to this form ?
Thanks
2. $y=\frac{x-2}{x^2-x+1}$
I used the quotient rule (I think you did too):
$\frac{dy}{dx}=\frac{v\frac{du}{dx}-u \frac{dv}{dx}}{v^2}$
where u is the numerator and v is the denominator. In this case:
$u=x-2$ and $v=x^2-x+1$.
So I did it like this:
$\frac{dy}{dx}=\frac{(x^2-x+1)(1)-(x-2)(2x-1)}{(x^2-x+1)^2}$
$\frac{dy}{dx}=\frac{x^2-x+1-2x^2+5x-2}{(x^2-x+1)^2}$
$\frac{dy}{dx}=\frac{-x^2+4x-1}{(x^2-x+1)^2}$
So you're right!
I think it's a typo or just this in another form.
3. Originally Posted by Showcase_22
I think it's a typo or just this in another form.
Yeah! But I have to get to that form somehow and I want to know the way. Besides, I have to find the second derivative of this function too.
4. If you put x=1 into both the equations you get two different answers so they're not the same.
The only way to get $(x^2-x+1)^3$ as the denominator is to differentiate again. I thought this would be f''(x).
However, I began to differentiate again (using the quotient rule, it's similar to before) and I got a cubic appearing on top (hence this can't be f''(x)).
Try and differentiate f'(x) again and see what you get. I'm pretty sure i'm doing it correctly.
5. Originally Posted by Showcase_22
If you put x=1 into both the equations you get two different answers so they're not the same.
The only way to get $(x^2-x+1)^3$ as the denominator is to differentiate again. I thought this would be f''(x).
However, I began to differentiate again (using the quotient rule, it's similar to before) and I got a cubic appearing on top (hence this can't be f''(x)).
Try and differentiate f'(x) again and see what you get. I'm pretty sure i'm doing it correctly.
The answer for f''(x) seems to be
6x(2x^2-8x+5)/(x^2-x+1)^4 from the book
6. Well I think we can safely say the answer in the back of the book for f'(x) is a typo.
The good news is that you're doing it correctly!
7. Originally Posted by Showcase_22
Well I think we can safely say the answer in the back of the book for f'(x) is a typo.
Actually I'm not sure about it. So, anybody here can try this one and make a comment. This answer is declared correct by my instructor. By the way, it is a problem from Anton's book.
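For anyone who wants to double-check, here is a short symbolic verification sketch (my own addition, using SymPy; not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
y = (x - 2) / (x**2 - x + 1)

y1 = sp.simplify(sp.diff(y, x))      # first derivative
y2 = sp.simplify(sp.diff(y, x, 2))   # second derivative

print(y1)   # equivalent to (-x**2 + 4*x - 1)/(x**2 - x + 1)**2, as found above
print(y2)

# The book's printed answer for y' disagrees with y1 at x = 1 (3 vs. 2),
# which supports the "typo in the book" conclusion:
book_y1 = -3*(x**2 - 3*x + 1) / (x**2 - x + 1)**3
print(y1.subs(x, 1), book_y1.subs(x, 1))
```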
https://wlou.blog/2018/02/26/tensor-products-for-group-actions-part-2/ | ## Tensor Products for Group Actions, Part 2
In this previous post, tensor products of $G$-sets were introduced and some basic properties were proved; this post is a continuation, so I'll assume that you're familiar with the contents of that post.
In this post, unless specified otherwise, $G,H,K,I$ will denote groups. Groups will sometimes be freely identified with the corresponding one-object category. Left $G$-sets will sometimes just be referred to as $G$-sets.
After some lemmas, we will begin this post with some easy consequences of the Hom-Tensor adjunction, which is the main result of the previous post.
Lemma The category $G\textrm{-}\mathbf{Set}$ is complete and cocomplete and the limits and colimits look like in the category of sets (i.e. the forgetful functor $G\textrm{-}\mathbf{Set} \to \mathbf{Set}$ is continuous and cocontinuous) with the “obvious” actions from $G$. Similarly for $\mathbf{Set}\textrm{-}G$.
Proof This is not too difficult to prove directly (you can reduce the existence of (co)limits to (co)products and (co)equalizers by general nonsense), but it also follows directly from the fact that $G\textrm{-}\mathbf{Set}$ is the functor category $[G, \mathbf{Set}]$. The reason is that if $\mathcal{C}$ and $\mathcal{D}$ are categories and $\mathcal{D}$ is (co)complete (and $\mathcal{C}$ is small to avoid any set-theoretic trouble), then the functor category $[\mathcal{C},\mathcal{D}]$ is also (co)complete and the (co)limits may be computed “pointwise”. In the case of $[G, \mathbf{Set}]$, $G$ has only one object, so the (co)limits look like they do in $\mathbf{Set}$.
Lemma If $X$ is a $(G,H)$-set, $Y$ is a $(H,K)$-set and $Z$ is a $(K,I)$-set, then we have a natural isomorphism of $(G,I)$-sets
$(X \otimes_H Y) \otimes_K Z \cong X \otimes_H (Y \otimes_K Z)$
Proof The proof is the same as the proof for modules, mutatis mutandis. Use the universal property of tensor products a lot to get well-defined maps $(X \otimes_H Y) \otimes_K Z \to X \otimes_H (Y \otimes_K Z) , (x \otimes y) \otimes z \mapsto x \otimes (y \otimes z)$ and $X \otimes_H (Y \otimes_K Z) \to (X \otimes_H Y) \otimes_K Z, x\otimes (y\otimes z) \mapsto (x \otimes y) \otimes z$.
Lemma If $H \leq G$ is a subgroup, and we regard $G$ as a $(G,H)$-set via left and right multiplication, then $\mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G,-): G\textrm{-}\mathbf{Set} \to H\textrm{-}\mathbf{Set}$ is naturally isomorphic to the restriction functor $\mathrm{res}_H^G: G\textrm{-}\mathbf{Set} \to H\textrm{-}\mathbf{Set}$ (this functor takes any $G$-set, which we may think of as a group homomorphism or a functor and restricts it to the subgroup/subcategory given by $H$.)
Proof Define a (natural) map $\varphi: \mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G,X) \to \mathrm{Res}_H^G(X)$ via $\varphi(f)=f(1)$. This is $H$-equivariant, because $\varphi(hf)(1)=f(1h)=f(h)=hf(1)=h\varphi(f)$. On the other hand, given $x \in \mathrm{Res}_H^G(X)$ (which is just $X$ as a set), we can define $f \in \mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G,X)$ via $f(g)=gx$. This defines an inverse for $\varphi$.
Corollary The restriction functor $\mathrm{Res}_H^G$ has a left adjoint $G \otimes_H - := \mathrm{Ind}_H^G$.
The notation $\mathrm{Ind}$ is chosen because we can think of this functor as an analog to the induced representation from linear representation theory, where we think of group actions as non-linear representations. (Similar to the induced representation, one can give an explicit description of $\mathrm{Ind}_H^G$ after choosing coset representatives for $G/H$ etc.)
In linear representation theory, the adjunction between restriction and induction is called Frobenius reciprocity, so if we wish to give our results fancy names (as mathematicians like to do) we can call this corollary “non-linear Frobenius reciprocity”.
If we take $H$ to be the trivial subgroup, we obtain a corollary of the corollary:
Corollary The forgetful functor $G\textrm{-}\mathbf{Set}\to \mathbf{Set}$ has a left adjoint, the “free $G$-set functor”.
Proof If $H$ is the trivial group, then $H$-sets are the same as sets and the restriction functor $G\textrm{-}\mathbf{Set}\to H\textrm{-}\mathbf{Set}$ is the same as the forgetful functor. Since $G \otimes_H$ commutes with coproducts and $H$ is a one-point set, we can also describe this more explicitly: for a set $X$, we have $G \otimes_H X \cong G \otimes_H \coprod_{x \in X} H \cong \coprod_{x \in X} G \otimes_H H \cong \coprod_{x \in X} G := G^{(X)}$
We can also use the Hom-Tensor adjunction to get a description of some tensor products. Let $1$ denote a one-point set (simultaneously the trivial group), considered as a $(1,G)$-set with (necessarily) trivial actions.
Lemma For a $G$-set $X$, $1 \otimes_G X$ is naturally isomorphic to the set of orbits $X/G$ and both are left adjoint to the functor $\mathbf{Set} \to G\textrm{-}\mathbf{Set}$ which endows every set with a trivial $G$-action.
Proof Let $Y$ be a set and $X$ be a $G$-set. Denote $Y^{triv}$ the $G$-set with $Y$ as its set and a trivial action. If we have any $f \in \mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(X,Y^{triv})$, then $f$ must be constant on the orbits, since $f(gx)=gf(x)=f(x)$, so $f$ descends to a map of sets $X/G \to Y$. Conversely, if we have any map $h: X/G \to Y$, then we can define a $G$-equivariant map $f:X \to Y^{triv}$ by setting $f(x)=h([x])$, where $[x]$ denotes the orbit of $x$. These maps are mutually inverse natural bijections, which shows that the “set of orbits”-functor is left adjoint to $Y \mapsto Y^{triv}$. On the other hand, we can identify $Y^{triv}$ with $\mathrm{Hom}_{\mathbf{Set}}(1,Y)$ (where the $G$ action is induced from the trivial right $G$-action on $1$), so the left adjoint must be given by $X \mapsto 1 \otimes_G X$. Since adjoints are unique (by a Yoneda argument), we have a natural bijection $1 \otimes_G X \cong X/G$
The set of orbits $X/G$ carries some information about the $G$-set, but we can do a more careful construction which also includes $X/G$ in a natural way as part of the information.
Definition If $X$ is a $G$-set, then the action groupoid $X//G$ is the category with $\mathrm{Obj}(X//G) := X$ and $\mathrm{Hom}_{X//G}(x,y):= \{g \in G \mid gx=y\}$. Composition is given by $\mathrm{Hom}_{X//G}(y,z) \times\mathrm{Hom}_{X//G}(x,y) \to \mathrm{Hom}_{X//G}(x,z), (h,g) \mapsto hg$.
The fact that this is called a groupoid is not important here, one can think of that as just a name (it means that every morphism in $X//G$ is an isomorphism).
The set of isomorphism classes of $X//G$ corresponds to the orbits $X/G$. For $x \in X//G$, the endomorphism group $\mathrm{End}_{X//G}(x)$ is the stabilizer group $G_x$. The following lemma shows how to reconstruct a $G$-set $X$ from $X//G$, assuming that we know how all the Hom-sets lie inside $G$.
Lemma (“reconstruction lemma”) If $X$ is a $G$-set, then we define the functor $G: (X//G)^{op} \to G\textrm{-}\mathbf{Set}$ with $G(x)=G$ for all $x \in X//G$ and for $g \in \mathrm{Hom}_{(X//G)}(x,y)$, we define the map $G(g): G(y) \to G(x)$ via $a \mapsto ag$. Then we have $\varinjlim\limits_{x \in (X//G)^{op}}G(x) \cong X$
Proof For $x \in X//G$, define a map $G(x)=G \to X$ via $g \mapsto gx$. This defines a cocone over $G(.)$, so we get an induced map $\varphi: \varinjlim\limits_{x \in (X//G)^{op}}G(x) \to X$. $\varinjlim\limits_{x \in (X//G)^{op}}G(x)$ can be described explicitly as $\coprod_{x \in (X//G)^{op}} G(x)/\sim$, where the equivalence relation $\sim$ is generated by $ga \in G(x) \sim a \in G(gx)$. To see that $\varphi$ is surjective, note that $x \in X$ is the image of $1 \in G(x)$. To see that $\varphi$ is injective, suppose $g \in G(x)$ and $h \in G(y)$ are sent to the same element, i.e. $gx=hy$, then we have $(h^{-1}g)x=y$, so that we may assume $h=1$. Then $gx=y$ implies that $1 \in G(y)=G(gx) \sim g \in G(x)$, so the two elements which map to the same element are already equal in $\varinjlim\limits_{x \in (X//G)^{op}}G(x)$.
The previous lemma can be thought of as a generalization of the orbit-stabilizer theorem. (The proof has strong similarities as well.) For illustration, let us derive the usual orbit-stabilizer theorem from it.
Lemma Let $X$ be a $G$-set, then we have an isomorphism of $G$-sets $G/G_x \cong Gx$, where $Gx$ is the orbit of $x$ (with the restricted action) and $G/G_x$ is the coset space of the stabilizer subgroup with left multiplication as the action.
Proof We may replace $X$ with $Gx$ so that we have a transitive action. Then the previous lemma gives us an isomorphism $X \cong \varinjlim\limits_{x \in (X//G)^{op}}G(x)$.
Consider the one-object category $(G_x)$. This can be identified with a full subcategory of $X//G$ corresponding to the object $x$. Because we have a transitive action, all objects in $X//G$ are isomorphic (isomorphism classes correspond to orbits), so that the inclusion functor $G_x \to X//G$ is also essentially surjective, so it is a category equivalence.
We may thus replace the colimit by the colimit $\varinjlim\limits_{x \in (G_x)^{op}}G(x)$. As $(G_x)^{op}$ has just one object, this colimit is a colimit over a bunch of parallel morphism $G(x) \to G(x)$, so it is the simultanous coequalizer of these morphisms. We know how to compute coequalizers in $G\textrm{-}\mathbf{Set}$: the same way that we compute coequalizers in $\mathbf{Set}$. So we have the families of maps $\cdot g: G(x)=G \to G, a \mapsto ag$, where $g$ varies over $G_x$. The coequalizer is the quotient $G/\sim$, where $\sim$ is generated by $a \sim ag$ for each $a \in G$ and $g \in G_x$. But this is exactly the equivalence relation that defines $G/G_x$.
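As a concrete (and completely optional) sanity check of the orbit-stabilizer statement, here is a small brute-force computation for $G=S_3$ acting on $\{0,1,2\}$ by permutation; it only illustrates the lemma and is not part of the proof:

```python
from itertools import permutations

X = range(3)
G = list(permutations(X))          # elements of S_3; g acts by x -> g[x]

for x in X:
    orbit = {g[x] for g in G}
    stabilizer = [g for g in G if g[x] == x]
    # |G| / |G_x| should equal |Gx|, matching G/G_x being isomorphic to Gx
    print(x, len(orbit), len(G) // len(stabilizer))
```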
There is another case where the colimit takes a simple form after replacing $X//G$ with an equivalent category.
Lemma A $G$-set $X$ is free in the sense that it is in the essential image of the “free $G$-set functor” $Y \mapsto G \otimes_{1} Y$ or equivalently it is a coproduct of copies of $G$ with the standard action iff the action of $G$ on $X$ is free in the sense that $\forall x \in X \forall g \in G: (gx=x \Rightarrow g=1)$.
Proof It’s clear that if we have a disjoint union $X= \coprod_{i \in I} G$, then no element of $G$ other than $1$ can fix an element in $X$. For the other direction, suppose that we have the condition $\forall x \in X \forall g \in G: (gx=x \Rightarrow g=1)$. This implies that the morphism sets in the action groupoid are really small: Suppose $g,h \in \mathrm{Hom}(x,y)$ such that $gx=y=hx$, which implies that $h^{-1}gx=x$, so $h^{-1}g=1$ by assumption, thus $h=g$. This means that for any pair of objects in $X//G$, there is at most one morphism between them. So if we consider the set $X/G$ as a discrete category (i.e. the only morphisms are the identities), then if we take a representative for each orbit $X/G$, this defines an inclusion of categories $X/G \to X//G$. As elements in $X/G$ represent isomorphism classes in $X//G$, this inclusion is always essentially surjective. By our computations of the Hom-sets, it is also fully faithful if the action of $G$ on $X$ is free. So if we apply the “reconstruction lemma” we get $X \cong \varinjlim\limits_{x \in (X//G)^{op}}G(x) \cong \varinjlim\limits_{x \in X/G} G$. But a colimit over a discrete category is just a coproduct, so this is isomorphic to $\coprod_{x \in X/G} G$ which shows that $X$ is free.
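To see the freeness criterion in action, here is a tiny sketch (again just an illustration of the lemma): $\mathbb{Z}/3$ acting on two disjoint copies of itself by addition within each copy acts freely, and the set splits into orbits of size $|G|$, i.e. it is a coproduct of copies of $G$.

```python
n = 3
X = range(2 * n)          # two disjoint copies of Z/3
G = range(n)

def act(g, x):
    block, pos = divmod(x, n)
    return block * n + (pos + g) % n

# freeness: the only g fixing any point is the identity
free = all(g == 0 for g in G for x in X if act(g, x) == x)
orbits = {frozenset(act(g, x) for g in G) for x in X}
print(free, [len(o) for o in orbits])   # True, every orbit has size |G| = 3
```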
After some further lemmas, we will come to the main result of this post, which is also an application of the reconstruction lemma.
In the previous post, I described $G$-sets in different ways, among them as functors $G \to \mathbf{Set}$, but I didn’t do the same for $(G,H)$-sets. The following lemma remedies this deficiency.
Lemma $(G,H)$-sets may be identified with left $G \times H^{op}$-sets or with functors $G \to \mathbf{Set}\textrm{-}H$ or with functors $H^{op} \to G\textrm{-}\mathbf{Set}$. In other words, we have equivalences of categories $G\textrm{-}\mathbf{Set}\textrm{-}H \cong G\times H^{op}\textrm{-}\mathbf{Set} \cong [G,\mathbf{Set}\textrm{-}H] \cong [H^{op},G\textrm{-}\mathbf{Set}]$.
The proof of this lemma is a lot of rewriting of definitions, not more difficult than proving the corresponding statements for one-sided $G$-sets.
This lemma has a useful consequence, which one could also verify by hand:
Observation If $F: G\textrm{-}\mathbf{Set} \to H\textrm{-}\mathbf{Set}$ is a functor and $X$ is a $(G,K)$-set, then $F(X)$ is a $(H,K)$-set in a “natural” way.
Proof Think of $X$ as functor $X:K^{op} \to G\textrm{-}\mathbf{Set}$, composing with $F$, gives us a functor $F(X): K^{op} \to H\textrm{-}\mathbf{Set}$, which we may also think of as a $(H,K)$-set.
More explicitly, the action of $K$ on $F(X)$ can be described as follows: for $k \in K$, the right-multiplication-map $X \to X, x \mapsto xk$ is left $G$-equivariant, so it induces a left $H$-equivariant map $F(X) \to F(X)$, we can define the action of $k$ on $F(X)$ via this map.
The following lemma is an analog of the classical Eilenberg-Watts theorem from homological algebra which describes colimit-preserving functors $R\textrm{-}\mathbf{Mod} \to S\textrm{-}\mathbf{Mod}$ as tensor products with a $(S,R)$-bimodule.
Theorem (Eilenberg-Watts theorem for group actions) Every colimit-preserving functor $F: G\textrm{-}\mathbf{Set} \to H\textrm{-}\mathbf{Set}$ is naturally equivalent to $X \otimes_G$ for a $(H,G)$-set $X$. One can explicitly choose $X = F(G)$ (with the $(H,G)$-set structure from the previous observation, as $G$ is a $(G,G)$-set.)
Proof Let $X$ be a $G$-set and $Y$ be a $H$-set, then we have a natural bijection $\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G) \otimes_G X, Y) \cong \mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(X,\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y))$
Using the reconstruction lemma, we get $\mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(X,\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y)) \cong \mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(\varinjlim\limits_{x \in (X//G)^{op}}G(x),\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y)) \cong \varprojlim\limits_{x \in (X//G)^{op}}\mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G(x),\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y))$
For every $x \in (X//G)^{op}$, $G(x)=G$, so $\mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G(x),\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y)) \cong \mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y))$ via the map $f \mapsto f(1)$. We need to consider how this identification behaves under the morphisms involved in the colimit. For $g \in G$, we have the map $G(gx) \to G(x), a \mapsto ag$, this induces a map $\varphi_g: \mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G(x),\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y)) \to \mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G(gx),\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y))$ given by $\varphi_g(f)(h)=f(hg)$. If we make the indentification described above by evaluating both sides at $1$, we get $\varphi_g(f)(1)=f(1g)=gf(1)$. Using the definition of the $G$-action on the Hom-set $\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y)$, this left multiplication translates to right multiplication on $F(G)$. Because of the construction of the right $G$-action on $F(G)$, this right multiplication is the map that is induced from right multiplication $G \to G$. We may summarize this computation by stating that $\varprojlim\limits_{x \in (X//G)^{op}}\mathrm{Hom}_{G\textrm{-}\mathbf{Set}}(G(x),\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G),Y)) \cong \varprojlim\limits_{x \in (X//G)^{op}}\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G(x)),Y)$
Using the assumption that $F$ preserves colimits, we get $\varprojlim\limits_{x \in (X//G)^{op}}\mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(G(x)),Y) \cong \mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(\varinjlim\limits_{x \in (X//G)^{op}}F(G(x)),Y) \cong \mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(\varinjlim\limits_{x \in (X//G)^{op}} G(x)),Y) \cong \mathrm{Hom}_{H\textrm{-}\mathbf{Set}}(F(X),Y)$ where we used the reconstruction lemma again in the last step.
We conclude $F(X) \cong F(G) \otimes_G X$ by the Yoneda lemma.
This theorem (like the classical Eilenberg-Watts-theorem) is remarkable not only because it gives a concrete description of every colimit-preserving functor between certain categories, but also because it shows that such functors are completely determined by the image of one object $G$ and how it acts on the endomorphisms of that object (which are precisely the right-multiplications.)
It’s natural to ask at this point when two functors of the form $X \otimes_G$ and $Y \otimes_G$ for $(H,G)$-sets are naturally isomorphic. It’s not difficult to see that it is sufficient that $X$ and $Y$ are isomorphic as $(H,G)$-sets. The following lemma shows that this is also necessary, among other things.
Lemma For $(H,G)$-sets $X$ and $Y$, every natural transformation $\eta:X \otimes_G \to Y \otimes_G$ is induced by a unique $(H,G)$-equivariant map $f: X \to Y$
Proof Assume we have a natural transformation $\eta_A: X \otimes_G A \to Y \otimes_G A$, then we have in particular a left $H$-equivariant map $\eta_G: X \otimes_G G \to Y \otimes_G G$. We have $X \otimes_G G \cong X$ and $Y \otimes_G G \cong Y$, so this gives us a $H$-equivariant map $X \to Y$ which I call $f$. Clearly $f$ is uniquely determined by this construction. For a fixed $g \in G$, right multiplication by $g$ defines a left $G$-equivariant map $G \to G$. Under the isomorphism $X \otimes_G G \cong X$ these maps describe the right $G$ action on $X$. Naturality with respect to these maps implies that $f$ is right $G$-equivariant.
This lemma allows a reformulation of the previous theorem.
Theorem (Eilenberg-Watts theorem for group actions, alternative version)
The following bicategories are equivalent:
– The bicategory where the objects are groups, 1-morphisms between two groups $G, H$ are $(G,H)$-sets $X$, where the composition of 1-morphisms is given by taking tensor products and 2-morphisms between two $(G,H)$-sets are given by $(G,H)$-equivariant maps.
– The 2-subcategory of the 2-category of categories $\mathbf{Cat}$ where the objects are all the categories $G\textrm{-}\mathbf{Set}$ for groups $G$, 1-morphisms are colimit-preserving functors $G\textrm{-}\mathbf{Set} \to H\textrm{-}\mathbf{Set}$ and 2-morphisms are natural transformations between such functors.
This concludes my second blog post. If you want, please share or leave comments below.
https://www.physicsforums.com/threads/show-that-a-unitary-operator-maps-one-on-basis-to-another.715097/ | Show that a unitary operator maps one ON-basis to another
1. Oct 7, 2013
jhughes
1. The problem statement, all variables and given/known data
Given an inner product space $V$, a unitary operator $U$ and a set $\left\{\epsilon_i\right\}_{i=1,2,\dots}$ which is an orthonormal basis of $V$, show that the image of $\left\{\epsilon_i\right\}$ under $U$ is also an orthonormal basis of $V$
2. Relevant equations
3. The attempt at a solution
From the definition of a unitary operator we have $U^{\dagger}=U^{-1}$. From this we can pretty easily demonstrate that the inner product is preserved under $U$, i.e.
$\left\langle a|b\right\rangle=\left\langle a^{\prime}|b^{\prime}\right\rangle\quad\mbox{where }\left|a^{\prime}\right\rangle=U\,\left|a\right\rangle\quad\mbox{and }\left|b^{\prime}\right\rangle=U\,\left|b\right\rangle$
"$\left\{\epsilon_i\right\}_{i=1,2,\dots}$ is orthonormal" can be stated as
$\left\langle \epsilon_i|\epsilon_j\right\rangle=\delta_{i,j}$
Since $U$ preserves the inner product, if this statement is true for $\left\{\epsilon_i\right\}$ then it must be true for the image under $U$ as well. In other words, if $\left\{\epsilon_i\right\}$ is an orthonormal set, then the image of $\left\{\epsilon_i\right\}$ under $U$ is an orthonormal set as well.
So we have "ortho" and "normal", but where I've hit a mental block is at the "basis" part. For an $N$-dimensional space, it is clear that any set contained in the space which is orthonormal and has the same number of elements as an ON-basis of the space must also be an ON-basis of the space. What I'm stuck on is how to generalize this to an infinite-dimensional space.
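Not an answer to the infinite-dimensional question, but here is a finite-dimensional numerical illustration of the orthonormality part (my own sketch, using NumPy): a random unitary, obtained from the QR factorization of a random complex matrix, maps the standard basis to another orthonormal set.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(A)                # the Q factor of a complex matrix is unitary

images = U @ np.eye(n)                # images of the standard basis vectors
gram = images.conj().T @ images       # matrix of inner products <U e_i, U e_j>
print(np.allclose(gram, np.eye(n)))   # True: the images are again orthonormal
```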
http://math.stackexchange.com/questions/247933/brouwers-fixed-point-theorem-for-free/247936 | I think I found a proof of Brouwer's fixed point theorem which is much simpler than any of the proofs in my books.
One part is standard: Suppose there is an $f:D^n \rightarrow D^n$ with no fixed points. Then we can draw the ray from $f(x)$ through $x$ to get a retraction $r:D^n \rightarrow S^{n-1}$.
Now, suppose such an $r$ existed. Then we obviously have $S^{n-1} \stackrel{i}{\rightarrow} D^n \stackrel{r}\rightarrow S^{n-1}$ where $ri=\text{id}_{S^{n-1}}$.
Taking de Rham cohomology and writing $f^*\equiv H^p(f)$ for maps, we get that $i^*r^*=\text{id}^*_{S^{n-1}}$ is an isomorphism, such that $i^*:H^p(D^n)\rightarrow H^p(S^{n-1})$ is an epimorphism for all $p$, but this is impossible for $p=n-1$.
Thus there cannot exist such an $r$ and we have proved Brouwer's fixed point theorem.
So this proof uses nothing except that $H^p$ is a contravariant functor. If we were to do this with homology, we would need to use the notion of degree of maps, but my book on de Rham cohomology does this by using contractibility and homotopy invariance. Is there some heavy stuff hidden under the surface here that I'm not seeing?
This is a good example of the extent to which heavy is partly a matter of taste and background: from my point of view this argument hides an enormous amount of unfamiliar background, and the combinatorial proof via Sperner’s lemma is simple! – Brian M. Scott Nov 30 '12 at 9:48
There is no need of the use of degree. Just apply the covariant functor $H_{n-1}(-)$ and you will get your desired contradiction. More precisely on homology we would have
$$\begin{array}{ccccc} &H_{n-1}(S^{n-1}) &\stackrel{ i_\ast}{\longrightarrow} &H_{n-1}(D^n)& \stackrel{r_\ast}{\longrightarrow} & H_{n-1}(S^{n-1}) \\ \end{array}$$
which are respectively (left to right) $\Bbb{Z}$, 0, $\Bbb{Z}$ contradicting the fact that the identity map factors through zero. I should add that this is one of the standard proofs of the Brouwer fixed point theorem. For example Rotman does it this way in his book on Algebraic Topology, while Hatcher proves it in the case $n=2$ by applying the functor $\pi_1(-)$ instead of homology.
If you want a proof that uses degree, it is not so hard. Suppose you have a map $f:D^n \to D^n$ that has no fixed points. Then we can treat $f$ as a map from the northern hemisphere $D^n_+$ of $S^n$ to itself. Now we can extend $f$ to a map on $S^n$ as follows. We define
$$g(x) = \begin{cases} f(x), & \text{if}\hspace{2mm} x \in D^n_+,\\ f\circ r(x), & \text{if}\hspace{2mm} x \in D^n_{-} \end{cases}$$
where $r(x)$ is reflection about the plane through the equator and $D^n_{-}$ is the southern hemisphere. It is clear that $g(x)$ is a continuous function; furthermore $g(x)$ has no fixed points. Hence we can homotope $i\circ g$ to the antipodal map on $S^n$ that has degree $(-1)^{n+1}$, where $i : D^n_{+} \hookrightarrow S^n$ is inclusion.
However $i \circ g$ is not surjective because for example no point in the southern hemisphere is in the image. It follows that $\deg i \circ g = 0$ contradicting the fact that we found it to be $(-1)^{n+1}$.
Ah, of course, the identity map on $H_{n-1}(S^{n-1})=\mathbb{Z}$ cannot factor through $H_{n-1}(D^n)=0$. – Espen Nielsen Nov 30 '12 at 9:37
@espen180 Yes basically you can even take singular (co)homology and you will get the same contradiction. – user38268 Nov 30 '12 at 10:39
de Rham cohomology is only functorial with respect to smooth maps.
I see your point. It should not be a problem if I take singular cohomology instead. – Espen Nielsen Nov 30 '12 at 13:06
https://www.physicsforums.com/threads/kinematics-vectors-fairly-easy-stuff.126630/ | # Kinematics (Vectors, fairly easy stuff)
1. Jul 19, 2006
### Jehuty
1) A scientist runs to the east at 1.2 m/s when he suddenly realises he forgot his chemicals at home. He then dashes at 4.6 m/s [North]. What is the scientist's change in velocity?
So far I have 4.6 m/s - 1.2 m/s = 3.4, does anyone know what I am supposed to do with the directions?
2) (sorry I don't know how to put in the degree sign in front of the numbers, I will use ' instead) The velocity of a plane relative to the air is 320 km/h [E 35' S]. A wind velocity relative to the ground is 75 km/h [E]. What is the velocity of the plane relative to the ground?
Again, in this one it's the direction (or angle) that makes me unsure of my answer. I am using Vog = Vom + Vmg (velocity of object relative to ground = velocity of medium relative to ground + velocity of object relative to medium), but the fact that the plane is flying at an angle of 35 degrees confuses me.
help would be appreciated
2. Jul 19, 2006
### Staff: Mentor
-1- To include the directional part, you need to draw the velocity vectors, and do a graphical subtraction. Draw an arrow 1.2 units long pointing to the right (east), and at the tip of that arrow, draw another arrow 4.6 units long pointing up (north). The addition of the two vectors is the hypotenuse of the triangle (start at the beginning of the first vector and end at the pointy end of the second vector). Do you have an idea of how you would construct the difference (subtraction) between these two vectors?
-2- In this one you add the plane's velocity vector and the wind vector. Use the method from above.
3. Jul 19, 2006
### Jehuty
I think for the angle I would have to use tan-1(hyp)? (-1 being in the degree mode) right?
4. Jul 19, 2006
### Staff: Mentor
Actually, an easier way to do -1- would be to express each vector in rectangular coordinates, and do a simple subtraction. Have you seen this notation before? Assume that the horizontal left-right axis on your paper is the x-axis, and the vertical up-down axis is the y axis. Then each velocity vector can be expressed as the sum of its x and y components:
V = (Vx,Vy)
So for example, the first velocity vector would be V1 = (1.2m/s,0).
What would V2 be in this notation? What do you get for V2-V1 if you subtract each component of the vector?
5. Jul 19, 2006
### Jehuty
V1 = (1.2m/s, 0)
V2 = (0, 4.6)
Well... if I put those in the Pythagorean equation, I get a final answer 4.9 and then I put it in tan-1 and I get approximately 79 degrees... am I on the right track or am I still off?
Edit: oh wait, I see, am I supposed to subtract the x-component of V1 from the x-component of V2 and then do the same for the y-component and that would be my delta V?
6. Jul 19, 2006
### Staff: Mentor
Correct for V1 and V2. Now just subtract them to get V2-V1. You'll see that the difference vector basically says that the x-motion stops and the y-motion starts. You can also see that if you add the two vectors instead, you get that hypotenuse vector I mentioned. I think you can also see that subtracting the V1 vector is like turning it around (flipping it horizontally in this case) and adding that to V2. The difference vector in this case is the hypotenuse of the triangle you get by flipping V1 horizontally, moving it to the right so that its tip touches the bottom of V2, and then taking that hypotenuse. Make sense?
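A numeric version of that subtraction, as a sketch (with the assumption x = east, y = north):

```python
import math

v1 = (1.2, 0.0)   # running east, m/s
v2 = (0.0, 4.6)   # dashing north, m/s

dv = (v2[0] - v1[0], v2[1] - v1[1])             # (-1.2, 4.6)
magnitude = math.hypot(*dv)                     # ~4.75 m/s
angle = math.degrees(math.atan2(dv[1], dv[0]))  # ~104.6 deg CCW from east,
                                                # i.e. about 14.6 deg west of north
print(dv, magnitude, angle)
```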
7. Jul 19, 2006
### Jehuty
Yes, thanks, it makes sense now. For number two would I do the same? Since I have an angle this time, I think I need to do something like
V1 = (320km/h SIN35, 320km/h COS35)
V2 = (75, 0)
8. Jul 19, 2006
### Staff: Mentor
That looks right. What you are doing is finding the x and y (rectangular) components of the vector V1. It's called converting from polar coordinates to rectangular coordinates. In polar coordinates, you are given magnitude and direction (as an angle from one of the axes). In rectangular coordinates, you are given the x and y components of the vector, and the magnitude is the hypotenuse of those two components, and the angle is the $$tan^{-1}$$ of the y-component divided by the x-component.
There should be a section in your textbook that talks about converting between coordinate systems. As you can see, using rectangular coordinates makes adding and subtracting vectors easy. Good work!
Last edited: Jul 20, 2006
9. Jul 19, 2006
### Staff: Mentor
One other thing -- I didn't check the signs of the components of V1 (I don't understand the notation of East-South, etc., and I don't have a diagram to help me figure it out). But remember that if the x-component points left, it will have a negative magnitude. And if the y-component points down, it will have a negative magnitude.
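For problem 2, a hedged numeric sketch (it reads "[E 35' S]" as 35 degrees from due east, rotated toward south; if the course material uses a different convention, swap the sine and cosine accordingly; x = east, y = north):

```python
import math

angle = math.radians(35)
plane = (320 * math.cos(angle), -320 * math.sin(angle))  # km/h, relative to the air
wind  = (75.0, 0.0)                                      # km/h, blowing toward east

ground = (plane[0] + wind[0], plane[1] + wind[1])
speed = math.hypot(*ground)                              # ~384 km/h
south_of_east = math.degrees(math.atan2(-ground[1], ground[0]))  # ~28.6 deg
print(ground, speed, south_of_east)
```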
10. Jul 19, 2006
### Jehuty
Thank you so much for all that you've done, you've really been a big help.
11. Jul 19, 2006
### Staff: Mentor
Thanks, -Mike-
https://www.cfd-online.com/W/index.php?title=Boussinesq_eddy_viscosity_assumption&diff=5575&oldid=5540 | # Boussinesq eddy viscosity assumption
In 1877 Boussinesq postulated that the momentum transfer caused by turbulent eddies can be modeled with an eddy viscosity. This is in analogy with how the momentum transfer caused by the molecular motion in a gas can be described by a molecular viscosity. The Boussinesq assumption states that the Reynolds stress tensor, $\tau_{ij}$, is proportional to the mean strain rate tensor, $S_{ij}$, and can be written in the following way:
$\tau_{ij} = 2 \, \mu_t \, S_{ij}$
Where $\mu_t$ is a scalar property called the eddy viscosity. The same equation can be written more explicitly as:
$-\overline{\rho u'_i u'_j} = \mu_t \, \left( \frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} \right)$
The Boussinesq eddy viscosity assumption is also often called the Boussinesq hypothesis or the Boussinesq approximation.
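A minimal numerical sketch of the relation above (my addition; the eddy viscosity and gradient are arbitrary illustrative values, and the extra turbulent-kinetic-energy term included in some variants of the model is omitted, following the article):

```python
import numpy as np

mu_t = 1.5e-3                         # eddy viscosity, arbitrary illustrative value
grad_U = np.array([[0.0, 2.0, 0.0],   # mean velocity gradient dU_i/dx_j (simple shear)
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])

S = 0.5 * (grad_U + grad_U.T)         # mean strain rate tensor S_ij
tau = 2.0 * mu_t * S                  # modeled Reynolds stress tau_ij = 2 mu_t S_ij
print(tau)
```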
## References
Boussinesq, J. (1877), "Théorie de l’Écoulement Tourbillant", Mem. Présentés par Divers Savants Acad. Sci. Inst. Fr., Vol. 23, pp. 46-50.
https://www.varsitytutors.com/sat_math-help/how-to-multiply-integers | # SAT Math : How to multiply integers
## Example Questions
### Example Question #1 : How To Multiply Integers
F = ma in physics, where F is the force, m the mass, and a the acceleration.
If the mass is increased by three times the original and the acceleration is increased by 7 times the original, how many times greater is the new force than the original force?
Explanation:
We simply multiply the new mass by the new acceleration to obtain a new force that is 21 times greater than the original.
### Example Question #1 : How To Multiply Integers
An office wants to buy 22 computers at $900 each. The budget is$20,000 and the tax on computers is 9%. How many computers can the office afford?
Explanation:
The office can only afford 20 computers.
1.09 * 900 = $981 is the actual price of each computer, with tax.
Divide:
20,000/981 = 20.39
Since the office cannot purchase a partial computer, we round down to 20 computers.
### Example Question #1 : Operations
A paint crew can paint walls in hours. At this rate, how many walls can the paint crew paint in hours?
Explanation:
First, divide by to determine how many walls the paint crew can paint in hour.
divided by is equal to .
Then, multiply this number by hours.
### Example Question #1 : How To Multiply Integers
Convert three yards to inches.
Explanation:
To solve this problem, we need to know the conversions between yards to feet, and feet to inches. Write their correct conversions.
Convert three yards to feet.
Convert nine feet to inches.
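The conversion steps themselves were lost in extraction; assuming the standard conversions (1 yd = 3 ft, 1 ft = 12 in), the arithmetic presumably runs: 3 yd x 3 ft/yd = 9 ft, and 9 ft x 12 in/ft = 108 in.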
### Example Question #1751 : Sat Mathematics
Calculate in the following series:
Explanation:
Each element is obtained from the previous one by alternating the following operations: multiplying by 5 and adding 6.
If we let be the first element, then
Solve for :
### Example Question #2 : How To Multiply Integers
Janet can pick between 10 and 20 tomatoes in an hour. What is a possible amount of time for Janet to pick 120 tomatoes?
Explanation:
Since Janet can pick between 10 and 20 tomatoes in an hour, there is a range for which she can pick 120 tomatoes. Therefore, to find the possible amount of time it will take Janet to pick 120 tomatoes set up two fractions.
The first fraction will use the rate of 10 tomatoes an hour.
Therefore, identifying the variables are as follows.
Substituting these into the fraction results in a possible time
The second fraction will use the rate of 20 tomatoes an hour.
Therefore, identifying the variables are as follows.
Substituting these into the fraction results in a possible time
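The worked values were stripped along with the formulas; as a quick check of the stated rates (my addition), picking 120 tomatoes at 10 to 20 tomatoes per hour bounds the time between 6 and 12 hours:

```python
tomatoes = 120
for rate in (10, 20):             # slowest and fastest stated picking rates
    print(rate, tomatoes / rate)  # 12.0 hours and 6.0 hours
```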
http://www-old.newton.ac.uk/programmes/SIS/seminars/2007081011002.html | # SIS
## Seminar
### Progress on QCD at an infinite number of colours
Neuberger, H (Rutgers)
Friday 10 August 2007, 11:00-12:00
Seminar Room 2, Newton Institute Gatehouse
#### Abstract
At an infinite number of colours, N, various phase transitions occur in QCD, replacing either smooth crossovers or phase transitions at finite N. Some of these transitions obey large N universality, raising the prospect of bridging analytically the gap between short and long distance physics in planar QCD.
https://brilliant.org/problems/the-answer-is-not-1-3/ | # The answer is not 1
Calculus Level 4
If $$\displaystyle \frac{x}{1-x-x^2}$$ can be expressed in the form $$\displaystyle \sum_{n=1}^\infty a_n x^n$$ (on a suitable subset of the reals) with $$\displaystyle a_i \in \mathbb{R}$$ for all $$i \in \mathbb{N}$$, what is the value of $$\displaystyle{\lim_{n \to \infty} \frac{a_{n+1}}{a_n}}?$$
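For anyone who wants to peek numerically before answering, here is a SymPy sketch (my addition, not part of the problem) that expands the series and inspects the coefficient ratios:

```python
import sympy as sp

x = sp.symbols('x')
series = sp.series(x / (1 - x - x**2), x, 0, 20).removeO()
coeffs = [series.coeff(x, n) for n in range(1, 20)]
ratios = [sp.N(coeffs[i + 1] / coeffs[i]) for i in range(len(coeffs) - 1)]
print(coeffs[:8])   # 1, 1, 2, 3, 5, 8, 13, 21 -- the Fibonacci numbers
print(ratios[-1])   # the ratios settle toward the limit the problem asks for
```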
https://artofproblemsolving.com/wiki/index.php?title=2011_AMC_10B_Problems/Problem_7&diff=39173&oldid=39155 | # Difference between revisions of "2011 AMC 10B Problems/Problem 7"
## Problem
The sum of two angles of a triangle is of a right angle, and one of these two angles is larger than the other. What is the degree measure of the largest angle in the triangle?
## Solution
The sum of two angles in a triangle is of a right angle
If is the measure of the first angle, then the measure of the second angle is .
Now we know the measure of two angles are and . By the Triangle Sum Theorem, the sum of all angles in a triangle is so the final angle is . Therefore, the largest angle in the triangle is
https://proofwiki.org/wiki/Schanuel%27s_Conjecture | Schanuel's Conjecture
Conjecture
Let $z_1, \cdots, z_n$ be complex numbers that are linearly independent over the rational numbers $\Q$.
Then:
the extension field $\Q \left({z_1, \cdots, z_n, e^{z_1}, \cdots, e^{z_n}}\right)$ has transcendence degree at least $n$ over $\Q$
where $e^z$ is the complex exponential of $z$.
Source of Name
This entry was named for Stephen Hoel Schanuel.
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=65K05&jrnl=one&onejrnl=mcom | # American Mathematical Society
AMS eContent Search Results
Matches for: msc=(65K05) AND publication=(mcom) (sort order: Date)
Results: 1 to 30 of 33 found
[1] Yannan Chen and Wenyu Sun. A dwindling filter line search method for unconstrained optimization. Math. Comp. 84 (2015) 187-208.
[2] Lei-Hong Zhang and Wei Hong Yang. An efficient algorithm for second-order cone linear complementarity problems. Math. Comp. 83 (2014) 1701-1726.
[3] Jinyan Fan. Accelerating the modified Levenberg-Marquardt method for nonlinear equations. Math. Comp. 83 (2014) 1173-1187.
[4] Junfeng Yang and Xiaoming Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comp. 82 (2013) 301-329.
[5] Jerry Eriksson and Mårten E. Gulliksson. Local results for the Gauss-Newton method on constrained rank-deficient nonlinear least squares. Math. Comp. 73 (2004) 1865-1883. MR 2059740.
[6] Yu-Hong Dai. A family of hybrid conjugate gradient methods for unconstrained optimization. Math. Comp. 72 (2003) 1317-1328. MR 1972738.
[7] Y. H. Dai and Y. Yuan. A three-parameter family of nonlinear conjugate gradient methods. Math. Comp. 70 (2001) 1155-1167. MR 1826579.
[8] Q. Ni and Y. Yuan. A subspace limited memory quasi-Newton algorithm for large-scale nonlinear bound constrained optimization. Math. Comp. 66 (1997) 1509-1520. MR 1422793.
[9] A. R. Conn, Nick Gould and Ph. L. Toint. A globally convergent Lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds. Math. Comp. 66 (1997) 261-288. MR 1370850.
[10] Luís F. Portugal, Joaquím J. Júdice and Luís N. Vicente. A comparison of block pivoting and interior-point algorithms for linear least squares problems with nonnegative variables. Math. Comp. 63 (1994) 625-643. MR 1250776.
[11] Hou De Han. A direct boundary element method for Signorini problems. Math. Comp. 55 (1990) 115-128. MR 1023048.
[12] Andrew R. Conn, Nicholas I. M. Gould and Philippe L. Toint. Testing a class of methods for solving minimization problems with simple bounds on the variables. Math. Comp. 50 (1988) 399-430. MR 929544.
[13] J. Stoer and R. A. Tapia. On the characterization of $q$-superlinear convergence of quasi-Newton methods for constrained optimization. Math. Comp. 49 (1987) 581-584. MR 906190.
[14] Krzysztof C. Kiwiel. An algorithm for nonsmooth convex minimization with errors. Math. Comp. 45 (1985) 173-180. MR 790650.
[15] Angelo Lucia. An explicit quasi-Newton update for sparse optimization calculations. Math. Comp. 40 (1983) 317-322. MR 679448.
[16] Jorge Nocedal. Updating quasi-Newton matrices with limited storage. Math. Comp. 35 (1980) 773-782. MR 572855.
[17] J. N. Lyness. A bench mark experiment for minimization algorithms. Math. Comp. 33 (1979) 249-264. MR 514822.
[18] J. N. Lyness. The affine scale invariance of minimization algorithms. Math. Comp. 33 (1979) 265-287. MR 514823.
[19] John Greenstadt. Revision of a derivative-free quasi-Newton method. Math. Comp. 32 (1978) 201-221. MR 0474810.
[20] Ph. L. Toint. Some numerical results using a sparse matrix updating formula in unconstrained optimization. Math. Comp. 32 (1978) 839-851. MR 0483452.
[21] Alan E. Berger and Richard S. Falk. An error estimate for the truncation method for the solution of parabolic obstacle variational inequalities. Math. Comp. 31 (1977) 619-628. MR 0438707.
[22] Larry Nazareth. Generation of conjugate directions for unconstrained minimization without derivatives. Math. Comp. 30 (1976) 115-131. MR 0398100.
[23] Michael J. Best and Klaus Ritter. A class of accelerated conjugate direction methods for linearly constrained minimization problems. Math. Comp. 30 (1976) 478-504. MR 0431675.
[24] Paul T. Boggs. The convergence of the Ben-Israel iteration for nonlinear least squares problems. Math. Comp. 30 (1976) 512-522. MR 0416018.
[25] Donald Goldfarb. Factorized variable metric methods for unconstrained optimization. Math. Comp. 30 (1976) 796-811. MR 0423804.
[26] Richard P. Brent. Table errata: Algorithms for minimization without derivatives (Prentice-Hall, Englewood Cliffs, N. J., 1973). Math. Comp. 29 (1975) 1166. MR 0371062.
[27] William W. Hager and Gilbert Strang. Free boundaries and finite elements in one dimension. Math. Comp. 29 (1975) 1020-1031. MR 0388768.
[28] U. Dieter. How to calculate shortest vectors in a lattice. Math. Comp. 29 (1975) 827-833. MR 0379386.
[29] Shmuel S. Oren. Corrigenda: "Self-scaling variable metric algorithms without line search for unconstrained minimization" (Math. Comp. 27 (1973), 873-885). Math. Comp. 28 (1974) 887. MR 0343617.
[30] Richard S. Falk. Error estimates for the approximation of a class of variational inequalities. Math. Comp. 28 (1974) 963-971. MR 0391502.
http://mathoverflow.net/questions/127841/giving-topx-y-an-appropriate-topology/127853 | # Giving $Top(X,Y)$ an appropriate topology
I am not sure if it's OK to ask this question here.
Let $Top$ be the category of topological spaces. Let $X,Y$ be objects in $Top$.
Let $F:\mathbb{I}\rightarrow Top(X,Y)$ be a function (I will denote the image of $t$ by $F_t$). Let $F_{*}:X\times \mathbb{I}\rightarrow Y$ be the function that sends $(x,t)$ to $F_t(x)$.
Is there a topology on $Top(X,Y)$ such that $F$ is continuous iff $F_{*}$ is continuous ?
Motivation: In the definition of a homotopy $F$ from $f$ to $g$ (for some $f,g\in Top(X,Y)$) it is tempting to think of $F$ as a "path" (as in the definition of $PY^X$) from $f$ to $g$. Now I really wanted to see if $F$ could be thought of as a real path from $f$ to $g$ in $Top(X,Y)$. More precisely, I wanted to know whether $F:\mathbb{I}\rightarrow Top(X,Y)$, which sends $t$ to $F_t$, is a path or not. Note that $F(0)=f,F(1)=g$; thus if $F$ is continuous it would be a path from $f$ to $g$ in $Top(X,Y)$.
Hence, I think that the case when $\mathbb{I}$ is the unit interval is still of some interest.
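For reference, the general form of the property being asked for (with the interval $\mathbb{I}$ replaced by an arbitrary test space $\Gamma$) is the exponential law discussed in the answers below:
$$Top(\Gamma\times X,\;Y)\;\cong\;Top(\Gamma,\;Top(X,Y)),\qquad G\mapsto\hat G,\quad \hat G(\gamma)(x)=G(\gamma,x),$$
naturally in $\Gamma$ and $Y$; the question asks only for the special case $\Gamma=\mathbb{I}=[0,1]$.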
If $\mathbb{I}$ denotes $[0,1]$, then I don't know the answer, but this case is rather useless. For $\mathbb{I}$ a general topological space, the answer is no. Such a topology can be given iff $X$ is locally compact. If you are fine with a smaller subcategory of $\mathcal{T}op$, then there is a classical solution: work in the category of compactly generated Hausdorff spaces and equip $Hom(X,Y)$ with the compact-open topology. – Anton Fetisov Apr 17 '13 at 13:04
to Anton Fetisov: why is the case I rather useless? – johndoe Apr 17 '13 at 17:03
@Anton Fetisov I mean the interval [0,1]. I also don't think that this case is useless. I will add a motivation to my question. – Amr Apr 17 '13 at 21:43
@Amr: it seems you misplaced some * in the edited version – johndoe Apr 18 '13 at 5:53
@johndoe Yes. I will fix this now – Amr Apr 18 '13 at 9:43
Briefly, this works very nicely when $X$ is locally compact, but not otherwise. Then the function space carries the compact-open topology.
John Isbell gave a survey of the story and literature in his paper General Function Spaces, Products and Continuous Lattices, in Math Proc Cam Phil Soc 100 (1986) 193--205.
It is an ongoing matter in theoretical computer science.
There is frequent and ongoing literature on this subject going back to when Ralph Fox introduced the compact-open topology in On Topologies for Function-Spaces in Bull AMS 51 (1945).
It was originally considered in homotopy theory, then in category theory and topological lattice theory. After that theoretical computer science took over, under the headings of domain theory, realisability and "exact" real computation.
Along the way some very important concepts have been identified, in particular the universal property of the exponential in a cartesian closed category (as stated elsewhere on this page) but also that of a continuous lattice.
Briefly, a distributive continuous lattice is exactly the topology of a locally compact space.
I say this primarily as a warning to those (students in particular) who may think that a little bit of tweaking of the category or the universal property might yield better results. There are a lot of broken ideas along the way, some of which you will find surveyed in Isbell's paper. Breaking a correct idea like the universal property (by restricting its test object to a single space) is not going to help.
The most important topological space is not the real interval but the Sierpinski space, for which I write $\Sigma$. Classically, it has one open and one closed point. It is important because there are (constructive) bijections amongst
• continuous functions $\phi:X\to\Sigma$,
• open subspaces $U\subset X$ and
• closed subspaces $C\subset X$.
In particular, putting $Y\equiv\Sigma$ in the desired universal property, a continuous map $\phi:\Gamma\times X\to\Sigma$ is an open subspace of $\Gamma\times X$ and you want that to correspond to a continuous function $\Gamma\to\Sigma^X$.
With $\Gamma\equiv{\bf 1}$, this means that the points of $\Sigma^X$ must be the open subspaces of $X$.
With $\Gamma\equiv\Sigma^X$, we want the transpose of $id:\Sigma^X\to\Sigma^X$ to be continuous, but this is $ev:\Sigma^X\times X\to\Sigma$ defined by $ev(U,x)\equiv(x\in U)$. This map defines an open subspace of $\Sigma^X\times X$, which is a union of rectangles ${\cal V}\times V$. If $x\in U$ then $(U,x)\in{\cal V}\times V$ and $x\in V\subset K\subset U$ where $K\equiv\bigcap{\cal V}$ is compact.
So this works exactly when $X$ is locally compact and $\Sigma^X$ is its lattice of open subspaces, itself equipped with the Scott topology, which has a basis consisting of ${\cal V}\equiv\lbrace W|K\subset W\rbrace$ for $K$ compact.
I forget why $K$ is compact, but a good place to look would be the paper Local Compactness and Continuous Lattices by Karl Hofmann and Mike Mislove in Springer Lecture Notes in Mathematics 871 (1981) 209-248. It was in this paper that the interpolation property $x\in V\subset K\subset U$ was introduced as the definition of a locally compact space that is (sober but) not necessarily Hausdorff.
[PS: Peter Johnstone has a neat argument involving preservation of injectivity, in the final chapter of his book Stone Spaces.]
So this is the reason why local compactness of $X$ is necessary.
If $X$ is locally compact then the exponentials $Y^X$ exist for all spaces $Y$. However, even when $Y$ is locally compact, $Y^X$ need not be, for example Baire space $N^N$ is not, so locally compact spaces do not form a cartesian closed category. Nevertheless, $\Sigma^X$ is always locally compact when $X$ is.
Of course the argument for necessity above does not work if you only allow $\Gamma\equiv[0,1]$ in the universal property. However, it is not a good idea to mess around with such definitions.
If you seriously want to use the collection of maps $X\to Y$ as another space then you require a notation and a way of computing with functions as first-class objects. This notation is called the (typed) lambda calculus.
When the universal property of the exponential was recognised in the 1960s, it was not only related to this question in general topology but also to the formulation of symbolic logic, that is, to the lambda calculus and to proof theory.
I always write $\Gamma$ for the test object of a universal property because it plays exactly the same role in category theory as the context does in symbolic logic, which is customarily written with this letter. The context of an expression is the list of parameters (free variables) in it and their types (the spaces over which they range).
If you restrict $\Gamma$ to be just the singleton or interval then you cannot have general parameters in your expressions.
Dana Scott initially got involved in this subject because he wanted to show that the untyped lambda calculus is meaningless. However, he fairly quickly discovered models of it, in the form of topological lattices such that $X\cong X^X$. See, for example, his Data Types as Lattices in the SIAM Journal on Computing 5 (1976) 522-587.
Out of this grew veritable industries called domain theory and denotational semantics. In the 1980s, cartesian closed categories of domains came two-a-penny (I was responsible for some of them), where "domains" were particular kinds of partial orders equipped with the Scott topology. Denotational semantics used these to give mathematical meanings to constructs in programming languages in order to demonstrate the correctness of programs.
If you do not like the story for the whole of the traditional category of topological spaces then there are many alternatives.
The "official" answer in homotopy theory was the (full sub)category of compactly generated spaces.
On the other hand, there are ways of enlarging the traditional category to make it cartesian closed. Equilogical Spaces and Filter Spaces by Pino Rosolini in Rendiconti del Circolo Matematico di Palermo 64 (2000) 157--175 gives an excellent survey of them, explaining how they are reflective subcategories of presheaves on the traditional category. In particular, Scott had introduced equilogical spaces, defined as topological spaces equipped with formal equivalence relations; the theory is set out in full in Equilogical Spaces by Andrej Bauer, Lars Birkedal and Dana Scott.
Having gone to the trouble of writing this lengthy account of (some of) the history of this question, I would like to turn it back on the homotopy theorists.
When topics like this were considered by categorists in the 1960s, they aimed their papers at (for example) topologists. Therefore they did not spell out the topology, because their intended readers would know it. This is very frustrating for subsequent students of category theory: the papers just contain the category theory and it is impossible to trace back to the preceding mathematical ideas.
So I would be grateful if the homotopy theorists here would explain, without rehearsing the category theory, what the motivations were and are in their own subject for asking for "convenient" or cartesian closed categories.
PS Thanks to Tyler Lawson for the comment below answering this question. Is there a slightly more detailed explanation of these methods, say of the length of a MO answer, or a survey paper?
In the context of an application of this kind, the next question is whether the cartesian closed categories that have been used (and mentioned above) are the most appropriate for the job. On the face of it, you're happy with "any old" CCC. But, when you look at the extra objects of this category, do the extensions of topological notions to them behave in the way that you would like? That is, according to whatever other intuitions of topology you have, such as developing results along the lines that Tyler mentions?
Many early applications of category theory imported the benefits of "set theory" by working in the Yoneda embedding (presheaves) or a smaller category of sheaves. Rosolini showed (in the paper cited above) how the CCC extensions of categories of topological spaces are subcategories of the Yoneda embedding. There is a close technical analogy in that both kinds of subcategory are reflective, but for sheaves the reflector (left adjoint to inclusion) preserves all finite limits, whereas in these CCCs it preserves products but not all equalisers or pullbacks.
My personal view is that these extensions are not topology but set theory with topological decoration. In this context, by "set theory" I mean, not the study of $\in$, but that of discrete spaces, whereas I believe (following Marshall Stone) that mathematical structures should be intrinsically topological. I have a research programme called Equideductive Topology that tries to look at such extensions without importing set theory.
doesn't Isbell consider the case where I (as in the OP question) might run over the whole category of spaces? In other words, it seems to me that the OP question is a special case of the problem considered in Isbell's survey and might very likely have an affirmative answer. Am I wrong? – johndoe Apr 17 '13 at 17:08
Just a little note on local compactness. A lot of texts define this to mean a space such that every point has a compact neighborhood. But this often isn't as "convenient" (to use the word pointedly) as the stronger condition that every point has a basis of compact neighborhoods, which is essentially the interpolation property mentioned above. However, the conditions coincide if the space is assumed to be Hausdorff. – Todd Trimble Apr 22 '13 at 19:31
As you haven't really received much response on what our motivations are, let me at least mention that these function spaces and their cartesian-closed properties are absolutely critical to Serre's method for calculating homotopy groups: he uses path spaces to replace a map $X \to Y$ by a nicer map, uses the adjunction to show that this new map is a Serre fibration, and then uses this technique to calculate homotopy groups by "slicing off" one of them at a time (leading to his proofs of finite generation/finiteness). These techniques were so effective that they are now ubiquitous. – Tyler Lawson Jan 10 '15 at 19:20
(Hurewicz used the k to abbreviate 'kompakt erzeugte')
May I be the first to say welcome to MO, Bill! Your insight would be appreciated whenever you can spare it. – David Roberts Apr 28 '13 at 2:20
@Bill: Great to have your response! I wonder about the Peano curve as a pathology. It is not smooth. But then neither is Brownian motion. Should we regard Brownian motion as continuous? I recall Jim Eells (I think it was he) telling of discussing in a bar with a colleague about wild orbits, and the barman interrupted to say he knew a lot about those, as he had been a transformer engineer! Again, a topological topos allows for continuity (smoothness?) of functions with variable domain, such as the solutions of differential equations with parameters, e.g. $x \mapsto \log(x + t)$. – Ronnie Brown May 2 '13 at 10:20
@Paul: Paul asks for the motivation: here is my story.
I gave an MSc course on homotopy theory at Liverpool in 1960-61, and was struck then by the nice properties of the category of simplicial sets as against that of topological spaces, thus suggesting the convenience of simplicial sets. My thesis topic then was the algebraic topology of function spaces, and in the process of solving the particular problem I used exponential laws for spaces, simplicial sets, based simplicial sets, chain complexes, simplicial abelian groups, and maybe others. At the end of this work it struck me that the exponential law depended on the product as well as the hom, and I wrote this up as a small introduction.
I also knew that the weak (i.e. k-ified) product had been studied by Whitehead and by Danny Cohen, so it seemed reasonable to try this for the exponential law. To my surprise, it all worked well and became the first chapter of my thesis, on the category of Hausdorff k-spaces, which was submitted, and the thesis was reproduced in the old purple Banda and well circulated, e.g. to Princeton.
Writing up this general topology part as papers, it all became more expansive and was published as my first two papers, in 1963 and 1964. In writing up the Introduction of the first paper, I speculated: "It may be that the category of Hausdorff k-spaces is adequate and convenient for all purposes of topology." The major properties for convenience were listed in the second paper, mainly being cartesian closed. I should say that a referee of the initial version had drawn my attention to the important point about cartesian closed, i.e. the usual properties of the product. Also, later workers eliminated the Hausdorff assumption.
For more discussion, see the ncatlab entry on convenient categories of topological spaces.
There is also a nice paper of Lawvere discussing the various equivalences between $$(X^Y)^I, (X^I)^Y, X^{Y \times I}$$ in terms of motion and phase spaces, which I will try to find a reference to.
Finally, Spanier's suggestion of quasi-topological spaces is even more convenient, since it is locally cartesian closed, but was rejected mainly because the quasitopologies on the 2-point set formed a class. Maybe Peter Johnstone's "Topological topos" would be adequate and convenient for topology!
Thanks for writing this, Ronnie. I was of course aware of your contribution when I wrote my answer above, but as you can guess it was intended to be a "broad brush" history and was written in something of a hurry. I feel, though, that you are still telling me (what has since come to be regarded as) category theory and would like to have more idea of what the applications were/are in homotopy theory. Do you use iterated function-spaces, for example? What would lambda calculus for homotopy theory look like? (Maybe the last question is being answered by the Homotopy Type Theorists.) – Paul Taylor Apr 22 '13 at 17:02
I am interested in the reference to Lawvere's paper, thank you in advance. – johndoe Apr 23 '13 at 12:51
@RonnieBrown Hello. I am currently learning AT for the first time from your book "Topology and groupoids"! I was asking this question to see if the track groupoid of chapter 7 can be interpreted as a fundamental groupoid. I think the paper of "Lawvere" will probably help me find an answer. Thank you. – Amr Apr 25 '13 at 17:25
@Amr I think the Lawvere paper was on Volterra, see his web page. But the results of Section 5.9 of T&G show in essence how to regard the track groupoid as a fundamental groupoid. That Section was added to the 1988 edition. In reply to Paul, my papers [3,4,7] in my list show my motivation from determining some facts about the homotopy type of $X^Y$ by induction on the Postnikov system of $X$, with a view to determining some extensions in Barratt's track exact sequence, since he used Whitney tube systems!!! – Ronnie Brown Apr 26 '13 at 12:11
It might be of interest to the original poster to know that $Top(X,Y)$ endowed with the compact-open topology guarantees at least one direction in the implication. In other words, if $F_*\colon X\times\mathbb{I}\to Y$ is continuous then $F\colon \mathbb{I}\to Top(X,Y)$ is. Hence you can safely interpret any homotopy as a path in the function space $Top(X,Y)$, but (allegedly) there are paths in $Top(X,Y)$ which do not correspond to homotopies.
I, for my part, would like to see a counterexample for the opposite direction, since all the counterexamples I know of seem to use a space different than $\mathbb{I}$.
Reference: Dugundji, Topology, Chapter XII, Theorem 3.1.
I guess you want a "nice" topology for basic algebraic topology. In the category of compactly generated spaces, the compact-open topology behaves well, making this category a cartesian closed category.
Your original question might be reformulated as: does the product functor $(\mathbb{I}\times -): Top\to Top$ have a right adjoint functor? The answer is "no", and this is related to the bad behavior of the quotient topology with respect to the product topology, which implies bad behavior of the product with respect to pushouts.
See http://en.wikipedia.org/wiki/Closed_monoidal_category for aspects of category theory related to your question.
http://math.stackexchange.com/questions/31697/when-is-the-product-of-two-quotient-maps-a-quotient-map (There are examples of products that don't preserve the quotient)
Here, there is a related question/answer Categories with products that preserve quotients
Well, as I said, the product doesn't preserve quotients (in the category Top), and hence it doesn't preserve pushouts. Therefore it's not a left adjoint.
Also, you may argue that, if there were such a topology, you could conclude that products preserve the quotient topology (which is absurd). Assume that there is such a topology: then, if $q:A\to B$ is a quotient map, take the product $q\times Z: A\times Z\to B\times Z$. We need to prove that, under our conditions, this map would be a quotient map (which is not true in general). Given $f: B\times Z\to K$ a function, let
$(f\circ (q\times Z))_\ast : A\to Hom(Z,K)$
be the "adjoint" map of $(f\circ (q\times Z))$. By our hypothesis, $(f\circ (q\times Z))_\ast$ is continuous if, and only if, $(f\circ (q\times Z))$ is so.
Note that
$(f\circ (q\times Z))_\ast$
is equal to
$f_\ast\circ q$
in which $f_\ast$ denotes the adjoint map of $f$. Since $q$ is quotient map, $f_\ast\circ q$ is continuous if and only if $f_\ast$ is continuous (which happens if and only if $f$ is continuous (by our hypothesis)).
Therefore $(f\circ (q\times Z))$ is continuous if and only if $f$ is continuous. This would prove that $(q\times Z )$ is a quotient map, which isn't true in general. And, therefore, we conclude that there isn't such a topology.
shouldn't A be the interval I in order to prove the inexistence of such a topology? – johndoe Apr 17 '13 at 14:10
ok! If he was asking about the particular case in which $\mathbb{I}$ is a closed interval, you may put $A=\mathbb{I}$ in the "proof" above. But, now, to complete the proof, you need an example of a quotient map $q:\mathbb{I}\to X$ such that $q:\mathbb{I}\times Z\to Y\times Z$ is not a quotient map. – Fernando Apr 17 '13 at 15:00
Sorry, I meant: "to complete the proof, you need an example of a quotient map $q: \mathbb{I}\to Y$ such that $q\times Z:\mathbb{I}\times Z\to Y\times Z$ is not a quotient map" – Fernando Apr 17 '13 at 15:25
The answer is no in general as explained by the above answers.
Since this problem is tagged algebraic topology, I guess that you care about function spaces because you care about homotopy theory. Here is how I think about function spaces when I am doing homotopy theory:
The space of functions from $X$ to $Y$ is the space representing the contravariant functor $$Z \mapsto {\rm maps}(X \times Z,Y)$$ this object might not exist in the category of topological spaces, but this does not really matter from the perspective of homotopy theory: you can just work inside a different category!
EDIT:
I just want to add my opinion on Lennart's comment. Suppose that we "enlarge" the category of spaces to include the representing object ${\rm maps}(X,Y)$. We can extract a lot of information about the object ${\rm maps}(X,Y)$. Its points are just morphisms $* \to {\rm maps}(X,Y)$ which we can identify with the set of maps from $X$ to $Y$. We also have a great description of maps from other spaces into ${\rm maps}(X,Y)$. I can't imagine that you can extract much more from the compact open topology (but I could be wrong here)
I should add that I am still also learning homotopy theory and if anyone more experienced disagrees with my answer I would love to hear from you :D – Daniel Barter Apr 17 '13 at 15:56
In all my experience, one really likes to have $Map(X,Y)$ really as a space not just as something like a functor from topological spaces to sets. As this works totally fine if everything is in the category of compactly generated spaces (which contain kind of all spaces usual topologists care about), there is usually no need to enlargen the category of topological spaces. – Lennart Meier Apr 17 '13 at 18:26
@Lennart: thanks for the comment! I have edited my answer to address your concerns. – Daniel Barter Apr 17 '13 at 19:18
http://mathhelpforum.com/calculus/22011-differentiability.html | # Math Help - Differentiability
1. ## Differentiability
Find all values of x for which the function is differentiable.
$P(x)= sin(|x|)-1$
2. Originally Posted by Truthbetold
Find all values of x for which the function is differentiable.
$P(x)= sin(|x|)-1$
Can you think of any points of the graph of $y = |x|$ where the function isn't differentiable?
-Dan
3. Originally Posted by topsquark
Can you think of any points of the graph of $y = |x|$ where the function isn't differentiable?
-Dan
To shorten the below: no.
If something is not differentiable only at the very middle (where the right and left sides come together, usually at (0,0) in examples of cusps, corners, vertical tangents, and non-removable discontinuities), then I cannot think of, or see using a graphing calculator, any such points.
4. Originally Posted by Truthbetold
To shorten the below: no.
If something is not differentiable only at the very middle (where the right and left sides come together, usually at (0,0) in examples of cusps, corners, vertical tangents, and non-removable discontinuities), then I cannot think of, or see using a graphing calculator, any such points.
$f(x) = |x|$
Consider x near 0. When x approaches 0 from the left, the first derivative is -1. When x approaches 0 from the right, the first derivative is 1. So the derivative of |x| does not exist at x = 0.
We can make a similar argument for $f(x) = sin(|x|) - 1$. See the graph below. Notice the cusp at x = 0.
-Dan
[Attached thumbnail: graph of $f(x) = \sin(|x|) - 1$, showing the cusp at $x = 0$.]
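As a quick numerical illustration of the argument above, one can estimate the one-sided difference quotients of $P(x)=\sin(|x|)-1$. The snippet below is a rough sketch (plain Python; the step size and sample points are arbitrary choices): the two slopes disagree at $x=0$ (about $-1$ and $+1$) and agree away from the origin.

```python
import math

def P(x):
    return math.sin(abs(x)) - 1.0

def one_sided_slopes(x0, h=1e-6):
    """Backward and forward difference quotients of P at x0."""
    left = (P(x0) - P(x0 - h)) / h
    right = (P(x0 + h) - P(x0)) / h
    return left, right

# At x = 0 the one-sided slopes disagree (about -1 vs +1): a corner, so P is not differentiable there.
print("x = 0:", one_sided_slopes(0.0))
# Away from 0 both slopes agree; e.g. at x = 1 they are close to cos(1).
print("x = 1:", one_sided_slopes(1.0), "cos(1) =", math.cos(1.0))
```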
https://git.rockbox.org/cgit/rockbox.git/tree/manual/rockbox_interface/browsing_and_playing.tex?id=f1b839bf83b28db98b8f11ab35c3e2e7b8e49ddc | summaryrefslogtreecommitdiffstats log msg author committer range
blob: 269eb9779922ec817cc3e08008492e38d6ab5bc0 (plain)
% $Id$
%
\chapter{Browsing and playing}
\section{\label{ref:file_browser}File Browser}
\screenshot{rockbox_interface/images/ss-file-browser}{The file browser}{}
Rockbox lets you browse your music in either of two ways. The \setting{File Browser} lets you navigate through the files and directories on your \dap, entering directories and executing the default action on each file. To help differentiate files, each file format is displayed with an icon.

The \setting{Database Browser}, on the other hand, allows you to navigate through the music on your player using categories like album, artist, genre, etc.

You can select whether to browse using the \setting{File Browser} or the \setting{Database Browser} by selecting either \setting{Files} or \setting{Database} in the \setting{Main Menu}. If you choose the \setting{File Browser}, the \setting{Show Files} setting lets you select what types of files you wish to view. See \reference{ref:ShowFiles} for more information on the \setting{Show Files} setting.

\note{The \setting{File Browser} allows you to manipulate your files in ways that are not available within the \setting{Database Browser}. Read more about \setting{Database} in \reference{ref:database}. The remainder of this section deals with the \setting{File Browser}.}

\opt{ondio}{
  Unlike the Archos Firmware, Rockbox provides multivolume support for the MultiMediaCard; this means the \dap{} can access both data volumes (internal memory and the MMC), thus being able to, for instance, build playlists with files from both volumes. In the \setting{File Browser} a new directory will appear as soon as the device has read the content after inserting the card. This new directory's name is generated as \fname{}, and will behave exactly as any other directory on the \dap{}.
}

\opt{h10,h10_5gb}{\note{
  If your \dap{} is an MTP model, the Music directory where all your music is stored may be hidden in the \setting{File Browser}. This may be fixed by either changing its properties (on a computer) to not hidden, or by changing the \setting{Show Files} setting to all.
}}

\subsection{\label{ref:controls}File Browser Controls}
\begin{table}
\begin{btnmap}{}{}
\ActionStdPrev{}/\ActionStdNext{} & Go to previous/next item in list.
If you are on the first/last entry, the cursor will wrap to the last/first entry.\\ % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,RECORDER_PAD} { \ButtonOn+\ButtonUp{}/ \ButtonDown & Move one page up/down in the list.\\ } \opt{IRIVER_H10_PAD} { \ButtonRew{}/ \ButtonFF & Move one page up/down in the list.\\ } % \ActionTreeParentDirectory & Go to the parent directory.\\ % \ActionTreeEnter & Executes the default action on the selected file or enters a directory.\\ % \ActionTreeWps & If there is an audio file playing, returns to the \setting{While Playing Screen} (WPS) without stopping playback.\\ % \nopt{player}% {% \ActionTreeStop & Stops audio playback.\\% }% % \ActionStdContext{} & Enter the \setting{Context Menu}\\ % \ActionStdMenu{} & Enter the \setting{Main Menu}\\ % \opt{RECORDER_PAD}{ \ButtonFTwo & Switches to the Browse/Play Quick Menu \\ % \ButtonFThree & Switches to the Display Quick Menu \\ % } % \opt{SANSA_E200_PAD}{ \ActionStdRec & Switches to the Recording screen \\ % } \end{btnmap} \end{table} \opt{RECORDER_PAD}{ The functions of the F keys are also summarised on the button bar at the bottom of the screen. } \subsection{\label{ref:Contextmenu}\label{ref:PartIISectionFM}Context Menu} \screenshot{rockbox_interface/images/ss-context-menu}{The Context Menu}{} The \setting{Context Menu} allows you to perform certain operations on files or directories. To access the \setting{Context Menu}, position the selector over a file or directory and access the context menu with \ActionStdContext{}. \note{The \setting{Context Menu} is a context sensitive menu. If the \setting{Context Menu} is invoked on a file, it will display options available for files. If the \setting{Context Menu} is invoked on a directory, it will display options for directories.} The \setting{Context Menu} contains the following options (unless otherwise noted, each option pertains both to files and directories): \begin{description} \item [Playlist.] Enters the \setting{Playlist Submenu} (see \reference{ref:playlist_submenu}). \item [Playlist Catalog.] Enters the \setting{Playlist Catalog Submenu} (see \reference{ref:playlist_catalog}). \item [Rename.] This function lets the user modify the name of a file or directory. \item [Cut.] Copies the name of the currently selected file or directory to the clipboard and marks it to be cut'. \item [Copy.] Copies the name of the currently selected file or directory to the clipboard and marks it to be copied'. \item [Paste.] Only visible if a file or directory name is on the clipboard. When selected it will move or copy the clipboard to the current directory. \item [Delete.] Deletes the currently selected file. This option applies only to files, and not to directories. Rockbox will ask for confirmation before deleting a file. Press \ActionYesNoAccept{} to confirm deletion or any other key to cancel. \item [Delete Directory.] Deletes the currently selected directory and all of the files and subdirectories it may contain. Deleted directories cannot be recovered. Use this feature with caution! \item [Open with.] Runs a viewer plugin on the file. Normally, when a file is selected in Rockbox, Rockbox automatically detects the file type and runs the appropriate plugin. The \setting{Open With} function can be used to override the default action and select a viewer by hand. For example, this function can be used to view a text file even if the file has a non-standard extension (i.e., the file has an extension of something other than \fname{.txt}). 
See \reference{ref:Viewersplugins} for more details on viewers. \item [Create Directory.] Create a new directory in the current directory on the disk. \item [Properties.] Shows properties such as size and the time and date of the last modification for the selected file. If used on a directory, the number of files and subdirectories will be shown, as well as the total size. \opt{recording}{ \item [Set As Recording Directory.] Save recordings in the selected directory. } \item [Add to Shortcuts.] Adds a link to the selected item in the \fname{shortcuts.link} file. If the file does not already exist it will be created in the root directory. Note that if you create a shortcut to a file, Rockbox will not open it upon selecting, but simply bring you to it's location in the \setting{File Browser}. \end{description} \subsection{\label{sec:virtual_keyboard}Virtual Keyboard} \screenshot{rockbox_interface/images/ss-virtual-keyboard}{The virtual keyboard}{} This is the virtual keyboard that is used when entering text in Rockbox, for example when renaming a file or creating a new directory. \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,RECORDER_PAD,GIGABEAT_PAD,SANSA_E200_PAD,% SANSA_C200_PAD,MROBE100_PAD}{ \begin{table} \begin{btnmap}{}{} \ActionKbdLeft{}~/ \ActionKbdRight{}~/ \ActionKbdUp{}~/ \ActionKbdDown & Move about the virtual keyboard (moves the solid cursor) \\ % \ActionKbdCursorLeft{} or \ActionKbdCursorRight & Move the line cursor within the text line \\ % \ActionKbdSelect & Inserts the selected keyboard letter at the current cursor position \\ % \ActionKbdAbort & Exits the virtual keyboard without saving any changes \\ % \opt{RECORDER_PAD}{ \ButtonFOne & Shifts between the upper case, lower case and accented keyboards \\ } % \ActionKbdDone & Exits the virtual keyboard and saves any changes \\ \ActionKbdBackSpace & Deletes the character before the line cursor \\ % \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,GIGABEAT_PAD,MROBE100_PAD}{ \ActionKbdMorseInput & Enters Morse input mode \\ \ActionKbdMorseSelect & Tap to select a character in Morse input mode \\ } \end{btnmap} \end{table} } \opt{IPOD_4G_PAD,IPOD_3G_PAD,IRIVER_H10_PAD,IAUDIO_X5_PAD}{ \textbf{Picker area} \begin{table} \begin{btnmap}{}{} \ActionKbdUp/\ActionKbdDown & Move about the virtual keyboard. If you move out of the picker area, you get to the \emph{Line edit mode}. \\ \ActionKbdLeft/\ActionKbdRight & (moves the solid cursor). \\ \ActionKbdSelect & Inserts the currently selected keyboard letter at the current filename cursor position \\ \ActionKbdDone & Exits the virtual keyboard and saves any changes \\ \ActionKbdAbort & Exits the virtual keyboard without saving any changes\\ \opt{IPOD_4G_PAD,IPOD_3G_PAD,IRIVER_H10_PAD}{ \ActionKbdMorseInput & Enters Morse input mode \\ \ActionKbdMorseSelect & Tap to select a character in Morse input mode \\ } \end{btnmap} \end{table} \textbf{Line edit mode} \begin{table} \begin{btnmap}{}{} \ActionKbdLeft/\ActionKbdRight & Move left and right\\ \ActionKbdSelect & Deletes the letter to the left of the cursor\\ \ActionKbdUp/\ActionKbdDown & Returns to the picker area\\ \end{btnmap} \end{table} } \opt{ondio}{ \begin{table} \begin{btnmap}{Picker area}{} \ButtonUp/\ButtonDown/\ButtonLeft/\ButtonRight & Move about the virtual keyboard (moves the solid cursor). If you move out of the picker area with \ButtonUp/\ButtonDown, you get to the line edit mode. \\ \ButtonMenu & Selects the letter underneath the cursor. 
\\ Long \ButtonMenu & Accepts the change and returns to the File Browser.\\ \ButtonOff & Quit the virtual keyboard without saving the changes.\\ \end{btnmap} \end{table} \begin{table} \begin{btnmap}{Line edit mode}{} \ButtonLeft/\ButtonRight & Move left and right\\ \ButtonMenu & Deletes the letter to the left of the cursor\\ Long \ButtonMenu & Accepts the deletion\\ \ButtonUp/\ButtonDown & Returns to the picker area\\ \end{btnmap} \end{table} } \opt{player}{ The current text line to be entered or edited is always listed on the first line of the display. The second line of the display can contain the character selection bar, as in the screenshot above. \begin{table} \begin{btnmap}{}{} \ButtonOn & Toggle picker- and line edit mode\\ \ButtonLeft/\ButtonRight & moves back and forth in the selected \\ & line (picker of input line) \\ \ButtonPlay & Picks character in character bar, or acts as backspace \\ & in the text line.\\ Long \ButtonPlay & Accept\\ \ButtonStop & Cancel\\ \ButtonMenu & Flips picker lines\\ \end{btnmap} \end{table} } \input{rockbox_interface/tagcache.tex} \input{rockbox_interface/wps.tex} %Include playlist section \input{working_with_playlists/main.tex} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9806817173957825, "perplexity": 2038.286566607951}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00665.warc.gz"} |
https://stats.stackexchange.com/questions/387736/a-different-proof-for-kl-divergence-non-negativity | # A different proof for KL divergence non-negativity
KL divergence's non-negativity can be proved in many ways. One could use the inequality $$\log x \leq x - 1$$ as a main step in the proof; another could leverage the concavity of the logarithm function to yield the non-negativity.
Although those proofs are concise and simple, I find them less obvious to come up with, particularly for someone who is not familiar with concavity and such inequalities.
Therefore, I am trying to find an alternative proof which doesn't require knowledge of the logarithm inequality or of concavity.
The following is what I've come up with:
$$KL(p||q)\geq0$$
$$\Leftrightarrow\sum_i p_i \ln p_i \geq \sum_i p_i\ln q_i$$
$$\Leftrightarrow\sum_i \ln p_i^{p_i} \geq \sum_i \ln q_i^{p_i}$$
$$\Leftrightarrow e^{\sum_i \ln p_i^{p_i}} \geq e^{\sum_i \ln q_i^{p_i}}$$
$$\Leftrightarrow e^{\ln p_1^{p_1}}...e^{\ln p_n^{p_n}} \geq e^{\ln q_1^{p_1}}...e^{\ln q_n^{p_n}}$$
$$\Leftrightarrow p_1^{p_1}...p_n^{p_n} \geq q_1^{p_1}... q_n^{p_n}$$
Constraints: $$0\leq p_i,q_i \leq 1$$ and $$\sum_i p_i=1$$ and $$\sum_i q_i=1$$
To prove that $$KL(p||q)\geq0$$, I now need to prove that:
$$p_1^{p_1}...p_n^{p_n} \geq q_1^{p_1}... q_n^{p_n}$$ $$(*)$$
Hopefully, $$(*)$$ is simpler to prove without using the logarithm function. However, I am stuck here.
I would appreciate any ideas that help to prove $$(*)$$, or corrections to the transformation (if any).
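Not an answer, but a quick numerical sanity check of the chain above is easy to run. The sketch below (assuming NumPy; the alphabet size and number of trials are arbitrary choices) draws random pairs $p,q$ and confirms that the computed KL divergence is non-negative and that $$(*)$$ holds for them.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dist(n):
    """Draw a random probability vector of length n (entries strictly positive)."""
    x = rng.random(n) + 1e-12
    return x / x.sum()

for _ in range(5):
    p, q = random_dist(6), random_dist(6)
    kl = np.sum(p * np.log(p / q))        # KL(p||q) with natural log
    lhs = np.prod(p ** p)                 # p_1^{p_1} ... p_n^{p_n}
    rhs = np.prod(q ** p)                 # q_1^{p_1} ... q_n^{p_n}
    print(f"KL = {kl:.4f} (>= 0: {kl >= 0}),  (*) holds: {lhs >= rhs}")
```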
Let $$f(\boldsymbol{q})=f(q_1,q_2,...,q_{n-1})=q_1^{p_1}q_2^{p_2}...q_{n-1}^{p_{n-1}}(1-\sum_{i=1}^{n-1}{q_i})^{1-\sum_{i=1}^{n-1}p_i}$$. I just removed $$p_n$$, $$q_n$$ because they depend on other $$p_i,q_i$$. Here, we know $$p_i$$, and we try to maximize $$f(\boldsymbol{q})$$. In the end, we'll see that it's going to be maximized when $$p_i=q_i$$.
Setting the partial derivative to zero (and dividing out the common factor $\prod_{k\ne i}q_k^{p_k}$):
$$p_iq_i^{p_i-1}\Big(1-\sum_{k=1}^{n-1}{q_k}\Big)^{1-\sum_{k=1}^{n-1}p_k}-\Big(1-\sum_{k=1}^{n-1}p_k\Big)q_i^{p_i}\Big(1-\sum_{k=1}^{n-1}{q_k}\Big)^{-\sum_{k=1}^{n-1}p_k}=0$$
Solving this yields $$p_iq_n=q_ip_n$$, where $$p_n=1-\sum_{k=1}^{n-1}p_k$$ and $$q_n=1-\sum_{k=1}^{n-1}q_k$$. Writing this for all $$i$$ in $$1,2,\dots,n-1$$ yields $$p_i=q_i$$ via some algebra. So, any choice of $$q_i$$ other than $$p_i$$ yields a smaller $$f(\boldsymbol{q})$$. Proving that this critical point is actually the maximum is left to you.
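An alternative, calculus-free route to $$(*)$$ is the weighted AM-GM inequality (note that weighted AM-GM is itself usually proved via concavity of the logarithm, so this only partly avoids the tools the question wanted to sidestep). Applying it with weights $$p_i$$ to the numbers $$q_i/p_i$$ (terms with $$p_i=0$$ can be dropped) gives
$$\prod_i \Big(\frac{q_i}{p_i}\Big)^{p_i} \;\leq\; \sum_i p_i\,\frac{q_i}{p_i} \;=\; \sum_i q_i \;=\; 1 \quad\Longrightarrow\quad \prod_i q_i^{p_i} \;\leq\; \prod_i p_i^{p_i},$$
with equality if and only if all the ratios $$q_i/p_i$$ are equal, i.e. if and only if $$p=q$$.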
http://mathoverflow.net/feeds/user/14601

User lkeer - MathOverflow: most recent 30 items, feed retrieved 2013-06-19T08:13:47Z.

Question: Relating curvature and torsion of a connection to those of a curve (lkeer, asked 2013-02-18, last activity 2013-02-23)
http://mathoverflow.net/questions/122182/relating-curvature-and-torsion-of-a-connection-to-those-of-a-curve

I'm currently trying to relate two descriptions of the curvature and torsion of a connection and am running into some confusion.

I know that an affine connection $A$ on an $n$-dimensional manifold $M$ can be split into two parts $A = \omega + e$, where $\omega$ takes values in the Lie algebra of rotations $\mathfrak{so}(n)$ while $e$ takes values in the Lie algebra of translations $\mathfrak{t}(n)$.

The curvature form $\Omega = d\omega + \omega \wedge \omega$ can then be obtained from the $\mathfrak{so}(n)$ part, and the torsion form $\theta = de + \omega \wedge e$ from the $\mathfrak{t}(n)$ part.

However, I am also aware that Cartan also related the curvature and torsion of an affine connection on $M$ to the older idea of curvature and torsion of curves in $M$ (I assume this is the origin of the word 'torsion' for this quantity?). If you take a curve from $M$ and develop it in a flat Euclidean space, then the curvature and torsion of the connection on $M$ both induce 'extra' curvature and torsion in the developed curve. I think that the modern version of this development would be a horizontal lift of the curve in $M$ into the principal bundle over $M$.

I'm struggling to see how these two ideas fit together. In particular, a curve in $\mathbb{R}^n$ has $n-1$ of these Frenet-Serret invariants, not just curvature and torsion.

> What I would like to understand is why only the first two invariants of the curve appear in the affine connection, seeing as there are $n-1$ of these for a curve in $\mathbb{R}^n$. And what does the torsion of a curve have to do with the translation group?

I would really appreciate any help on understanding this, or any reference suggestions.

Question: What theorem of Liouville's is Gian-Carlo Rota referring to here? (lkeer, asked 2011-04-22, last activity 2011-04-24)
http://mathoverflow.net/questions/62630/what-theorem-of-liouvilles-is-gian-carlo-rota-referring-to-here

I am very curious about this remark in Lesson Four of Rota's talk, Ten Lessons I Wish I Had Learned Before I Started Teaching Differential Equations (http://www.ega-math.narod.ru/Tasks/GCRota.htm):

> "For second order linear differential equations, formulas for changes of dependent and independent variables are known, but such formulas are not to be found in any book written in this century, even though they are of the utmost usefulness.
>
> "Liouville discovered a differential polynomial in the coefficients of a second order linear differential equation which he called the invariant. He proved that two linear second order differential equations can be transformed into each other by changes of variables if and only if they have the same invariant. This theorem is not to be found in any text. It was stated as an exercise in the first edition of my book, but my coauthor insisted that it be omitted from later editions."

Does anyone know where to find this theorem?

Comment by lkeer (2013-02-20) on http://mathoverflow.net/questions/122182/relating-curvature-and-torsion-of-a-connection-to-those-of-a-curve/122235#122235: Thank you! I have that paper and should have thought to go back to it. As you suggest, I was misled by the idea of Cartan connections, which I didn't realise came later. I really thought that Cartan did make that link to torsion of a curve in 'Riemannian Geometry in an Orthogonal Frame', but looking back through it I'm not so sure where I got this idea from.

Comment by lkeer (2013-02-18) on http://mathoverflow.net/questions/122182/relating-curvature-and-torsion-of-a-connection-to-those-of-a-curve/122209#122209: Sorry, I realise I haven't been very clear here. I do understand that the curvatures of a curve are extrinsic, but I think that there is still a link to the curvature of a connection. The modern analogue of Cartan's development in Euclidean space would I think be the horizontal lift of a curve in $M$ to a curve in the principal bundle over $M$. E.g. given a connection with torsion and no curvature, the horizontal lift of a circle in $\mathbb{R}^2$ would be a helix.

Comment by lkeer (2011-04-23) on http://mathoverflow.net/questions/62630/what-theorem-of-liouvilles-is-gian-carlo-rota-referring-to-here/62649#62649: Thank you very much, I will look up this reference.
https://www.research.ed.ac.uk/en/publications/testing-emstrongkstrongem-modal-distributions-optimal-algorithms- | # Testing k-Modal Distributions: Optimal Algorithms via Reductions
Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio, Gregory Valiant, Paul Valiant
Research output: Working paper
## Abstract
We give highly efficient algorithms, and almost matching lower bounds, for a range of basic statistical problems that involve testing and estimating the $L_1$ distance between two k-modal distributions p and q over the discrete domain {1,…,n}. More precisely, we consider the following four problems: given sample access to an unknown k-modal distribution p,
Testing identity to a known or unknown distribution:
1. Determine whether p = q (for an explicitly given k-modal distribution q) versus p is ε-far from q;
2. Determine whether p = q (where q is available via sample access) versus p is ε-far from q;
Estimating $L_1$ distance ("tolerant testing") against a known or unknown distribution:
3. Approximate $d_{TV}(p,q)$ to within additive ε, where q is an explicitly given k-modal distribution;
4. Approximate $d_{TV}(p,q)$ to within additive ε, where q is available via sample access.
For each of these four problems we give sub-logarithmic sample algorithms, which we show are tight up to additive poly(k) and multiplicative polylog(log n) + polylog(k) factors. Thus our bounds significantly improve the previous results of [BKR:04], which were for testing identity of distributions (items (1) and (2) above) in the special cases k=0 (monotone distributions) and k=1 (unimodal distributions) and required O((log n)^3) samples.
As our main conceptual contribution, we introduce a new reduction-based approach for distribution-testing problems that lets us obtain all the above results in a unified way. Roughly speaking, this approach enables us to transform various distribution testing problems for k-modal distributions over {1,…,n} to the corresponding distribution testing problems for unrestricted distributions over a much smaller domain {1,…,ℓ} where ℓ = O(k log n).
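To make the domain-reduction idea more concrete, here is a toy sketch (an illustration only, not the construction from the paper): collapse a distribution on {1,…,n} onto the masses of roughly O(log(n)/ε) intervals whose lengths grow geometrically. Repeating such a decomposition on each of the at most k+1 monotone pieces of a k-modal distribution is one way to picture ending up with a domain of size O(k log n); the growth factor and the uniform test distribution below are arbitrary choices.

```python
import numpy as np

def geometric_intervals(n, eps=0.1):
    """Partition {0, ..., n-1} into intervals whose lengths grow roughly
    geometrically by a factor (1 + eps); this yields O(log(n)/eps) intervals."""
    bounds, length, end = [0], 1.0, 0
    while end < n:
        end = min(n, end + max(1, int(length)))
        bounds.append(end)
        length *= 1.0 + eps
    return list(zip(bounds[:-1], bounds[1:]))

def collapse(p, intervals):
    """Replace p by the vector of its interval masses (the reduced domain)."""
    return np.array([p[a:b].sum() for a, b in intervals])

n = 10_000
p = np.ones(n) / n                       # toy distribution on a domain of size n
intervals = geometric_intervals(n)
reduced = collapse(p, intervals)
print(len(intervals), "intervals for n =", n)   # far smaller than n
print("total mass preserved:", reduced.sum())   # still sums to 1
```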
Original language: English. Published in: Computing Research Repository (CoRR), abs/1112.5659. Publication status: Published - 2011.
http://thermalfluidscentral.org/encyclopedia/index.php?title=Introduction_to_Heat_Transfer&diff=9390&oldid=1455 | # Introduction to Heat Transfer
Figure 1: One-dimensional conduction.
Heat transfer is a process whereby thermal energy is transferred in response to a temperature difference. There are three modes of heat transfer: conduction, convection, and radiation. Conduction is heat transfer across a stationary medium, either solid or fluid. For an electrically nonconducting solid, conduction is attributed to atomic activity in the form of lattice vibration, while the mechanism of conduction in an electrically-conducting solid is a combination of lattice vibration and translational motion of electrons. Heat conduction in a liquid or gas is due to the random motion and interaction of the molecules. For most engineering problems, it is impractical and unnecessary to track the motion of individual molecules and electrons, which may instead be described using the macroscopic averaged temperature. The heat transfer rate is related to the temperature gradient by Fourier’s law. For the one-dimensional heat conduction problem shown in Fig. 1, in which temperature varies along the y - direction only, the heat transfer rate is obtained by Fourier’s law
${q''_y} = - k\frac{{dT}}{{dy}}\qquad \qquad(1)$
where $q''_y$ is the heat flux along the y-direction, i.e., the heat transfer rate in the y-direction per unit area (W/m²), and dT/dy (K/m) is the temperature gradient. The proportionality constant k is thermal conductivity (W/m-K) and is a property of the medium.
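As a quick numerical illustration of eq. (1) (the numbers below are arbitrary example values, not taken from the text), a fixed temperature drop across a plane layer gives a constant conduction flux:

```python
# Heat flux from Fourier's law, eq. (1), with dT/dy approximated by a finite difference.
k = 0.6                         # W/m-K, roughly the conductivity of liquid water (assumed value)
T_hot, T_cold = 350.0, 300.0    # K, temperatures of the two faces (assumed values)
L = 0.01                        # m, layer thickness (assumed value)

dT_dy = (T_cold - T_hot) / L    # K/m, temperature gradient across the layer
q_y = -k * dT_dy                # W/m^2, eq. (1)
print(q_y)                      # 3000.0 W/m^2, directed from the hot face toward the cold face
```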
For heat conduction in a multidimensional isotropic system, eq. (1) can be rewritten in the following generalized form:
${\mathbf{q''}} = - k\nabla T \qquad \qquad(2)$
where both the heat flux and the temperature gradient are vectors, i.e.,
${\mathbf{q''}} = {\mathbf{i}}{q''_x} + {\mathbf{j}}{q''_y} + {\mathbf{k}}{q''_z} \qquad \qquad(3)$
While the thermal conductivity for isotropic materials does not depend on the direction, it is dependent on direction for anisotropic materials. Unlike isotropic material whose thermal conductivity is a scalar, the thermal conductivity of anisotropic material is a tensor of the second order:
${\mathbf{k}} = \left[ {\begin{array}{*{20}{c}} {{k_{xx}}} & {{k_{xy}}} & {{k_{xz}}} \\ {{k_{yx}}} & {{k_{yy}}} & {{k_{yz}}} \\ {{k_{zx}}} & {{k_{zy}}} & {{k_{zz}}} \\ \end{array}} \right] \qquad \qquad(4)$
and eq. (2) will become:
${\mathbf{q''}} = - {\mathbf{k}} \cdot \nabla T \qquad \qquad(5)$
Equation (2) or (5) is valid in a system that is uniform in all aspects except for the temperature gradient, i.e., no gradients of mass concentration or pressure. In a multicomponent system, mass transfer can also contribute to the heat flux (Curtiss and Bird, 1999).
${\mathbf{q''}} = - {\mathbf{k}} \cdot \nabla T + \sum\limits_{i = 1}^N {{h_i}{{\mathbf{J}}_i}} + c{R_u}T\sum\limits_{i = 1}^N {\sum\limits_{j = 1(j \ne i)}^N {\frac{{{x_i}{x_j}}}{{{\rho _i}}}\frac{{D_i^T}}{{{{\mathfrak{D}_{ij}}}}}} \left( {\frac{{{{\mathbf{J}}_i}}}{{{\rho _i}}} - \frac{{{{\mathbf{J}}_j}}}{{{\rho _j}}}} \right)} \qquad \qquad(6)$
where $\mathbf{J}_i$ is diffusive mass flux relative to mass-averaged velocity, which will be discussed in the latter part of this section. The second term on the right-hand side represents the interdiffusional convection term, which is not zero even though $\sum\limits_{i = 1}^N {{{\mathbf{J}}_i}} = 0$. $h_i$ is partial enthalpy (J/kg) for the ith species. The third term on the right-hand side is the contribution of the concentration gradient to the heat flux, which is referred to as the diffusion-thermo or Dufour effect. c is the molar concentration (kmol/m³) of the mixture, $R_u$ is the universal gas constant, $x_i$ and $x_j$ are molar fractions of the ith and jth components respectively, $D_i^T$ is the multicomponent thermal diffusivity (m²/s) (which will be considered in the discussion of mass transfer in the latter part of this section) and $\mathfrak{D}_{ij}$ is the Maxwell-Stefan diffusivity, which is related to the multicomponent Fick diffusivity, ${\mathbb{D}_{ij}}$ (see mass transfer). A binary system gives
$\mathfrak{D_{12}} = \frac{{{x_1}{x_2}}}{{{\omega _1}{\omega _2}}}{\mathbb{D}_{12}}\qquad \qquad(7)$
where $\omega_1$ and $\omega_2$ are mass fractions of components 1 and 2 respectively. For a ternary system:
$\mathfrak{D_{12}} = \frac{{{x_1}{x_2}}} {{{\omega _1}{\omega _2}}}\frac{{{\mathbb{D}_{12}}{\mathbb{D}_{33}} - {\mathbb{D}_{13}}{\mathbb{D}_{23}}}} {{{\mathbb{D}_{12}} + {\mathbb{D}_{33}} - {\mathbb{D}_{13}} - {\mathbb{D}_{23}}}} \qquad \qquad(8)$
Figure 2: Forced convective heat transfer.
**Typical values of mean convective heat transfer coefficients**

| Mode | Geometry | $\bar h$ (W/m²-K) |
| --- | --- | --- |
| Forced convection | Air flows at 2 m/s over a 0.2 m square plate | 12 |
| | Air at 2 atm flowing in a 2.5 cm-diameter tube with a velocity of 10 m/s | 65 |
| | Water flowing in a 2.5 cm-diameter tube with a mass flow rate of 0.5 kg/s | 3500 |
| | Airflow across 5 cm-diameter cylinder with velocity of 50 m/s | 180 |
| Free convection (ΔT = 20 °C) | Vertical plate 0.3 m high in air | 4.5 |
| | Horizontal cylinder with a diameter of 2 cm in water | 890 |
| Evaporation | Falling film on a heated wall | 6000-27000 |
| Condensation of water at 1 atm | Vertical surface | 4000-11300 |
| | Outside horizontal tube | 9500-25000 |
| Boiling of water at 1 atm | Pool | 2500-3500 |
| | Forced convection | 5000-100000 |
| Natural convection-controlled melting and solidification | Melting in a rectangular enclosure | 500-1500 |
| | Solidification around a horizontal tube in a superheated liquid phase change material | 1000-1500 |
For a system of more than four components, the Maxwell-Stefan diffusivity can be obtained using methods described in Curtiss and Bird (1999; 2001). The interdiffusional convection term represented by the second term on the right- hand side of eq. (6) is usually important for multicomponent diffusion systems. While the Dufour energy flux represented by the third term on the right-hand side of eq. (6) is negligible for many engineering problems, it may become important for cases with a very large temperature gradient.
The second mode of heat transfer is convection, which occurs between a wall at one temperature, Tw, and a moving fluid at another temperature, ${T_\infty }$; this is exemplified by forced convective heat transfer over a flat plate, as shown in Fig. 2. The mechanism of convection heat transfer is a combination of random molecular motion (conduction) and bulk motion (advection) of the fluid. Newton’s law of cooling is used to describe the rate of heat transfer:
$q'' = h({T_w} - {T_\infty }) \qquad \qquad(9)$
where h is the convective heat transfer coefficient (W/m²-K), which depends on many factors including fluid properties, flow velocity, geometric configuration, and any fluid phase change that may occur as a result of heat transfer. Unlike thermal conductivity, the convective heat transfer coefficient is not a property of the fluid. Typical values of mean convective heat transfer coefficients for various heat transfer modes are listed in the table above. Convective heat transfer is often measured using the Nusselt number defined by:
$Nu = \frac{{hL}}{k} \qquad \qquad(10)$
where L and k are characteristic length and thermal conductivity of the fluid, respectively.
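A minimal sketch tying eqs. (9) and (10) together, with assumed numbers (the value of h is of the order listed in the table for forced convection of air over a plate):

```python
# Convective heat flux (eq. 9) and the corresponding Nusselt number (eq. 10).
h = 12.0                     # W/m^2-K, cf. "air at 2 m/s over a 0.2 m square plate" in the table
T_w, T_inf = 330.0, 300.0    # K, wall and free-stream temperatures (assumed values)
q = h * (T_w - T_inf)        # W/m^2, Newton's law of cooling
print(q)                     # 360.0

L = 0.2                      # m, characteristic length of the plate
k_air = 0.026                # W/m-K, approximate thermal conductivity of air (assumed value)
Nu = h * L / k_air           # eq. (10)
print(Nu)                    # ~92
```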
The third mode of heat transfer is radiation. The transmission of thermal radiation does not require the presence of a propagating medium and, therefore, can occur in a vacuum. Thermal radiation is a form of energy emitted by matter at a nonzero temperature and its wavelength is primarily in the range between 0.1 and 10 μm. Emission can be from a solid surface as well as from a liquid or gas. Thermal radiation may be considered to be the propagation of electromagnetic waves or alternately as the propagation of a collection of particles, such as photons or quanta of photons. When matter is heated, some of its molecules or atoms are excited to a higher energy level. Thermal radiation occurs when these excited molecules or atoms return to lower energy states. Although thermal radiation can result from changes in the energy states of electrons, as well as changes in vibrational or rotational energy of molecules or atoms, all of these radiant energies travel at the speed of light. The wavelength λ of radiative emissions is related to their frequency ν by
$\lambda \nu = c \qquad \qquad(11)$
where c is the speed of light and has a value of $2.998 \times 10^8$ m/s in a vacuum. A quantitative description of the mechanism of thermal radiation requires quantum mechanics. An electromagnetic wave with frequency ν can also be viewed as a particle – a photon – with energy of
$\varepsilon = h\nu \qquad \qquad(12)$
where h = $6.626068 \times 10^{-34}$ m²-kg/s is Planck's constant. Both mass and charge of a photon are zero.
For a blackbody, defined as an ideal surface that emits the maximum energy that can be emitted by any surface at the same temperature, the spectral emissive power, $E_{b,\lambda}$ (W/m³), can be obtained by Planck's law
${E_{b,\lambda }} = \frac{{{c_1}}}{{{\lambda ^5}({e^{{c_2}/(\lambda T)}} - 1)}} \qquad \qquad(13)$
where ${c_1} = 3.742 \times {10^{ - 16}}{\rm{ W - }}{{\rm{m}}^{\rm{2}}}$ and ${c_2} = 1.4388 \times {10^{ - 2}}{\rm{ m - K}}$ are radiation constants. The unit of the surface temperature T in eq. (13) is K.
Figure 3: Radiation heat transfer between a small surface and its surroundings.
The emissive power for a blackbody, $E_b$ (W/m²), is
${E_b} = \int_0^\infty {{E_{b,\lambda }}d\lambda } \qquad \qquad(14)$
Substituting eq. (13) into eq. (14), Stefan-Boltzmann’s law is obtained
${E_b} = {\sigma _{SB}}{T^4} \qquad \qquad(15)$
where $\sigma_{SB} = 5.67 \times 10^{-8}$ W/m²-K⁴ is the Stefan-Boltzmann constant.
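Equations (13)–(15) can be checked against each other numerically: integrating Planck's law over wavelength should reproduce $\sigma_{SB} T^4$. A small sketch (temperature and integration grid are arbitrary choices):

```python
import numpy as np

# Planck's law, eq. (13), with the radiation constants given in the text.
c1 = 3.742e-16        # W-m^2
c2 = 1.4388e-2        # m-K

def E_b_lambda(lam, T):
    return c1 / (lam**5 * np.expm1(c2 / (lam * T)))

T = 1000.0                               # K, arbitrary test temperature
lam = np.logspace(-7, -2, 20000)         # 0.1 um .. 1 cm covers the emission band at this T
E_b = np.trapz(E_b_lambda(lam, T), lam)  # total emissive power, eq. (14)

sigma_SB = 5.67e-8
print(E_b, sigma_SB * T**4)              # both ~5.67e4 W/m^2, consistent with eq. (15)
```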
For a real surface, the emissive power is obtained by
$E = \varepsilon {E_b} \qquad \qquad(16)$
where ε is the emissivity, defined as the ratio of emissive power of the real surface to that of a blackbody at the same temperature. Since a blackbody is the best emitter, the emissivity of any surface must be less than or equal to 1. A simple but important case of radiation heat transfer is the radiation heat exchange between a small surface with area A, emissivity ε, and temperature Tw, and a much larger surface surrounding the small surface. If the temperature of the surroundings is Tsur, the heat transfer rate per unit area from the small object is obtained by
$q'' = \varepsilon {\sigma _{SB}}(T_w^4 - T_{sur}^4) \qquad \qquad(17)$
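For instance, with assumed values ε = 0.8, $T_w$ = 400 K and $T_{sur}$ = 300 K (not from the text), eq. (17) gives
$q'' = 0.8 \times 5.67 \times 10^{-8} \times \left(400^4 - 300^4\right) \approx 794{\rm \ W/m^2}$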
For a detailed treatment of radiation heat transfer, including radiation of nongray surfaces and participating media, consult Siegel and Howell (2002).
## References
Curtiss, C. F., and Bird, R. B., 1999, “Multicomponent Diffusion,” Industrial and Engineering Chemistry Research, Vol. 38, pp. 2115-2522.
Curtiss, C. F., and Bird, R. B., 2001, “Errata,” Industrial and Engineering Chemistry Research, Vol. 40, p. 1791.
Faghri, A., and Zhang, Y., 2006, Transport Phenomena in Multiphase Systems, Elsevier, Burlington, MA.
Faghri, A., Zhang, Y., and Howell, J. R., 2010, Advanced Heat and Mass Transfer, Global Digital Press, Columbia, MO.
Siegel, R., and Howell, J., 2002, Thermal Radiation Heat Transfer, 4th edition, Taylor and Francis, New York, NY.
https://www.physicsforums.com/threads/proof-of-limit.133834/ | Proof of Limit
1. Sep 27, 2006
Haftred
I am trying to prove the limit of the function (2x^2 + y^2) / (x^2 + y^2) as (x,y) ---> (-1 , 2) is 6/5.
So I have $0 < \sqrt{(x+1)^2 + (y-2)^2} < \delta$
and need $\left| f(x,y) - 6/5 \right| < \epsilon$.
I found a common denominator and made epsilon the quotient of two polynomials. Also, i recognized that you could factor the numerator to yield some function of delta. However, the denominator is becoming a problem. I get that the numerator is less than 4D^2 + 8D + D^2 + 4D, after using the triangle inequality, but i don't know what to do with the denominator. Am i close to the right method, or am i totally doing it wrong? Is proving this even possible?
2. Sep 28, 2006
Edgardo
Last edited: Sep 28, 2006
3. Sep 28, 2006
murshid_islam
how about converting into polar coordinates?
4. Sep 28, 2006
HallsofIvy
Staff Emeritus
The question is, what are you allowed to use? It is obvious that both numerator and denominator are continuous functions and the denominator does not go to 0. By the "limit theorems" it obviously goes to 6/6 since the numerator goes to 6 and the denominator goes to 5. Are you saying that you are required to do an epsilon-delta proof?
5. Sep 28, 2006
Haftred
yes, it's obvious what the limit is, but I need to prove it using the delta-epsilon proof.
6. Sep 28, 2006
HallsofIvy
Staff Emeritus
Then your delta will measure distance from (-1, 2). It might be best to use murshid islam's suggestion: convert to polar coordinates. But you will also need to translate (-1, 2) to the origin. That is, x= -1+ r cos($\theta$), y= 2+ r sin($\theta$). That way, $\delta= r$.
7. Sep 28, 2006
Haftred
Ok converting to polar coordinates seems like a good idea; however, the denominator is still causing me problems.
I will show all the work I have done so far:
We want to show that the limit of the function:
$$\frac{2x^2 + y^2}{x^2+ y^2}$$ is 1.2 as one approaches the point (-1,2).
We want to use the $$\delta - \epsilon$$ proof:
$$x = -1 + r\cos\Theta$$
$$y = 2 + r\sin\Theta$$
$$r < \delta$$
$$\left| f(x,y) - 1.2 \right| = \frac{\left| 4x^2 - y^2 \right|}{5x^2 + 5y^2} < \epsilon$$
I substituted the values of x and y in terms of $$r$$ and $$\Theta$$
And I end up with:
$$\left| \frac{4r^2\cos^2\Theta - r^2\sin^2\Theta - 8r\cos\Theta - 4r\sin\Theta}{5r^2 - 10r\cos\Theta + 20r\sin\Theta + 25} \right| < \epsilon$$
I sill cannot prove it if I substitute delta for 'r'; the denominator is stil causing problems.
Last edited: Sep 28, 2006
8. Sep 28, 2006
Haftred
sorry x = -1 + rcos(t) is what i used, not 1 + rcos(t)
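For completeness, here is one way the estimate in post #7 can be finished (just a sketch; the constants are one arbitrary choice). For $r \le \tfrac{1}{2}$, bound the numerator above and the denominator below:

$$\left| 4r^2\cos^2\Theta - r^2\sin^2\Theta - 8r\cos\Theta - 4r\sin\Theta \right| \le 5r^2 + 12r \le 14.5\,r$$

$$5r^2 - 10r\cos\Theta + 20r\sin\Theta + 25 \ge 25 - 30r \ge 10$$

so the whole quotient is at most $1.45\,r$, and choosing $\delta = \min\left( \tfrac{1}{2},\ \epsilon / 1.45 \right)$ makes it less than $\epsilon$ whenever $r < \delta$.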
https://www.physicsforums.com/threads/electric-dipole-moment-of-copper-and-sulphate-ions.379347/ | # Homework Help: Electric Dipole Moment of copper and sulphate ions
1. Feb 17, 2010
### kihr
1. The problem statement, all variables and given/known data
The electric dipole moment of a CuSO4 molecule is $3.2 \times 10^{-32}$ C-m. Find the separation between the copper and sulphate ions.
2. Relevant equations
Electric dipole moment = 2aq, where 2a = separation between the two charges q and -q
3. The attempt at a solution
I am unable to proceed with the given data, because I need to find out the value of q. Please help me with some clues. The answer is 10^-13 m. Thanks.
2. Feb 17, 2010
### gabbagabbahey
I suggest you Google "Copper Sulfate" and find out how ionized each component is (e.g., if the compound is formed by doubly ionized copper, you would have $\text{Cu}^{2+}$ and $\text{SO}_4{}^{2-}$ and so $q$ would be 2 times the charge of the electron).
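Following that hint (assuming doubly ionized copper, so $q = 2e \approx 3.2 \times 10^{-19}$ C), the arithmetic works out as

$$2a = \frac{p}{q} = \frac{3.2\times 10^{-32}\ \text{C m}}{3.2\times 10^{-19}\ \text{C}} = 1\times 10^{-13}\ \text{m}$$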
http://mathonline.wikidot.com/alternative-definitions-for-the-limit-superior-inferior-of-a | Alternative Definitions for the Limit Sup/Inf of a Seq. of Real Numbers
# Alternative Definitions for the Limit Superior/Inferior of a Sequence of Real Numbers
Recall from The Limit Superior and Limit Inferior of a Sequence of Real Numbers page that if $(a_n)_{n=1}^{\infty}$ is a sequence of real numbers then we defined the limit superior of $(a_n)_{n=1}^{\infty}$ to be:
(1)
\begin{align} \quad \limsup_{n \to \infty} a_n = \lim_{n \to \infty} \left ( \sup_{k \geq n} \{ a_k \} \right ) \end{align}
Similarly, we defined the limit inferior of $(a_n)_{n=1}^{\infty}$ to be:
(2)
\begin{align} \quad \liminf_{n \to \infty} a_n = \lim_{n \to \infty} \left ( \inf_{k \geq n} \{ a_k \} \right ) \end{align}
We will now look at some equivalent definitions that are often used to define the limit superior and limit inferior of a sequence of real numbers. Sometimes we may use:
(3)
\begin{align} \quad \limsup_{n \to \infty} a_n = \inf_{n \geq 1} \left \{ \sup_{k \geq n} \left \{ a_k \right \} \right \} \end{align}
(4)
\begin{align} \quad \liminf_{n \to \infty} a_n = \sup_{n \geq 1} \left \{ \inf_{k \geq n} \left \{ a_k \right \} \right \} \end{align}
Noticing that $\left ( \sup_{k \geq n} \left \{ a_k \right \} \right )_{n=1}^{\infty}$ is a decreasing sequence of real numbers, the above definition makes sense (since a decreasing sequence of real numbers converges to its infimum); similarly, $\left ( \inf_{k \geq n} \left \{ a_k \right \} \right )_{n=1}^{\infty}$ is an increasing sequence, so it converges to its supremum, which justifies the second formula.
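As a quick numerical illustration (not part of the original page), the tail suprema and infima can be computed directly for a concrete sequence such as $a_n = (-1)^n\left(1 + \tfrac{1}{n}\right)$, whose limit superior is $1$ and limit inferior is $-1$:

```python
import numpy as np

# a_n = (-1)^n (1 + 1/n): limsup = 1, liminf = -1
N = 10000
n = np.arange(1, N + 1)
a = (-1.0) ** n * (1.0 + 1.0 / n)

# sup_{k >= n} a_k and inf_{k >= n} a_k as running extrema over the tails
tail_sup = np.maximum.accumulate(a[::-1])[::-1]
tail_inf = np.minimum.accumulate(a[::-1])[::-1]

print(tail_sup[5000], tail_inf[5000])   # ~1.0002 and ~-1.0002, approaching 1 and -1
```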
https://research-information.bris.ac.uk/en/publications/ksupsup-%CF%80supsup%CE%BD%CE%BD-decay-and-np-searches-at-na62 | # K+ → π+νν¯ decay and NP searches at Na62
for the NA62 Collaboration
Research output: Contribution to journalArticle (Academic Journal)peer-review
## Abstract
The decay K+ → π+νν¯ with a very precisely predicted branching ratio of less than $10^{-10}$ is one of the best candidates to reveal indirect effects of new physics at the highest mass scales. The NA62 experiment at CERN SPS is designed to measure the branching ratio of the K+ → π+νν¯. In 2016, the first data set good for physics was collected. The preliminary result on BR(K+ → π+νν¯) from the full 2016 data set is presented here. Due to the high beam energy and hermetic detector coverage, NA62 also has the opportunity to directly search for a multitude of long-lived beyond-Standard-Model particles, such as dark photons, dark scalars, axion-like particles, and heavy neutral leptons. An overview of the broader NA62 physics program with status and prospects will be illustrated.
Original language: English | Pages: 95-102 (8 pages) | Acta Physica Polonica B, Proceedings Supplement, Vol. 13, No. 1 | https://doi.org/10.5506/APhysPolBSupp.13.95 | Published - 2020
http://mathmistakes.org/dividing-by-log/ | Last week we posted a somewhat similar mistake involving logarithms. Here’s the student work:
While you should feel free to comment on today’s mistake independent of the earlier one, I think it would be especially productive to compare these two pieces of student work. Could the same student have done both pieces of work? Are there differences between the way each student understands logarithms?
By the way, we now support equation editing in LaTeX, so feel free to write things such as $\frac{2\log(6x)}{\log}$ in the comments. This post is also tagged with a Common Core Standard and categorized according the hierarchy that you see in the menu on the left.
• Here, this student has dropped the 2 in front of the first term, and didn’t cancel out the log. So if they’re thinking “just divide out the logs” they’ve made a mistake in simplifying.
In addition, they still don't seem to know how to solve $\log 6x = 45$, which suggests they don't understand the meaning of the logarithm. In the last problem, we didn't know whether a student could solve equations like $\log x = 45$.
• mr bombastic
Usually when I get errors this severe, it isn’t just a misconception about logs. It is also about very poor algebra I equation solving skills, and poor work habits. This is much worse than the previous problem.
• I think this problem goes deeper than logs. Something about how we teach functions leaves students unable to tell when f(a+b)=f(a)+f(b) holds true and when it doesn’t.
• mpershan
Can you say more? I’m not quite understanding you.
• mr bombastic
Couldn’t agree more! Square roots and rational expressions especially – root(x^2 + 9) becomes x + 3; x/(x + 5) becomes 1 + x/5 etc. I have pondered doing a more advanced order of operations unit that includes expressions with variables, log/ln, composition – exciting stuff, right? But, many students are not so clear about order of operations with numbers, let alone when variables are involved.
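A single numeric check makes the point for the first of those: at $x = 4$, $\sqrt{x^2+9} = \sqrt{25} = 5$ while $x + 3 = 7$, so $\sqrt{x^2+9}$ cannot be rewritten as $x+3$; "distributing" a function over a sum fails the moment you test one value.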
http://math.stackexchange.com/questions/40203/covering-a-space-with-closures-of-disjoint-sets-from-a-basis | # covering a space with closures of disjoint sets from a basis
We are given a compact metric topological space $X$ and a base $\beta$ of its topology. Is it always possible to find a subset of the basis, say $\beta_0 \subset \beta$, such that each point of the space is contained in a closure of a set from $\beta_0$, i.e. $\bigcup_{U \in \beta_0} \overline{U} = X$, and the sets from $\beta_0$ are pairwise disjoint, i.e. $\forall U,V \in \beta_0 : \ U = V \vee U \cap V = \emptyset$.
My guess is that it is impossible in general, because of two arguments. Firstly, there exist dense open subsets of $[0,1]$ with arbitralily small measure, so we need to be careful choosing sets for $\beta_0$ (making it any maximal subset of $\beta$ with pairwise disjoint will not suffice). Secondly, it would just be too good to be true, and a proof of a certain measure-theoretical problem would become too simple.
For your example of the unit interval: Take $\beta_0 = (0,\frac{1}{4})$ and $\beta_1 = (\frac{1}{4},\frac{1}{2})$ and $\beta_3 = (\frac{1}{2},1)$. Those are all basis elements (if you consider only open interval with rational end points) and you have that $[0,\frac{1}{4}]\cup [\frac{1}{4},\frac{1}{2}]\cup [\frac{1}{2},1] = [0,1]$. – Asaf Karagila May 20 '11 at 9:24
@Asaf Karagila: I don't think you get to choose the original basis $\beta$, it's given to you. $\beta$ could be something other than the open intervals with rational endpoints, and you can't be sure the sets you chose are in $\beta$. – Nate Eldredge May 20 '11 at 15:07
@Nate: Of course it could be the discrete topology, which means $[0,1]$ is not compact at all :-) I just gave an example. – Asaf Karagila May 20 '11 at 15:15
@Asaf: Your example works for the standard basis for the topology of $[0,1]$, but the given basis $\beta$ might not contain the sets you have listed. Any construction of an example of this being true needs to start with an arbitrary basis for the space $X$. – wckronholm May 20 '11 at 15:15
I think you should use "base", not "basis": a vector space has a basis, a topological space has a base. – Henno Brandsma May 21 '11 at 8:10
It is sufficient to take $X = [0,1]$ and compose a countable basis $\beta$ of intervals such that no two have the same endpoint, $X \not \in \beta$, and $\beta$ contains no intervals of the form $(0,x)$ nor $(x,1)$. Then, for any two sets $U,V \in \beta$ with $U \cap V = \emptyset$ we can squeeze a (nondegenerate) interval between $U$ and $V$, and for any $U \in \beta$ we have $\# \partial U \leq 2$.
Now, suppose that we manage to select $\beta_0$ such that $\bigcup_{U \in \beta_0} \overline{U} = X$ and sets from $\beta_0$ are pairwise disjoint. Since $\overline{U} \setminus U$ has at most two elements, the set $X \setminus \bigcup_{U \in \beta_0} U \subset \bigcup_{U \in \beta_0} \overline{U} \setminus U$ is at most countable. I will show that $X \setminus \bigcup_{U \in \beta_0} U$ has no isolated points, and it is easy to check that it is closed and nonempty. But a complete space with no isolated points has to have a power of continuum, which leads to contradiction.
Suppose $X \setminus \bigcup_{U \in \beta_0} U$ has an isolated point $x \neq 0,1$. We know that $x \in \partial U$ for some $U \in \beta_0$. Without loss of generality assume that $x$ is the left endpoint of the interval $U$ so that $U = (x,y)$ for some $y$. Since $x$ is an isolated point of $X \setminus \bigcup_{U \in \beta_0} U$ there has to exist an interval $(z,x)$ which is disjoint from $X \setminus \bigcup_{U \in \beta_0} U$ or, in other words, contained in $\bigcup_{U \in \beta_0} U$. If we select any $t \in (z,x)$ it belongs to some $U = (a,b) \in \beta_0$ with $b \leq x$. But $x$ is already an endpoint of $(x,y) \in \beta$ so, by the choice of $\beta$, it can't also be the endpoint of $(a,b)$, thus $b < x$. Note that $b \not \in V$ for any $V \in \beta_0$, since sets in $\beta_0$ are pairwise disjoint, so $b \not \in \bigcup_{U \in \beta_0} U$. At the same time, $b \in (z,x) \subset \bigcup_{U \in \beta_0} U$, which is a contradiction.
The above is a little technical and lacks a few points, but I don't think it's worth going into more detail here. In fact, one can see that $X \setminus \bigcup_{U \in \beta_0} U$ is homeomorphic to the Cantor set, and thus uncountable.
http://sro.sussex.ac.uk/id/eprint/20866/ | # The s-energy of spherical designs on S-2
Hesse, Kerstin (2009) The s-energy of spherical designs on S-2. Advances in Computational Mathematics, 30 (1). pp. 37-59. ISSN 1019-7168
Full text not available from this repository.
## Abstract
This paper investigates the s-energy of (finite and infinite) well separated sequences of spherical designs on the unit sphere $S^2$. A spherical $n$-design is a point set on $S^2$ that gives rise to an equal weight cubature rule which is exact for all spherical polynomials of degree $\leq n$. The s-energy $E_s(X)$ of a point set $X=\{\mathbf{x}_1,\ldots,\mathbf{x}_m\}\subset S^2$ of $m$ distinct points is the sum of the potential $\|\mathbf{x}_i-\mathbf{x}_j\|^{-s}$ for all pairs of distinct points $\mathbf{x}_i,\mathbf{x}_j\in X$. A sequence $\{X_m\}$ of point sets $X_m \subset S^2$, where $X_m$ has the cardinality $\mathrm{card}(X_m)=m$, is well separated if $\arccos(\mathbf{x}_i\cdot\mathbf{x}_j)\geq\lambda/\sqrt{m}$ for each pair of distinct points $\mathbf{x}_i,\mathbf{x}_j\in X_m$, where the constant $\lambda$ is independent of $m$ and $X_m$. For all $s>0$, we derive upper bounds in terms of orders of $n$ and $m(n)$ of the s-energy $E_s(X_{m(n)})$ for well separated sequences $\{X_{m(n)}\}$ of spherical $n$-designs $X_{m(n)}$ with $\mathrm{card}(X_{m(n)})=m(n)$.
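As a small computational illustration of the definition (my own sketch, not from the paper; the sum here runs over unordered pairs of distinct points):

```python
import numpy as np

def s_energy(X, s):
    """s-energy of a point set X (rows = points on S^2), summing
    ||x_i - x_j||^{-s} over unordered pairs i < j."""
    m = len(X)
    E = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            E += np.linalg.norm(X[i] - X[j]) ** (-s)
    return E

# Example: m random points on the unit sphere (a random set, not a spherical design).
rng = np.random.default_rng(0)
m = 50
P = rng.normal(size=(m, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # project onto S^2
print(s_energy(P, s=1.0))
```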
Item Type: Article | School of Mathematical and Physical Sciences > Mathematics | Kerstin Hesse | 06 Feb 2012 19:30 | 09 Jul 2012 14:05 | http://sro.sussex.ac.uk/id/eprint/20866
http://www.intmath.com/plane-analytic-geometry/ans-8.php?a=3 | We need to sketch r = 2.5
In this example, we cannot see "θ" in the function we are given. This means the radius, r is constant, no matter what value angle θ takes.
Since θ does not appear in the equation, the radius is 2.5 for every angle θ, so the graph of r = 2.5 is a circle of radius 2.5 about the origin.
What is the Equivalent in Rectangular Coordinates?
We convert the function given in this question to rectangular coordinates to see how much simpler it is when written in polar coordinates.
To convert r = 2.5 into rectangular coordinates, we use
r² = x² + y²
In this example, r = 2.5, so r² = 6.25.
So this gives us: x² + y² = 6.25
Not surprisingly, this is similar to the equation for a circle we obtained in The Circle section earlier.
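A short numerical check of this conversion (any sample of angles would do): points generated from r = 2.5 all satisfy x² + y² = 6.25.

```python
import numpy as np

r = 2.5
theta = np.linspace(0, 2 * np.pi, 9)    # a few sample angles
x = r * np.cos(theta)
y = r * np.sin(theta)
print(np.allclose(x**2 + y**2, 6.25))   # True: every point lies on the circle
```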
https://www.physicsforums.com/threads/chain-rule-for-vector-function.575390/ | Chain Rule for Vector Function
1. Feb 7, 2012
zoso335
1. The problem statement, all variables and given/known data
I'm trying to figure out how to take grad(f(x(t))) where x(t) is a vector. Since it's part of a physics problem, it's assumed x(t) is in 3-dimensional space.
3. The attempt at a solution
My guess is that grad(f(x(t))) = ((∂f/∂x)(∂x/∂x),(∂f/∂x)(∂x/∂y),(∂f/∂x)(∂x/∂z)) but I really am not sure about this. Can anyone point me in the right direction?
2. Feb 8, 2012
tiny-tim
hi zoso335!
∇(f(x,y,z)) = (∂/∂x(f(x,y,z)) , ∂/∂y(f(x,y,z)) , ∂/∂z(f(x,y,z)))
= (∂f/∂x , ∂f/∂y , ∂f/∂z)
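As an illustrative aside (not part of the original thread): the chain rule for a scalar function along a curve, d/dt f(x(t)) = ∇f(x(t)) · x′(t), can be checked symbolically. A minimal SymPy sketch, with an arbitrarily chosen f and curve:

```python
import sympy as sp

t = sp.symbols('t')
x, y, z = sp.symbols('x y z')

# an arbitrary test function f and a test curve x(t) = (cos t, t^2, e^t)
f = x**2 * y + sp.sin(z)
curve = {x: sp.cos(t), y: t**2, z: sp.exp(t)}

# left-hand side: d/dt of f evaluated along the curve
lhs = sp.diff(f.subs(curve), t)

# right-hand side: grad f on the curve, dotted with the velocity x'(t)
grad_f = [sp.diff(f, v).subs(curve) for v in (x, y, z)]
velocity = [sp.diff(curve[v], t) for v in (x, y, z)]
rhs = sum(g * d for g, d in zip(grad_f, velocity))

print(sp.simplify(lhs - rhs) == 0)   # True
```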
https://en.wikipedia.org/wiki/Scale_invariant | # Scale invariance
The Wiener process is scale-invariant.
In physics, mathematics, statistics, and economics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor. The technical term for this transformation is a dilatation (also known as dilation), and the dilatations can also form part of a larger conformal symmetry.
• In mathematics, scale invariance usually refers to an invariance of individual functions or curves. A closely related concept is self-similarity, where a function or curve is invariant under a discrete subset of the dilatations. It is also possible for the probability distributions of random processes to display this kind of scale invariance or self-similarity.
• In classical field theory, scale invariance most commonly applies to the invariance of a whole theory under dilatations. Such theories typically describe classical physical processes with no characteristic length scale.
• In quantum field theory, scale invariance has an interpretation in terms of particle physics. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved.
• In statistical mechanics, scale invariance is a feature of phase transitions. The key observation is that near a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena. Such theories are scale-invariant statistical field theories, and are formally very similar to scale-invariant quantum field theories.
• Universality is the observation that widely different microscopic systems can display the same behaviour at a phase transition. Thus phase transitions in many different systems may be described by the same underlying scale-invariant theory.
## Scale-invariant curves and self-similarity
In mathematics, one can consider the scaling properties of a function or curve $f(x)$ under rescalings of the variable $x$. That is, one is interested in the shape of $f(\lambda x)$ for some scale factor $\lambda$, which can be taken to be a length or size rescaling. The requirement for $f(x)$ to be invariant under all rescalings is usually taken to be
$f(\lambda x)=\lambda^{\Delta}f(x)$
for some choice of exponent $\Delta$, and for all dilations $\lambda$. This is equivalent to f being a homogeneous function.
Examples of scale-invariant functions are the monomials $f(x)=x^n$, for which one has $\Delta = n$, in that clearly
$f(\lambda x) = (\lambda x)^n = \lambda^n f(x).$
An example of a scale-invariant curve is the logarithmic spiral, a kind of curve that often appears in nature. In polar coordinates (r, θ) the spiral can be written as
$\theta = \frac{1}{b} \ln(r/a).$
Allowing for rotations of the curve, it is invariant under all rescalings $\lambda$; that is $\theta(\lambda r)$ is identical to a rotated version of $\theta(r)$.
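A quick numerical illustration of this rotation property (a sketch; the parameter values and the scale factor are arbitrary, and NumPy is assumed):

```python
import numpy as np

a, b, lam = 1.0, 0.2, 3.0
r = np.linspace(1.0, 10.0, 100)

theta = np.log(r / a) / b                 # the spiral's theta(r)
theta_scaled = np.log(lam * r / a) / b    # theta evaluated at the rescaled radius

# rescaling r by lambda only rotates the spiral by the constant angle ln(lambda)/b
print(np.allclose(theta_scaled - theta, np.log(lam) / b))   # True
```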
### Projective geometry
The idea of scale invariance of a monomial generalizes in higher dimensions to the idea of a homogeneous polynomial, and more generally to a homogeneous function. Homogeneous functions are the natural denizens of projective space, and homogeneous polynomials are studied as projective varieties in projective geometry. Projective geometry is a particularly rich field of mathematics; in its most abstract forms, the geometry of schemes, it has connections to various topics in string theory.
### Fractals
It is sometimes said that fractals are scale-invariant, although more precisely, one should say that they are self-similar. A fractal is equal to itself typically for only a discrete set of values $\lambda$, and even then a translation and rotation must be applied to match the fractal up to itself. Thus, for example the Koch curve scales with $\Delta=1$, but the scaling holds only for values of $\lambda=1/3^n$ for integer n. In addition, the Koch curve scales not only at the origin, but, in a certain sense, "everywhere": miniature copies of itself can be found all along the curve.
Some fractals may have multiple scaling factors at play at once; such scaling is studied with multi-fractal analysis.
## Scale invariance in stochastic processes
If $P(f)$ is the average, expected power at frequency $f$, then noise scales as
$P(f) = \lambda^{-\Delta} P(\lambda f)$
with $\Delta=0$ for white noise, $\Delta=-1$ for pink noise, and $\Delta=-2$ for Brownian noise (and more generally, Brownian motion).
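A rough numerical illustration of these exponents (a sketch, not from the article; it assumes NumPy, estimates the log-log slope of the periodogram over low frequencies, and builds Brownian motion as a cumulative sum of white noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_slope(signal, fmax=0.05):
    """Least-squares slope of log P(f) versus log f over the low-frequency bins."""
    p = np.abs(np.fft.rfft(signal))**2
    f = np.fft.rfftfreq(signal.size)
    keep = (f > 0) & (f <= fmax)
    return np.polyfit(np.log(f[keep]), np.log(p[keep]), 1)[0]

n, trials = 2**14, 50
white = np.mean([spectral_slope(rng.standard_normal(n)) for _ in range(trials)])
brown = np.mean([spectral_slope(np.cumsum(rng.standard_normal(n))) for _ in range(trials)])

print(round(white, 2), round(brown, 2))   # close to 0 (white) and -2 (Brownian)
```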
More precisely, scaling in stochastic systems concerns itself with the likelihood of choosing a particular configuration out of the set of all possible random configurations. This likelihood is given by the probability distribution. Examples of scale-invariant distributions are the Pareto distribution and the Zipfian distribution.
### Scale invariant Tweedie distributions
Tweedie distributions are a special case of exponential dispersion models, a class of statistical models used to describe error distributions for the generalized linear model and characterized by closure under additive and reproductive convolution as well as under scale transformation.[1] These include a number of common distributions: the normal distribution, Poisson distribution and gamma distribution, as well as more unusual distributions like the compound Poisson-gamma distribution, positive stable distributions, and extreme stable distributions. Consequent to their inherent scale invariance Tweedie random variables Y demonstrate a variance var(Y) to mean E(Y) power law:
$\text{var}\,(Y) = a[\text{E}\,(Y)]^p$,
where a and p are positive constants. This variance to mean power law is known in the physics literature as fluctuation scaling,[2] and in the ecology literature as Taylor's law.[3]
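An illustrative check of this variance-to-mean power law for two simple members of the Tweedie family (a sketch; the parameter values are arbitrary and NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])

def power_law_exponent(groups):
    """Fit var(Y) = a * E(Y)^p in log-log coordinates and return the exponent p."""
    m = np.array([g.mean() for g in groups])
    v = np.array([g.var() for g in groups])
    return np.polyfit(np.log(m), np.log(v), 1)[0]

# Poisson: variance equals the mean, so p should come out near 1
poisson = [rng.poisson(mu, 100000) for mu in means]
# Gamma with fixed shape k: variance = mean^2 / k, so p should come out near 2
gamma = [rng.gamma(shape=3.0, scale=mu / 3.0, size=100000) for mu in means]

print(round(power_law_exponent(poisson), 2))   # about 1
print(round(power_law_exponent(gamma), 2))     # about 2
```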
Random sequences, governed by the Tweedie distributions and evaluated by the method of expanding bins, exhibit a biconditional relationship between the variance to mean power law and power law autocorrelations. The Wiener–Khinchin theorem further implies that any sequence that exhibits a variance to mean power law under these conditions will also manifest 1/f noise.[4]
The Tweedie convergence theorem provides a hypothetical explanation for the wide manifestation of fluctuation scaling and 1/f noise.[5] It requires, in essence, that any exponential dispersion model that asymptotically manifests a variance to mean power law must have a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behavior, and the Tweedie distributions become foci of convergence for a wide range of data types.[4]
Much as the central limit theorem requires certain kinds of random variables to have as a focus of convergence the Gaussian distribution and express white noise, the Tweedie convergence theorem requires certain non-Gaussian random variables to express 1/f noise and fluctuation scaling.[4]
### Cosmology
In physical cosmology, the power spectrum of the spatial distribution of the cosmic microwave background is near to being a scale-invariant function. Although in mathematics this means that the spectrum is a power-law, in cosmology the term "scale-invariant" indicates that the amplitude, P(k), of primordial fluctuations as a function of wave number, k, is approximately constant, i.e. a flat spectrum. This pattern is consistent with the proposal of cosmic inflation.
## Scale invariance in classical field theory
Classical field theory is generically described by a field, or set of fields, $\varphi$, that depend on coordinates, x. Valid field configurations are then determined by solving differential equations for $\varphi(x)$, and these equations are known as field equations.
For a theory to be scale-invariant, its field equations should be invariant under a rescaling of the coordinates, combined with some specified rescaling of the fields:
$x\rightarrow\lambda x,$
$\varphi\rightarrow\lambda^{-\Delta}\varphi.$
The parameter $\Delta$ is known as the scaling dimension of the field, and its value depends on the theory under consideration. Scale invariance will typically hold provided that no fixed length scale appears in the theory. Conversely, the presence of a fixed length scale indicates that a theory is not scale-invariant.
A consequence of scale invariance is that given a solution of a scale-invariant field equation, we can automatically find other solutions by rescaling both the coordinates and the fields appropriately. In technical terms, given a solution, $\varphi(x)$, one always has other solutions of the form $\lambda^{\Delta}\varphi(\lambda x)$.
### Scale invariance of field configurations
For a particular field configuration, $\varphi(x)$, to be scale-invariant, we require that
$\varphi(x)=\lambda^{-\Delta}\varphi(\lambda x)$
where $\Delta$ is again the scaling dimension of the field.
We note that this condition is rather restrictive. In general, solutions even of scale-invariant field equations will not be scale-invariant, and in such cases the symmetry is said to be spontaneously broken.
### Classical electromagnetism
An example of a scale-invariant classical field theory is electromagnetism with no charges or currents. The fields are the electric and magnetic fields, $\mathbf{E}(\mathbf{x},t)$ and $\mathbf{B}(\mathbf{x},t)$, while their field equations are Maxwell's equations. With no charges or currents, these field equations take the form of wave equations
$\nabla^2 \mathbf{E} = \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2}$
$\nabla^2\mathbf{B} = \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2}$
where c is the speed of light.
These field equations are invariant under the transformation
$x\rightarrow\lambda x,$
$t\rightarrow\lambda t.$
Moreover, given solutions of Maxwell's equations, $\mathbf{E}(\mathbf{x},t)$ and $\mathbf{B}(\mathbf{x},t)$, we have that $\mathbf{E}(\lambda\mathbf{x},\lambda t)$ and $\mathbf{B}(\lambda\mathbf{x},\lambda t)$ are also solutions.
### Massless scalar field theory
Another example of a scale-invariant classical field theory is the massless scalar field (note that the name scalar is unrelated to scale invariance). The scalar field, φ(x, t) is a function of a set of spatial variables, x, and a time variable, t.
Consider first the linear theory. Like the electromagnetic field equations above, the equation of motion for this theory is also a wave equation,
$\frac{1}{c^2} \frac{\partial^2 \varphi}{\partial t^2}-\nabla^2 \varphi = 0,$
and is invariant under the transformation
$x\rightarrow\lambda x,$
$t\rightarrow\lambda t.$
The name massless refers to the absence of a term $\propto m^2\varphi$ in the field equation. Such a term is often referred to as a 'mass' term, and would break the invariance under the above transformation. In relativistic field theories, a mass-scale $m$ is physically equivalent to a fixed length scale through
$L=\frac{\hbar}{mc},$
and so it should not be surprising that massive scalar field theory is not scale-invariant.
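A minimal symbolic check of the statements above (a sketch using SymPy; the 1D plane wave is just one convenient concrete solution of the massless wave equation):

```python
import sympy as sp

x, t, c, k, lam = sp.symbols('x t c k lam', positive=True)

def wave_operator(f):
    return sp.diff(f, t, 2) / c**2 - sp.diff(f, x, 2)

phi = sp.sin(k * (x - c * t))                      # a solution of the 1D wave equation
phi_scaled = phi.subs({x: lam * x, t: lam * t})    # the rescaled field phi(lam*x, lam*t)

print(sp.simplify(wave_operator(phi)))         # 0
print(sp.simplify(wave_operator(phi_scaled)))  # 0: the rescaled configuration solves it too
```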
#### φ4 theory
The field equations in the examples above are all linear in the fields, which has meant that the scaling dimension, Δ, has not been so important. However, one usually requires that the scalar field action is dimensionless, and this fixes the scaling dimension of φ. In particular,
$\Delta=\frac{D-2}{2},$
where D is the combined number of spatial and time dimensions.
Given this scaling dimension for φ, there are certain nonlinear modifications of massless scalar field theory which are also scale-invariant. One example is massless φ4 theory for D=4. The field equation is
$\frac{1}{c^2} \frac{\partial^2 \varphi}{\partial t^2}-\nabla^2 \varphi+g\varphi^3=0.$
(Note that the name φ4 derives from the form of the Lagrangian, which contains the fourth power of φ.)
When D=4 (e.g. three spatial dimensions and one time dimension), the scalar field scaling dimension is Δ=1. The field equation is then invariant under the transformation
$x\rightarrow\lambda x,$
$t\rightarrow\lambda t,$
$\varphi\rightarrow\lambda^{-1}\varphi.$
The key point is that the parameter g must be dimensionless, otherwise one introduces a fixed length scale into the theory: For φ4 theory, this is only the case in D=4.
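As a quick worked check of this invariance, write the rescaled candidate solution as $\tilde\varphi(x,t)=\lambda\varphi(\lambda x,\lambda t)$ (using the scaling dimension $\Delta=1$ from above); then

$\frac{1}{c^2}\frac{\partial^2\tilde\varphi}{\partial t^2}-\nabla^2\tilde\varphi+g\tilde\varphi^3=\lambda^3\left[\frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2}-\nabla^2\varphi+g\varphi^3\right](\lambda x,\lambda t)=0,$

so every term acquires the same factor $\lambda^3$, precisely because $g$ carries no dimension.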
## Scale invariance in quantum field theory
The scale-dependence of a quantum field theory (QFT) is characterised by the way its coupling parameters depend on the energy-scale of a given physical process. This energy dependence is described by the renormalization group, and is encoded in the beta-functions of the theory.
For a QFT to be scale-invariant, its coupling parameters must be independent of the energy-scale, and this is indicated by the vanishing of the beta-functions of the theory. Such theories are also known as fixed points of the corresponding renormalization group flow.[6]
### Quantum electrodynamics
A simple example of a scale-invariant QFT is the quantized electromagnetic field without charged particles. This theory actually has no coupling parameters (since photons are massless and non-interacting) and is therefore scale-invariant, much like the classical theory.
However, in nature the electromagnetic field is coupled to charged particles, such as electrons. The QFT describing the interactions of photons and charged particles is quantum electrodynamics (QED), and this theory is not scale-invariant. We can see this from the QED beta-function. This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant.
### Massless scalar field theory
Free, massless quantized scalar field theory has no coupling parameters. Therefore, like the classical version, it is scale-invariant. In the language of the renormalization group, this theory is known as the Gaussian fixed point.
However, even though the classical massless φ4 theory is scale-invariant in $D=4$, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, g.
Even though the quantized massless φ4 is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson-Fisher fixed point, below.
### Conformal field theory
Scale-invariant QFTs are almost always invariant under the full conformal symmetry, and the study of such QFTs is conformal field theory (CFT). Operators in a CFT have a well-defined scaling dimension, analogous to the scaling dimension, $\Delta$, of a classical field discussed above. However, the scaling dimensions of operators in a CFT typically differ from those of the fields in the corresponding classical theory. The additional contributions appearing in the CFT are known as anomalous scaling dimensions.
### Scale and conformal anomalies
The φ4 theory example above demonstrates that the coupling parameters of a quantum field theory can be scale-dependent even if the corresponding classical field theory is scale-invariant (or conformally invariant). If this is the case, the classical scale (or conformal) invariance is said to be anomalous. A classically scale invariant field theory, where scale invariance is broken by quantum effects, provides an explication of the nearly exponential expansion of the early universe called cosmic inflation, as long as the theory can be studied through perturbation theory.[7]
## Phase transitions
In statistical mechanics, as a system undergoes a phase transition, its fluctuations are described by a scale-invariant statistical field theory. For a system in equilibrium (i.e. time-independent) in D spatial dimensions, the corresponding statistical field theory is formally similar to a D-dimensional CFT. The scaling dimensions in such problems are usually referred to as critical exponents, and one can in principle compute these exponents in the appropriate CFT.
### The Ising model
An example that links together many of the ideas in this article is the phase transition of the Ising model, a simple model of ferromagnetic substances. This is a statistical mechanics model, which also has a description in terms of conformal field theory. The system consists of an array of lattice sites, which form a D-dimensional periodic lattice. Associated with each lattice site is a magnetic moment, or spin, and this spin can take either the value +1 or −1. (These states are also called up and down, respectively.)
The key point is that the Ising model has a spin-spin interaction, making it energetically favourable for two adjacent spins to be aligned. On the other hand, thermal fluctuations typically introduce a randomness into the alignment of spins. At some critical temperature, Tc , spontaneous magnetization is said to occur. This means that below Tc the spin-spin interaction will begin to dominate, and there is some net alignment of spins in one of the two directions.
An example of the kind of physical quantities one would like to calculate at this critical temperature is the correlation between spins separated by a distance r. This has the generic behaviour:
$G(r)\propto\frac{1}{r^{D-2+\eta}},$
for some particular value of $\eta$, which is an example of a critical exponent.
#### CFT description
The fluctuations at temperature Tc are scale-invariant, and so the Ising model at this phase transition is expected to be described by a scale-invariant statistical field theory. In fact, this theory is the Wilson-Fisher fixed point, a particular scale-invariant scalar field theory.
In this context, G(r) is understood as a correlation function of scalar fields,
$\langle\phi(0)\phi(r)\rangle\propto\frac{1}{r^{D-2+\eta}}.$
Now we can fit together a number of the ideas seen already.
From the above, one sees that the critical exponent, η, for this phase transition, is also an anomalous dimension. This is because the classical dimension of the scalar field,
$\Delta=\frac{D-2}{2}$
is modified to become
$\Delta=\frac{D-2+\eta}{2},$
where D is the number of dimensions of the Ising model lattice.
So this anomalous dimension in the conformal field theory is the same as a particular critical exponent of the Ising model phase transition.
Note that for dimension D ≡ 4−ε, η can be calculated approximately, using the epsilon expansion, and one finds that
$\eta=\frac{\epsilon^2}{54}+O(\epsilon^3)$.
In the physically interesting case of three spatial dimensions, we have ε=1, and so this expansion is not strictly reliable. However, a semi-quantitative prediction is that η is numerically small in three dimensions.
On the other hand, in the two-dimensional case the Ising model is exactly soluble. In particular, it is equivalent to one of the minimal models, a family of well-understood CFTs, and it is possible to compute η (and the other critical exponents) exactly,
$\eta_{_{D=2}}=\frac{1}{4}$.
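For concreteness, the numbers involved (a trivial sketch comparing the truncated ε-expansion evaluated at ε = 1 with the exact two-dimensional value quoted above; Python's fractions module is assumed):

```python
from fractions import Fraction

eta_epsilon = Fraction(1, 54)   # eps^2/54 at eps = 1, ignoring the O(eps^3) terms
eta_exact_2d = Fraction(1, 4)   # exact value in two dimensions

print(float(eta_epsilon), float(eta_exact_2d))   # 0.0185... versus 0.25
```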
### Schramm–Loewner evolution
The anomalous dimensions in certain two-dimensional CFTs can be related to the typical fractal dimensions of random walks, where the random walks are defined via Schramm–Loewner evolution (SLE). As we have seen above, CFTs describe the physics of phase transitions, and so one can relate the critical exponents of certain phase transitions to these fractal dimensions. Examples include the 2d critical Ising model and the more general 2d critical Potts model. Relating other 2d CFTs to SLE is an active area of research.
## Universality
A phenomenon known as universality is seen in a large variety of physical systems. It expresses the idea that different microscopic physics can give rise to the same scaling behaviour at a phase transition. A canonical example of universality involves the following two systems:
• The Ising model phase transition, described in the section above.
• The liquid–vapour transition in classical fluids at its critical point.
Even though the microscopic physics of these two systems is completely different, their critical exponents turn out to be the same. Moreover, one can calculate these exponents using the same statistical field theory. The key observation is that at a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for a scale-invariant statistical field theory to describe the phenomena. In a sense, universality is the observation that there are relatively few such scale-invariant theories.
The set of different microscopic theories described by the same scale-invariant theory is known as a universality class. Other examples of systems which belong to a universality class are:
• Avalanches in piles of sand. The likelihood of an avalanche is in power-law proportion to the size of the avalanche, and avalanches are seen to occur at all size scales.
• The frequency of network outages on the Internet, as a function of size and duration.
• The frequency of citations of journal articles, considered in the network of all citations amongst all papers, as a function of the number of citations in a given paper.
• The formation and propagation of cracks and tears in materials ranging from steel to rock to paper. The variations of the direction of the tear, or the roughness of a fractured surface, are in power-law proportion to the size scale.
• The electrical breakdown of dielectrics, which resemble cracks and tears.
• The percolation of fluids through disordered media, such as petroleum through fractured rock beds, or water through filter paper, such as in chromatography. Power-law scaling connects the rate of flow to the distribution of fractures.
• The diffusion of molecules in solution, and the phenomenon of diffusion-limited aggregation.
• The distribution of rocks of different sizes in an aggregate mixture that is being shaken (with gravity acting on the rocks).
The key observation is that, for all of these different systems, the behaviour resembles a phase transition, and that the language of statistical mechanics and scale-invariant statistical field theory may be applied to describe them.
## Other examples of scale invariance
### Newtonian fluid mechanics with no applied forces
Under certain circumstances, fluid mechanics is a scale-invariant classical field theory. The fields are the velocity of the fluid flow, $\mathbf{u}(\mathbf{x},t)$, the fluid density, $\rho(\mathbf{x},t)$, and the fluid pressure, $P(\mathbf{x},t)$. These fields must satisfy both the Navier–Stokes equation and the continuity equation. For a Newtonian fluid these take the respective forms
$\rho\frac{\partial \mathbf{u}}{\partial t}+\rho\mathbf{u}\cdot\nabla \mathbf{u} = -\nabla P+\mu \left(\nabla^2 \mathbf{u}+\frac{1}{3}\nabla\left(\nabla\cdot\mathbf{u}\right)\right)$
$\frac{\partial \rho}{\partial t}+\nabla\cdot \left(\rho\mathbf{u}\right)=0$
where $\mu$ is the dynamic viscosity.
In order to deduce the scale invariance of these equations we specify an equation of state, relating the fluid pressure to the fluid density. The equation of state depends on the type of fluid and the conditions to which it is subjected. For example, we consider the isothermal ideal gas, which satisfies
$P=c_s^2\rho,$
where $c_s$ is the speed of sound in the fluid. Given this equation of state, Navier–Stokes and the continuity equation are invariant under the transformations
$x\rightarrow\lambda x,$
$t\rightarrow\lambda t,$
$\rho\rightarrow\lambda^{-1} \rho,$
$\mathbf{u}\rightarrow\mathbf{u}.$
Given the solutions $\mathbf{u}(\mathbf{x},t)$ and $\rho(\mathbf{x},t)$, we automatically have that $\mathbf{u}(\lambda\mathbf{x},\lambda t)$ and $\lambda\rho(\lambda\mathbf{x},\lambda t)$ are also solutions.
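As a quick worked check for one of the two equations, substituting the rescaled fields into the continuity equation gives

$\frac{\partial}{\partial t}\bigl[\lambda\rho(\lambda\mathbf{x},\lambda t)\bigr]+\nabla\cdot\bigl[\lambda\rho(\lambda\mathbf{x},\lambda t)\,\mathbf{u}(\lambda\mathbf{x},\lambda t)\bigr]=\lambda^{2}\left[\frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho\mathbf{u}\right)\right](\lambda\mathbf{x},\lambda t)=0,$

and each term of the Navier–Stokes equation picks up the same overall factor $\lambda^{2}$ in the same way.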
### Computer vision
Main article: Scale space
In computer vision and biological vision, scaling transformations arise because of the perspective image mapping and because of objects having different physical size in the world. In these areas, scale invariance refers to local image descriptors or visual representations of the image data that remain invariant when the local scale in the image domain is changed.[8] Detecting local maxima over scales of normalized derivative responses provides a general framework for obtaining scale invariance from image data.[9][10] Examples of applications include blob detection, corner detection, ridge detection, and object recognition via the scale-invariant feature transform.
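A minimal sketch of this idea of scale selection (illustrative only; it assumes NumPy and SciPy, and uses a synthetic Gaussian blob rather than real image data):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic image: a single Gaussian blob of size r0 (in pixels), centred in the frame.
r0 = 6.0
yy, xx = np.mgrid[0:129, 0:129]
image = np.exp(-((xx - 64)**2 + (yy - 64)**2) / (2 * r0**2))

# Scale-normalized Laplacian response at the blob centre, over a range of scales.
sigmas = np.linspace(2.0, 12.0, 41)
response = [abs(sigma**2 * gaussian_laplace(image, sigma)[64, 64]) for sigma in sigmas]

# The maximum over scales is attained near sigma = r0, which is how the blob's
# size is recovered from the image data alone.
print(sigmas[int(np.argmax(response))])   # approximately 6
```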
## References
1. ^ Jørgensen, B. (1997). The Theory of Dispersion Models. London: Chapman & Hall. ISBN 0412997118.
2. ^ Eisler, Z.; Bartos, I.; Kertész, J. (2008). "Fluctuation scaling in complex systems: Taylor's law and beyond". Adv Phys 57 (1): 89–142. arXiv:0708.2053. Bibcode:2008AdPhy..57...89E. doi:10.1080/00018730801893043.
3. ^ Kendal, W. S.; Jørgensen, B. (2011). "Taylor's power law and fluctuation scaling explained by a central-limit-like convergence". Phys. Rev. E 83 (6): 066115. Bibcode:2011PhRvE..83f6115K. doi:10.1103/PhysRevE.83.066115.
4. ^ a b c Kendal, W. S.; Jørgensen, B. (2011). "Tweedie convergence: A mathematical basis for Taylor's power law, 1/f noise, and multifractality". Phys. Rev. E 84 (6): 066120. Bibcode:2011PhRvE..84f6120K. doi:10.1103/PhysRevE.84.066120.
5. ^ Jørgensen, B.; Martinez, J. R.; Tsao, M. (1994). "Asymptotic behaviour of the variance function". Scand J Statist 21 (3): 223–243. JSTOR 4616314.
6. ^ J. Zinn-Justin (2010) Scholarpedia article "Critical Phenomena: field theoretical approach".
7. ^ Salvio, Strumia (2014-03-17). "Agravity". JHEP 1406 (2014) 080. arXiv:1403.4226. Bibcode:2014JHEP...06..080S. doi:10.1007/JHEP06(2014)080.
8. ^ Lindeberg, T. (2013) Invariance of visual operations at the level of receptive fields, PLoS ONE 8(7):e66990.
9. ^ Lindeberg, Tony (1998). "Feature detection with automatic scale selection". International Journal of Computer Vision 30 (2): 79–116. doi:10.1023/A:1008045108935.
10. ^
https://wojowu.wordpress.com/2016/10/24/proof-sketch-of-thues-theorem/ | # Proof sketch of Thue’s theorem
In this post a proof of the following theorem is going to be sketched, following the treatment in Borevich and Shafarevich’s Number Theory. This sketch is by no means meant to be highly detailed and I am writing it mostly for my own purposes, so I avoid proving some things, even if they aren’t that straightforward.
Thue’s theorem: Suppose $f(x,y)=a_0x^n+a_1x^{n-1}y+\dots+a_ny^n$ is a binary form which has degree $n\geq 3$, is irreducible (i.e. $f(x,1)$ is an irreducible polynomial in $x$) and $f(x,1)$ has at least one nonreal root in $\mathbb C$. Then for any nonzero integer $c$ the equation $f(x,y)=c$ has only finitely many integral solutions.
Proof: Suppose otherwise…
## Setup
First of all, we may suppose $a_0=1$, for otherwise, we replace $f(x,y)$ with $a_0^{n-1}f(\frac{1}{a_0}x,y)$, which still has integer coefficients. Write
$f(x,1)=(x+\theta_1)(x+\theta_2)\dots(x+\theta_n)$.
The numbers $\theta_1,\dots,\theta_n$ are all conjugates of $\theta=\theta_1$, since we assumed $f(x,1)$ is irreducible. It’s then easy to see
$f(x,y)=(x+y\theta_1)(x+y\theta_2)\dots(x+y\theta_n)=N(x+y\theta) \qquad (1)$
where $N$ is the norm of the field $k=\mathbb Q(\theta)$. Also put $K=\mathbb Q(\theta_1,\dots,\theta_n)$. Hence we are interested in the solutions of $N(\alpha)=c$, where $\alpha$ is in the module (i.e. the additive subgroup) $M$ generated by $1,\theta$. Extend this two-element set to a basis of $k$: $\mu_1=1,\mu_2=\theta,\mu_3,\dots,\mu_n$, and denote by $\overline{M}$ the module generated by these. To recover elements of $M$ among these, we use the dual basis, i.e. elements $\mu_1^*,\dots,\mu_n^*$ such that $T(\mu_i\mu_j^*)=0$ for $i\neq j$ and $T(\mu_i\mu_i^*)=1$, where $T$ is the trace. The trace of $\alpha\mu_i^*$ then recovers the coefficient of $\mu_i$ in $\alpha$, hence we want
$T(\alpha\mu_i^*)=0$ for $i=3,\dots,n$.
A general result (Theorem 1, Section 5.2, Chapter 2 in Borevich-Shafarevich, slightly rephrased) about elements of fixed norm in a module states the following.
Theorem 1: For a module $\overline{M}$ of rank $n$ in a field $k$ of degree $n$ there are elements $\gamma_1,\dots,\gamma_k\in\overline{M}$ and $\varepsilon_1,\dots,\varepsilon_r\in k$ such that every solution of $N(\alpha)=c,\alpha\in\overline{M}$ can be uniquely written as
$\alpha=\gamma_a\varepsilon_1^{u_1}\dots\varepsilon_r^{u_r}$ for some index $a$ and rational integers $u_1,\dots,u_r$.
Moreover, $r=s+t-1$, where $s$ is the number of real embeddings of $k$ into $\mathbb C$ and $2t$ is the number of complex embeddings.
Therefore $\alpha$ as above is in $M$ if it satisfies the system of equations
$T(\gamma_a\mu_i^*\varepsilon_1^{u_1}\dots\varepsilon_r^{u_r})=0$ for $i=3,\dots,n$.
Since we assume there are infinitely many $\alpha$ solving the above system, and $\gamma_a$ ranges over a finite set, we can choose one of the $\gamma$ such that infinitely many solutions of the above have $\gamma_a=\gamma$. We can now write this system as
$\displaystyle\sum_{j=1}^n\sigma_j(\gamma\mu_i^*)\sigma_j(\varepsilon_1)^{u_1}\dots\sigma_j(\varepsilon_r)^{u_r}=0$ for $i=3,\dots,n\qquad (2)$,
where $\sigma_j$ are embeddings of $k$ into $K$ ordered so that $\sigma_j(\theta)=\theta_j$.
So now we want to derive a contradiction from the assumption that $(2)$ has infinitely many solutions in rational integers $u_1,\dots, u_r$.
## Entering the $\frak P$-adic world
The idea now is to prove that $(2)$ not only has finitely many integral solutions, but it has finitely many solutions in $\frak P$-adic integers, where $\frak P$ is some prime of $K$. More precisely, we take any prime (= prime ideal in the ring of integers) $\frak P$ and the corresponding valuation $\nu=\nu_{\frak P}$. Then we construct the completion $K_{\frak P}$ of $K$ with respect to this valuation. By a “$\frak P$-adic number” we mean any element of $K_{\frak P}$, and ones with nonnegative valuations are going to be called “$\frak P$-adic integers”.
We now want to make sense of equations $(2)$ for $a_i$ not necessarily integers, but also $\frak P$-adic integers. The problem reduces to making sense of $a^b$ for $a$ a fixed $\frak P$-adic number and $b$ a $\frak P$-adic integer, which is meant to vary. For this, we employ exponential and logarithmic functions: we will write $a^b=\exp(b\log a)$. $\exp$ and $\log$ are defined using their power series:
$\displaystyle\exp x=\sum_{n=0}^\infty\frac{x^n}{n!}$,
$\displaystyle\log(1+x)=\sum_{n=1}^\infty(-1)^{n+1}\frac{x^n}{n}$.
These two functions are each other’s inverses, that is,
$\exp\log(1+x)=1+x,\log\exp x=x$.
There are many ways to justify this, the most straightforward one being that we know these equalities hold for complex numbers, hence they are formal equalities of power series, hence they must also hold for $\frak P$-adic numbers. However, these functions are not defined everywhere. Nevertheless, they can be shown to have positive radius of convergence. More precisely:
Lemma 1: There is a rational integer $\kappa$ such that both $\exp x$ and $\log(1+x)$ are defined for $\nu(x)\geq\kappa$. Moreover, $\nu(\log(1+x))\geq\kappa$, so $(1+x)^b=\exp(b\log(1+x))$ is defined for any $\frak P$-adic integer $b$.
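Incidentally, the formal identity $\exp\log(1+x)=1+x$ used above can be checked explicitly by composing the truncated series; a small SymPy sketch (illustrative only, with an arbitrary truncation order):

```python
import sympy as sp

x = sp.symbols('x')
N = 8   # truncation order

log_series = sum((-1)**(n + 1) * x**n / n for n in range(1, N))       # log(1+x) truncated
exp_of_log = sum(log_series**n / sp.factorial(n) for n in range(N))   # exp applied to it

print(sp.expand(exp_of_log) + sp.O(x**N))   # 1 + x + O(x**8)
```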
Unfortunately, there is no reason to expect the numbers $\varepsilon_i$ suit our purposes. However, we can change them so that this is the case. First of all, we may suppose that $\frak P$ is such that all of $\sigma_j(\varepsilon_i)$ have valuation zero (there are finitely many of these numbers, and they have nonzero valuation only with respect to finitely many prime ideals). Now we look at reduction modulo $\frak P^\kappa$ (or, more precisely, modulo any element with valuation $\kappa$). The group of units of the quotient ring is finite, say of order $d$. Then $\varepsilon_i^d$ is always congruent to $1$ modulo $\frak P^\kappa$, i.e. $\varepsilon_i^d=1+x$ for some $x$ of valuation at least $\kappa$.
Moreover, after replacing each $\varepsilon_i$ by $\varepsilon_i^d$, we must also replace the set of $\gamma_i$ by products of $\gamma_i$ and suitable powers of $\varepsilon_i$; we only need to multiply by powers between $0$ and $d-1$. To avoid introducing more notation, we will just assume that the $\varepsilon_i$, and hence also the $\sigma_j(\varepsilon_i)$, are of a form which allows us to speak of their exponential functions.
## Analytic fluff
The exponential function on $\frak P$-adic numbers satisfies all the familiar properties. Thanks to this, equations $(2)$ can be rewritten as
$\displaystyle\sum_{j=1}^nA_{ij}\exp L_j(u_1,\dots,u_r)=0$ for $i=3,\dots,n,\qquad (3)$
where $A_{ij}=\sigma_j(\gamma\mu_i^*)$ and $L_j(u_1,\dots,u_r)=\displaystyle\sum_{k=1}^ru_k\log\sigma_j(\varepsilon_k)$. Note that the involved functions are all continuous functions of $u_k$.
Now we use the fact that $\frak P$-adic integers are compact (under the topology induced by the valuation). Since we assumed $(3)$ has infinitely many ($\frak P$-adic) integral solutions, there must be a subsequence of these solutions which converges to some tuple $(u_1^*,\dots,u_r^*)$. By continuity, this tuple constitutes another solution to $(3)$. By a change of variables $v_i=u_i-u_i^*$, we get a system of equations
$\displaystyle\sum_{j=1}^nA_{ij}^*\exp L_j(v_1,\dots,v_r)=0$ for $i=3,\dots,n,\qquad (4)$
where $A_{ij}^*=A_{ij}\exp L_j(u_1^*,\dots,u_r^*)$, which by the above has a sequence of solutions converging to the origin. We point out at this point that the equations in $(4)$ are linearly independent, i.e. the matrix $(A_{ij}^*)$ of coefficients has rank $n-2$. This is because $A_{ij}^*$ is the product of $\exp L_j(u_1^*,\dots,u_r^*)\sigma_j(\gamma)$ and $\sigma_j(\mu_i^*)$, and the matrix of all $\sigma_j(\mu_i^*)$ is invertible, as the square of its determinant is the discriminant of a linearly independent tuple, hence nonzero.
We consider the local analytic manifold $V$ of $(4)$, i.e. the set of solutions of this system in some small neighbourhood of the origin. By the assumption on the sequence of solutions converging to the origin, this manifold consists of more than one point. Hence, by a general theorem, it must contain an analytic curve: there is a system of $r$ (formal) power series $\omega_1(t),\dots,\omega_r(t)$, not all identically zero and all with no constant term, which satisfy $(4)$ when plugged in for the $v_k$. Equivalently, if we put $P_j(t)=L_j(\omega_1(t),\dots,\omega_r(t))$, we get
$\displaystyle\sum_{j=1}^nA_{ij}^*\exp P_j(t)=0$ for $i=3,\dots,n. \qquad (5)$
where $P_j(t)$ are power series with no constant terms.
## Finishing the proof
We have the system $(5)$ of equations involving (exponentials of) $P_j(t)$. However, $P_j(t)$ are also linear combinations of $r$ power series. Therefore, by linear algebra, we can find a system of $n-r$ independent linear equations
$\displaystyle\sum_{j=1}^nB_{ij}P_j(t)=0$ for $i=1,\dots,n-r\qquad (6)$
satisfied by these power series. We will now use the assumption we haven’t used yet: that $f(x,1)$ has a complex root. Recall this implies the field $k$ has at least one complex embedding, i.e. $t\geq 1$ (see statement of theorem 1). Therefore $n-r=s+2t-s-t+1=t+1\geq 2$. Using $(5)$ and $(6)$ we can therefore use the following lemma:
Lemma 2: Suppose formal power series (over some field of characteristic zero) $P_1(t),\dots,P_n(t)$ with no constant term satisfy a system of $n-2$ equations of the form
$\displaystyle\sum_{j=1}^nA_{ij}^*\exp P_j(t)=0$
and also a system of two equations of the form
$\displaystyle\sum_{j=1}^nB_{ij}P_j(t)=0$.
Then $P_j(t)=P_k(t)$ for some $j\neq k$.
Before we provide a proof of this lemma, we will show why it helps us complete the proof. Recalling the definition of $P_j(t)$, this implies that any analytic curve contained in the manifold $V$ is also contained in the manifold $W$ defined by the equation
$\displaystyle\prod_{1\leq j<k\leq n}\bigl(L_j(v_1,\dots,v_r)-L_k(v_1,\dots,v_r)\bigr)=0.$
It follows (though not immediately) that $V\subseteq W$. We will obtain a contradiction as soon as we deduce $W$ contains only finitely many points $(v_1,\dots,v_r)$ corresponding to the solutions of $(3)$, since we assumed that $V$ contains infinitely many such points. Equivalently, since product in the definition of $W$ consists of finitely many terms, we need to show only finitely many tuples can satisfy
$L_j(v_1,\dots,v_r)=L_k(v_1,\dots,v_r)$
for $j\neq k$.
Let $(u_1,\dots,u_r)$ be a solution of $(3)$ coming from $\alpha=x+y\theta,x,y\in\mathbb Q$, and $u_i=u_i^*+v_i$. We have
$\sigma_j(\alpha)=\sigma_j(\gamma)\sigma_j(\varepsilon_1)^{u_1}\dots\sigma_j(\varepsilon_r)^{u_r}=\sigma_j(\gamma)\sigma_j(\varepsilon_1)^{u_1*}\dots\sigma_j(\varepsilon_r)^{u_r*}\sigma_j(\varepsilon_1)^{v_1}\dots\sigma_j(\varepsilon_r)^{v_r}$
$=c_j\exp L_j(v_1,\dots,v_r)$
where $c_j$ is a constant independent of $\alpha$. Similarly,
$\sigma_k(\alpha)=c_k\exp L_k(v_1,\dots,v_r)$.
Assuming $L_j(v_1,\dots,v_r)=L_k(v_1,\dots,v_r)$, this implies
$\displaystyle\frac{\sigma_j(\alpha)}{c_j}=\frac{\sigma_k(\alpha)}{c_k},\frac{\sigma_j(\alpha)}{\sigma_k(\alpha)}=\frac{c_j}{c_k}$.
Taking $\alpha'=x'+y'\theta$ to be a different such solution, this implies
$\displaystyle\frac{\sigma_j(\alpha)}{\sigma_k(\alpha)}=\frac{\sigma_j(\alpha')}{\sigma_k(\alpha')},\frac{x+y\theta_j}{x+y\theta_k}=\frac{x'+y'\theta_j}{x'+y'\theta_k}$
and hence $(xy'-x'y)(\theta_j-\theta_k)=0$ and $xy'=x'y,\frac{x}{x'}=\frac{y}{y'}$ ($x',y'$ can’t be both zero, so neither can be). It follows that $\alpha'$ is a rational multiple of $\alpha$, say $\alpha'=d\alpha$. But recall that $\alpha,\alpha'$ have the same norm, so $d$ has norm $1$, hence it is $\pm 1$. Therefore $\alpha,\alpha'$ are equal or opposite. Hence there are only two possible values of $\alpha$, which is certainly a finite amount! As explained above, this gives us a contradiction. $\square$
### Proof of lemma 2
Since $n$ power series $\exp P_j$ satisfy $n-2$ independent linear equations, we can express all of them in terms of just two, say $\exp P_{n-1}$ and $\exp P_n$. Put
$\exp P_i=a_i\exp P_{n-1}+b_i\exp P_n\qquad (7)$.
Suppose $a_i=0$. Then $\exp P_i$ and $b_i\exp P_n$ are equal. They have constant terms equal to, respectively, $1$ and $b_i$ (since the $P_i$ have no constant term), so $b_i=1$, $\exp P_i=\exp P_n$, and we can deduce from this (computing coefficients one-by-one) that $P_i=P_n$. Hence we may assume $a_i\neq 0$ (as otherwise we are already done). Putting $Q_i=P_i-P_n$ we then have
$\exp Q_i=a_i\exp Q_{n-1}+b_i$
and we may also assume $Q_i$ are nonzero. Differentiation gives
$Q_i'\exp Q_i=a_iQ_{n-1}'\exp Q_{n-1}$.
Previous two equations combined give
$\displaystyle Q_i'=Q_{n-1}'\exp Q_{n-1}\frac{1}{c_i+\exp Q_{n-1}}\qquad (8)$
with $c_i=\frac{b_i}{a_i}$ for $i=1,\dots,n-2$. We now use the other pair of assumed equations. By subtracting suitable multiples of $P_n$ from them we find
$\displaystyle\sum_{j=1}^{n-1} B_{ij}Q_j=k_iP_n$ for $i=1,2$.
If either $k_i$ is zero, this gives us a nontrivial linear relation between $Q_j$. Otherwise, subtracting suitable multiples and using independence we again get a nontrivial linear relation. In either case, we get
$\displaystyle\sum_{j=1}^{n-1}d_jQ_j=0$
for $d_i$ not all zero. Differentiation and $(8)$ give us
$Q_{n-1}'\exp Q_{n-1}\left(\displaystyle\sum_{i=1}^{n-2}\frac{d_i}{c_i+\exp Q_{n-1}}+\frac{d_{n-1}}{\exp Q_{n-1}}\right)=Q_{n-1}'\exp Q_{n-1}\left(\sum_{i=1}^{n-1}\frac{d_i}{c_i+\exp Q_{n-1}}\right)=0$
(setting $c_{n-1}=0$). As $Q_{n-1}',\exp Q_{n-1}\neq 0$ we deduce
$\displaystyle\sum_{i=1}^{n-1}\frac{d_i}{c_i+\exp Q_{n-1}}=0$.
Hence we get that the rational function
$\displaystyle\sum_{i=1}^{n-1}\frac{d_i}{c_i+z}$
vanishes when we put $z=\exp Q_{n-1}$. But unless this function vanishes identically, this would imply $\exp Q_{n-1}$ is algebraic over its field of coefficients. But no nonconstant power series over a field is algebraic, so this can't happen, as $Q_{n-1}\neq 0$. Thus this rational function is identically zero. This means that some two $c_i$ are equal (otherwise this function would have a pole as $z\rightarrow -c_i$ for any $c_i$ with $d_i\neq 0$). Therefore $c_j=c_k$ for some $j\neq k$.
Since $\frac{b_j}{a_j}=c_j=c_k=\frac{b_k}{a_k}$, $(7)$ gives us
$\frac{1}{a_j}\exp P_j=\frac{1}{a_k}\exp P_k$.
Comparing constant coefficients and then other coefficients, we get $P_j=P_k$ with $j\neq k$. $\square$
## Ultrabrief summary
The proof goes roughly as follows:
1. Suppose otherwise.
2. Using (a variation of) Dirichlet’s unit theorem and general results on modules, reduce the problem to showing finiteness of certain exponential equation in many variables.
3. Generalize the context of the question to $\frak P$-adic-analytic setting so that we can speak of exponentials of (some) non-rational-integers.
4. Using some difficult words like “local analytic manifold” reduce (a big part of) the problem to (essentially) showing it cannot contain an analytic curve.
5. Use a fancy lemma to deduce the manifold is too algebraically constrained to contain infinitely many integral points.
6. Write an ultrabrief summary.
Clearly two of these steps are (arguably) the most ingenious and crucial ones: passing from a number field to its completion, and then reducing the problem to an analogous problem in a functional setting (i.e. one involving formal power series). Both the complete fields (more precisely called local fields) and functional questions have many times in mathematics proven themselves to be much easier to work with than number fields. The former's advantage is mainly the ability to use analytic tools (and difficult words), while in the functional setting we have an incredibly useful tool – differentiation.
You can see the simplicity of working in the functional setting e.g. in the proof of the Riemann hypothesis for function fields. In the future I will probably make more posts showcasing local methods like this one, possibly less difficult ones (or perhaps more).
http://math.stackexchange.com/questions/120270/how-can-we-show-that-pi-xy-piy-le-frac13-x-c-using-the-sieve | # How can we show that $\pi (x+y) - \pi(y) \le \frac{1}{3} x + C$ using the sieve of eratosthenes?
How do we show that for $x,y \ge 0$ real numbers, there exists a constant $C$ such that: $$\pi(x+y)-\pi(y) \le \frac{1}{3}x+C$$ where $\pi(\cdot)$ denotes the prime counting function, is true?
the hint is to sieve n with $y< n \le x+y$:
$$\pi (x+y) \le 1+ \sum _{n \le x+y} 1+1-1 - \sum_{2|n}1 - \sum_{3|n}1 + \sum_{6|n}1 + \sum_{n\le x+y} 1 =$$ $$1+ \sum _{n \le x+y} 1+1-1 - \sum_{2|n}1 - \sum_{3|n}1 + \sum_{6|n}1 + [x+y]$$
because: $1\le n = dm \le x+y \Leftrightarrow \frac{1}{d}\le m \le \frac{x+y}{d}$
so: $$\sum_{n\le x+y , d|n}1 = [\frac{x+y}{d}]$$
then that gives: $$\pi (x+y) < 1+ [x+y] - [\frac{x+y}{2}] - [\frac{x+y}{3}] + [\frac{x+y}{6}]$$
so that will give: $\pi (x+y) < \frac{x+y}{3} + 3$ but also we get : $\pi(y) < \frac{y}{3} + 3$ so for any constant $C\ge 0$ it will surely hold that:
$$\pi(x+y) - \pi(y) < \frac{x}{3} \le \frac{x}{3} + C$$
Is this correct?
There is $\leq 1/3 \pi(x)$ in the title, but $\leq 1/3 x$ in the post. – dtldarek Mar 14 '12 at 23:15
Thanks for the correction dtldarek. – VVV Mar 14 '12 at 23:18
Hint: Which numbers modulo $6$ can be prime? – TMM Mar 14 '12 at 23:22
What if you forget about the $C$ for a moment and bring $\Delta y$ to the LHS. With $\Delta y \to 0$ you get: $$\lim_{\Delta y\to 0} \frac{\pi(y+\Delta y)-\pi(y)}{\Delta y}=\pi(y)'\le 1/3.$$ Now use $\pi(y)\approx \frac{y}{\log y}$ and therefore $\pi(y)'\approx \frac{\log y-1}{(\log y)^2}=\frac{1}{\log y}-\frac{1}{(\log y)^2}$ which has a global maximum of $1/4$ at $y=e^2$. See here. – draks ... Mar 15 '12 at 14:31
Thank you. The prime number theorem wasn't proven yet, so the only thing that comes in question is the sieve of Eratosthenes (as TMM suggests, sieving n numbers in the interval $y< n \le x+y$), but it looks like I did it wrong (I believe). – VVV Mar 15 '12 at 15:53
Hint: Which numbers modulo $6$ can be prime? (Certainly not those divisible by $2$ or $3\ldots$)
So on an interval of width $x$ from $y$ to $y + x$, how many primes ($\pi(y+x) - \pi(y)$) do we expect at most on this interval?
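As an illustrative numerical sanity check (a sketch assuming SymPy, not part of the original exchange), one can measure how large $\pi(x+y)-\pi(y)-\frac{x}{3}$ actually gets over a modest range of $x$ and $y$:

```python
from sympy import primerange

N = 600
primes = set(primerange(2, N + 1))
pi = [0] * (N + 1)
for n in range(1, N + 1):
    pi[n] = pi[n - 1] + (n in primes)

# The largest value of pi(x+y) - pi(y) - x/3 in this window, i.e. the smallest
# constant C that would work for these x and y.
worst = max(pi[x + y] - pi[y] - x / 3
            for y in range(N) for x in range(N - y + 1))
print(worst)   # a small number (about 2) for this range
```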
I put my comment to your answer in the edit of my question, thanks. – VVV Mar 15 '12 at 10:33
You are asking us to show that your inequality holds for any $x, y \ge 0$ and for any constant $C$, which is obviously false. Perhaps this is what you mean:
Show that there exists a constant $C$ such that for any real numbers $x, y \ge 0$, $\pi(x+y)-\pi(y) \le \frac{1}{3}x+C$.
Yea, exactly TonyK. – VVV Mar 15 '12 at 22:52
@VVV: Still not right, I'm afraid: now your statement is vacuously true. I suppose that's an improvement! – TonyK Mar 16 '12 at 8:34
http://math.stackexchange.com/questions/295348/nadirashvili-surface-part-2 | # Nadirashvili surface (part 2)
The article is 'Hadamard and Calabi–Yau conjectures on negatively curved and minimal surfaces' by Nadirashvili. In the proof of Proposition 4.3 it asserts that the function y is holomorphic. I'm not sure about it (actually I think it is false). What do you think about it? Thank you for your help.
http://mathoverflow.net/questions/84904/selecting-two-random-points-inside-a-sphere-which-are-a-fixed-distance-apart | Selecting two random points inside a sphere which are a fixed distance apart
Without appealing to a guess-and-check approach, how might I select a pair of random points inside of a sphere of radius $R$ s.t. the points are always a distance $d \leq R$ apart? Can the selected points be 'random' in the sense that a large number of such pairs can be split into two equal-sized populations that independently appear uniformly distributed within the sphere's volume? If it simplifies matters, and I'm not sure it would, I would also be interested in the case of a cube of edge-length $R$.
So that I can better understand the distribution of point pairs: Imagine I select a pair of points meeting the above separation distance criterion. If I randomly select another such pair, what is the probability that the distance between the first points in either pair is $\leq \delta_1$, and the distance between the second points in either pair is $\leq \delta_2$ for some $\delta_1,\delta_2 \leq R$?
Are you looking for a theoretical or a computational answer? – Koen S Jan 4 '12 at 21:54
I'm not sure I understand? I'm looking for a description of a method, and to understand how the pairs of points will be distributed in the volume of the sphere? – SeptemberGrass Jan 4 '12 at 22:02
From an algorithmic point of view, a very simple and computationally effective way to produce points is exactly the guess and check that you say you don't want. [ You don't say, but I'm assuming you're working in 3 dimensions? As the dimensionality increases, so the methods that I'm talking about become progressively worse ]
Let $X$ be a randomly chosen point in the ball of radius $R$; let $Z$ be a randomly chosen point in the ball of radius 1. Then set $Y=X+dZ/|Z|$. If $Y$ lies in the sphere of radius $R$, then keep the pair $(X,Y)$; otherwise generate a new pair.
This is efficient because it's simple to generate points in a ball and the expected number of attempts you have to make to get a valid pair is small (if you take a spherical shell of radius $d$ centred at a point on the boundary of a ball of radius $R$, a lower bound for the probability of success is the proportion of the shell lying inside the ball. I haven't done the calculation, but this must be large, even for $d=R$.) Since the success probability is high, you don't have to repeat too often to get a valid pair.
Next notice that each pair of valid points has the same chance of being selected so that you are truly picking from the distribution you want.
To answer your question about uniform distribution, you can see from the construction that the first coordinate (and hence the second coordinate also since the distribution is symmetric) is not uniformly distributed in the sphere. For example if $d$ is a lot smaller than $R$, then if you pick $X$ near the boundary of the sphere, the probability of the pair being rejected is about 50%, whereas if you pick $X$ far from the boundary then the probability of rejection is 0.
This means that if you take the marginal distribution of either coordinate, it is biased towards being at the centre of the ball.
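A minimal sketch of the rejection procedure described in this answer (illustrative only; it assumes NumPy and draws the ball points themselves by rejection from a bounding cube):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_point_in_ball(radius):
    """Uniform point in the 3D ball of the given radius (rejection from a cube)."""
    while True:
        p = rng.uniform(-radius, radius, size=3)
        if p @ p <= radius * radius:
            return p

def pair_at_distance(R, d):
    """A pair of points in the ball of radius R at exact distance d apart."""
    while True:
        x = random_point_in_ball(R)
        z = random_point_in_ball(1.0)
        y = x + d * z / np.linalg.norm(z)   # step a distance d in a random direction
        if y @ y <= R * R:                  # keep the pair only if y stays in the ball
            return x, y

x, y = pair_at_distance(R=1.0, d=0.5)
print(np.linalg.norm(x - y))   # 0.5 up to floating-point error
```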
This is pretty much the same as my current approach, save for that I'm picking the second point on the surface of a sphere about the first in a different manner. However, it's very useful that this this jumps out to other folks as a reasonable strategy! – SeptemberGrass Jan 5 '12 at 6:57
I'm assuming this is in 3 dimensions. Note that if you first select $X$ in the ball of radius $R$ and take $Y = X + d Z$ where $\|Z\| = 1$, then $\|Y\|^2 = \|X\|^2 + d^2 + 2 d \|X\| \cos \theta$ where $\theta$ is the angle between $X$ and $Z$. Noting that the area of a spherical cap of opening angle $\theta$ is proportional to $1 - \cos \theta$, we see that the probability of $Y$ being in the ball is $\min\left(1, \frac{1}{2} + \frac{R^2 - \|X\|^2 - d^2}{4 d \|X\|}\right)$. So you could select $X$ using a distribution proportional to this factor times the uniform density in the ball, then select $Z$ uniformly in the spherical cap $\cos \theta \le \min\left(1, \frac{R^2 - \|X\|^2 - d^2}{2d\|X\|}\right)$
Thanks so much for your answer! – SeptemberGrass Jan 5 '12 at 6:58
http://www.nag.com/numeric/CL/nagdoc_cl23/html/D03/d03prc.html | d03 Chapter Contents
d03 Chapter Introduction
NAG C Library Manual
# NAG Library Function Document nag_pde_parab_1d_keller_ode_remesh (d03prc)
## 1 Purpose
nag_pde_parab_1d_keller_ode_remesh (d03prc) integrates a system of linear or nonlinear, first-order, time-dependent partial differential equations (PDEs) in one space variable, with scope for coupled ordinary differential equations (ODEs), and automatic adaptive spatial remeshing. The spatial discretization is performed using the Keller box scheme (see Keller (1970)) and the method of lines is employed to reduce the PDEs to a system of ODEs. The resulting system is solved using a Backward Differentiation Formula (BDF) method or a Theta method (switching between Newton's method and functional iteration).
## 2 Specification
#include <nag.h>
#include <nagd03.h>
void nag_pde_parab_1d_keller_ode_remesh (Integer npde, double *ts, double tout,
void (*pdedef)(Integer npde, double t, double x, const double u[], const double udot[], const double ux[], Integer ncode, const double v[], const double vdot[], double res[], Integer *ires, Nag_Comm *comm),
void (*bndary)(Integer npde, double t, Integer ibnd, Integer nobc, const double u[], const double udot[], Integer ncode, const double v[], const double vdot[], double res[], Integer *ires, Nag_Comm *comm),
void (*uvinit)(Integer npde, Integer npts, Integer nxi, const double x[], const double xi[], double u[], Integer ncode, double v[], Nag_Comm *comm),
double u[], Integer npts, double x[], Integer nleft, Integer ncode,
void (*odedef)(Integer npde, double t, Integer ncode, const double v[], const double vdot[], Integer nxi, const double xi[], const double ucp[], const double ucpx[], const double ucpt[], double r[], Integer *ires, Nag_Comm *comm),
Integer nxi, const double xi[], Integer neqn, const double rtol[], const double atol[], Integer itol, Nag_NormType norm, Nag_LinAlgOption laopt, const double algopt[], Nag_Boolean remesh, Integer nxfix, const double xfix[], Integer nrmesh, double dxmesh, double trmesh, Integer ipminf, double xratio, double con,
void (*monitf)(double t, Integer npts, Integer npde, const double x[], const double u[], double fmon[], Nag_Comm *comm),
double rsave[], Integer lrsave, Integer isave[], Integer lisave, Integer itask, Integer itrace, const char *outfile, Integer *ind, Nag_Comm *comm, Nag_D03_Save *saved, NagError *fail)
## 3 Description
nag_pde_parab_1d_keller_ode_remesh (d03prc) integrates the system of first-order PDEs and coupled ODEs given by the master equations:
$G_{i}\left(x,t,U,U_{x},U_{t},V,\dot{V}\right)=0,\quad i=1,2,\ldots,{\mathbf{npde}},\quad a\le x\le b,\ t\ge t_{0},$ (1)
$R_{i}\left(t,V,\dot{V},\xi,U^{*},U_{x}^{*},U_{t}^{*}\right)=0,\quad i=1,2,\ldots,{\mathbf{ncode}}.$ (2)
In the PDE part of the problem given by (1), the functions ${G}_{i}$ must have the general form
$G_{i}=\sum_{j=1}^{{\mathbf{npde}}}P_{i,j}\frac{\partial U_{j}}{\partial t}+\sum_{j=1}^{{\mathbf{ncode}}}Q_{i,j}\dot{V}_{j}+S_{i}=0,\quad i=1,2,\ldots,{\mathbf{npde}},$ (3)
where ${P}_{i,j}$, ${Q}_{i,j}$ and ${S}_{i}$ depend on $x$, $t$, $U$, ${U}_{x}$ and $V$.
The vector $U$ is the set of PDE solution values
$U\left(x,t\right)=\left[U_{1}\left(x,t\right),\ldots,U_{{\mathbf{npde}}}\left(x,t\right)\right]^{\mathrm{T}},$
and the vector ${U}_{x}$ is the partial derivative with respect to $x$. The vector $V$ is the set of ODE solution values
$V\left(t\right)=\left[V_{1}\left(t\right),\ldots,V_{{\mathbf{ncode}}}\left(t\right)\right]^{\mathrm{T}},$
and $\stackrel{.}{V}$ denotes its derivative with respect to time.
In the ODE part given by (2), $\xi$ represents a vector of ${n}_{\xi }$ spatial coupling points at which the ODEs are coupled to the PDEs. These points may or may not be equal to some of the PDE spatial mesh points. ${U}^{*}$, ${U}_{x}^{*}$ and ${U}_{t}^{*}$ are the functions $U$, ${U}_{x}$ and ${U}_{t}$ evaluated at these coupling points. Each ${R}_{i}$ may only depend linearly on time derivatives. Hence equation (2) may be written more precisely as
$R=A-B\dot{V}-CU_{t}^{*},$ (4)
where $R={\left[{R}_{1},\dots ,{R}_{{\mathbf{ncode}}}\right]}^{\mathrm{T}}$, $A$ is a vector of length ncode, $B$ is an ncode by ncode matrix, $C$ is an ncode by $\left({n}_{\xi }×{\mathbf{npde}}\right)$ matrix and the entries in $A$, $B$ and $C$ may depend on $t$, $\xi$, ${U}^{*}$, ${U}_{x}^{*}$ and $V$. In practice you only need to supply a vector of information to define the ODEs and not the matrices $B$ and $C$. (See Section 5 for the specification of odedef.)
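As a concrete reading of the general form (3), the two-equation example solved in Section 9 below (which has no coupled ODEs, so the ${Q}_{i,j}$ terms are absent) can be viewed as the choice
$P=\begin{pmatrix}1&0\\0&1\end{pmatrix},\qquad S=\begin{pmatrix}\frac{\partial U_{1}}{\partial x}+\frac{\partial U_{2}}{\partial x}\\ 4\frac{\partial U_{1}}{\partial x}+\frac{\partial U_{2}}{\partial x}\end{pmatrix},$
so that $G_{i}=\frac{\partial U_{i}}{\partial t}+S_{i}=0$ reproduces the system quoted in Section 9.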
The integration in time is from ${t}_{0}$ to ${t}_{\mathrm{out}}$, over the space interval $a\le x\le b$, where $a={x}_{1}$ and $b={x}_{{\mathbf{npts}}}$ are the leftmost and rightmost points of a mesh ${x}_{1},{x}_{2},\dots ,{x}_{{\mathbf{npts}}}$ defined initially by you and (possibly) adapted automatically during the integration according to user-specified criteria.
The PDE system which is defined by the functions ${G}_{i}$ must be specified in pdedef.
The initial $\left(t={t}_{0}\right)$ values of the functions $U\left(x,t\right)$ and $V\left(t\right)$ must be specified in uvinit. Note that uvinit will be called again following any remeshing, and so $U\left(x,{t}_{0}\right)$ should be specified for all values of $x$ in the interval $a\le x\le b$, and not just the initial mesh points.
For a first-order system of PDEs, only one boundary condition is required for each PDE component ${U}_{i}$. The npde boundary conditions are separated into ${n}_{a}$ at the left-hand boundary $x=a$, and ${n}_{b}$ at the right-hand boundary $x=b$, such that ${n}_{a}+{n}_{b}={\mathbf{npde}}$. The position of the boundary condition for each component should be chosen with care; the general rule is that if the characteristic direction of ${U}_{i}$ at the left-hand boundary (say) points into the interior of the solution domain, then the boundary condition for ${U}_{i}$ should be specified at the left-hand boundary. Incorrect positioning of boundary conditions generally results in initialization or integration difficulties in the underlying time integration functions.
The boundary conditions have the master equation form:
$G_{i}^{L}\left(x,t,U,U_{t},V,\dot{V}\right)=0\ \text{at}\ x=a,\quad i=1,2,\ldots,n_{a},$ (5)
at the left-hand boundary, and
$G_{i}^{R}\left(x,t,U,U_{t},V,\dot{V}\right)=0\ \text{at}\ x=b,\quad i=1,2,\ldots,n_{b},$ (6)
at the right-hand boundary.
Note that the functions ${G}_{i}^{L}$ and ${G}_{i}^{R}$ must not depend on ${U}_{x}$, since spatial derivatives are not determined explicitly in the Keller box scheme functions. If the problem involves derivative (Neumann) boundary conditions then it is generally possible to restate such boundary conditions in terms of permissible variables. Also note that ${G}_{i}^{L}$ and ${G}_{i}^{R}$ must be linear with respect to time derivatives, so that the boundary conditions have the general form:
$\sum_{j=1}^{{\mathbf{npde}}}E_{i,j}^{L}\frac{\partial U_{j}}{\partial t}+\sum_{j=1}^{{\mathbf{ncode}}}H_{i,j}^{L}\dot{V}_{j}+K_{i}^{L}=0,\quad i=1,2,\ldots,n_{a},$ (7)
at the left-hand boundary, and
$\sum_{j=1}^{{\mathbf{npde}}}E_{i,j}^{R}\frac{\partial U_{j}}{\partial t}+\sum_{j=1}^{{\mathbf{ncode}}}H_{i,j}^{R}\dot{V}_{j}+K_{i}^{R}=0,\quad i=1,2,\ldots,n_{b},$ (8)
at the right-hand boundary, where ${E}_{i,j}^{L}$, ${E}_{i,j}^{R}$, ${H}_{i,j}^{L}$, ${H}_{i,j}^{R}$, ${K}_{i}^{L}$ and ${K}_{i}^{R}$ depend on $x,t,U$ and $V$ only.
The boundary conditions must be specified in bndary.
The problem is subject to the following restrictions:
(i) ${P}_{i,j}$, ${Q}_{i,j}$ and ${S}_{i}$ must not depend on any time derivatives;
(ii) ${t}_{0}<{t}_{\mathrm{out}}$, so that integration is in the forward direction;
(iii) the evaluation of the function ${G}_{\mathit{i}}$ is done approximately at the mid-points of the mesh ${\mathbf{x}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{npts}}$, by calling pdedef for each mid-point in turn; any discontinuities in the function must therefore be at one or more of the fixed mesh points specified by xfix;
(iv) at least one of the functions ${P}_{i,j}$ must be nonzero so that there is a time derivative present in the PDE problem.
The algebraic-differential equation system which is defined by the functions ${R}_{i}$ must be specified in odedef. You must also specify the coupling points $\xi$ in the array xi.
The first-order equations are approximated by a system of ODEs in time for the values of ${U}_{i}$ at mesh points. In this method of lines approach the Keller box scheme is applied to each PDE in the space variable only, resulting in a system of ODEs in time for the values of ${U}_{i}$ at each mesh point. In total there are ${\mathbf{npde}}×{\mathbf{npts}}+{\mathbf{ncode}}$ ODEs in time direction. This system is then integrated forwards in time using a Backward Differentiation Formula (BDF) or a Theta method.
The adaptive space remeshing can be used to generate meshes that automatically follow the changing time-dependent nature of the solution, generally resulting in a more efficient and accurate solution using fewer mesh points than may be necessary with a fixed uniform or non-uniform mesh. Problems with travelling wavefronts or variable-width boundary layers for example will benefit from using a moving adaptive mesh. The discrete time-step method used here (developed by Furzeland (1984)) automatically creates a new mesh based on the current solution profile at certain time-steps, and the solution is then interpolated onto the new mesh and the integration continues.
The method requires you to supply monitf which specifies in an analytic or numeric form the particular aspect of the solution behaviour you wish to track. This so-called monitor function is used to choose a mesh which equally distributes the integral of the monitor function over the domain. A typical choice of monitor function is the second space derivative of the solution value at each point (or some combination of the second space derivatives if more than one solution component), which results in refinement in regions where the solution gradient is changing most rapidly.
You must specify the frequency of mesh updates along with certain other criteria such as adjacent mesh ratios. Remeshing can be expensive and you are encouraged to experiment with the different options in order to achieve an efficient solution which adequately tracks the desired features of the solution.
Note that unless the monitor function for the initial solution values is zero at all user-specified initial mesh points, a new initial mesh is calculated and adopted according to the user-specified remeshing criteria. uvinit will then be called again to determine the initial solution values at the new mesh points (there is no interpolation at this stage) and the integration proceeds.
## 4 References
Berzins M (1990) Developments in the NAG Library software for parabolic equations Scientific Software Systems (eds J C Mason and M G Cox) 59–72 Chapman and Hall
Berzins M, Dew P M and Furzeland R M (1989) Developing software for time-dependent problems using the method of lines and differential-algebraic integrators Appl. Numer. Math. 5 375–397
Berzins M and Furzeland R M (1992) An adaptive theta method for the solution of stiff and nonstiff differential equations Appl. Numer. Math. 9 1–19
Furzeland R M (1984) The construction of adaptive space meshes TNER.85.022 Thornton Research Centre, Chester
Keller H B (1970) A new difference scheme for parabolic problems Numerical Solutions of Partial Differential Equations (ed J Bramble) 2 327–350 Academic Press
Pennington S V and Berzins M (1994) New NAG Library software for first-order partial differential equations ACM Trans. Math. Softw. 20 63–99
## 5 Arguments
1: npdeIntegerInput
On entry: the number of PDEs to be solved.
Constraint: ${\mathbf{npde}}\ge 1$.
2: tsdouble *Input/Output
On entry: the initial value of the independent variable $t$.
Constraint: ${\mathbf{ts}}<{\mathbf{tout}}$.
On exit: the value of $t$ corresponding to the solution values in u. Normally ${\mathbf{ts}}={\mathbf{tout}}$.
3: toutdoubleInput
On entry: the final value of $t$ to which the integration is to be carried out.
4: pdedeffunction, supplied by the userExternal Function
pdedef must evaluate the functions ${G}_{i}$ which define the system of PDEs. pdedef is called approximately midway between each pair of mesh points in turn by nag_pde_parab_1d_keller_ode_remesh (d03prc).
The specification of pdedef is:
void pdedef (Integer npde, double t, double x, const double u[], const double udot[], const double ux[], Integer ncode, const double v[], const double vdot[], double res[], Integer *ires, Nag_Comm *comm)
1: npdeIntegerInput
On entry: the number of PDEs in the system.
2: tdoubleInput
On entry: the current value of the independent variable $t$.
3: xdoubleInput
On entry: the current value of the space variable $x$.
4: u[npde]const doubleInput
On entry: ${\mathbf{u}}\left[\mathit{i}-1\right]$ contains the value of the component ${U}_{\mathit{i}}\left(x,t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
5: udot[npde]const doubleInput
On entry: ${\mathbf{udot}}\left[\mathit{i}-1\right]$ contains the value of the component $\frac{\partial {U}_{\mathit{i}}\left(x,t\right)}{\partial t}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
6: ux[npde]const doubleInput
On entry: ${\mathbf{ux}}\left[\mathit{i}-1\right]$ contains the value of the component $\frac{\partial {U}_{\mathit{i}}\left(x,t\right)}{\partial x}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
7: ncodeIntegerInput
On entry: the number of coupled ODEs in the system.
8: v[ncode]const doubleInput
On entry: if ${\mathbf{ncode}}>0$, ${\mathbf{v}}\left[\mathit{i}-1\right]$ contains the value of component ${V}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$.
9: vdot[ncode]const doubleInput
On entry: if ${\mathbf{ncode}}>0$, ${\mathbf{vdot}}\left[\mathit{i}-1\right]$ contains the value of component ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$.
10: res[npde]doubleOutput
On exit: ${\mathbf{res}}\left[\mathit{i}-1\right]$ must contain the $\mathit{i}$th component of $G$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$, where $G$ is defined as
$G_{i}=\sum_{j=1}^{{\mathbf{npde}}}P_{i,j}\frac{\partial U_{j}}{\partial t}+\sum_{j=1}^{{\mathbf{ncode}}}Q_{i,j}\dot{V}_{j},$ (9)
i.e., only terms depending explicitly on time derivatives, or
$G_{i}=\sum_{j=1}^{{\mathbf{npde}}}P_{i,j}\frac{\partial U_{j}}{\partial t}+\sum_{j=1}^{{\mathbf{ncode}}}Q_{i,j}\dot{V}_{j}+S_{i},$ (10)
i.e., all terms in equation (3).
The definition of $G$ is determined by the input value of ires.
11: iresInteger *Input/Output
On entry: the form of ${G}_{i}$ that must be returned in the array res.
${\mathbf{ires}}=-1$
Equation (9) must be used.
${\mathbf{ires}}=1$
Equation (10) must be used.
On exit: should usually remain unchanged. However, you may set ires to force the integration function to take certain actions, as described below:
${\mathbf{ires}}=2$
Indicates to the integrator that control should be passed back immediately to the calling function with the error indicator set to NE_USER_STOP.
${\mathbf{ires}}=3$
Indicates to the integrator that the current time step should be abandoned and a smaller time step used instead. You may wish to set ${\mathbf{ires}}=3$ when a physically meaningless input or output value has been generated. If you consecutively set ${\mathbf{ires}}=3$, then nag_pde_parab_1d_keller_ode_remesh (d03prc) returns to the calling function with the error indicator set to NE_FAILED_DERIV.
12: commNag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to pdedef.
userdouble *
iuserInteger *
pPointer
The type Pointer will be void *. Before calling nag_pde_parab_1d_keller_ode_remesh (d03prc) you may allocate memory and initialize these pointers with various quantities for use by pdedef when called from nag_pde_parab_1d_keller_ode_remesh (d03prc) (see Section 3.2.1 in the Essential Introduction).
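For orientation, a pdedef of this shape for the two-equation first-order system solved in Section 9 might look like the sketch below. This is an illustration only, not the distributed example program d03prce.c; it assumes the headers quoted in Section 2 and simply codes the residual definitions (9) and (10).

```c
/* Residuals for the Section 9 system:
 *   dU1/dt +   dU1/dx + dU2/dx = 0,
 *   dU2/dt + 4*dU1/dx + dU2/dx = 0.
 * For ires = -1 only the time-derivative terms are returned (equation (9));
 * for ires = 1 the full residual (equation (10)) is returned. */
static void pdedef(Integer npde, double t, double x, const double u[],
                   const double udot[], const double ux[], Integer ncode,
                   const double v[], const double vdot[], double res[],
                   Integer *ires, Nag_Comm *comm)
{
    if (*ires == -1)
    {
        res[0] = udot[0];
        res[1] = udot[1];
    }
    else
    {
        res[0] = udot[0] + ux[0] + ux[1];
        res[1] = udot[1] + 4.0 * ux[0] + ux[1];
    }
}
```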
5: bndaryfunction, supplied by the userExternal Function
bndary must evaluate the functions ${G}_{i}^{L}$ and ${G}_{i}^{R}$ which describe the boundary conditions, as given in (5) and (6).
The specification of bndary is:
void bndary (Integer npde, double t, Integer ibnd, Integer nobc, const double u[], const double udot[], Integer ncode, const double v[], const double vdot[], double res[], Integer *ires, Nag_Comm *comm)
1: npdeIntegerInput
On entry: the number of PDEs in the system.
2: tdoubleInput
On entry: the current value of the independent variable $t$.
3: ibndIntegerInput
On entry: specifies which boundary conditions are to be evaluated.
${\mathbf{ibnd}}=0$
bndary must compute the left-hand boundary condition at $x=a$.
${\mathbf{ibnd}}\ne 0$
bndary must compute the right-hand boundary condition at $x=b$.
4: nobcIntegerInput
On entry: specifies the number, ${n}_{a}$ or ${n}_{b}$, of boundary conditions at the boundary specified by ibnd.
5: u[npde]const doubleInput
On entry: ${\mathbf{u}}\left[\mathit{i}-1\right]$ contains the value of the component ${U}_{\mathit{i}}\left(x,t\right)$ at the boundary specified by ibnd, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
6: udot[npde]const doubleInput
On entry: ${\mathbf{udot}}\left[\mathit{i}-1\right]$ contains the value of the component $\frac{\partial {U}_{\mathit{i}}\left(x,t\right)}{\partial t}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
7: ncodeIntegerInput
On entry: the number of coupled ODEs in the system.
8: v[ncode]const doubleInput
On entry: if ${\mathbf{ncode}}>0$, ${\mathbf{v}}\left[\mathit{i}-1\right]$ contains the value of component ${V}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$.
9: vdot[ncode]const doubleInput
On entry: if ${\mathbf{ncode}}>0$, ${\mathbf{vdot}}\left[\mathit{i}-1\right]$ contains the value of component ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$.
Note: ${\mathbf{vdot}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$, may only appear linearly as in (11) and (12).
10: res[nobc]doubleOutput
On exit: ${\mathbf{res}}\left[\mathit{i}-1\right]$ must contain the $\mathit{i}$th component of ${G}^{L}$ or ${G}^{R}$, depending on the value of ibnd, for $\mathit{i}=1,2,\dots ,{\mathbf{nobc}}$, where ${G}^{L}$ is defined as
$G_{i}^{L}=\sum_{j=1}^{{\mathbf{npde}}}E_{i,j}^{L}\frac{\partial U_{j}}{\partial t}+\sum_{j=1}^{{\mathbf{ncode}}}H_{i,j}^{L}\dot{V}_{j},$ (11)
i.e., only terms depending explicitly on time derivatives, or
$G_{i}^{L}=\sum_{j=1}^{{\mathbf{npde}}}E_{i,j}^{L}\frac{\partial U_{j}}{\partial t}+\sum_{j=1}^{{\mathbf{ncode}}}H_{i,j}^{L}\dot{V}_{j}+K_{i}^{L},$ (12)
i.e., all terms in equation (7), and similarly for ${G}_{\mathit{i}}^{R}$.
The definitions of ${G}^{L}$ and ${G}^{R}$ are determined by the input value of ires.
11: iresInteger *Input/Output
On entry: the form of ${G}_{i}^{L}$ (or ${G}_{i}^{R}$) that must be returned in the array res.
${\mathbf{ires}}=-1$
Equation (11) must be used.
${\mathbf{ires}}=1$
Equation (12) must be used.
On exit: should usually remain unchanged. However, you may set ires to force the integration function to take certain actions as described below:
${\mathbf{ires}}=2$
Indicates to the integrator that control should be passed back immediately to the calling function with the error indicator set to NE_USER_STOP.
${\mathbf{ires}}=3$
Indicates to the integrator that the current time step should be abandoned and a smaller time step used instead. You may wish to set ${\mathbf{ires}}=3$ when a physically meaningless input or output value has been generated. If you consecutively set ${\mathbf{ires}}=3$, then nag_pde_parab_1d_keller_ode_remesh (d03prc) returns to the calling function with the error indicator set to NE_FAILED_DERIV.
12: commNag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to bndary.
userdouble *
iuserInteger *
pPointer
The type Pointer will be void *. Before calling nag_pde_parab_1d_keller_ode_remesh (d03prc) you may allocate memory and initialize these pointers with various quantities for use by bndary when called from nag_pde_parab_1d_keller_ode_remesh (d03prc) (see Section 3.2.1 in the Essential Introduction).
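Continuing the same illustration, a bndary for the Dirichlet conditions of the Section 9 example (one condition per boundary, so nobc is 1 in both calls) could be sketched as follows. The helpers exact1 and exact2 are hypothetical stand-ins for the exact-solution formulae quoted in Section 9; they are not part of the NAG interface.

```c
/* Dirichlet conditions of Section 9: U1 prescribed at x = 0 (ibnd = 0),
 * U2 prescribed at x = 1 (ibnd != 0).  These algebraic conditions contain
 * no time derivatives, so the residual is zero when ires = -1 (equation (11)). */
static void bndary(Integer npde, double t, Integer ibnd, Integer nobc,
                   const double u[], const double udot[], Integer ncode,
                   const double v[], const double vdot[], double res[],
                   Integer *ires, Nag_Comm *comm)
{
    if (*ires == -1)
        res[0] = 0.0;
    else if (ibnd == 0)
        res[0] = u[0] - exact1(0.0, t);   /* U1(0,t) from the exact solution */
    else
        res[0] = u[1] - exact2(1.0, t);   /* U2(1,t) from the exact solution */
}
```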
6: uvinitfunction, supplied by the userExternal Function
uvinit must supply the initial $\left(t={t}_{0}\right)$ values of $U\left(x,t\right)$ and $V\left(t\right)$ for all values of $x$ in the interval $\left[a,b\right]$.
The specification of uvinit is:
void uvinit (Integer npde, Integer npts, Integer nxi, const double x[], const double xi[], double u[], Integer ncode, double v[], Nag_Comm *comm)
1: npdeIntegerInput
On entry: the number of PDEs in the system.
2: nptsIntegerInput
On entry: the number of mesh points in the interval $\left[a,b\right]$.
3: nxiIntegerInput
On entry: the number of ODE/PDE coupling points.
4: x[npts]const doubleInput
On entry: the current mesh. ${\mathbf{x}}\left[\mathit{i}-1\right]$ contains the value of ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npts}}$.
5: xi[nxi]const doubleInput
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{xi}}\left[\mathit{i}-1\right]$ contains the ODE/PDE coupling point, ${\xi }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nxi}}$.
6: u[${\mathbf{npde}}×{\mathbf{npts}}$]doubleOutput
On exit: ${\mathbf{u}}\left[{\mathbf{npde}}×\left(\mathit{j}-1\right)+\mathit{i}-1\right]$ contains the value of the component ${U}_{\mathit{i}}\left({x}_{\mathit{j}},{t}_{0}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{npts}}$.
7: ncodeIntegerInput
On entry: the number of coupled ODEs in the system.
8: v[ncode]doubleOutput
On exit: if ${\mathbf{ncode}}>0$, ${\mathbf{v}}\left[\mathit{i}-1\right]$ must contain the value of component ${V}_{\mathit{i}}\left({t}_{0}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$.
9: commNag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to uvinit.
userdouble *
iuserInteger *
pPointer
The type Pointer will be void *. Before calling nag_pde_parab_1d_keller_ode_remesh (d03prc) you may allocate memory and initialize these pointers with various quantities for use by uvinit when called from nag_pde_parab_1d_keller_ode_remesh (d03prc) (see Section 3.2.1 in the Essential Introduction).
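A matching uvinit sketch for the Section 9 initial values is shown below; again this is an illustration rather than the distributed example, and it assumes <math.h> in addition to the NAG headers. Note that it fills every point of whatever mesh it is handed, which is exactly what the re-calls after remeshing require.

```c
/* Initial values of Section 9: U1(x,0) = exp(x), U2(x,0) = x^2 + sin(2*pi*x^2).
 * Storage follows u[npde*(j-1)+i-1] = U_i(x_j, t0), written here 0-based. */
static void uvinit(Integer npde, Integer npts, Integer nxi, const double x[],
                   const double xi[], double u[], Integer ncode, double v[],
                   Nag_Comm *comm)
{
    const double pi = 3.14159265358979323846;
    Integer j;

    for (j = 0; j < npts; j++)
    {
        u[npde*j + 0] = exp(x[j]);
        u[npde*j + 1] = x[j]*x[j] + sin(2.0 * pi * x[j]*x[j]);
    }
    /* ncode = 0 in this example, so v is not set. */
}
```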
7: u[neqn]doubleInput/Output
On entry: if ${\mathbf{ind}}=1$, the value of u must be unchanged from the previous call.
On exit: ${\mathbf{u}}\left[{\mathbf{npde}}×\left(\mathit{j}-1\right)+\mathit{i}-1\right]$ contains the computed solution ${U}_{\mathit{i}}\left({x}_{\mathit{j}},t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{npts}}$, evaluated at $t={\mathbf{ts}}$.
8: nptsIntegerInput
On entry: the number of mesh points in the interval [$a,b$].
Constraint: ${\mathbf{npts}}\ge 3$.
9: x[npts]doubleInput/Output
On entry: the initial mesh points in the space direction. ${\mathbf{x}}\left[0\right]$ must specify the left-hand boundary, $a$, and ${\mathbf{x}}\left[{\mathbf{npts}}-1\right]$ must specify the right-hand boundary, $b$.
Constraint: ${\mathbf{x}}\left[0\right]<{\mathbf{x}}\left[1\right]<\cdots <{\mathbf{x}}\left[{\mathbf{npts}}-1\right]$.
On exit: the final values of the mesh points.
10: nleftIntegerInput
On entry: the number ${n}_{a}$ of boundary conditions at the left-hand mesh point ${\mathbf{x}}\left[0\right]$.
Constraint: $0\le {\mathbf{nleft}}\le {\mathbf{npde}}$.
11: ncodeIntegerInput
On entry: the number of coupled ODE components.
Constraint: ${\mathbf{ncode}}\ge 0$.
12: odedeffunction, supplied by the userExternal Function
odedef must evaluate the functions $R$, which define the system of ODEs, as given in (4).
If ${\mathbf{ncode}}=0$, odedef will never be called and the NAG defined null void function pointer, NULLFN, can be supplied in the call to nag_pde_parab_1d_keller_ode_remesh (d03prc).
The specification of odedef is:
void odedef (Integer npde, double t, Integer ncode, const double v[], const double vdot[], Integer nxi, const double xi[], const double ucp[], const double ucpx[], const double ucpt[], double r[], Integer *ires, Nag_Comm *comm)
1: npdeIntegerInput
On entry: the number of PDEs in the system.
2: tdoubleInput
On entry: the current value of the independent variable $t$.
3: ncodeIntegerInput
On entry: the number of coupled ODEs in the system.
4: v[ncode]const doubleInput
On entry: if ${\mathbf{ncode}}>0$, ${\mathbf{v}}\left[\mathit{i}-1\right]$ contains the value of component ${V}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$.
5: vdot[ncode]const doubleInput
On entry: if ${\mathbf{ncode}}>0$, ${\mathbf{vdot}}\left[\mathit{i}-1\right]$ contains the value of component ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$.
6: nxiIntegerInput
On entry: the number of ODE/PDE coupling points.
7: xi[nxi]const doubleInput
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{xi}}\left[\mathit{i}-1\right]$ contains the ODE/PDE coupling point, ${\xi }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nxi}}$.
8: ucp[${\mathbf{npde}}×{\mathbf{nxi}}$]const doubleInput
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{ucp}}\left[{\mathbf{npde}}×\left(\mathit{j}-1\right)+\mathit{i}-1\right]$ contains the value of ${U}_{\mathit{i}}\left(x,t\right)$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
9: ucpx[${\mathbf{npde}}×{\mathbf{nxi}}$]const doubleInput
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{ucpx}}\left[{\mathbf{npde}}×\left(\mathit{j}-1\right)+\mathit{i}-1\right]$ contains the value of $\frac{\partial {U}_{\mathit{i}}\left(x,t\right)}{\partial x}$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
10: ucpt[${\mathbf{npde}}×{\mathbf{nxi}}$]const doubleInput
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{ucpt}}\left[{\mathbf{npde}}×\left(\mathit{j}-1\right)+\mathit{i}-1\right]$ contains the value of $\frac{\partial {U}_{\mathit{i}}}{\partial t}$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
11: r[ncode]doubleOutput
On exit: if ${\mathbf{ncode}}>0$, ${\mathbf{r}}\left[\mathit{i}-1\right]$ must contain the $\mathit{i}$th component of $R$, for $\mathit{i}=1,2,\dots ,{\mathbf{ncode}}$, where $R$ is defined as
$R=-B\dot{V}-CU_{t}^{*},$ (13)
i.e., only terms depending explicitly on time derivatives, or
$R=A-B\dot{V}-CU_{t}^{*},$ (14)
i.e., all terms in equation (4). The definition of $R$ is determined by the input value of ires.
12: iresInteger *Input/Output
On entry: the form of $R$ that must be returned in the array r.
${\mathbf{ires}}=-1$
Equation (13) must be used.
${\mathbf{ires}}=1$
Equation (14) must be used.
On exit: should usually remain unchanged. However, you may reset ires to force the integration function to take certain actions, as described below:
${\mathbf{ires}}=2$
Indicates to the integrator that control should be passed back immediately to the calling function with the error indicator set to NE_USER_STOP.
${\mathbf{ires}}=3$
Indicates to the integrator that the current time step should be abandoned and a smaller time step used instead. You may wish to set ${\mathbf{ires}}=3$ when a physically meaningless input or output value has been generated. If you consecutively set ${\mathbf{ires}}=3$, then nag_pde_parab_1d_keller_ode_remesh (d03prc) returns to the calling function with the error indicator set to NE_FAILED_DERIV.
13: commNag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to odedef.
userdouble *
iuserInteger *
pPointer
The type Pointer will be void *. Before calling nag_pde_parab_1d_keller_ode_remesh (d03prc) you may allocate memory and initialize these pointers with various quantities for use by odedef when called from nag_pde_parab_1d_keller_ode_remesh (d03prc) (see Section 3.2.1 in the Essential Introduction).
13: nxiIntegerInput
On entry: the number of ODE/PDE coupling points.
Constraints:
• if ${\mathbf{ncode}}=0$, ${\mathbf{nxi}}=0$;
• if ${\mathbf{ncode}}>0$, ${\mathbf{nxi}}\ge 0$.
14: xi[$\mathit{dim}$]const doubleInput
Note: the dimension, dim, of the array xi must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{nxi}}\right)$.
On entry: ${\mathbf{xi}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{nxi}}$, must be set to the ODE/PDE coupling points, ${\xi }_{\mathit{i}}$.
Constraint: ${\mathbf{x}}\left[0\right]\le {\mathbf{xi}}\left[0\right]<{\mathbf{xi}}\left[1\right]<\cdots <{\mathbf{xi}}\left[{\mathbf{nxi}}-1\right]\le {\mathbf{x}}\left[{\mathbf{npts}}-1\right]$.
15: neqnIntegerInput
On entry: the number of ODEs in the time direction.
Constraint: ${\mathbf{neqn}}={\mathbf{npde}}×{\mathbf{npts}}+{\mathbf{ncode}}$.
16: rtol[$\mathit{dim}$]const doubleInput
Note: the dimension, dim, of the array rtol must be at least
• $1$ when ${\mathbf{itol}}=1$ or $2$;
• ${\mathbf{neqn}}$ when ${\mathbf{itol}}=3$ or $4$.
On entry: the relative local error tolerance.
Constraint: ${\mathbf{rtol}}\left[i-1\right]\ge 0.0$ for all relevant $i$.
17: atol[$\mathit{dim}$]const doubleInput
Note: the dimension, dim, of the array atol must be at least
• $1$ when ${\mathbf{itol}}=1$ or $3$;
• ${\mathbf{neqn}}$ when ${\mathbf{itol}}=2$ or $4$.
On entry: the absolute local error tolerance.
Constraint: ${\mathbf{atol}}\left[i-1\right]\ge 0.0$ for all relevant $i$.
Note: corresponding elements of rtol and atol cannot both be $0.0$.
18: itolIntegerInput
A value to indicate the form of the local error test. itol indicates to nag_pde_parab_1d_keller_ode_remesh (d03prc) whether to interpret either or both of rtol or atol as a vector or scalar. The error test to be satisfied is $‖{e}_{i}/{w}_{i}‖<1.0$, where ${w}_{i}$ is defined as follows:
On entry:
• ${\mathbf{itol}}=1$: rtol scalar, atol scalar, ${w}_{i}={\mathbf{rtol}}\left[0\right]\times \left|{\mathbf{u}}\left[i-1\right]\right|+{\mathbf{atol}}\left[0\right]$
• ${\mathbf{itol}}=2$: rtol scalar, atol vector, ${w}_{i}={\mathbf{rtol}}\left[0\right]\times \left|{\mathbf{u}}\left[i-1\right]\right|+{\mathbf{atol}}\left[i-1\right]$
• ${\mathbf{itol}}=3$: rtol vector, atol scalar, ${w}_{i}={\mathbf{rtol}}\left[i-1\right]\times \left|{\mathbf{u}}\left[i-1\right]\right|+{\mathbf{atol}}\left[0\right]$
• ${\mathbf{itol}}=4$: rtol vector, atol vector, ${w}_{i}={\mathbf{rtol}}\left[i-1\right]\times \left|{\mathbf{u}}\left[i-1\right]\right|+{\mathbf{atol}}\left[i-1\right]$
In the above, ${e}_{\mathit{i}}$ denotes the estimated local error for the $\mathit{i}$th component of the coupled PDE/ODE system in time, ${\mathbf{u}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{neqn}}$.
The choice of norm used is defined by the argument norm.
Constraint: ${\mathbf{itol}}=1$, $2$, $3$ or $4$.
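For example, the common choice of one relative and one absolute tolerance for every equation (a sketch; the numerical values are arbitrary) is:

```c
/* itol = 1: rtol and atol are both scalars (length-1 arrays). */
double rtol[1] = { 1.0e-4 };
double atol[1] = { 1.0e-6 };
Integer itol = 1;
```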
19: normNag_NormTypeInput
On entry: the type of norm to be used.
${\mathbf{norm}}=\mathrm{Nag_MaxNorm}$
Maximum norm.
${\mathbf{norm}}=\mathrm{Nag_TwoNorm}$
Averaged ${L}_{2}$ norm.
If ${U}_{\mathrm{norm}}$ denotes the norm of the vector u of length neqn, then for the averaged ${L}_{2}$ norm
$U_{\mathrm{norm}}=\sqrt{\frac{1}{{\mathbf{neqn}}}\sum_{i=1}^{{\mathbf{neqn}}}\left(U_{i}/w_{i}\right)^{2}},$
while for the maximum norm
$U_{\mathrm{norm}}=\max_{i}\left|{\mathbf{u}}\left[i-1\right]/w_{i}\right|.$
See the description of itol for the formulation of the weight vector $w$.
Constraint: ${\mathbf{norm}}=\mathrm{Nag_MaxNorm}$ or $\mathrm{Nag_TwoNorm}$.
20: laoptNag_LinAlgOptionInput
On entry: the type of matrix algebra required.
${\mathbf{laopt}}=\mathrm{Nag_LinAlgFull}$
Full matrix methods to be used.
${\mathbf{laopt}}=\mathrm{Nag_LinAlgBand}$
Banded matrix methods to be used.
${\mathbf{laopt}}=\mathrm{Nag_LinAlgSparse}$
Sparse matrix methods to be used.
Constraint: ${\mathbf{laopt}}=\mathrm{Nag_LinAlgFull}$, $\mathrm{Nag_LinAlgBand}$ or $\mathrm{Nag_LinAlgSparse}$
Note: you are recommended to use the banded option when no coupled ODEs are present (i.e., ${\mathbf{ncode}}=0$).
21: algopt[$30$]const doubleInput
On entry: may be set to control various options available in the integrator. If you wish to employ all the default options, then ${\mathbf{algopt}}\left[0\right]$ should be set to $0.0$. Default values will also be used for any other elements of algopt set to zero. The permissible values, default values, and meanings are as follows:
${\mathbf{algopt}}\left[0\right]$
Selects the ODE integration method to be used. If ${\mathbf{algopt}}\left[0\right]=1.0$, a BDF method is used and if ${\mathbf{algopt}}\left[0\right]=2.0$, a Theta method is used. The default value is ${\mathbf{algopt}}\left[0\right]=1.0$.
If ${\mathbf{algopt}}\left[0\right]=2.0$, then ${\mathbf{algopt}}\left[\mathit{i}-1\right]$, for $\mathit{i}=2,3,4$, are not used.
${\mathbf{algopt}}\left[1\right]$
Specifies the maximum order of the BDF integration formula to be used. ${\mathbf{algopt}}\left[1\right]$ may be $1.0$, $2.0$, $3.0$, $4.0$ or $5.0$. The default value is ${\mathbf{algopt}}\left[1\right]=5.0$.
${\mathbf{algopt}}\left[2\right]$
Specifies what method is to be used to solve the system of nonlinear equations arising on each step of the BDF method. If ${\mathbf{algopt}}\left[2\right]=1.0$ a modified Newton iteration is used and if ${\mathbf{algopt}}\left[2\right]=2.0$ a functional iteration method is used. If functional iteration is selected and the integrator encounters difficulty, then there is an automatic switch to the modified Newton iteration. The default value is ${\mathbf{algopt}}\left[2\right]=1.0$.
${\mathbf{algopt}}\left[3\right]$
Specifies whether or not the Petzold error test is to be employed. The Petzold error test results in extra overhead but is more suitable when algebraic equations are present, such as ${P}_{i,\mathit{j}}=0.0$, for $\mathit{j}=1,2,\dots ,{\mathbf{npde}}$, for some $i$ or when there is no ${\stackrel{.}{V}}_{i}\left(t\right)$ dependence in the coupled ODE system. If ${\mathbf{algopt}}\left[3\right]=1.0$, then the Petzold test is used. If ${\mathbf{algopt}}\left[3\right]=2.0$, then the Petzold test is not used. The default value is ${\mathbf{algopt}}\left[3\right]=1.0$.
If ${\mathbf{algopt}}\left[0\right]=1.0$, then ${\mathbf{algopt}}\left[\mathit{i}-1\right]$, for $\mathit{i}=5,6,7$, are not used.
${\mathbf{algopt}}\left[4\right]$
Specifies the value of Theta to be used in the Theta integration method. $0.51\le {\mathbf{algopt}}\left[4\right]\le 0.99$. The default value is ${\mathbf{algopt}}\left[4\right]=0.55$.
${\mathbf{algopt}}\left[5\right]$
Specifies what method is to be used to solve the system of nonlinear equations arising on each step of the Theta method. If ${\mathbf{algopt}}\left[5\right]=1.0$, a modified Newton iteration is used and if ${\mathbf{algopt}}\left[5\right]=2.0$, a functional iteration method is used. The default value is ${\mathbf{algopt}}\left[5\right]=1.0$.
${\mathbf{algopt}}\left[6\right]$
Specifies whether or not the integrator is allowed to switch automatically between modified Newton and functional iteration methods in order to be more efficient. If ${\mathbf{algopt}}\left[6\right]=1.0$, then switching is allowed and if ${\mathbf{algopt}}\left[6\right]=2.0$, then switching is not allowed. The default value is ${\mathbf{algopt}}\left[6\right]=1.0$.
${\mathbf{algopt}}\left[10\right]$
Specifies a point in the time direction, ${t}_{\mathrm{crit}}$, beyond which integration must not be attempted. The use of ${t}_{\mathrm{crit}}$ is described under the argument itask. If ${\mathbf{algopt}}\left[0\right]\ne 0.0$, a value of $0.0$, for ${\mathbf{algopt}}\left[10\right]$, say, should be specified even if itask subsequently specifies that ${t}_{\mathrm{crit}}$ will not be used.
${\mathbf{algopt}}\left[11\right]$
Specifies the minimum absolute step size to be allowed in the time integration. If this option is not required, ${\mathbf{algopt}}\left[11\right]$ should be set to $0.0$.
${\mathbf{algopt}}\left[12\right]$
Specifies the maximum absolute step size to be allowed in the time integration. If this option is not required, ${\mathbf{algopt}}\left[12\right]$ should be set to $0.0$.
${\mathbf{algopt}}\left[13\right]$
Specifies the initial step size to be attempted by the integrator. If ${\mathbf{algopt}}\left[13\right]=0.0$, then the initial step size is calculated internally.
${\mathbf{algopt}}\left[14\right]$
Specifies the maximum number of steps to be attempted by the integrator in any one call. If ${\mathbf{algopt}}\left[14\right]=0.0$, then no limit is imposed.
${\mathbf{algopt}}\left[22\right]$
Specifies what method is to be used to solve the nonlinear equations at the initial point to initialize the values of $U$, ${U}_{t}$, $V$ and $\stackrel{.}{V}$. If ${\mathbf{algopt}}\left[22\right]=1.0$, a modified Newton iteration is used and if ${\mathbf{algopt}}\left[22\right]=2.0$, functional iteration is used. The default value is ${\mathbf{algopt}}\left[22\right]=1.0$.
${\mathbf{algopt}}\left[28\right]$ and ${\mathbf{algopt}}\left[29\right]$ are used only for the sparse matrix algebra option, i.e., ${\mathbf{laopt}}=\mathrm{Nag_LinAlgSparse}$.
${\mathbf{algopt}}\left[28\right]$
Governs the choice of pivots during the decomposition of the first Jacobian matrix. It should lie in the range $0.0<{\mathbf{algopt}}\left[28\right]<1.0$, with smaller values biasing the algorithm towards maintaining sparsity at the expense of numerical stability. If ${\mathbf{algopt}}\left[28\right]$ lies outside this range then the default value is used. If the functions regard the Jacobian matrix as numerically singular then increasing ${\mathbf{algopt}}\left[28\right]$ towards $1.0$ may help, but at the cost of increased fill-in. The default value is ${\mathbf{algopt}}\left[28\right]=0.1$.
${\mathbf{algopt}}\left[29\right]$
Used as a relative pivot threshold during subsequent Jacobian decompositions (see ${\mathbf{algopt}}\left[28\right]$) below which an internal error is invoked. ${\mathbf{algopt}}\left[29\right]$ must be greater than zero, otherwise the default value is used. If ${\mathbf{algopt}}\left[29\right]$ is greater than $1.0$ no check is made on the pivot size, and this may be a necessary option if the Jacobian is found to be numerically singular (see ${\mathbf{algopt}}\left[28\right]$). The default value is ${\mathbf{algopt}}\left[29\right]=0.0001$.
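As a small illustration of these slots, the fragment below accepts every default except that it selects the BDF method explicitly, lowers its maximum order and caps the number of steps per call; the particular numbers are arbitrary choices, not recommendations.

```c
double algopt[30];
Integer i;

for (i = 0; i < 30; i++)
    algopt[i] = 0.0;      /* zero entries select the documented defaults */
algopt[0]  = 1.0;         /* BDF integration method */
algopt[1]  = 4.0;         /* maximum BDF order */
algopt[14] = 5000.0;      /* maximum number of steps in any one call */
```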
22: remeshNag_BooleanInput
On entry: indicates whether or not spatial remeshing should be performed.
${\mathbf{remesh}}=\mathrm{Nag_TRUE}$
Indicates that spatial remeshing should be performed as specified.
${\mathbf{remesh}}=\mathrm{Nag_FALSE}$
Indicates that spatial remeshing should be suppressed.
Note: remesh should not be changed between consecutive calls to nag_pde_parab_1d_keller_ode_remesh (d03prc). Remeshing can be switched off or on at specified times by using appropriate values for the arguments nrmesh and trmesh at each call.
23: nxfixIntegerInput
On entry: the number of fixed mesh points.
Constraint: $0\le {\mathbf{nxfix}}\le {\mathbf{npts}}-2$
Note: the end points ${\mathbf{x}}\left[0\right]$ and ${\mathbf{x}}\left[{\mathbf{npts}}-1\right]$ are fixed automatically and hence should not be specified as fixed points.
24: xfix[$\mathit{dim}$]const doubleInput
Note: the dimension, dim, of the array xfix must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{nxfix}}\right)$.
On entry: ${\mathbf{xfix}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{nxfix}}$, must contain the value of the $x$ coordinate at the $\mathit{i}$th fixed mesh point.
Constraint: ${\mathbf{xfix}}\left[\mathit{i}-1\right]<{\mathbf{xfix}}\left[\mathit{i}\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{nxfix}}-1$, and each fixed mesh point must coincide with a user-supplied initial mesh point, that is ${\mathbf{xfix}}\left[\mathit{i}-1\right]={\mathbf{x}}\left[\mathit{j}-1\right]$ for some $j$, $2\le j\le {\mathbf{npts}}-1$.
Note: the positions of the fixed mesh points in the array ${\mathbf{x}}$ remain fixed during remeshing, and so the number of mesh points between adjacent fixed points (or between fixed points and end points) does not change. You should take this into account when choosing the initial mesh distribution.
25: nrmeshIntegerInput
On entry: indicates the form of meshing to be performed.
${\mathbf{nrmesh}}<0$
Indicates that a new mesh is adopted according to the argument dxmesh. The mesh is tested every $\left|{\mathbf{nrmesh}}\right|$ timesteps.
${\mathbf{nrmesh}}=0$
Indicates that remeshing should take place just once at the end of the first time step reached when $t>{\mathbf{trmesh}}$.
${\mathbf{nrmesh}}>0$
Indicates that remeshing will take place every nrmesh time steps, with no testing using dxmesh.
Note: nrmesh may be changed between consecutive calls to nag_pde_parab_1d_keller_ode_remesh (d03prc) to give greater flexibility over the times of remeshing.
26: dxmeshdoubleInput
On entry: determines whether a new mesh is adopted when nrmesh is set less than zero. A possible new mesh is calculated at the end of every $\left|{\mathbf{nrmesh}}\right|$ time steps, but is adopted only if
$x_{i}^{\mathrm{new}}>x_{i}^{\mathrm{old}}+{\mathbf{dxmesh}}\times\left(x_{i+1}^{\mathrm{old}}-x_{i}^{\mathrm{old}}\right),$
or
$x_{i}^{\mathrm{new}}<x_{i}^{\mathrm{old}}-{\mathbf{dxmesh}}\times\left(x_{i}^{\mathrm{old}}-x_{i-1}^{\mathrm{old}}\right).$
dxmesh thus imposes a lower limit on the difference between one mesh and the next.
Constraint: ${\mathbf{dxmesh}}\ge 0.0$.
27: trmeshdoubleInput
On entry: specifies when remeshing will take place when nrmesh is set to zero. Remeshing will occur just once at the end of the first time step reached when $t$ is greater than trmesh.
Note: trmesh may be changed between consecutive calls to nag_pde_parab_1d_keller_ode_remesh (d03prc) to force remeshing at several specified times.
28: ipminfIntegerInput
On entry: the level of trace information regarding the adaptive remeshing.
${\mathbf{ipminf}}=0$
No trace information.
${\mathbf{ipminf}}=1$
Brief summary of mesh characteristics.
${\mathbf{ipminf}}=2$
More detailed information, including old and new mesh points, mesh sizes and monitor function values.
Constraint: ${\mathbf{ipminf}}=0$, $1$ or $2$.
29: xratiodoubleInput
On entry: input bound on adjacent mesh ratio (greater than $1.0$ and typically in the range $1.5$ to $3.0$). The remeshing functions will attempt to ensure that
$\left(x_{i}-x_{i-1}\right)/{\mathbf{xratio}}<x_{i+1}-x_{i}<{\mathbf{xratio}}\times\left(x_{i}-x_{i-1}\right).$
Suggested value: ${\mathbf{xratio}}=1.5$.
Constraint: ${\mathbf{xratio}}>1.0$.
30: condoubleInput
On entry: an input bound on the sub-integral of the monitor function ${F}^{\mathrm{mon}}\left(x\right)$ over each space step. The remeshing functions will attempt to ensure that
$\int_{x_{i}}^{x_{i+1}}F^{\mathrm{mon}}\left(x\right)\,\mathrm{d}x\le{\mathbf{con}}\int_{x_{1}}^{x_{{\mathbf{npts}}}}F^{\mathrm{mon}}\left(x\right)\,\mathrm{d}x,$
(see Furzeland (1984)). con gives you more control over the mesh distribution e.g., decreasing con allows more clustering. A typical value is $2/\left({\mathbf{npts}}-1\right)$, but you are encouraged to experiment with different values. Its value is not critical and the mesh should be qualitatively correct for all values in the range given below.
Suggested value: ${\mathbf{con}}=2.0/\left({\mathbf{npts}}-1\right)$.
Constraint: $0.1/\left({\mathbf{npts}}-1\right)\le {\mathbf{con}}\le 10.0/\left({\mathbf{npts}}-1\right)$.
31: monitffunction, supplied by the userExternal Function
monitf must supply and evaluate a remesh monitor function to indicate the solution behaviour of interest.
If ${\mathbf{remesh}}=\mathrm{Nag_FALSE}$, monitf will never be called and the NAG defined null void function pointer, NULLFN, can be supplied in the call to nag_pde_parab_1d_keller_ode_remesh (d03prc).
The specification of monitf is:
void monitf (double t, Integer npts, Integer npde, const double x[], const double u[], double fmon[], Nag_Comm *comm)
1: tdoubleInput
On entry: the current value of the independent variable $t$.
2: nptsIntegerInput
On entry: the number of mesh points in the interval $\left[a,b\right]$.
3: npdeIntegerInput
On entry: the number of PDEs in the system.
4: x[npts]const doubleInput
On entry: the current mesh. ${\mathbf{x}}\left[\mathit{i}-1\right]$ contains the value of ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npts}}$.
5: u[${\mathbf{npde}}×{\mathbf{npts}}$]const doubleInput
On entry: ${\mathbf{u}}\left[{\mathbf{npde}}×\left(\mathit{j}-1\right)+\mathit{i}-1\right]$ contains the value of ${U}_{\mathit{i}}\left(x,t\right)$ at $x={\mathbf{x}}\left[\mathit{j}-1\right]$ and time $t$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{npts}}$.
6: fmon[npts]doubleOutput
On exit: ${\mathbf{fmon}}\left[i-1\right]$ must contain the value of the monitor function ${F}^{\mathrm{mon}}\left(x\right)$ at mesh point $x={\mathbf{x}}\left[i-1\right]$.
Constraint: ${\mathbf{fmon}}\left[i-1\right]\ge 0.0$.
7: commNag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to monitf.
userdouble *
iuserInteger *
pPointer
The type Pointer will be void *. Before calling nag_pde_parab_1d_keller_ode_remesh (d03prc) you may allocate memory and initialize these pointers with various quantities for use by monitf when called from nag_pde_parab_1d_keller_ode_remesh (d03prc) (see Section 3.2.1 in the Essential Introduction).
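A possible monitf along the lines of the "second space derivative" choice mentioned in Section 3 is sketched below. The divided-difference formula, the use of only the first PDE component and the copying of end values are illustrative decisions, not NAG requirements; <math.h> is assumed for fabs.

```c
/* Monitor function: |d2U1/dx2| approximated by divided differences on the
 * (possibly non-uniform) current mesh; fmon must be non-negative. */
static void monitf(double t, Integer npts, Integer npde, const double x[],
                   const double u[], double fmon[], Nag_Comm *comm)
{
    Integer j;

    for (j = 1; j < npts - 1; j++)
    {
        double h1 = x[j] - x[j-1];
        double h2 = x[j+1] - x[j];
        double d1 = (u[npde*j]     - u[npde*(j-1)]) / h1;
        double d2 = (u[npde*(j+1)] - u[npde*j])     / h2;
        fmon[j] = fabs((d2 - d1) / (0.5 * (h1 + h2)));
    }
    fmon[0]      = fmon[1];
    fmon[npts-1] = fmon[npts-2];
}
```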
32: rsave[lrsave]doubleCommunication Array
If ${\mathbf{ind}}=0$, rsave need not be set on entry.
If ${\mathbf{ind}}=1$, rsave must be unchanged from the previous call to the function because it contains required information about the iteration.
33: lrsaveIntegerInput
On entry: the dimension of the array rsave. Its size depends on the type of matrix algebra selected.
If ${\mathbf{laopt}}=\mathrm{Nag_LinAlgFull}$, ${\mathbf{lrsave}}\ge {\mathbf{neqn}}×{\mathbf{neqn}}+{\mathbf{neqn}}+\mathit{nwkres}+\mathit{lenode}$.
If ${\mathbf{laopt}}=\mathrm{Nag_LinAlgBand}$, ${\mathbf{lrsave}}\ge \left(3×\mathit{ml}+\mathit{mu}+2\right)×{\mathbf{neqn}}+\mathit{nwkres}+\mathit{lenode}$.
If ${\mathbf{laopt}}=\mathrm{Nag_LinAlgSparse}$, ${\mathbf{lrsave}}\ge 4×{\mathbf{neqn}}+11×{\mathbf{neqn}}/2+1+\mathit{nwkres}+\mathit{lenode}$.
Where
• $\mathit{ml}$ and $\mathit{mu}$ are the lower and upper half bandwidths, given by $\mathit{ml}={\mathbf{npde}}+{\mathbf{nleft}}-1$ and $\mathit{mu}=2\times {\mathbf{npde}}-{\mathbf{nleft}}-1$ for problems involving PDEs only, or $\mathit{ml}=\mathit{mu}={\mathbf{neqn}}-1$ for coupled PDE/ODE problems;
• $\mathit{nwkres}={\mathbf{npde}}\times \left(3\times {\mathbf{npde}}+6\times {\mathbf{nxi}}+{\mathbf{npts}}+15\right)+{\mathbf{nxi}}+{\mathbf{ncode}}+7\times {\mathbf{npts}}+{\mathbf{nxfix}}+1$ when ${\mathbf{ncode}}>0$ and ${\mathbf{nxi}}>0$; or $\mathit{nwkres}={\mathbf{npde}}\times \left(3\times {\mathbf{npde}}+{\mathbf{npts}}+21\right)+{\mathbf{ncode}}+7\times {\mathbf{npts}}+{\mathbf{nxfix}}+2$ when ${\mathbf{ncode}}>0$ and ${\mathbf{nxi}}=0$; or $\mathit{nwkres}={\mathbf{npde}}\times \left(3\times {\mathbf{npde}}+{\mathbf{npts}}+21\right)+7\times {\mathbf{npts}}+{\mathbf{nxfix}}+3$ when ${\mathbf{ncode}}=0$;
• $\mathit{lenode}=\left(6+\mathrm{int}\left({\mathbf{algopt}}\left[1\right]\right)\right)\times {\mathbf{neqn}}+50$ when the BDF method is used, or $\mathit{lenode}=9\times {\mathbf{neqn}}+50$ when the Theta method is used.
Note: when using the sparse option, the value of lrsave may be too small when supplied to the integrator. An estimate of the minimum size of lrsave is printed on the current error message unit if ${\mathbf{itrace}}>0$ and the function returns with NE_INT_2.
34: isave[lisave]IntegerCommunication Array
If ${\mathbf{ind}}=0$, isave need not be set.
If ${\mathbf{ind}}=1$, isave must be unchanged from the previous call to the function because it contains required information about the iteration. In particular the following components of the array isave concern the efficiency of the integration:
${\mathbf{isave}}\left[0\right]$
Contains the number of steps taken in time.
${\mathbf{isave}}\left[1\right]$
Contains the number of residual evaluations of the resulting ODE system used. One such evaluation involves evaluating the PDE functions at all the mesh points, as well as one evaluation of the functions in the boundary conditions.
${\mathbf{isave}}\left[2\right]$
Contains the number of Jacobian evaluations performed by the time integrator.
${\mathbf{isave}}\left[3\right]$
Contains the order of the ODE method last used in the time integration.
${\mathbf{isave}}\left[4\right]$
Contains the number of Newton iterations performed by the time integrator. Each iteration involves residual evaluation of the resulting ODE system followed by a back-substitution using the $LU$ decomposition of the Jacobian matrix.
The rest of the array is used as workspace.
35: lisaveIntegerInput
On entry: the dimension of the array isave. Its size depends on the type of matrix algebra selected:
• if ${\mathbf{laopt}}=\mathrm{Nag_LinAlgFull}$, ${\mathbf{lisave}}\ge 25+{\mathbf{nxfix}}$;
• if ${\mathbf{laopt}}=\mathrm{Nag_LinAlgBand}$, ${\mathbf{lisave}}\ge {\mathbf{neqn}}+25+{\mathbf{nxfix}}$;
• if ${\mathbf{laopt}}=\mathrm{Nag_LinAlgSparse}$, ${\mathbf{lisave}}\ge 25×{\mathbf{neqn}}+25+{\mathbf{nxfix}}$.
Note: when using the sparse option, the value of lisave may be too small when supplied to the integrator. An estimate of the minimum size of lisave is printed if ${\mathbf{itrace}}>0$ and the function returns with NE_INT_2.
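For orientation, the fragment below assembles the minimum lrsave and lisave for the banded case with no coupled ODEs and the BDF integrator at its default maximum order of 5, directly from the formulae above; npde, npts, ncode (here 0), nleft and nxfix are assumed to be set already, and over-allocation is harmless.

```c
Integer neqn   = npde * npts + ncode;
Integer ml     = npde + nleft - 1;              /* lower half bandwidth     */
Integer mu     = 2 * npde - nleft - 1;          /* upper half bandwidth     */
Integer nwkres = npde * (3*npde + npts + 21)    /* ncode = 0 branch         */
                 + 7*npts + nxfix + 3;
Integer lenode = (6 + 5) * neqn + 50;           /* BDF with maximum order 5 */
Integer lrsave = (3*ml + mu + 2) * neqn + nwkres + lenode;
Integer lisave = neqn + 25 + nxfix;
```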
36: itaskIntegerInput
On entry: the task to be performed by the ODE integrator.
${\mathbf{itask}}=1$
Normal computation of output values u at $t={\mathbf{tout}}$ (by overshooting and interpolating).
${\mathbf{itask}}=2$
Take one step in the time direction and return.
${\mathbf{itask}}=3$
Stop at first internal integration point at or beyond $t={\mathbf{tout}}$.
${\mathbf{itask}}=4$
Normal computation of output values u at $t={\mathbf{tout}}$ but without overshooting $t={t}_{\mathrm{crit}}$, where ${t}_{\mathrm{crit}}$ is described under the argument algopt.
${\mathbf{itask}}=5$
Take one step in the time direction and return, without passing ${t}_{\mathrm{crit}}$, where ${t}_{\mathrm{crit}}$ is described under the argument algopt.
Constraint: ${\mathbf{itask}}=1$, $2$, $3$, $4$ or $5$.
37: itraceIntegerInput
On entry: the level of trace information required from nag_pde_parab_1d_keller_ode_remesh (d03prc) and the underlying ODE solver as follows:
${\mathbf{itrace}}\le -1$
No output is generated.
${\mathbf{itrace}}=0$
Only warning messages from the PDE solver are printed.
${\mathbf{itrace}}=1$
Output from the underlying ODE solver is printed. This output contains details of Jacobian entries, the nonlinear iteration and the time integration during the computation of the ODE system.
${\mathbf{itrace}}=2$
Output from the underlying ODE solver is similar to that produced when ${\mathbf{itrace}}=1$, except that the advisory messages are given in greater detail.
${\mathbf{itrace}}\ge 3$
The output from the underlying ODE solver is similar to that produced when ${\mathbf{itrace}}=2$, except that the advisory messages are given in greater detail.
38: outfileconst char *Input
On entry: the name of a file to which diagnostic output will be directed. If outfile is NULL the diagnostic output will be directed to standard output.
39: indInteger *Input/Output
On entry: indicates whether this is a continuation call or a new integration.
${\mathbf{ind}}=0$
Starts or restarts the integration in time.
${\mathbf{ind}}=1$
Continues the integration after an earlier exit from the function. In this case, only the arguments tout and fail and the remeshing arguments nrmesh, dxmesh, trmesh, xratio and con may be reset between calls to nag_pde_parab_1d_keller_ode_remesh (d03prc).
Constraint: ${\mathbf{ind}}=0$ or $1$.
On exit: ${\mathbf{ind}}=1$.
40: commNag_Comm *Communication Structure
The NAG communication argument (see Section 3.2.1.1 in the Essential Introduction).
41: savedNag_D03_Save *Communication Structure
saved must remain unchanged following a previous call to a Chapter d03 function and prior to any subsequent call to a Chapter d03 function.
42: failNagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ACC_IN_DOUBT
Integration completed, but small changes in atol or rtol are unlikely to result in a changed solution.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_FAILED_DERIV
In setting up the ODE system an internal auxiliary was unable to initialize the derivative. This could be due to your setting ${\mathbf{ires}}=3$ in pdedef or bndary.
NE_FAILED_START
atol and rtol were too small to start integration.
NE_FAILED_STEP
Error during Jacobian formulation for ODE system. Increase itrace for further details.
Repeated errors in an attempted step of underlying ODE solver. Integration was successful as far as ts: ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
Underlying ODE solver cannot make further progress from the point ts with the supplied values of atol and rtol. ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
NE_INCOMPAT_PARAM
On entry, ${\mathbf{con}}=〈\mathit{\text{value}}〉$, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{con}}\le 10.0/\left({\mathbf{npts}}-1\right)$.
On entry, ${\mathbf{con}}=〈\mathit{\text{value}}〉$, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{con}}\ge 0.1/\left({\mathbf{npts}}-1\right)$.
On entry, the point ${\mathbf{xfix}}\left[i-1\right]$ does not coincide with any ${\mathbf{x}}\left[j-1\right]$: $i=〈\mathit{\text{value}}〉$ and ${\mathbf{xfix}}\left[i-1\right]=〈\mathit{\text{value}}〉$.
NE_INT
ires set to an invalid value in call to pdedef, bndary, or odedef.
On entry, ${\mathbf{ind}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ind}}=0$ or $1$.
On entry, ${\mathbf{ipminf}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ipminf}}=0$, $1$ or $2$.
On entry, ${\mathbf{itask}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{itask}}=1$, $2$, $3$, $4$ or $5$.
On entry, ${\mathbf{itol}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{itol}}=1$, $2$, $3$ or $4$.
On entry, ${\mathbf{ncode}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ncode}}\ge 0$.
On entry, ${\mathbf{nleft}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nleft}}\ge 0$.
On entry, ${\mathbf{npde}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{npde}}\ge 1$.
On entry, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{npts}}\ge 3$.
On entry, ${\mathbf{nxfix}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nxfix}}\ge 0$.
NE_INT_2
On entry, corresponding elements ${\mathbf{atol}}\left[i-1\right]$ and ${\mathbf{rtol}}\left[j-1\right]$ are both zero: $i=〈\mathit{\text{value}}〉$ and $j=〈\mathit{\text{value}}〉$.
On entry, lisave is too small: ${\mathbf{lisave}}=〈\mathit{\text{value}}〉$. Minimum possible dimension: $〈\mathit{\text{value}}〉$.
On entry, lrsave is too small: ${\mathbf{lrsave}}=〈\mathit{\text{value}}〉$. Minimum possible dimension: $〈\mathit{\text{value}}〉$.
On entry, ${\mathbf{ncode}}=〈\mathit{\text{value}}〉$ and ${\mathbf{nxi}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nxi}}=0$ when ${\mathbf{ncode}}=0$.
On entry, ${\mathbf{ncode}}=〈\mathit{\text{value}}〉$ and ${\mathbf{nxi}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nxi}}\ge 0$ when ${\mathbf{ncode}}>0$.
On entry, ${\mathbf{nleft}}=〈\mathit{\text{value}}〉$, ${\mathbf{npde}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nleft}}\le {\mathbf{npde}}$.
On entry, ${\mathbf{nxfix}}=〈\mathit{\text{value}}〉$, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nxfix}}\le {\mathbf{npts}}-2$.
When using the sparse option lisave or lrsave is too small: ${\mathbf{lisave}}=〈\mathit{\text{value}}〉$, ${\mathbf{lrsave}}=〈\mathit{\text{value}}〉$.
NE_INT_4
On entry, ${\mathbf{neqn}}=〈\mathit{\text{value}}〉$, ${\mathbf{npde}}=〈\mathit{\text{value}}〉$, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$ and ${\mathbf{ncode}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{neqn}}={\mathbf{npde}}×{\mathbf{npts}}+{\mathbf{ncode}}$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
Serious error in internal call to an auxiliary. Increase itrace for further details.
NE_ITER_FAIL
In solving ODE system, the maximum number of steps ${\mathbf{algopt}}\left[14\right]$ has been exceeded. ${\mathbf{algopt}}\left[14\right]=〈\mathit{\text{value}}〉$.
NE_NOT_CLOSE_FILE
Cannot close file $〈\mathit{\text{value}}〉$.
NE_NOT_STRICTLY_INCREASING
On entry, $i=〈\mathit{\text{value}}〉$, ${\mathbf{xfix}}\left[i\right]=〈\mathit{\text{value}}〉$ and ${\mathbf{xfix}}\left[i-1\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{xfix}}\left[i\right]>{\mathbf{xfix}}\left[i-1\right]$.
On entry, $i=〈\mathit{\text{value}}〉$, ${\mathbf{xi}}\left[i\right]=〈\mathit{\text{value}}〉$ and ${\mathbf{xi}}\left[i-1\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{xi}}\left[i\right]>{\mathbf{xi}}\left[i-1\right]$.
On entry, mesh points x appear to be badly ordered: $i=〈\mathit{\text{value}}〉$, ${\mathbf{x}}\left[i-1\right]=〈\mathit{\text{value}}〉$, $j=〈\mathit{\text{value}}〉$ and ${\mathbf{x}}\left[j-1\right]=〈\mathit{\text{value}}〉$.
NE_NOT_WRITE_FILE
Cannot open file $〈\mathit{\text{value}}〉$ for writing.
NE_REAL
On entry, ${\mathbf{dxmesh}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{dxmesh}}\ge 0.0$.
On entry, ${\mathbf{xratio}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{xratio}}>1.0$.
NE_REAL_2
On entry, at least one point in xi lies outside $\left[{\mathbf{x}}\left[0\right],{\mathbf{x}}\left[{\mathbf{npts}}-1\right]\right]$: ${\mathbf{x}}\left[0\right]=〈\mathit{\text{value}}〉$ and ${\mathbf{x}}\left[{\mathbf{npts}}-1\right]=〈\mathit{\text{value}}〉$.
On entry, ${\mathbf{tout}}=〈\mathit{\text{value}}〉$ and ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{tout}}>{\mathbf{ts}}$.
On entry, ${\mathbf{tout}}-{\mathbf{ts}}$ is too small: ${\mathbf{tout}}=〈\mathit{\text{value}}〉$ and ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
NE_REAL_ARRAY
On entry, $i=〈\mathit{\text{value}}〉$ and ${\mathbf{atol}}\left[i-1\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{atol}}\left[i-1\right]\ge 0.0$.
On entry, $i=〈\mathit{\text{value}}〉$ and ${\mathbf{rtol}}\left[i-1\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{rtol}}\left[i-1\right]\ge 0.0$.
NE_REMESH_CHANGED
remesh has been changed between calls to nag_pde_parab_1d_keller_ode_remesh (d03prc).
NE_SING_JAC
Singular Jacobian of ODE system. Check problem formulation.
NE_USER_STOP
In evaluating residual of ODE system, ${\mathbf{ires}}=2$ has been set in pdedef, bndary, or odedef. Integration is successful as far as ts: ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
NE_ZERO_WTS
Zero error weights encountered during time integration.
## 7 Accuracy
nag_pde_parab_1d_keller_ode_remesh (d03prc) controls the accuracy of the integration in the time direction but not the accuracy of the approximation in space. The spatial accuracy depends on both the number of mesh points and on their distribution in space. In the time integration only the local error over a single step is controlled and so the accuracy over a number of steps cannot be guaranteed. You should therefore test the effect of varying the accuracy arguments, atol and rtol.
The Keller box scheme can be used to solve higher-order problems which have been reduced to first-order by the introduction of new variables (see the example in Section 9). In general, a second-order problem can be solved with slightly greater accuracy using the Keller box scheme instead of a finite difference scheme (nag_pde_parab_1d_fd_ode_remesh (d03ppc) for example), but at the expense of increased CPU time due to the larger number of function evaluations required.
It should be noted that the Keller box scheme, in common with other central-difference schemes, may be unsuitable for some hyperbolic first-order problems such as the apparently simple linear advection equation ${U}_{t}+a{U}_{x}=0$, where $a$ is a constant, resulting in spurious oscillations due to the lack of dissipation. This type of problem requires a discretization scheme with upwind weighting
(nag_pde_parab_1d_cd_ode_remesh (d03psc) for example), or the addition of a second-order artificial dissipation term.
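For illustration (this example is added here and is not part of the original routine document), a first-order upwind discretisation of the advection equation above, for constant $a>0$, replaces the centred space difference by a one-sided one, which supplies the dissipation that suppresses the spurious oscillations:
$U_j^{n+1} = U_j^n - \frac{a\,\Delta t}{\Delta x}\left(U_j^n - U_{j-1}^n\right),$
which is stable for $0 \le a\,\Delta t/\Delta x \le 1$; for $a<0$ the one-sided difference uses $U_{j+1}^n - U_j^n$ instead.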
The time taken depends on the complexity of the system, the accuracy requested, and the frequency of the mesh updates. For a given system with fixed accuracy and mesh-update frequency it is approximately proportional to neqn.
## 9 Example
This example is the first-order system
$\frac{\partial U_1}{\partial t} + \frac{\partial U_1}{\partial x} + \frac{\partial U_2}{\partial x} = 0, \qquad \frac{\partial U_2}{\partial t} + 4\frac{\partial U_1}{\partial x} + \frac{\partial U_2}{\partial x} = 0,$
for $x\in \left[0,1\right]$ and $t\ge 0$.
The initial conditions are
$U_1(x,0) = e^{x}, \qquad U_2(x,0) = x^{2} + \sin\big(2\pi x^{2}\big),$
and the Dirichlet boundary conditions for ${U}_{1}$ at $x=0$ and ${U}_{2}$ at $x=1$ are given by the exact solution:
$U_1(x,t) = \tfrac{1}{2}\left[e^{x+t} + e^{x-3t}\right] + \tfrac{1}{4}\left[\sin\big(2\pi (x-3t)^{2}\big) - \sin\big(2\pi (x+t)^{2}\big)\right] + 2t^{2} - 2xt,$
$U_2(x,t) = e^{x-3t} - e^{x+t} + \tfrac{1}{2}\left[\sin\big(2\pi (x-3t)^{2}\big) + \sin\big(2\pi (x+t)^{2}\big)\right] + x^{2} + 5t^{2} - 2xt.$
### 9.1 Program Text
Program Text (d03prce.c)
### 9.2 Program Data
None.
### 9.3 Program Results
Program Results (d03prce.r)
https://adicspaces.com/2015/11/17/i-10-almost-mathematics-ii/ | # I.10: Almost mathematics (II)
I continue by introducing standard notions from commutative algebra in the context of almost mathematics. For references to the proofs of the following definitions and propositions, see Proposition 4.7 in Scholze's "Perfectoid spaces". In this blog post I follow the exposition in Scholze's paper closely (consult it for all references).
Definition 42 Let ${A}$ be a ${K^{\circ a}}$-algebra
1. An ${A}$-module ${M}$ is flat if the functor ${X \mapsto M \otimes _A X}$ on ${A}$-modules is exact. If ${R}$ is a ${K^{\circ}}$-algebra and ${N}$ is an ${R}$-module, then the ${R^a}$-module ${N^a}$ is flat if and only if for all ${R}$-modules ${X}$ and all ${i >0}$, the module ${\mathrm{Tor} ^R _i(N,X)}$ is almost zero.
2. An ${A}$-module ${M}$ is almost projective if the functor ${X \mapsto al\mathrm{Hom} _A(M,X)}$ on ${A}$-modules is exact. If ${R}$ is a ${K^{\circ}}$-algebra and ${N}$ is an ${R}$-module, then ${N^a}$ is almost projective over ${R^a}$ if and only if for all ${R}$-modules ${X}$ and all ${i >0}$, the module ${\mathrm{Ext} ^i _R(N,X)}$ is almost zero.
3. If ${R}$ is a ${K^{\circ}}$-algebra and ${N}$ is an ${R}$-module, then we say ${M=N^a}$ is an almost finitely generated (resp. almost finitely presented) ${R^a}$-module if and only if for all ${\epsilon \in \mathfrak{m}}$, there is some finitely generated (resp. finitely presented) ${R}$-module ${N_{\epsilon}}$ with a map ${f _{\epsilon}: N_{\epsilon} \rightarrow N}$ such that the kernel and cokernel of ${f_{\epsilon}}$ are annihilated by ${\epsilon}$. We say ${M}$ is uniformly almost finitely generated if there is some integer ${n}$ such that ${N_{\epsilon}}$ can be chosen to be generated by ${n}$ elements for all ${\epsilon}$.
Proposition 43 Let ${A}$ be a ${K^{\circ a}}$-algebra. Then an ${A}$-module ${M}$ is flat and almost finitely presented if and only if it is almost projective and almost finitely generated.
We shall call such ${A}$-modules ${M}$ finite projective. If moreover ${M}$ is uniformly almost finitely generated, we say ${M}$ is uniformly finite projective. For such modules we have a good notion of rank.
Theorem 44 Let ${A}$ be a ${K^{\circ a}}$-algebra and let ${M}$ be a uniformly finite projective ${A}$-module. Then there is a unique decomposition ${A = A_0 \times A_1 \times ... \times A_k}$ such that for each ${i=0,...,k}$ the ${A_i}$-module ${M_i = M \otimes _A A_i}$ has the property that ${\wedge ^i M_i}$ is invertible, and ${\wedge ^{i+1} M_i = 0}$. Here, we call ${A}$-module ${L}$ invertible if ${L \otimes _A al\mathrm{Hom} _A(L,A)= A}$.
We introduce the notion of étale morphisms
Definition 45 Let ${A}$ be a ${K^{\circ a}}$-algebra and let ${B}$ be an ${A}$-algebra. Let ${\mu : B\otimes _A B \rightarrow B}$ denote the multiplication morphism.
(1) The morphism ${A \rightarrow B}$ is unramified if there is an element ${e \in (B\otimes _A B)_{*}}$ such that ${e^2 = e}$, ${\mu(e) = 1}$ and ${xe =0}$ for all ${x\in \ker (\mu)_{*}}$.
(2) The morphism ${A \rightarrow B}$ is étale if it is unramified and ${B}$ is a flat ${A}$-module.
Definition 46 A morphism ${A \rightarrow B}$ of ${K^{\circ a}}$-algebras is finite étale if it is étale and ${B}$ is an almost finitely presented ${A}$-module. We write ${A_{fet}}$ for the category of finite étale ${A}$-algebras.
Observe that any finite étale ${A}$-algebra is also a finite projective ${A}$-module. There is an equivalent characterization of finite étale morphisms in terms of trace morphisms. For a ${K^{\circ a}}$-algebra ${A}$ and a finite projective ${A}$-module ${P}$ we define ${P^{*} = al\mathrm{Hom} _A(P,A)}$, which is also a finite projective ${A}$-module. Moreover ${P^{**} \simeq P}$ canonically, and there is an isomorphism
$\displaystyle \mathrm{End}(P)^a = P \otimes _A P^{*}$
In particular, we get a trace morphism ${\mathrm{tr} _{P/A} : \mathrm{End}(P)^a \rightarrow A}$
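Concretely (a standard description added here for the reader, not a claim from the post or its sources): under the identification above the trace morphism is induced by the evaluation map

$\displaystyle P \otimes _A P^{*} \rightarrow A, \quad x \otimes \varphi \mapsto \varphi(x),$

where ${x \otimes \varphi}$ corresponds to the endomorphism ${y \mapsto \varphi(y)x}$.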
Definition 47 Let ${A}$ be a ${K^{\circ a}}$-algebra and let ${B}$ be an ${A}$-algebra such that ${B}$ is a finite projective ${A}$-module. We define the trace form as the bilinear form
$\displaystyle t _{B/A} : B\otimes _A B \rightarrow A$
given by the composition of ${\mu : B \otimes _A B \rightarrow B}$ and the map ${B \rightarrow A}$ sending any ${b\in B}$ to the trace of the endomorphism ${b' \mapsto bb'}$ of ${B}$.
We remark that the last map is defined on ${(.)_{*}}$ as we cannot talk about elements ${b}$ of an almost object ${B}$. That is, we construct a map ${B _{*} \rightarrow \mathrm{End} _{A_{*}}(B_{*})^a}$ in the same way and then pass to the almost setting by taking ${(.)^a}$.
Theorem 48 If ${A,B}$ are as in the definition above, then ${A \rightarrow B}$ is finite étale if and only if the trace map is a perfect pairing, that is it induces an isomorphism ${B \simeq B^{*}}$.
We finish by saying that finite étale covers lift uniquely over nilpotents (recall ${\varpi}$ was a uniformizer of our base field):
Theorem 49 Let ${A}$ be a ${K^{\circ a}}$-algebra. Assume that ${A}$ is flat over ${K^{\circ a}}$ and ${\varpi}$-adically complete, that is
$\displaystyle A \simeq \varprojlim _n A/\varpi ^n$
Then the functor ${B \mapsto B \otimes _A A/\varpi}$ induces an equivalence of categories ${A _{fet} \simeq (A/\varpi)_{fet}}$. Any ${B \in A_{fet}}$ is again flat over ${K^{\circ a}}$ and ${\varpi}$-adically complete. Moreover ${B}$ is a uniformly finite projective ${A}$-module if and only if ${B \otimes _A A/\varpi}$ is a uniformly finite projective ${A/\varpi}$-module.
http://mathhelpforum.com/geometry/174858-euclidean-taxicab-problem.html | # Math Help - Euclidean and Taxicab problem
1. ## Euclidean and Taxicab problem
Hi guys. I have a homework problem that I don't know how to approach.
Find graphically (using MATLAB) for each of the following cases, in both Euclidean geometry, and Taxicab geometry the loci of points that satisfy the following properties:
(a) The points whose distance from the axes origin is 8.
(b) The points which have the sum of their distances to the points A(-3,-4) and B(4, 3) equal to 16.
(c) The points which have the difference of their distances to the points A(-3,-4) and B(4,3) equal to 2.
(d) The point S such that d(S,P) + d(S,Q) = d(P,Q), where P(-2,3) and Q(1,-4)
The instructor gives hints to each of the cases:
a) a circle centered at the origin and whose radius is 8,
b) an ellipse with A and B as foci,
c) a hyperbola branch with A and B as foci, and
d) a point R(-1/2, -1/2). The solution for this case, in Taxicab geometry, is the set of all points inside the rectangle shown (not shaded in order to show point R).
-----------
I didn't get part c and part d. I know what Taxicab geometry is. I know how to compute the distance. I can use MATLAB to generate the graphical representations for parts a and b.
Code:
% code for part a
x = -10:0.1:10;
y = -10:0.1:10;
Ax = 0;
Ay = 0;
Bx = -1;
By = 0;
[X,Y] = meshgrid(x,y);
Z1=(abs(X)+abs(Y))+(abs(X)+abs(Y)); % taxicab distance to the origin, counted twice (contour level 16 gives distance 8)
contour(X,Y,Z1,[16 16],'r')
hold on
Z2 = sqrt((X).^2+(Y).^2)+sqrt((X).^2+(Y).^2); % Euclidean distance to the origin, counted twice
contour(X,Y,Z2,[16 16],'b--')
plot(Ax,Ay,'k*',Bx,By,'ko')
axis square
set(gca,'Xtick',[-10:1:10],'Ytick',[-10:1:10])
I think I can handle the MATLAB part, but I just didn't know how to solve for c and d.
The answer to part c is shown below
How do I solve these two analytically?
2. For D, i tried to solve the absolute value equation
(d(S,P) + d(S,Q) = d(P,Q)) where d(P,Q) is 10
[ |Sx - Px | + |Sy - Py| ] + [ | Sx - Qx| + | Sy - Qy| ] = |Px - Qx| + |Py - Qy|
I would get two unknowns and two equations. The two equations come from the absolute value equation, that is negative of the left expression, and the negative of the right expression
I get y = 7, x = -3
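(Editorial note, a hedged sketch rather than part of the original posts.) For part (d) in the taxicab metric the answer can be read off from the triangle inequality for absolute values:

$d(S,P)+d(S,Q) = \big(|S_x-P_x|+|S_x-Q_x|\big)+\big(|S_y-P_y|+|S_y-Q_y|\big) \ge |P_x-Q_x|+|P_y-Q_y| = d(P,Q),$

with equality exactly when $S_x$ lies between $P_x$ and $Q_x$ and $S_y$ lies between $P_y$ and $Q_y$. With P(-2,3) and Q(1,-4) this gives the filled rectangle $-2 \le S_x \le 1$, $-4 \le S_y \le 3$, consistent with the instructor's hint (R(-1/2,-1/2) is one of its interior points). In the Euclidean metric the equality case forces S onto the segment PQ instead.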
https://www.physicsforums.com/threads/airfoil-pressure.177863/ | # Airfoil pressure
1. Jul 22, 2007
### hallo
Hi,
velocity of air = 0; an airfoil or wing is moving forwards with a velocity of 40 m/s at an altitude of 1100 m. At a certain point close to the wing (a point on top of the wing), the air speed relative to the wing is 50 m/s. Find the pressure at that point.
Somebody pls help me.....
I try to use the formula 1/2*(density)*(velocity)^2 + pressure + (density)*g*z, which is Bernoulli's equation, and it is wrong.....
thanx
2. Jul 23, 2007
### chaoseverlasting
1/2d*40*40 +p1=1/2d*50*50 +p2. I think this should be it. Is this what you did?
3. Jul 23, 2007
### hallo
yes,but how to get p1 and p2 ??
4. Jul 23, 2007
### Staff: Mentor
5. Jul 23, 2007
### hallo
static air pressure equals 1.23 * 9.81 * 1100 m. Am I right?? but the answer is wrong if I use this value...
6. Jul 23, 2007
### Staff: Mentor
It would appear that one is using $\rho{gh}$, and therefore one is finding the pressure at the bottom of a column of air of 1100 m height. Of course, one can subtract that value from 1 atm and obtain a reasonable approximation of the air pressure at 1100 m above sea level.
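(Added worked sketch; the density and sea-level values below are standard-atmosphere assumptions, not numbers from this thread.) The static pressure at 1100 m is roughly

$p_\infty \approx p_0 - \rho_0 g h \approx 101325 - 1.225 \times 9.81 \times 1100 \approx 8.81 \times 10^{4}\ \mathrm{Pa},$

and Bernoulli's equation written in the frame of the wing then gives

$p = p_\infty + \tfrac{1}{2}\rho\,(V_\infty^{2} - v^{2}) \approx 8.81 \times 10^{4} + \tfrac{1}{2}(1.1)(40^{2} - 50^{2}) \approx 8.76 \times 10^{4}\ \mathrm{Pa},$

taking $\rho \approx 1.1\ \mathrm{kg/m^3}$ as an estimate of the air density at that altitude.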
Last edited: Jul 23, 2007
https://export.arxiv.org/abs/2205.06879 | physics.flu-dyn
# Title: Reynolds number dependence of Lagrangian dispersion in direct numerical simulations of anisotropic magnetohydrodynamic turbulence
Abstract: Large-scale magnetic fields thread through the electrically conducting matter of the interplanetary and interstellar medium, stellar interiors, and other astrophysical plasmas, producing anisotropic flows with regions of high-Reynolds-number turbulence. It is common to encounter turbulent flows structured by a magnetic field with a strength approximately equal to the root-mean-square magnetic fluctuations. In this work, direct numerical simulations of anisotropic magnetohydrodynamic (MHD) turbulence influenced by such a magnetic field are conducted for a series of cases that have identical resolution, and increasing grid sizes up to $2048^3$. The result is a series of closely comparable simulations at Reynolds numbers ranging from 1,400 up to 21,000. We investigate the influence of the Reynolds number from the Lagrangian viewpoint by tracking fluid particles and calculating single-particle and two-particle statistics. The influence of Alfv\'enic fluctuations and the fundamental anisotropy on the MHD turbulence in these statistics is discussed. Single-particle diffusion curves exhibit mildly superdiffusive behaviors that differ in the direction aligned with the magnetic field and the direction perpendicular to it. Competing alignment processes affect the dispersion of particle pairs, in particular at the beginning of the inertial subrange of time scales. Scalings for relative dispersion, which become clearer in the inertial subrange for larger Reynolds number, can be observed that are steeper than indicated by the Richardson prediction.
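For context (an added note giving standard background, not a claim of the paper): the Richardson prediction referred to is the classical inertial-range scaling of the mean-square pair separation,

$\langle |\Delta \mathbf{r}(t)|^{2} \rangle \sim g\,\varepsilon\,t^{3},$

where $\varepsilon$ is the mean energy dissipation rate and $g$ the Richardson constant; scalings "steeper than Richardson" therefore mean growth faster than $t^{3}$ over the observed range of time scales.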
Comments: 23 pages, 11 figures
Subjects: Fluid Dynamics (physics.flu-dyn); High Energy Astrophysical Phenomena (astro-ph.HE); Solar and Stellar Astrophysics (astro-ph.SR); Computational Physics (physics.comp-ph); Plasma Physics (physics.plasm-ph)
Cite as: arXiv:2205.06879 [physics.flu-dyn] (or arXiv:2205.06879v1 [physics.flu-dyn] for this version)
## Submission history
From: Jane Pratt [view email]
[v1] Fri, 13 May 2022 20:28:10 GMT (3370kb,D)
https://24tutors.com/hc-verma-shm/ | # Simple Harmonics Motion
## Concepts of Physics
### H C Verma
1 The position, velocity and acceleration of a particle executing simple harmonic motion are found to have magnitudes $2$ cm, $1$ m/s and $10$ m/s$^{2}$ at a certain instant. Find the amplitude and the time period of the motion.
##### Solution :
Given: $x$ = $2$ cm = $0.02$ m, $v$ = $1$ m/s and $a$ = $10$ m/s$^{2}$ at the same instant.
We know that $a$ = $\omega^{2}x$, so $\omega^{2}$ = $\frac{a}{x}$ = $\frac{10}{0.02}$ = $500$ $\Rightarrow$ $\omega$ = $10\sqrt5$ s$^{-1}$.
$\therefore$ T = $\frac{2\pi}{\omega}$ = $\frac{2\pi}{10\sqrt5}$ = $\frac{2 \times 3.14}{10 \times 2.236}$ = $0.28$ seconds.
Again, the amplitude $r$ is given by $v$ = $\omega\sqrt{r^{2}-x^{2}}$ $\Rightarrow$ $v^{2}$ = $\omega^{2}$($r^{2}$-$x^{2}$)
$\Rightarrow$ $1$ = $500\,(r^{2}-0.0004)$ $\Rightarrow$ $r^{2}$ = $0.0024$ $\Rightarrow$ $r$ $\approx$ $0.049$ m.
$\therefore$ $r$ $\approx$ $4.9$ cm.
3 A particle executes simple harmonic motion of amplitude $r$ = $10$ cm. At what distance from the mean position are its kinetic and potential energies equal?
##### Solution :
Because K.E. = P.E., $(1/2)\,m\omega^{2}(r^{2}-y^{2})$ = $(1/2)\,m\omega^{2}y^{2}$
$\Rightarrow$ $r^{2} - y^{2}$ = $y^{2}$ $\Rightarrow$ $2y^{2}$ = $r^{2}$ $\Rightarrow$ $y$ = $\frac{r}{\sqrt2}$ = $\frac{10}{\sqrt2}$ = $5\sqrt2$ cm from the mean position.
4 The maximum speed and acceleration of a particle executing simple harmonic motion are 10 cm/s and 50 cm/s$^{2}$. Find the position(s) of the particle when the speed is 8 cm/s.
##### Solution :
$V_{max}$ = $r\omega$ = $10$ cm/s $\Rightarrow$ $\omega^{2}$ = $\frac{100}{r^{2}}$ ......(1)
$A_{max}$ = $\omega^{2}r$ = $50$ cm/s$^{2}$ $\Rightarrow$ $\omega^{2}$ = $\frac{50}{r}$ ......(2)
From (1) and (2): $\frac{100}{r^2}$ = $\frac{50}{r}$ $\Rightarrow$ $r$ = $2$ cm and $\omega$ = $\sqrt{\frac{100}{r^2}}$ = $5$ s$^{-1}$.
To find the positions where the speed is 8 cm/s, use $V^{2}$ = $\omega^{2}(r^{2}-y^{2})$:
$64$ = $25\,(4 - y^{2})$ $\Rightarrow$ $(4 - y^{2})$ = $\frac{64}{25}$ $\Rightarrow$ $y^{2}$ = $1.44$ $\Rightarrow$ $y$ = $\pm 1.2$ cm from the mean position.
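(Added cross-check.) With $r$ = $2$ cm and $\omega$ = $5$ s$^{-1}$ the given data are reproduced and the speed at $y$ = $\pm 1.2$ cm comes out correctly:

$V_{max}$ = $r\omega$ = $10$ cm/s, $\quad$ $A_{max}$ = $\omega^{2}r$ = $50$ cm/s$^{2}$, $\quad$ $v$ = $\omega\sqrt{r^{2}-y^{2}}$ = $5\sqrt{4-1.44}$ = $8$ cm/s.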
5 A particle having mass 10 g oscillates according to the equation $x$ = ($2.0 cm$) sin[($100$ s$^{-1}$ )t + $\pi$/6]. Find (a) the amplitude, the time period and the spring constant (b) the position, the velocity and the acceleration at $t$ = $0$.
##### Solution :
$a)$ Amplitude = $2.0$ cm. Comparing with $x$ = $A \sin(\omega t + \phi)$ gives $\omega$ = $100$ s$^{-1}$, so
$T$ = $\frac{2\pi}{\omega}$ = $\frac{2\pi}{100}$ = $\frac{\pi}{50}$ s $\approx$ $0.063$ s.
The spring constant is $k$ = $m\omega^{2}$ = $0.01 \times (100)^{2}$ = $100$ N/m ($= 10^{5}$ dyne/cm).
$b)$ At $t$ = $0$: $x$ = $2.0 \sin(\pi/6)$ = $2.0 \times \frac{1}{2}$ = $1.0$ cm from the mean position;
$v$ = $A\omega \cos(\pi/6)$ = $2.0 \times 100 \times \frac{\sqrt3}{2}$ = $100\sqrt3$ cm/s $\approx$ $1.73$ m/s;
$a$ = $-\omega^{2}x$ = $-(100)^{2} \times 0.01$ m = $-100$ m/s$^{2}$, i.e. $100$ m/s$^{2}$ directed towards the mean position.
6 The equation of motion of a particle started at $t$ = $0$ is given by $x$ = $5 \sin(20t + \pi/3)$, where $x$ is in centimetre and $t$ in second. When does the particle $\\$ (a) first come to rest $\\$ (b) first have zero acceleration $\\$ (c) first have maximum speed ?
##### Solution :
$a)$ The particle first comes to rest at the extreme position, where the displacement equals the amplitude:
$5$ = $5 \sin(20t + \pi/3)$ $\Rightarrow$ $\sin(20t + \pi/3)$ = $1$ = $\sin(\pi/2)$ $\Rightarrow$ $20t + \pi/3$ = $\pi/2$ $\Rightarrow$ $t$ = $\frac{\pi}{120}$ s.
$b)$ $a$ = $-\omega^{2}\,[5 \sin(20t + \pi/3)]$ = $0$ $\Rightarrow$ $\sin(20t + \pi/3)$ = $0$ = $\sin\pi$ $\Rightarrow$ $20t$ = $\pi - \pi/3$ = $2\pi/3$ $\Rightarrow$ $t$ = $\frac{\pi}{30}$ s.
$c)$ $v$ = $100 \cos(20t + \pi/3)$ has maximum magnitude when $\cos(20t + \pi/3)$ = $-1$ = $\cos\pi$ $\Rightarrow$ $20t$ = $2\pi/3$ $\Rightarrow$ $t$ = $\frac{\pi}{30}$ s.
7 Consider a particle moving in simple harmonic motion according to the equation $x$ = $2.0 \cos(50\pi t + \tan^{-1} 0.75)$ where $x$ is in centimetre and $t$ in second. The motion is started at $t$ = $0$. $\\$(a) When does the particle come to rest for the first time ? $\\$(b) When does the acceleration have its maximum magnitude for the first time ? $\\$(c) When does the particle come to rest for the second time ?
##### Solution :
Here $\tan^{-1} 0.75$ $\approx$ $0.643$ rad, so $x$ = $2.0 \cos(50\pi t + 0.643)$.
$a)$ $v$ = $\frac{dx}{dt}$ = $-100\pi \sin(50\pi t + 0.643)$. The particle is at rest when $\sin(50\pi t + 0.643)$ = $0$, first at $50\pi t + 0.643$ = $\pi$ $\Rightarrow$ $t$ $\approx$ $1.6 \times 10^{-2}$ s.
$b)$ $a$ = $\frac{dv}{dt}$ = $-100\pi \times 50\pi \cos(50\pi t + 0.643)$. Its magnitude is maximum when $\cos(50\pi t + 0.643)$ = $\pm 1$, which first happens at the same instant, $50\pi t + 0.643$ = $\pi$ $\Rightarrow$ $t$ $\approx$ $1.6 \times 10^{-2}$ s.
$c)$ The particle comes to rest for the second time when $50\pi t + 0.643$ = $2\pi$ $\Rightarrow$ $t$ $\approx$ $3.6 \times 10^{-2}$ s.
8 Consider a simple harmonic motion of time period T. Calculate the time taken for the displacement to change value from half the amplitude to the amplitude.
##### Solution :
The displacement is $y$ = $r \sin\omega t$, with $y_1$ = $\frac{r}{2}$ and $y_2$ = $r$ for the two given positions.
$\frac{r}{2}$ = $r \sin\omega t_{1}$ $\Rightarrow$ $\sin\omega t_{1}$ = $\frac{1}{2}$ $\Rightarrow$ $\omega t_{1}$ = $\frac{\pi}{6}$ $\Rightarrow$ $\frac{2\pi}{T} t_1$ = $\frac{\pi}{6}$ $\Rightarrow$ $t_1$ = $\frac{T}{12}$
Again, $r$ = $r \sin\omega t_{2}$ $\Rightarrow$ $\sin\omega t_{2}$ = $1$ $\Rightarrow$ $\omega t_{2}$ = $\frac{\pi}{2}$ $\Rightarrow$ $\frac{2\pi}{T} t_2$ = $\frac{\pi}{2}$ $\Rightarrow$ $t_2$ = $\frac{T}{4}$
So the time taken is $t_2 - t_1$ = $\frac{T}{4} - \frac{T}{12}$ = $\frac{T}{6}$.
9 The pendulum of a clock is replaced by a spring-mass system with the spring having spring constant 0.1 N/m. What mass should be attached to the spring ?
##### Solution :
For a clock pendulum the time period is $T$ = $2$ sec, and $k$ = $0.1$ N/m.
$T$ = $2\pi \sqrt\frac{m}{k}$ $\Rightarrow$ $4$ = $4\pi^{2}\,\frac{m}{k}$ $\Rightarrow$ $m$ = $\frac{k}{\pi^2}$ = $\frac{0.1}{10}$ = $0.01$ kg $\approx$ $10$ g.
10 A block suspended from a vertical spring is in equilibrium. Show that the extension of the spring equals the length of an equivalent simple pendulum, i.e., a pendulum having frequency same as that of the block.
##### Solution :
Time period of the spring-mass system: $T_s$ = $2\pi \sqrt\frac{m}{k}$; time period of a simple pendulum: $T_p$ = $2\pi \sqrt\frac{l}{g}$.
$T_p$ = $T_s$ [frequency is the same] $\Rightarrow$ $\frac{l}{g}$ = $\frac{m}{k}$ $\Rightarrow$ $l$ = $\frac{mg}{k}$.
But at equilibrium the spring force balances the weight, $kx$ = $mg$ $\Rightarrow$ $x$ = $\frac{mg}{k}$ = $l$ (proved).
11 A block of mass 0.5 kg hanging from a vertical spring executes simple harmonic motion of amplitude 0.1 m and time period 0.314 s. Find the maximum force exerted by the spring on the block.
##### Solution :
$m$ = $0.5$ kg, $r$ = $0.1$ m, $T$ = $0.314$ sec.
$T$ = $2\pi \sqrt\frac{m}{k}$ $\Rightarrow$ $0.314$ = $2\pi \sqrt\frac{0.5}{k}$ $\Rightarrow$ $k$ $\approx$ $200$ N/m.
The spring force is largest at the lowest point of the motion, where the extension exceeds the equilibrium extension by $r$, so
$f$ = $kr$ = $200 \times 0.1$ = $20$ N and the maximum force = $f$ + weight = $20 + 5$ = $25$ N.
12 A body of mass $2$ kg suspended through a vertical spring executes simple harmonic motion of period $4$ s. If the oscillations are stopped and the body hangs in equilibrium, find the potential energy stored in the spring.
##### Solution :
$m$ = $2$ kg, $T$ = $4$ sec.
$T$ = $2\pi \sqrt\frac{m}{k}$ $\Rightarrow$ $4$ = $2\pi \sqrt\frac{2}{k}$ $\Rightarrow$ $2$ = $\pi \sqrt\frac{2}{k}$ $\Rightarrow$ $4$ = $\pi^{2}\,\frac{2}{k}$ $\Rightarrow$ $k$ = $\frac{\pi^2}{2}$ $\approx$ $5$ N/m.
But we know that in equilibrium $f$ = $mg$ = $kx$ $\Rightarrow$ $x$ = $\frac{mg}{k}$ = $\frac{2 \times 10}{5}$ = $4$ m.
$\therefore$ Potential Energy = $(1/2)\,k x^2$ = $(1/2) \times 5 \times 16$ = $40$ J.
So, $T$ = $1/5$$sec. x = 25 cm = 0.25$$m$
$\Rightarrow$($1/2$)$kx^2$ = $5$ $\Rightarrow$ ($1/2$) $k$ $(0.25)^{2}$ = $5$ =$\Rightarrow$ $K$ = $160$ N/m.
14 A small block of mass m is kept on a bigger block of mass M which is attached to a vertical spring of spring constant k as shown in the figure. The system oscillates vertically. $\\$(a) Find the resultant force on the smaller block when it is displaced through a distance x above its equilibrium position. $\\$ (b) Find the normal force on the smaller block at this position. When is this force smallest in magnitude ? $\\$(c) What can be the maximum amplitude with which the two blocks may oscillate together ?
##### Solution :
$a)$ From the free body diagram,
$\therefore$ $R$ + $m\omega^2x$ - $mg$ =$0$ ......(1) $\\$ Resultant force $m\omega^2x$ = $mg$ - $R$
$\Rightarrow$ $m\omega^2x$ = $m$$\big(\frac{k}{M+m}\big) \Rightarrow x = \frac{mkx}{M+m} [ \omega = \sqrt{K/(M+m)} for spring mass system ] b) R = mg - m\omega^2x = mg - m$$\frac{k}{M+m}$$x = mg - \frac{mkx}{M+m} For R to be smallest m\omega^2x should be max i.e. x is maximum.\\ The particle shoild be at the high point. c) We have R = mg - m\omega^2x The two blocks may oscillates together is such a way that R is grater than 0.At limiting condition,\\ R = 0, mg = m\omega^2x X = \frac{mg}{m\omega^2} = \frac{mg(M+m)}{mk} So, the maximum amplitude is = \frac{g(M+m)}{k} 15 The block of mass in, shown in figure (12-E2) is fastened to the spring and the block of mass m_2 is placed against it. \\ (a) Find the compression of the spring in the equilibrium position. \\ (b) The blocks are pushed a further distance (2/h) (m1+ m2)g sin\theta against the spring and released. Find the position where the two blocks separate.\\ (c) What is the common speed of blocks at the time of separation ? ##### Solution : a) At the equilibrium condition, kx = (m_1 + m_2)g sin \theta (Given) \Rightarrow x = \frac{(m_1 + m_2)g sin \theta}{k} b) x_1 = \frac{2}{k}$$(m_1 + m_2)$ $g$ $sin\theta$ (Given)
when the system is released, it will start to make SHM
where $\omega$ = $\sqrt\frac{k}{m_1+m_2}$
When the block lose contact, $P$ = $0$
So $m_2$ $g$ $sin$ $\theta$ = $m_2$ $x_2$ $\omega^2$ = $m_2x_2$ $\big(\frac{k}{m_1+m_2}\big)$
$\Rightarrow$ $x_2$ = x $k$$(m_1 + m_2)g sin \theta So the block will lose contact with each other when the springs attain its natural length. c) Let the common speed attained by both the blocks be v. 1/2 (m_1+m_2)$$v^2$ - $0$ = $1/2$ $k(x_1 + x_2)^{2}$ - $(m_1+m_2)$ $g$ $sin$ $\theta$ $(x + x_1)$ [$x + x_1$ = total compression]
$\Rightarrow$ $(1/2)$ $(m_1+m_2)$ $v^2$ = [$(1/2) k (3/k)$ ($(m_1 + m_2)g$ $sin$ $\theta$ - $(m_1 + m_2)g$ $sin$ $\theta$] $(x + x_1)$
$\Rightarrow$ $(1/2)$ $(m_1+m_2)$ $v^2$ = [$(1/2)$ $(m_1 + m_2)g$ $sin$ $\theta$ x $(3/k)$ $(m_1 + m_2)g$ $sin$ $\theta$
$\Rightarrow$ $v$ = $\sqrt\frac{3}{k(m_1+m_2)}$ $g$ $sin$$\theta 16 A particle of mass in is attatched to three springs A, B and C of equal force constants k as shown in figure (12-E6). If the particle is pushed slightly against the spring C and released, find the time period of oscillation. ##### Solution : P.E = (1/2)$$k$ $(x+\delta)^{2}$ = $(1/2)$ x $100(0.1+0.2)^{2}$ = $50$ x$0.09$ = $4.5$ $J$
$d)$ Let the amplitude be'$x$' which means the distance between the mean position and the extream position.
Since, in SHM,the total enegy remaining constant.
$e)$ Potential Energy at the left extreme is given by.
$\therefore$ P.E. + K.E. = $1/2$ $k\delta^2$ + $1/2$ M$v^2$
$(1/2)$$k$$(x+\delta)^{2}$ = $(1/2)$$k\delta^{2}+(1/2)$$mv^{2}$+$Fx$ = $2.5$ +$10$$x \\ [because (1/2)$$k\delta^{2}$+$(1/2)mv^{2}$ = $2.5$]
The different value in $(b)$ $(e)$ and $(f)$ do not violate law of conservation of energy as the work is done by $\\$ the external force $10$N.
$e)$ Potential Energy at the left extreme is given by.
Give, $k$ = $100$ N/m, $\\$ $m$ = $1kg$ and $F$ = $10$ N
$P.E$ = $(1/2)$$k (x+\delta)^{2} - F(2x) \\ [2x = distance between two extremes] f) Potential Energy at the right extream is given by, So, 50(x+0.1)^{2} = 2.5 +10x c) Time period = 2\pi$$\sqrt{\frac{M}{k}}$ = $2\pi$$\sqrt{\frac{1}{100}} = \frac{\pi}{5} sec So, in the extream position, compression of the spring is (x+\delta). \therefore 50x^{2} + 0.5 + 10x = 2.5 + 10x \\ \therefore 50x^{2} = 2 \Rightarrow x^{2} = \frac{2}{50} = \frac{4}{100} \Rightarrow x = \frac{2}{10}m = 20$$cm$.
$b)$ The below imparts a speed of $2m/s$ to block towards left.
Since, in SHM,the total energy remaining constant.
= $4.5$ - $10(0.4)$ = $0.5$$J \therefore 50x^{2} = 2 \Rightarrow x^{2} = \frac{2}{50} = \frac{4}{100} \Rightarrow x = \frac{2}{10}m = 20$$cm$.
$f)$ Potential Energy at the right extream is given by,
$P.E$ = $(1/2)$$k (x+\delta)^{2} = (1/2) x 100(0.1+0.2)^{2} = 50 x0.09 = 4.5 J So, in the extreme position, compression of the spring is (x+\delta). So, 50(x+0.1)^{2} = 2.5 +10x = (1/2) x 100 x (0.1)^2 + (1/2) x 1 x 4 = 0.5 + 2 = 2.5 J a) In the euilibrium position, \\ compression \delta = F/k = 10/100 = 0.1m = 10 cm So, in the extreme position, compression of the spring is (x+\delta). \therefore 50x^{2} + 0.5 + 10x = 2.5 + 10x P.E = (1/2)$$k$ $(x+\delta)^{2}$ - $F(2x)$ $\\$ [$2x$ = distance between two extremes]
17 Find the time period of the oscillation of mass m in figures $(12-$E4 a, b, c$)$ What is the equivalent spring constant of the pair of springs in each case ?
##### Solution :
$b)$ Let us, displace the block $m$ towards left through displacement '$x$'
Time period $T$ = $2\pi$ $\sqrt{\frac{displacement}{Acceleration}}$ = $2\pi$ $\sqrt{\frac{x}{\frac{m(K_1+K_2)}{m}}}$ =$2\pi$ $\sqrt{\frac{M}{K_1+K_2}}$
$T$ = $2\pi$ $\frac{M}{k}$ = $2\pi$ $\frac{m(k_1+k_2)}{k_1k_2}$
$T$ = $2\pi$ $\frac{M}{K}$ = $2\pi$ $\sqrt{\frac{M}{K_1+K_2}}$
$c)$ In series conn equivalent spring constant be $k$. So, $\frac{1}{k}$ = $\frac{1}{k_1}$ + $\frac{1}{k_2}$ = $\frac{k_2+k_1}{k_1k_2}$ $\Rightarrow$ $k$ = $\frac{k_1k_2}{k_2+k_1}$
Resultant force $F$ = $F_1+F_2$ = ($K_1+K_2$)$x$
$a)$ Equivalent spring constant = $k$ = $k_1$ + $k_2$ (parallel)
The equivalent spring constant $k$ = $k_1$ + $k_2$
18 The spring shown in figure $(12-E5)$ is unstretched when a man starts pulling on the cord. The mass of the block is $M$. If the man exerts a constant force $F$, find $\\$ $(a)$ the amplitude and the time period of the motion of the block, $\\$(b) the energy stored in the spring when the block passes through the equilibrium position and $\\$(c) the kinetic energy of the block at this position.
##### Solution :
$a)$ We have $F$ = $kx$ $\Rightarrow$ $x$ = $\frac{F}{k}$
Acceleration = $\frac{F}{m}$
Time period $T$ = $2\pi$ $\sqrt{\frac{displacement}{Acceleration}}$ = $2\pi$ $\sqrt{\frac{F/k}{F/m}}$ =$2\pi$ $\sqrt{\frac{m}{k}}$
Amplitude = max displacement = $F/k$
$b)$ The energy stored in the spring when the block passes through the equilibrium position
$(1/2)kx^2$ = $(1/2)k(F/k)^{2}$ = $(1/2)k(F^2/k^2)$ = $(1/2)$$(F^2/k) c) At the mean position, P.E. is 0.K.E. is (1/2)$$kx^{2}$ = $(1/2)$$(F^2/x) 19 A particle of mass in is attatched to three springs A, B and C of equal force constants k as shown in figure (12-E6). If the particle is pushed slightly against the spring C and released, find the time period of oscillation. ##### Solution : \therefore Total Resultant force = kx + \sqrt{(\frac{kx}{\sqrt{2}}) + (\frac{kx}{\sqrt{2}})} = kx + kx = 2kx. Tme period T = 2\pi \frac{displacement}{Acceleration} = 2\pi \sqrt{\frac{x}{\frac{2kx}{m}}} = 2\pi$$\sqrt{\frac{m}{2k}}$
Total resultant force on the particle is $kx$ due to spring $C$ and $\frac{kx}{\sqrt2}$ due to spring $A$ $B$.
$\frac{kx}{\sqrt2}$ respectively towards $xy$ and $xz$ respectively. So the total force on the block is due to the spring force '$C'$ as well as the component of two spring force $A$ and $B$$] Acceleration = \frac{2kx}{m} Suppose the particle is pushed slightly against the spring 'C' through displacement 'x'. [ Cause :- when the body pushed again 'C' the spring C , tries to pull the block towards XL.At that moment the spring A and B tries to pull the block with force \frac{kx}{\sqrt2} and 20 Repeat the previous exercise if the angle between each pair of springs is 120° initially. ##### Solution : In this case , if the particle 'm' is pushed against 'C' a by distance 'x'.\\ Total resultant force acting on man 'm' is given by, F = kx + \frac{kx}{2} = \frac{3kx}{2} [ Because net force A & B = \sqrt{(\frac{kx}{2})^2+(\frac{kx}{2})^2 + 2(\frac{kx}{2})(\frac{kx}{2})cos120^0} = \frac{kx}{2} \therefore a = \frac{F}{m} = \frac{3km}{2m} \Rightarrow \frac{a}{x} = \frac{3k}{2m} = \omega^2 \sqrt{\frac{3k}{2m}} \therefore Time period T \frac{2\pi}{\omega} = 2\pi$$\sqrt{\frac{2m}{3k}}$
21 The springs shown in the figure $(12-E7)$ are all unstretched in the beginning when a man starts pulling the block. The man exerts a constant force F on the block. Find the amplitude and the frequency of the motion of the block.
##### Solution :
$k_2$ and $k_3$ are in series.
Let equivalent spring constant be $k_4$
$\therefore$ $\frac{1}{k_4}$ = $\frac{1}{k_2}$ + $\frac{1}{k_3}$ = $\frac{k_2+k_3}{k_2k_3}$ $\Rightarrow$ $k_4$ = $\frac{k_2k_3}{k_2+k_3}$
Now $k_4$ and $k_1$ are in parallel.
So equivalent spring constant $k$ = $k_3$+$k_4$ = $\frac{k_2k_3}{k_2+k_3}$ +$k_1$ = $\frac{k_2k_3+k_1k_2+k_1k_3}{k_2+k_3}$
$\therefore$ $T$ = $2\pi$ = $\sqrt{\frac{M}{k}}$ = $2\pi$ $\sqrt{\frac{M(k_2+k_3)}{k_2k_3+k_1k_2+k_1k_3}}$
$b)$ frequency = $\frac{1}{T}$ = $\frac{1}{2\pi}$ $\sqrt{\frac{k_2k_3+k_1k_2+k_1k_3}{M(k_2+k_3)}}$
$c)$ Amplitude $x$ = $\frac{F}{k}$ = $\frac{F(k_2+k_3)}{k_1k_2+k_2k_3+k_1k_3}$
22 Find the elastic potential energy stored in each spring shown in figure $(12-E8)$, when the block is in equilibrium. Also find the time period of vertical oscillation of the block.
##### Solution :
$k_1$$k_2$$k_3$ are in series.
$\frac{1}{k}$ = $\frac{1}{k_1}$ + $\frac{1}{k_2}$ + $\frac{1}{k_3}$ $\\$ $\Rightarrow$ $k$ = $\frac{k_1k_2k_3}{k_1k_2+k_2k_3+k_1k_3}$
Time period $T$ = $2\pi$ $\sqrt{\frac{m}{k}}$ $2\pi$$\sqrt{\frac{k_1k_2k_3}{k_1k_2+k_2k_3+k_1k_3}} = 2\pi$$\sqrt{m(\frac{1}{k_1}+\frac{1}{k_2}+\frac{1}{k_3})}$
Now, Force = weight = mg.
$\therefore$ At $k_1$ spring, $x_1$ = $\frac{mg}{k_1}$
Similarly $x_2$ = $\frac{mg}{k_2}$ and $x_3$ = $\frac{mg}{k_3}$
$\therefore$ PE_1 = $(1/2)$$k_1x_1^{2} = \frac{1}{2}k_1$$(\frac{Mg}{k_1})^{2}$ = $\frac{1}{2}k_1$$\frac{m^2g^2}{k_1^{2}} = \frac{m^2g^2}{2k_1} Similarly PE_2 = \frac{m^2g^2}{2k_2} and PE_3 = \frac{m^2g^2}{2k_3} 23 The string, the spring and the pulley shown in figure (12-E9) are light. Find the time period of the mass m. ##### Solution : When only 'm' is hanging, let the extension in the spring be 'l' So T_1 = kl = mg. When a force F is applied, let the further extension be'x' \therefore T_2 = k(x+l) \therefore Driving force = T_2 - T_1 = k(x+l) - kl = kx \therefore Acceleration = \frac{kl}{m} T = 2\pi$$\sqrt{\frac{displacement}{Acceleration}}$ = $2\pi$$\sqrt{\frac{x}{\frac{Kx}{m}}} = 2\pi$$\frac{m}{k}$
24 Solve the previous problem if the pulley has a moment of inertia $I$ about its axis and the string does not slip over it
##### Solution :
Let us solve the problem by 'energy method'.
initial extension of the spring in the mean position.
$\delta$ = $\frac{mg}{k}$
During osillation, at any position '$x$' below the equilibrium position, let the velocity of '$m$' be $v$ and $\\$ angular velocity of the pulley be '$\omega$'. if $r$ is the radius of the pulley, then $v$ = $r\omega$. $\\$ At any instant, Total Energy = constant (for SHM)
$\therefore$ $(1/2)mv^2$ + $(1/2)I\omega^2$ + $(1/2)k[(x+\delta)^{2} - \delta^2]$ - $mgx$ = Constant
$\Rightarrow$ $\therefore$ $(1/2)mv^2$ + $(1/2)I\omega^2$ + $(1/2) kx^{2} - kx\delta$ - $mgx$ = Constant
$\Rightarrow$ $\therefore$ $(1/2)mv^2$ + $(1/2)I(v^2/r^2)$ + $(1/2) kx^{2}$ = Constant $\\$ $(\delta = mg/k$)
Taking derived of both sides either respect to '$t$'.
$mv$$\frac{dv}{dt} + \frac{I}{r^{2}}$$V$$\frac{dv}{dt}+kx$$\frac{dv}{dt}$ = $0$
$\Rightarrow$ $a$$\big(m+\frac{I}{r^2}\big) = kx \\ (\therefore x = \frac{dx}{dt} and a = \frac{dx}{dt}) \Rightarrow \frac{a}{x} = \frac{k}{m+\frac{I}{r^2}} = \omega^2 \Rightarrow = T = 2\pi$$\sqrt{\frac{m+\frac{I}{r^2}}{k}}$
25 Consider the situation shown in figure $(12-E10)$. Show that if the blocks are displaced slightly in opposite directions and released, they will execute simple harmonic motion. Calculate the time period
##### Solution :
The center of mass of the system should not change during the motion. So,if the block '$m$' on the left $\\$ moves towards right a distance '$x$',the block on the right moves left a distance '$x$'.So,total$\\$ compression of the spring is $2x$.
By energy method, $\frac{1}{2}$$k$$(2x)^{2}$+$\frac{1}{2}$$mv^2 + \frac{1}{2}mv^{2} + \frac{1}{2}mv^{2} = C \Rightarrow mv^{2} + 2kx^{2} = C. Taking derivation of both sides with respect to 't'. m x 2v \frac{dv}{dt} + 2k x 2x\frac{dx}{dt} = 0 \therefore ma + 2kx = 0 \\ [because v = dx/dt and a = dv/dt] \Rightarrow \frac{a}{x} = -$$\frac{2k}{m}$ = $\omega^2$ $\omega$ = $\sqrt{\frac{2k}{m}}$
$\Rightarrow$ Time period $T$ = $2\pi$$\sqrt{\frac{m}{2k}} 26 A rectangular plate of sides a and b is suspended from a ceiling by two parallel strings of length L each (figure 12-E11). The separation between the strings is d. The plate is displaced slightly in its plane keeping the strings tight. Show that it will execute simple harmonic motion. Find the time period. ##### Solution : Acceleration = a = \frac{F}{m} = g sin \theta \therefore a = g \theta = g(\frac{x}{L}) \\ [where g and L are constant] Time period T = 2\pi\sqrt{\frac{Displacement}{Acceleration}} = 2\pi$$\sqrt{\frac{x}{(\frac{gx}{L})}}$ = $2\pi$ $\sqrt{\frac{L}{g}}$
Driving force $F$ = $mg$ $sin$ $\theta$.
So the motion is simple Harmonic
For small angle $\theta$, $sin$ $\theta$ = $\theta$.
Here we have to consider oscillation of center of mass
$\therefore$ $a$ = $x_1$
27 A $1$ $kg$ block is executing simple harmonic motion of amplitude $0$'$1$ $m$ on a smooth horizontal surface under the restoring force of a spring of spring constant $100$ $N/m$. A block of mass $3$ $kg$ is gently placed on it at the instant it passes through the mean position. Assuming that the two blocks move together, find the frequency and the amplitude of the motion.
##### Solution :
Total mass = $3$ + $1$ = $4kg$ (when both the blocks are moving together)
$\therefore$ Initial momentum = Final momentum
$\therefore$ $(1/2)$ x ($1$ x $v^2$) = $(1/2)$ x $100$ $(0.1)^{2}$
$\therefore$ Frequency = $\frac{5}{2\pi}$ $Hz$.
$\therefore$ $1/4$ = $100$ $\delta^{2}$ $\Rightarrow$ $\delta$ = $\sqrt{\frac{1}{400}}$ = $0.05$$m = 5cm. Now the two block have velocity 1/4 m.s. at its mean poison. Amplitude = 0.1 m After the 3kg block is gentely placed on the 1kg, then let, 1kg + 3kg = 4kg block and the spring be one \\ system.For this mass spring system, there is do external force. (when oscillation takes place). the \\ The momentum should be conserved. Let, 4kg block has velocity v^|. KE. = (1/2)mv^{2} = (1/2)mx^{2} \\ Where x$$\rightarrow$ Amplitude = $0.1$ $m$.
When the block are going to the extreme position, there will be only potential energy.
$\therefore$ $T$ = $2\pi$$\sqrt{\frac{M}{k}} = 2\pi$$\sqrt{\frac{4}{100}}$ = $\frac{2\pi}{5}$ $sec$.
$\therefore$ $PE$ = $(1/2)$$k\delta^{2} = (1/2) x (1/4) where \delta \rightarrow new amplitude. \therefore 1 x v = 4 x v^{|} \Rightarrow v^{|} = 1/4 m/s \\ (as v = 1m/s from equation (1)) \Rightarrow v = 1$$m/sec$ $..........(1)$
So amplitude = $5cm$.
Again at the mean position, let $1kg$ block has velocity $v$.
$KE_{mass}$ = $(1/2)$$m^{|}v^{2} = (1/2) 4 x (1/4)^2 = (1/2) x (1/4). 28 The left block in figure (12-E13) moves at a speed v towards the right block placed in equilibrium. All collisions to take place are elastic and the surfaces are frictionless. Show that the motions of the two blocks are periodic. Find the time period of these periodic motions. Neglect the widths of the blocks. ##### Solution : is \frac{L}{V} + \frac{L}{V} = 2(\frac{L}{V}) When the block A moves with velocity 'V' and collides withe the block B, it transfers all energy to the \\ block B.(Because it is a elastic colosion).The block A will move a distance 'x' against the spring, again the block B will return to the original point and completes half of the oscillation. The block B collides with the block A and comes to rest at the point. \\ The block A again moves a further distance 'L' to return to its original position. \\ \therefore Time taken by the block to move from M \rightarrow N and N \rightarrow M \therefore So time period of the periodic motion is 2(\frac{L}{V}) + \pi\sqrt{\frac{m}{k}} So, the time period of B is \frac{2\pi\sqrt{\frac{m}{k}}}{2} = \pi\sqrt{\frac{m}{k}} 29 Find the time period of the motion of the particle shown in figure (12-E14). Neglect the small effect of the bend near the bottom. ##### Solution : Let the time taken to travel AB and BC be t_1 and t_2 respectively For part AB,a_1 = g sin 45^0. s_1 = \frac{0.1}{sin 45^0} = 2m Let v = velocity at B \\ \therefore v^2 - u^2 = 2a_1s_1 \Rightarrow v^2 = 2 x g sin 45^0. s_1 = \frac{0.1}{sin 45^0} = 2 \Rightarrow v = \sqrt{2}m/s \therefore t_1 = \frac{v-u}{a_1} = \frac{\sqrt{2} - 0}{\frac{g}{\sqrt2}} = \frac{2}{g} = \frac{2}{10} = 0.2 sec Again for part BC , a_2 = -g sin 60^0 , u = \sqrt{2} , v = 0 \therefore t_2 = \frac{0 - \sqrt{2}}{-g(\frac{\sqrt{3}}{2})} = \frac{2\sqrt{2}}{\sqrt{3g}} = \frac{2 \times (1.414)}{(1.732) \times 10} = 0.165$$sec$.
So, time period = $2(t_1 + t_2)$ = $2($0.2$+$0.1555$)$ = $0.71$ $sec$
30 All the surfaces shown in figure $(12-E15)$ are frictionless. The mass of the car is $M$, that of the block is $m$ and the spring has spring constant $k$. Initially, the car and the block are at rest and the spring is stretched through a length $x_0$, when the system is released. $\\$ (a) Find the amplitudes of the simple harmonic motion of the block and of the car as seen from the road. $\\$ (b) Find the time period(s) of the two simple harmonic motions.
##### Solution :
Let the amplitude of oscillation of '$m$' and '$M$' be $x_1$ and $x_2$ respectively.
$a)$ From law of conservation of momentum,
$mX_1$ = $mX_2$ .....$(1)$ $[$ because only internal force are present $]$
Again, $(1/2)$ $kx_g^{2}$ = $(1/2)$ $k$ $(x_1 + x_2)^2$
$\therefore$ $x_0$ = $x_1$ + $x_2$ .....$(2)$
$[$ Block and mass oscillation in opposite direction. But $x$ $\rightarrow$ stretched part $]$
From equation $(1)$ and $(2)$
$\therefore$ $x_0$ = $x_1$ + $\frac{m}{M}$ $x_1$ = $(\frac{M+m}{M})$ $x_1$
$\therefore$ $x_1$ $\frac{mx_0}{M+m}$
So , $x_2$ = $x_0$ - $x_1$ = $x_0$ $[1-\frac{M}{M+m}]$ = $\frac{mx_0}{M+m}$ respectively.
$b)$ At any position , let the velocity be $v_1$ and $v_2$ respectively.
By energy method $\\$ Total Energy = Constant
$(1/2)$ $Mv^2$ + $(1/2)$ $m(V_1 - V_2)^2$ + $(1/2)$$k$$(X_1 + X_2)^2$ = Constant ...(i)
$[$ $v_1$ - $v_2$ = Absolute velocity of mass '$m$' as seen from the road.$]$
Again , from law of conservation of momentum.
$\frac{1}{2}$$Mv_2^{2} + \frac{1}{2}$$m\frac{M^2}{m^2}v_2^2$ + $\frac{1}{2}kx_2^2$ $(1+\frac{M}{m})^2$ = constant
$\Rightarrow$ $\frac{a_2}{X_2}$ = - $\frac{k(M+m)}{Mm}$ = $\omega^2$
Here, $v_1$ = velocity of '$m$' with respect to $M$.
Taking derivation of both sides,
So, Time period, $T$ = $2\pi$$\sqrt{\frac{Mm}{k(M+m)}} \Rightarrow mv_2^2 + k$$(1+\frac{M}{m})^2$ $x_2^2$ = constant
$\Rightarrow$ $ma_2$ + $k$$\frac{(M+m)}{m}$$x_2$ = $0$ $[$ because , $v_2$ = $\frac{dx_2}{dt}$ $]$
putting the above value in equation $(1)$, we get
$\therefore$ $\omega$ = $\sqrt{\frac{k(M+m)}{Mm}}$
$mv_2$ = $m(v_1- v_2)$ $\Rightarrow$ $(v_1- v_2)$ = $\frac{M}{m}$$v_2 ....(2) \therefore M$$(1+\frac{M}{m})v_2$ + $k$$(1+\frac{M}{m})^2 x_2^2 = constant. M x 2v_2 \frac{dv_2}{dt} + k$$\frac{(M+m)}{m}$-$ex_2^2$ $\frac{dv_2}{dt}$ = $0$
31 A uniform plate of mass $M$ stays horizontally and symmetrically on two wheels rotating in opposite directions $(figure 12-E16)$. The separation between the wheels is $L$. The friction coefficient between each wheel and the plate is $N$. Find the time period of oscillation of the plate if it is slightly displayed along its length and released
##### Solution :
$\Rightarrow$ $R_1$$(\frac{\iota}{2} + \frac{\iota}{2}) = mg$$(\frac{2x+\iota}{2})$
$R_1$ + $R_2$ = $mg$
Since, $F_1$ > $F_2$ $\Rightarrow$ $F_1$ - $F_2$ = $ma$ = $\frac{2\mu mg}{\iota}x$
$\Rightarrow$ $R_1$ = $\frac{mg(2x+\iota)}{2\iota}$ ....$(2)$
$R_1(\frac{\iota}{2})$ - $R_1x$ = $mg$ $\frac{\iota}{2}$ - $R_1x$ + $mgx$ - $R_1$$\frac{\iota}{2} \Rightarrow R_1$$\frac{\iota}{2}$ + $R_1$$\frac{\iota}{2} = mg(x+\frac{\iota}{2}) In displaced position Similarly F_2 = \mu R_2 = \frac{\mu mg(\iota-2x)}{2\iota} \therefore Time period = 2\pi \sqrt{\frac{\iota}{2rg}} \Rightarrow R_1$$\frac{\iota}{2}$-$R_1x$ = $mg$ $\frac{\iota}{2}$ - $R_1x$ + $mgx$ - $R_1$$\frac{\iota}{2} \Rightarrow R_1\iota = \frac{mg(2x+\iota)}{2} Taking moment about G, we get Let 'x' be the displacement of the plank towards left.Now the center of gravity is also displaced through 'x' Now F_1 = \mu R_1 = \frac{\mu mg(\iota-2x)}{2\iota} \Rightarrow \frac{a}{x} = \frac{2\mu g}{\iota} = \omega^2 \Rightarrow \omega = \sqrt{\frac{2mu g}{\iota}} So , R_1$$(\iota/2 - x)$ = $R_2$$(\iota/2 - x) = (mg - R_1)$$(\iota/2 - x)$
32 A pendulum having time period equal to two seconds is called a seconds pendulum. Those used in pendulum clocks are of this type. Find, the length of a seconds pendulum at a place where $g$ = $\pi^2$m/s$^2$.
##### Solution :
We know that $T$ = $2\pi \sqrt{\frac{l}{g}}$ $\Rightarrow$ $2$ = $2\pi \sqrt{\frac{l}{g}}$ $\Rightarrow$ $\frac{l}{g}$ = $\frac{1}{\pi^2}$ $\Rightarrow$ $l$ = $1$ m $($ since $g$ = $\pi^2$ m/s$^2$ $)$
$T$ = $2$ $sec$.
33 The angle made by the string of a simple pendulum with the vertical depends on time as $\theta$ = $\frac{\pi}{90}$ $sin$ $[(\pi s^{-1})t]$.Find the length of the pendulum if $g$ = $\pi^2$m/s$^2$.
##### Solution :
$\therefore$ $\omega$ = $\pi sec^{-1}$ $($ comparing with the equation of S.H.M$)$
We know that $T$ = $2\pi$ $\sqrt{\frac{\iota}{g}}$ $\Rightarrow$ $2$ = $2$$\sqrt{\frac{\iota}{g}} \Rightarrow 1 = \sqrt{\frac{\iota}{g}} \Rightarrow \iota = 1$$m$.
$\theta$ = $\pi$ $sin$ $[(\pi sec^{-1})t]$
$\Rightarrow$ $\frac{2\pi}{T}$ = $\pi$ $\Rightarrow$ $T$ = $2$ $sec$.
From the equation.
$\therefore$ Length of the pendulum is $1$$m. 34 The pendulum of a certain clock has time period 2.04 s. How fast or slow does the clock run during 24 hours ? ##### Solution : = 43200 x (0.04) = 12$$sec$ = $28.8$ $min$
$T_2$ = $\frac{24\times3600}{(\frac{24 \times 3600 - 24}{})}$ = $2$ x $\frac{3600}{3599}$
The pendulum of the clock has time period $2.04$$sec. So in one day it is slower by, Given that, T_1 =2sec ,g_1 = 9.8$$m/s$$^2 But , in each oscillation it is slower by (2.04 - 2.00) = 0.04$$sec$
So, the clock runs $28.8$ minutes slower in one day.
Now , $\frac{g_2}{g_1}$ = $($$\frac{T_1}{T_2}$$)$$^2 For the pendulum, \frac{T_1}{T_2} = \sqrt{\frac{g_2}{g_1}} Now, No. or oscillation in 1 day = \frac{24 \times 3600}{2} = 43200 35 A pendulum clock giving correct time at a place where g = 9.800 m/s$$^2$ is taken to another place where it loses $24$ seconds during $24$ hours. Find the value of $g$ at this new place.
##### Solution :
For the pendulum, $\frac{T_1}{T_2}$ = $\sqrt{\frac{g_2}{g_1}}$ $\\$ Given that, $T_1$ =$2sec$ ,$g_1$ = $9.8$$m/s$$^2$$\\ T_2 = \frac{24\times3600}{(\frac{24 \times 3600 - 24}{})} = 2 x \frac{3600}{3599}$$\\$ Now , $\frac{g_2}{g_1}$ = $($$\frac{T_1}{T_2}$$)$$^2$$\\$ $\therefore$ $g_2$ = $(9.8)$ $($ $\frac{3599}{3600}$$)$$^2$ = $9.795$$m/s$$^2$
36 A simple pendulum is constructed by hanging a heavy ball by a $5.0$ $m$ long string. It undergoes small oscillations. $\\$(a) How many oscillations does it make per second ? $\\$(b) What will be the frequency if the system is taken on the moon where acceleration due to gravitation of the moon is $1.67$ $m/s$ $^2$.
##### Solution :
$b)$ When it is taken to the moon $T$ = $2\pi$$\frac{\iota}{g^{'}} \\ where g^{'} \rightarrow Acceleration in the moon. \therefore f = \frac{1}{T} = \frac{1}{2\pi}$$\sqrt{\frac{1.67}{5}}$ = $\frac{1}{2\pi}$$(0.577) = \frac{1}{2\pi\sqrt{3}}times. a) T = 2\pi$$\sqrt{\frac{\iota}{g}}$ = $2\pi$$\sqrt{0.5} = 2\pi$$(0.7)$ $\\$ $\therefore$ In $2\pi$$(0.7)$$sec$, the body completes $1$ oscillation, $\\$ In $1$ second, the body will complete $\frac{1}{2\pi(0.7)}$ oscillation $\\$ $\therefore$ $f$ = $\frac{1}{2\pi(0.7)}$ = $\frac{10}{14\pi}$ = $\frac{0.70}{\pi}$ times
= $2\pi$$\frac{5}{1.76} L = 5m. 37 The maximum tension in the string of an oscillating pendulum is double of the minimum tension. Find the angular amplitude. ##### Solution : The tension in the pendulum is maximum at the mean position and minimum on the extreme position Here (1/2) mv^2 - 0 = mg \iota$$($$1 - cos$$\theta$$) v^2 = 2g \iota$$($$1 - cos$$\theta$$) Now, T_{max} = mg + 2 mg ($$1$ - $cos$$\theta$$)$ $\\$ [$T$ = $mg$ + ($mv^2$/ $\iota$)]
Again, $T_{min}$ = $mg$ $cos$$\theta. \Rightarrow mg + 2mg - 2mg cos$$2\theta$ = $2mg$ $cos$$\theta \Rightarrow 3mg = 4mg cos$$\theta$
$\Rightarrow$ $\theta$ = $cos^{-1}$ $(3/4)$
$\Rightarrow$ $cos$$\theta = 3/4 38 A small block oscillates back and forth on a smooth concave surface of radius R (figure 12-E17). Find the time period of small oscillation. ##### Solution : Given that, R = radius. \\ Let N = normal reaction. \\ Driving force F = mg sin$$\theta$.$\\$ Acceleration = $a$ = $mg$ $sin$$\theta As, sin$$\theta$ is very small, $sin$$\theta \rightarrow \theta Let ‘x’ be the displacement from the mean position of the body, \therefore \theta = \frac{x}{R} \Rightarrow a = g\theta = g(x/R) \Rightarrow (a/x) = (g/R) So the body makes S.H.M. \therefore T = 2\pi$$\sqrt{\frac{Displacement}{Acceleration}}$ = $2\pi$$\sqrt{\frac{x}{gx/R}} = 2\pi$$\sqrt{\frac{R}{g}}$
$therefore$ Acceleration $a$ = $g\theta$
39 A spherical ball of mass $m$ and radius $r$ rolls without slipping on a rough concave surface of large radius $R$. It makes small oscillations about the lowest point, Find the time period.
##### Solution :
Let the angular velocity of the system about the point os suspension at any time be '$\omega$'
So, $v_c$ = $(R-r)$$\omega Again v_c = r$$\omega_1$ [where ,$\omega_1$ = rotational velocity of the sphere]
$\omega_1$ = $\frac{v_c}{r}$ = ($\frac{R + (-r)}{r}$)$\omega$ ....$(1)$
By Energy method, Total energy in SHM is constant.
So, $mg$$(R-r)(1-cos\theta) + (1/2)$$mv_c^{2}$+$(1/2)$ $|$$\omega_1^{2} = constant \therefore mg$$(R-r)(1-cos\theta)$ + $(1/2)$$m(R-r)^2 \omega^2 + (1/2) mr^2$$(\frac{R-r}{r})^2$$\omega^2 = constant \Rightarrow g$$(R-r)(1-cos\theta)$ + $(R-r)^2$ $\omega^2$ $[$$\frac{1}{2} + \frac{1}{5}$$]$ = constant
Taking derivative, $g(R-r)$ $sin$ $\theta$ $\frac{d\theta}{dt}$ = $\frac{7}{10}$ $(R-r)^2$$2\omega$$\frac{d\omega}{dt}$
$\Rightarrow$ $g$ $sin$ $\theta$ = $2$ x $\frac{7}{10}$ $(R-r)$$\alpha \Rightarrow g sin \theta = \frac{7}{5} (R-r)$$\alpha$
$\Rightarrow$ = $\frac{5gsin\theta}{7(R-r)}$ = $\frac{5g\theta}{7(R-r)}$
$\therefore$ $\frac{\alpha}{\theta}$ = $\omega$$^2 = \frac{5g\theta}{7(R-r)} = constant So the motion is S.H.M. Again \omega = \omega \sqrt{\frac{5g\theta}{7(R-r)}} \Rightarrow T = 2\pi \sqrt{\frac{7(R-r)}{5g}} 40 A simple pendulum of length 40 cm is taken inside a deep mine. Assume for the time being that the mine is 1600 km deep. Calculate the time period of the pendulum there. Radius of the earth = 6400 km ##### Solution : \therefore Time period T^{'} = 2\pi$$sqrt{\frac{l}{g\delta}}$
$\therefore$ $gd$ = $g(1-d/R)$ = $9.8$ $(1-\frac{1600}{6400})$ = $9.8$ $(1-\frac{1}{4})$ =$9.8$ x $\frac{3}{4}$ = $7.35$$m/s$$^2$
= $2\pi$$\sqrt{\frac{0.4}{7.35}} = 2\pi \sqrt{0.054} = 2\pi x 0.023 = 2 x 3.14 x 0.23 = 1.465 \approx 1.47$$sec$.
Length of the pendulum = 40 cm = 0.4 m. Let the acceleration due to gravity be $g$ at the depth of 1600 km.
41 Assume that a tunnel is dug across the earth (radius = $R$) passing through its centre. Find the time a particle takes to cover the length of the tunnel if (a) it is projected into the tunnel with a speed of $\sqrt{gR}$, (b) it is released from a height $R$ above the tunnel, (c) it is thrown vertically upward along the length of the tunnel with a speed of $\sqrt{gR}$.
##### Solution :
Let $M$ be the total mass of the earth. At a distance $x$ from the centre, only the mass $M'$ inside radius $x$ attracts the particle:
$\frac{M'}{M} = \frac{\rho\,(4/3)\pi x^{3}}{\rho\,(4/3)\pi R^{3}} = \frac{x^3}{R^3} \Rightarrow M' = \frac{Mx^3}{R^3}$
So the force on the particle is $F_x = \frac{GM'm}{x^2} = \frac{GMm}{R^3}x$ ......(1)
The acceleration of the particle at this position is $a_x = \frac{GM}{R^3}x$, so $\frac{a_x}{x} = \omega^2 = \frac{GM}{R^3} = \frac{g}{R}$ (since $g = \frac{GM}{R^2}$).
So $T = 2\pi\sqrt{\frac{R}{g}}$ = time period of oscillation.
a) Using the velocity–displacement relation $v = \omega\sqrt{A^2 - y^2}$ [where $A$ = amplitude]: at $y = R$, $v = \sqrt{gR}$ and $\omega = \sqrt{g/R}$, so
$\sqrt{gR} = \sqrt{\frac{g}{R}}\sqrt{A^2 - R^2} \Rightarrow R^2 = A^2 - R^2 \Rightarrow A = \sqrt{2}\,R$
The phase of the particle at the tunnel end P lies between $\pi/2$ and $\pi$, and at the other end Q between $\pi$ and $3\pi/2$. Let the times taken to reach P and Q be $t_1$ and $t_2$. Using the displacement–time equation $y = A\sin\omega t$:
$R = \sqrt{2}R\sin\omega t_1 \Rightarrow \omega t_1 = 3\pi/4$, and $-R = \sqrt{2}R\sin\omega t_2 \Rightarrow \omega t_2 = 5\pi/4$
So $\omega(t_2 - t_1) = \pi/2 \Rightarrow t_2 - t_1 = \frac{\pi}{2\omega} = \frac{\pi}{2}\sqrt{\frac{R}{g}}$.
Time taken by the particle to travel from P to Q is $\frac{\pi}{2}\sqrt{\frac{R}{g}}$.
b) When the body is dropped from a height $R$ above the tunnel, conservation of energy (loss in P.E. = gain in K.E.) gives $\frac{GMm}{R} - \frac{GMm}{2R} = \frac{1}{2}mv^2 \Rightarrow v = \sqrt{\frac{GM}{R}} = \sqrt{gR}$. Since the velocity at P is the same as in part (a), the body takes the same time to travel PQ.
c) When the body is projected vertically upward from P with speed $\sqrt{gR}$, its velocity is zero at the highest point; when it returns to P its speed is again $\sqrt{gR}$, so it again takes the time $\frac{\pi}{2}\sqrt{\frac{R}{g}}$ to travel PQ.
42 Assume that a tunnel is dug along a chord of the earth, at a perpendicular distance $R/2$ from the earth's centre, where $R$ is the radius of the earth. The wall of the tunnel is frictionless. (a) Find the gravitational force exerted by the earth on a particle of mass $m$ placed in the tunnel at a distance $x$ from the centre of the tunnel. (b) Find the component of this force along the tunnel and perpendicular to the tunnel. (c) Find the normal force exerted by the wall on the particle. (d) Find the resultant force on the particle. (e) Show that the motion of the particle in the tunnel is simple harmonic and find the time period.
##### Solution :
Let $x_1$ be the distance of the particle from the centre of the earth, so $x_1 = \sqrt{x^2 + R^2/4}$. With $M = \frac{4}{3}\pi R^3\rho$, the mass inside radius $x_1$ is $M' = \frac{4}{3}\pi x_1^3\rho = \frac{M}{R^3}x_1^3$.
a) The gravitational force exerted by the earth on the particle of mass $m$ is $F = \frac{GM'm}{x_1^2} = \frac{GMm}{R^3}x_1 = \frac{GMm}{R^3}\sqrt{x^2 + \frac{R^2}{4}}$.
b) Component along the tunnel: $F_{\parallel} = F\,\frac{x}{x_1} = \frac{GMm}{R^3}x$. Component perpendicular to the tunnel: $F_{\perp} = F\,\frac{R/2}{x_1} = \frac{GMm}{2R^2}$.
c) The wall balances the perpendicular component, so the normal force is $N = \frac{GMm}{2R^2}$.
d) Since the perpendicular component is balanced by the wall, the resultant force on the particle is $\frac{GMm}{R^3}x$, directed along the tunnel towards its mid-point.
e) Acceleration $= \frac{\text{resultant force}}{\text{mass}} = \frac{GM}{R^3}x$, i.e. $a \propto x$, so the motion is simple harmonic with $\omega^2 = \frac{GM}{R^3}$ and $T = 2\pi\sqrt{\frac{R^3}{GM}}$.
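For a rough sense of scale (these numbers are our own assumed values, not part of the problems above, which keep $R$ and $g$ symbolic): taking $R \approx 6.4\times 10^{6}$ m and $g \approx 9.8\ \mathrm{m/s^2}$, the oscillation period is $T = 2\pi\sqrt{R/g} \approx 5.1\times 10^{3}$ s $\approx 85$ min, so the time from one end of the tunnel to the other, $\frac{\pi}{2}\sqrt{R/g}$, is about $1.3\times 10^{3}$ s $\approx 21$ min.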
43 A simple pendulum of length $l$ is suspended through the ceiling of an elevator. Find the time period of small oscillations if the elevator (a) is going up with an acceleration $a_0$, (b) is going down with an acceleration $a_0$, and (c) is moving with a uniform velocity.
##### Solution :
Here driving force $F$ = $m(g+a_0)$ $sin$ $\theta$ ....(1)
Acceleration $a = \frac{F}{m} = (g+a_0)\sin\theta \approx \frac{(g+a_0)x}{l}$ (because for small $\theta$, $\sin\theta \approx \theta = \frac{x}{l}$).
The acceleration is proportional to the displacement, so the motion is SHM, with $\omega^2 = \frac{g+a_0}{l}$ and $T = 2\pi\sqrt{\frac{l}{g+a_0}}$.
b) When the elevator is going downwards with acceleration $a_0$: driving force $F = m(g-a_0)\sin\theta$.
Acceleration $= (g-a_0)\sin\theta \approx \frac{(g-a_0)x}{l} = \omega^2 x$, so $T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{l}{g-a_0}}$.
c) When the elevator moves with uniform velocity, $a_0 = 0$: driving force $= \frac{mgx}{l} \Rightarrow a = \frac{gx}{l} \Rightarrow \frac{x}{a} = \frac{l}{g}$, so $T = 2\pi\sqrt{\frac{\text{displacement}}{\text{acceleration}}} = 2\pi\sqrt{\frac{l}{g}}$.
44 A simple pendulum of length 1 ft suspended from the ceiling of an elevator takes $\pi/3$ seconds to complete one oscillation. Find the acceleration of the elevator.
##### Solution :
Let the elevator be moving upward with acceleration $a_0$. Here the driving force is $F = m(g + a_0)\sin\theta$, so the acceleration is
$(g + a_0)\sin\theta \approx (g + a_0)\theta$ (for small $\theta$)
$\frac{(g+a_0)x}{l}$ = $\omega^2x$
$T$ = $2\pi$ $\sqrt{\frac{l}{g+a_0}}$
Given that, $T$ = $\pi / 3$ $Sec$,$l$ = $1ft$ and $g$ = $32$ $ft/sec^2$
$\frac{\pi}{3}$ = $2\pi$ $\sqrt{\frac{1}{32+a_0}}$
$\frac{1}{9} = \frac{4}{32 + a_0} \Rightarrow 32 + a_0 = 36 \Rightarrow a_0 = 4\ \mathrm{ft/s^2}$, i.e. the elevator is accelerating upward at 4 ft/s².
##### Solution :
From the freebody diagram,
the tension in the string is $T = \sqrt{(mg)^2 + \left(\frac{mv^2}{r}\right)^2}$
$= m\sqrt{g^2 + \frac{v^4}{r^2}} = ma$, where $a$ = effective acceleration $= \left(g^2 + \frac{v^4}{r^2}\right)^{1/2}$.
The time period of small oscillations is given by
$T = 2\pi\sqrt{\frac{l}{a}} = 2\pi\sqrt{\frac{l}{\left(g^2 + \frac{v^4}{r^2}\right)^{1/2}}}$
47 The ear-ring of a lady shown in figure (12-E18) has a 3 cm long light suspension wire. (a) Find the time period of small oscillations if the lady is standing on the ground. (b) The lady now sits in a merry-go-round moving at 4 m/s in a circle of radius 2 m. Find the time period of small oscillations of the ear-ring.
##### Solution :
$a)$ $l$ = $3cm$ = $0.03m$
$T = 2\pi\sqrt{\frac{l}{g}} = 2\pi\sqrt{\frac{0.03}{9.8}} \approx 0.34$ second.
b) When the lady sits on the merry-go-round, the ear-ring also experiences a centripetal acceleration
$a = \frac{v^2}{r} = \frac{4^2}{2} = 8\ \mathrm{m/s^2}$
Resultant acceleration $A = \sqrt{g^2+a^2} = \sqrt{100+64} \approx 12.8\ \mathrm{m/s^2}$ (taking $g \approx 10\ \mathrm{m/s^2}$).
Time period $T = 2\pi\sqrt{\frac{l}{A}} = 2\pi\sqrt{\frac{0.03}{12.8}} \approx 0.30$ second.
48 Find the time period of small oscillations of the following systems. (a) A metre stick suspended through the 20 cm mark. (b) A ring of mass m and radius r suspended through a point on its periphery. (c) A uniform square plate of edge a suspended through a corner. (d) A uniform disc of mass m and radius r suspended through a point r/2 away from the centre.
##### Solution :
a) The point of suspension A is 30 cm from the centre of the stick, so the moment of inertia about A is $I = I_{C.G.} + Mh^2 = \frac{Ml^2}{12} + M(0.3)^2 = M\left(\frac{1}{12} + 0.09\right) = M\,\frac{2.08}{12}$.
$\therefore T = 2\pi\sqrt{\frac{I}{Mgl'}} = 2\pi\sqrt{\frac{2.08}{12\times 9.8\times 0.3}} \approx 1.52$ sec, where $l' = 0.3$ m is the distance between the C.G. and the point of suspension.
b) Moment of inertia about the suspension point A: $I = I_{C.G.} + mr^2 = mr^2 + mr^2 = 2mr^2$.
$\therefore$ Time period $= 2\pi\sqrt{\frac{I}{mgl'}} = 2\pi\sqrt{\frac{2mr^2}{mgr}} = 2\pi\sqrt{\frac{2r}{g}}$
c) For the square plate suspended through a corner, $I_{zz} = m\,\frac{a^2+a^2}{3} = \frac{2ma^2}{3}$, and the distance from the corner to the centre is $l' = \frac{a}{\sqrt{2}}$ (since $l'^2 + l'^2 = a^2$).
$\therefore T = 2\pi\sqrt{\frac{I}{mgl'}} = 2\pi\sqrt{\frac{2ma^2}{3mg\,a/\sqrt{2}}} = 2\pi\sqrt{\frac{2\sqrt{2}\,a}{3g}} = 2\pi\sqrt{\frac{\sqrt{8}\,a}{3g}}$
d) Here $h = r/2 = l'$ = distance between the C.G. and the suspension point. Moment of inertia about A: $I = I_{C.G.} + mh^2 = \frac{mr^2}{2} + m\left(\frac{r}{2}\right)^2 = mr^2\left(\frac{1}{2}+\frac{1}{4}\right) = \frac{3}{4}mr^2$.
$\therefore T = 2\pi\sqrt{\frac{I}{mgl'}} = 2\pi\sqrt{\frac{3mr^2}{4mg\,(r/2)}} = 2\pi\sqrt{\frac{3r}{2g}}$
49 A uniform rod of length $l$ is suspended by an end and is made to undergo small oscillations. Find the length of the simple pendulum having the time period equal to that of the rod.
##### Solution :
Let A be the point of suspension and B the centre of gravity of the rod; $l' = l/2$, $h = l/2$. The moment of inertia about A is
$I = I_{C.G.} + mh^2 = \frac{ml^2}{12} + \frac{ml^2}{4} = \frac{ml^2}{3}$
$\Rightarrow T = 2\pi\sqrt{\frac{I}{mg\,l/2}} = 2\pi\sqrt{\frac{2ml^2}{3mgl}} = 2\pi\sqrt{\frac{2l}{3g}}$
Let the time period be equal to that of a simple pendulum of length $x$: $T = 2\pi\sqrt{\frac{x}{g}}$. So $\frac{2l}{3g} = \frac{x}{g} \Rightarrow x = \frac{2l}{3}$.
$\therefore$ Length of the equivalent simple pendulum $= \frac{2l}{3}$.
50 A uniform disc of radius r is to be suspended through a small hole made in the disc. Find the minimum possible time period of the disc for small oscillations. What should be the distance of the hole from the centre for it to have minimum time period ?
##### Solution :
Suppose the hole is at a distance $x$ from the centre of the disc (mass $m$, radius $r$); here $l' = x$.
Moment of inertia about the suspension point A: $I = I_{C.G.} + mx^2 = \frac{mr^2}{2} + mx^2 = m\left(\frac{r^2}{2} + x^2\right)$
$T = 2\pi\sqrt{\frac{I}{mgx}} = 2\pi\sqrt{\frac{m(r^2/2 + x^2)}{mgx}} = 2\pi\sqrt{\frac{r^2 + 2x^2}{2gx}}$ ....(1)
For $T$ to be minimum, $\frac{d(T^2)}{dx} = 0$:
$\frac{d}{dx}T^2 = \frac{d}{dx}\left[\frac{2\pi^2}{g}\left(\frac{r^2}{x} + 2x\right)\right] = \frac{2\pi^2}{g}\left(-\frac{r^2}{x^2} + 2\right) = 0 \Rightarrow 2x^2 = r^2 \Rightarrow x = \frac{r}{\sqrt{2}}$
Putting this value back in (1):
$T_{min} = 2\pi\sqrt{\frac{r^2 + 2(r^2/2)}{2g\,(r/\sqrt{2})}} = 2\pi\sqrt{\frac{2r^2}{\sqrt{2}\,gr}} = 2\pi\sqrt{\frac{\sqrt{2}\,r}{g}}$
So the hole should be at a distance $r/\sqrt{2}$ from the centre for the time period to be minimum.
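As a numerical illustration (the radius value is our own assumption; the problem leaves $r$ symbolic): for $r = 0.1$ m and $g = 9.8\ \mathrm{m/s^2}$, the hole should be $x = r/\sqrt{2} \approx 7.1$ cm from the centre, giving $T_{min} = 2\pi\sqrt{\sqrt{2}\times 0.1/9.8} \approx 0.75$ s.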
51 A hollow sphere of radius $2$ $cm$ is attached to an $18$ $cm$ long thread to make a pendulum. Find the time period of oscillation of this pendulum. How does it differ from the time period calculated using the formula for a simple pendulum ?
##### Solution :
According to the energy equation,
$mgl(1-\cos\theta) + \frac{1}{2}I\omega'^2 = \text{constant}$
where $l = 0.18 + 0.02 = 0.2$ m is the distance from the point of suspension A to the centre of the sphere, $I$ is the moment of inertia about A, and $\omega' = d\theta/dt$. So
$mg(0.2)(1-\cos\theta) + \frac{1}{2}I\omega'^2 = C$, with $I = \frac{2}{3}m(0.02)^2 + m(0.2)^2 = m\left[\frac{0.0008}{3} + 0.04\right] = m\,\frac{0.1208}{3}$.
Differentiating with respect to time and putting in the value of $I$:
$\frac{d}{dt}\left[mg(0.2)(1-\cos\theta) + \frac{1}{2}\,\frac{0.1208}{3}\,m\omega'^2\right] = 0$
$\Rightarrow mg(0.2)\sin\theta\,\frac{d\theta}{dt} + \frac{1}{2}\,\frac{0.1208}{3}\,m\,2\omega'\,\frac{d\omega'}{dt} = 0$
$\Rightarrow 2\sin\theta = -\frac{0.1208}{3}\,\alpha$ [taking $g = 10\ \mathrm{m/s^2}$, with $\alpha = d\omega'/dt$]
$\Rightarrow \left|\frac{\alpha}{\theta}\right| = \frac{6}{0.1208} = \omega^2 \approx 49.7 \Rightarrow \omega \approx 7.0$. So $T = \frac{2\pi}{\omega} \approx 0.892$ sec.
For a simple pendulum of length 0.2 m, $T = 2\pi\sqrt{\frac{0.2}{10}} \approx 0.889$ sec.
% more $= \frac{0.892 - 0.889}{0.889} \approx 0.3$%. $\therefore$ The period is about 0.3% larger than the value calculated from the simple-pendulum formula.
52 A closed circular wire hung on a nail in a wall undergoes small oscillations of amplitude 2° and time period 2 s. Find (a) the radius of the circular wire, (b) the speed of the particle farthest away from the point of suspension as it goes through its mean position, (c) the acceleration of this particle as it goes through its mean position and (d) the acceleration of this particle when it is at an extreme position. Take $g = \pi^2\ \mathrm{m/s^2}$.
##### Solution :
(For a compound pendulum.) The moment of inertia of the circular wire about the point of suspension A is $I = mr^2 + mr^2 = 2mr^2$.
a) $T = 2\pi\sqrt{\frac{I}{mgl'}} = 2\pi\sqrt{\frac{2mr^2}{mgr}} = 2\pi\sqrt{\frac{2r}{g}}$
$\therefore 2 = 2\pi\sqrt{\frac{2r}{g}} \Rightarrow \frac{2r}{g} = \frac{1}{\pi^2} \Rightarrow r = \frac{g}{2\pi^2} = 0.5\ \mathrm{m} = 50$ cm. (Ans)
b) Energy conservation between the extreme and mean positions: $\frac{1}{2}I\omega'^2 - 0 = mgr(1-\cos 2°)$
$\Rightarrow \frac{1}{2}(2mr^2)\,\omega'^2 = mgr(1-\cos 2°)$
$\Rightarrow \omega'^2 = \frac{g}{r}(1-\cos 2°)$
$\Rightarrow \omega' \approx 0.11$ rad/sec [putting in the values of $g$ and $r$]
$\Rightarrow v = \omega'(2r) = 0.11\times 1\ \mathrm{m} = 11$ cm/sec.
c) At the mean position the acceleration of this particle is centripetal:
$a_n = \omega'^2(2r) = (0.11)^2\times 100 \approx 1.2\ \mathrm{cm/s^2}$, directed towards the point of suspension.
d) At the extreme position the centripetal acceleration is zero, but the particle still has a tangential acceleration due to the SHM. Since $T = 2$ sec, the angular frequency of the oscillation is $\omega = \frac{2\pi}{T} = \pi$ rad/s.
Angular acceleration at the extreme position: $\alpha = \omega^2\theta_0 = \pi^2\times\frac{2\pi}{180} = \frac{2\pi^3}{180}$ rad/s² [since $1° = \frac{\pi}{180}$ radian]
So the tangential acceleration $= \alpha(2r) = \frac{2\pi^3}{180}\times 100 \approx 34\ \mathrm{cm/s^2}$.
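A quick numeric cross-check of these answers (all values as stated above, nothing new assumed): $1-\cos 2° \approx 6.1\times 10^{-4}$, so $\omega'^2 = \frac{\pi^2}{0.5}\times 6.1\times 10^{-4} \approx 0.012$ and $\omega' \approx 0.11$ rad/s; then $v = \omega'(2r) \approx 0.11$ m/s $= 11$ cm/s, $a_n = \omega'^2(2r) \approx 1.2$ cm/s², and at the extreme $\alpha(2r) = \frac{2\pi^3}{180}\times 1\ \mathrm{m} \approx 0.34\ \mathrm{m/s^2} = 34$ cm/s², consistent throughout.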
53 A uniform disc of mass $m$ and radius $r$ is suspended through a wire attached to its centre. If the time period of the torsional oscillations be $T$, what is the torsional constant of the wire.
##### Solution :
Moment of inertia of the disc about its centre $= \frac{mr^2}{2}$.
$T = 2\pi\sqrt{\frac{I}{K}} = 2\pi\sqrt{\frac{mr^2}{2K}}$ [where $K$ = torsional constant]
$T^2 = 4\pi^2\,\frac{mr^2}{2K} = 2\pi^2\,\frac{mr^2}{K}$
$\Rightarrow$ $2\pi^2$ $mr^2$ = $KT^2$ $\Rightarrow$ $K$ = $\frac{2mr^2\pi^2}{T^2}$
$\therefore$ Torsional constant $K$ = $\frac{2mr^2\pi^2}{T^2}$
54 Two small balls, each of mass $m$, are connected by a light rigid rod of length $L$. The system is suspended from its centre by a thin wire of torsional constant $k$. The rod is rotated about the wire through an angle $\theta_0$ and released. Find the tension in the rod as the system passes through the mean position.
##### Solution :
The M.I of the two ball system
$I = 2m\left(\frac{L}{2}\right)^2 = \frac{mL^2}{2}$
At any angular position $\theta$ during the oscillation [fig-2], the torque is $k\theta$. So the work done during the displacement from 0 to $\theta_0$ is
$W = \int_0^{\theta_0} k\theta\,d\theta = \frac{k\theta_0^2}{2}$
By the work–energy method, $\frac{1}{2}I\omega^2 - 0 = \frac{k\theta_0^2}{2}$, so at the mean position
$\omega^2 = \frac{k\theta_0^2}{I} = \frac{2k\theta_0^2}{mL^2}$
Each ball then needs a centripetal force $m\omega^2\frac{L}{2} = \frac{k\theta_0^2}{L}$. From the free-body diagram of a ball, the tension in the rod is
$T = \sqrt{\left(\frac{k\theta_0^2}{L}\right)^2 + (mg)^2}$
55 A particle is subjected to two simple harmonic motions of same time period in the same direction. The amplitude of the first motion is 3.0 cm and that of the second is 4.0 cm. Find the resultant amplitude if the phase difference between the motions is (a) 0°, (b) 60°, (c) 90°.
##### Solution :
The particle is subjected to two SHMs of the same time period in the same direction. Given $r_1 = 3$ cm, $r_2 = 4$ cm and phase difference $\phi$, the resultant amplitude is $R = \sqrt{r_1^2 + r_2^2 + 2r_1r_2\cos\phi}$.
a) When $\phi = 0°$: $R = \sqrt{3^2 + 4^2 + 2\times 3\times 4\cos 0°} = 7$ cm.
b) When $\phi = 60°$: $R = \sqrt{3^2 + 4^2 + 2\times 3\times 4\cos 60°} = \sqrt{37} \approx 6.1$ cm.
c) When $\phi = 90°$: $R = \sqrt{3^2 + 4^2 + 2\times 3\times 4\cos 90°} = 5$ cm.
56 Three simple harmonic motions of equal amplitudes A and equal time periods in the same direction combine. The phase of the second motion is 60° ahead of the first and the phase of the third motion is 60° ahead of the second. Find the amplitude of the resultant motion.
##### Solution :
Three SHMs of equal amplitude A and equal time periods in the same direction combine. Representing the three SHMs by vectors at 0°, 60° and 120°, the resultant amplitude is the vector sum
$= A\cos 60° + A + A\cos 60° = \frac{A}{2} + A + \frac{A}{2} = 2A$
So the amplitude of the resultant motion is $2A$.
57 A particle is subjected to two simple harmonic motions given by $x_1 = 2.0\sin(100\pi t)$ and $x_2 = 2.0\sin(120\pi t + \pi/3)$, where x is in centimetre and t in second. Find the displacement of the particle at (a) t = 0.0125, (b) t = 0.025.
##### Solution :
The resultant displacement is $x = x_1 + x_2 = 2\left[\sin(100\pi t) + \sin(120\pi t + \pi/3)\right]$.
a) At $t = 0.0125$ s: $x = 2\left[\sin\frac{5\pi}{4} + \sin\left(\frac{3\pi}{2} + \frac{\pi}{3}\right)\right] = 2[(-0.707) + (-0.5)] \approx -2.41$ cm.
b) At $t = 0.025$ s: $x = 2\left[\sin\frac{5\pi}{2} + \sin\left(3\pi + \frac{\pi}{3}\right)\right] = 2[1 + (-0.866)] \approx 0.27$ cm.
58 A particle is subjected to two simple harmonic motions, one along the X-axis and the other on a line making an angle of 45° with the X-axis. The two motions are given by $x = x_0\sin\omega t$ and $s = s_0\sin\omega t$. Find the amplitude of the resultant motion.
##### Solution :
The particle is subjected to two simple harmonic motions $x = x_0\sin\omega t$ and $s = s_0\sin\omega t$, with an angle of 45° between the two directions of motion. The resultant displacement is
$R = \sqrt{x^2 + s^2 + 2xs\cos 45°} = \sqrt{x_0^2\sin^2\omega t + s_0^2\sin^2\omega t + 2x_0 s_0\sin^2\omega t\,\tfrac{1}{\sqrt{2}}} = \left[x_0^2 + s_0^2 + \sqrt{2}\,x_0 s_0\right]^{1/2}\sin\omega t$
$\therefore$ Resultant amplitude $= \left[x_0^2 + s_0^2 + \sqrt{2}\,x_0 s_0\right]^{1/2}$
https://www.hammady.net/2008/03/24/tex-latex-lyx-authors-kit-and-more.html | If you are related to IT and computer industry, most probably you heard about these terms. Its all about documents preparation. If you have used Microsoft Word (sorry, no hyperlink here) or OpenOffice.org then you have already practiced with document processing systems.
So what is the difference between document preparation and document processing? The latter are WYSIWYG systems: you take care of everything so that you see it exactly as you want to get it. That is convenient for small documents. However, if you are preparing a technical paper or a thesis, those won't save you much. In such complex documents you include figures, tables, complex equations and refer to them by numbers. You include a lot of citations and footnotes. You may even number your pages in different manners in the same document. You don't want to track such numberings, so that moving any section around won't mess with the numbers.
You might say Word supports many such features. In Word you have to format everything yourself, though you may select styles to be applied to different sections; changing the format of a style still makes you pass over all formatted instances. On the contrary, in TeX, the typesetting system, you only define your logical structure instead of formatting constructs. You define `\title`, `\author`, `\chapter`, `\section`, `\subsection`... and the TeX (or LaTeX) compiler will take care of the formatting according to `\documentclass`, whether it is book, article, CV or whatever.
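To make the idea concrete, here is a minimal sketch of such a logically structured LaTeX source; the class, title, author and section names are placeholders of our own, not anything from the original post:

```latex
\documentclass{article} % the class (book, article, a conference class, ...) decides the look
\title{A Minimal Example}
\author{Jane Doe}
\begin{document}
\maketitle
\section{Introduction}
Content is marked up by its logical role only; numbering and layout are handled by the class.
\subsection{Details}
Moving this subsection elsewhere renumbers everything automatically.
\end{document}
```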
If you aim at submitting your paper to a technical conference, you have to abide by the formatting rules of the conference. This is simply done by downloading the class files specific to that conference (usually available at the conference site) and including them in your document. Just run the LaTeX compiler and you instantly have a PostScript/PDF conforming to the conference style.
TeX, LaTeX? What is the difference? TeX is a macro language invented by Donald Knuth in the late 1970s. It includes simple commands to define logical structure and formatting. It is something like assembly code; nobody writes in raw TeX. LaTeX is a bundle of macro definitions that abstracts TeX into easy-to-write macros (\section, \title, ...). You write your documents in LaTeX and run the LaTeX compiler to generate DVI (DeVice Independent) files that can be viewed with Evince or KDVI. You may run dvips to generate PostScript, or pdflatex directly on the LaTeX source to generate PDFs. There are many other ways to convert between such formats.
Another reason why Word is not adequate for complex documents is that it may even crash! I know examples where MS Office documents crashed when they became quite complex, leaving their owners with locked documents that were rendered useless.
MS Office stands against openness. When you write your documents in Word and send them to others, you are urging them to purchase Microsoft products to see your "closed-format" document. OpenOffice could be a solution, but Microsoft deliberately changes its format almost from version to version. You may read more about this format conflict.
For LaTeX newbies, it is not easy to be productive. You have to be very familiar with LaTeX to play with it. LyX is a front-end for LaTeX. It is WYSIWYM, as they claim: What You See Is What You Mean! You define logical structure through menus, toolbars and keyboard shortcuts and you instantly get a near-preview of the format. LyX has its own file syntax (TeX, LaTeX and LyX all store files in ASCII format and can be edited with any usual text editor). However, you can export your document to LaTeX, PS, PDF, etc.
My experience with such tools suggests bootstrapping with LyX as a fast start with TeX. The more you work, the more you learn about LaTeX. At a certain stage, when you find LyX restrictive, you will export to LaTeX and continue writing in LaTeX. You may use Kile for that, to autocomplete commands and easily generate DVIs and PostScripts.
https://calculator-online.net/hypergeometric-calculator/ | # Hypergeometric Calculator
Table of Contents
1. What is hypergeometric distribution?
2. Hypergeometric Distribution Formula
3. About hypergeometric calculator
4. How to Use This hypergeometric distribution calculator?
5. FAQ's
6. Takeaway
7. References
The hypergeometric calculator is a tool for calculating individual and cumulative hypergeometric probabilities. It also computes a table of the probability mass function or of the lower/upper cumulative distribution function of the hypergeometric distribution, draws the corresponding chart, and reports the mean, variance, and standard deviation of the distribution.
## What is hypergeometric distribution?
A hypergeometric distribution is a probability distribution that gives the probabilities associated with the number of successes in a hypergeometric experiment. You can use this hypergeometric calculator to figure out hypergeometric distribution probabilities instantly.
Let’s elaborate it with an example:
Suppose that you randomly select 5 cards from an ordinary deck of playing cards and ask: what is the probability distribution for the number of red cards in our selection? In this example, selecting a red card counts as a success. The probabilities associated with each possible outcome are an example of a hypergeometric distribution, as shown in the table below:
Outcome | Hypergeometric Prob. | Cumulative Prob.
--- | --- | ---
0 red cards | 0.025 | 0.025
1 red card | 0.150 | 0.175
2 red cards | 0.325 | 0.500
3 red cards | 0.325 | 0.825
4 red cards | 0.150 | 0.975
5 red cards | 0.025 | 1.000
Given this probability distribution, you can see at a glance the cumulative and individual probabilities associated with any outcome. For instance, the cumulative probability of selecting 1 or fewer red cards is 0.175, while the individual probability of selecting exactly 1 red card is 0.150.
### Hypergeometric Distribution Formula:
The hypergeometric distribution probabilities or statistics can be derived from the following formula:
Formula:
h(k; N, n, K) = [ C(K, k) × C(N−K, n−k) ] / C(N, n)
where:
N is the population size
K is the number of successes in the population
n is the sample size
k is the number of successes in the sample
C(a, b) denotes the number of combinations of a items taken b at a time
h is the hypergeometric probability
A short computational sketch of this formula is given below.
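For readers who prefer to check the formula by hand, here is a short Python sketch of it (our own illustration; the calculator's actual code is not published). It reproduces the red-card table above:

```python
from math import comb

def hypergeom_pmf(k, N, n, K):
    """h(k; N, n, K) = C(K, k) * C(N - K, n - k) / C(N, n)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Red-card example: population N = 52 cards, K = 26 red cards, sample size n = 5
for k in range(6):
    print(k, "red:", round(hypergeom_pmf(k, N=52, n=5, K=26), 3))
# prints 0.025, 0.15, 0.325, 0.325, 0.15, 0.025, matching the table above
```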
The hypergeometric distribution calculator is an online discrete-statistics tool that determines individual and cumulative hypergeometric probabilities. It will assist you in calculating the following quantities and drawing the chart for a hypergeometric distribution:
• probability mass function
• Lower Cumulative Distribution P
• Upper Cumulative Distribution Q
• Mean of hypergeometric distribution
• Variance hypergeometric distribution
• Standard Deviation hypergeometric distribution
### How to Use This hypergeometric distribution calculator:
This hypergeometric calculator has a user-friendly interface; just follow the given steps to get instant results:
#### Calculation for Hypergeometric Probability distribution:
Inputs:
• First of all, select the option Hypergeometric Probability distribution from the drop-down menu
• Enter the population size (N) into the designated field
• Next, enter the number of successes in the population (K)
• Then, enter the sample size (n)
• Finally, enter the number of successes in the sample (k)
Outputs:
Once done, hit the calculate button; the calculator will show the following (a scripted sketch reproducing these quantities is given after the list):
• Hypergeometric Probability: P(X = x)
• Cumulative Probability: P(X < x)
• Cumulative Probability: P(X ≤ x)
• Cumulative Probability: P(X > x)
• Cumulative Probability: P(X ≥ x)
• Mean
• Variance
• Standard Deviation
• Hypergeometric Distribution Probability Chart
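For comparison, the quantities listed above can be reproduced with SciPy's `hypergeom` distribution. This is an illustrative sketch with assumed example values, not the calculator's implementation; note that SciPy's argument order is `hypergeom(M, n, N)` = (population size, successes in the population, sample size):

```python
from scipy.stats import hypergeom

# Assumed example values: population 52, 26 successes, sample of 5, k = 2
N_pop, K_succ, n_samp, k = 52, 26, 5, 2
rv = hypergeom(N_pop, K_succ, n_samp)   # SciPy order: (M, n, N)

print("P(X = k) :", rv.pmf(k))       # hypergeometric probability
print("P(X < k) :", rv.cdf(k - 1))   # cumulative probability
print("P(X <= k):", rv.cdf(k))
print("P(X > k) :", rv.sf(k))
print("P(X >= k):", rv.sf(k - 1))
print("mean:", rv.mean(), "variance:", rv.var(), "std:", rv.std())
```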
#### Calculation for Hypergeometric Probability distribution (chart):
Inputs:
• First of all, choose the option Hypergeometric Probability distribution (chart) from the drop-down menu
• Next, select the function you want to tabulate: the probability mass function, the lower cumulative distribution P, or the upper cumulative distribution Q
• Enter the population size (N) into the designated field
• Then, add the number of successes in the population (K)
• Next, add the sample size (n)
• Then, enter the initial value of successes in the sample (k)
• Enter the increment, i.e. by how much k should grow at each repetition
• Finally, enter how many steps you want to repeat
Outputs:
Once done, hit the calculate button; the hypergeometric distribution (chart) calculator will show (a small scripted sketch of this tabulation follows the list):
• Table of probability according to the selected function
• Mean
• Variance
• Standard Deviation
• Draws the chart for a hypergeometric distribution
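The chart/table mode can be mimicked in the same way; the parameter values below are assumptions chosen only to show the stepping of k, not values from the page:

```python
from scipy.stats import hypergeom

# Assumed parameters: population 50, 20 successes, sample of 10;
# tabulate the PMF at k = 0, 2, 4, ... for 6 steps.
N_pop, K_succ, n_samp = 50, 20, 10
k_initial, increment, steps = 0, 2, 6

rv = hypergeom(N_pop, K_succ, n_samp)
for i in range(steps):
    k = k_initial + i * increment
    print(f"k = {k:2d}   P(X = k) = {rv.pmf(k):.6f}")
print("mean =", rv.mean(), " variance =", rv.var(), " std =", rv.std())
```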
## FAQ’s
### How do you know when to use hypergeometric distribution?
You can use the hypergeometric distribution with populations that are so small that the outcome of one trial has a large effect on the probabilities of the remaining outcomes. For instance, within a population of 10 people, only 7 people have A+ blood. Try the calculator above to find the corresponding hypergeometric probabilities.
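To continue that blood-type example with an assumed sample size of 3 (the FAQ itself does not fix one), the probability that all three sampled people have A+ blood works out as:

```python
from math import comb

# 10 people in the population, 7 of them with A+ blood; sample 3 without replacement
p_all_three = comb(7, 3) * comb(3, 0) / comb(10, 3)
print(p_all_three)   # 35/120, i.e. about 0.29
```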
### What is a hypergeometric experiment?
The hypergeometric experiment has two defining features:
• The random selections from the finite population take place without any replacement
• Each item in the population can either be considered as a success or failure
A hypergeometric distribution then gives the probability associated with the occurrence of a specific number of successes in such an experiment.
### What is the number of successes?
In a hypergeometric experiment, each item in the population is classified as either a success or a failure. The number of successes is a count of the successes in a particular group: the number of successes in the sample is the count of successes in the sample, and the number of successes in the population is the count of successes in the population.
### What is a hypergeometric probability?
A hypergeometric probability is a probability associated with a hypergeometric experiment.
## Takeaway:
The hypergeometric distribution handles situations such as sampling from a data set with a known number of defective items. Use the hypergeometric calculator above to make such calculations easy!
http://books.duhnnae.com/2017/jul7/150111112786-Stochastic-kinetics-of-ribosomes-single-motor-properties-and-collective-behavior-Physics-Biological-Physics.php | # Stochastic kinetics of ribosomes: single motor properties and collective behavior - Physics > Biological Physics
Abstract: Synthesis of protein molecules in a cell is carried out by ribosomes. A ribosome can be regarded as a molecular motor which utilizes the input chemical energy to move on a messenger RNA (mRNA) track that also serves as a template for the polymerization of the corresponding protein. The forward movement, however, is characterized by an alternating sequence of translocation and pause. Using a quantitative model, which captures the mechanochemical cycle of an individual ribosome, we derive an exact analytical expression for the distribution of its dwell times at the successive positions on the mRNA track. The inverse of the average dwell time satisfies a Michaelis-Menten-like equation and is consistent with the general formula for the average velocity of a molecular motor with an unbranched mechano-chemical cycle. Extending this formula appropriately, we also derive the exact force-velocity relation for a ribosome. Often many ribosomes simultaneously move on the same mRNA track, while each synthesizes a copy of the same protein. We extend the model of a single ribosome by incorporating steric exclusion of different individuals on the same track. We draw the phase diagram of this model of ribosome traffic in 3-dimensional spaces spanned by experimentally controllable parameters. We suggest new experimental tests of our theoretical predictions.
Author: Ashok Garai, Debanjan Chowdhury, Debashish Chowdhury, T.V. Ramakrishnan
Source: https://arxiv.org/
http://www.groupsrv.com/science/about532139-0-asc-60.html
## Event horizon / black holes and Schwarzschild metrics... (Physics - Research Forum)
Oh No...
Posted: Tue Jun 29, 2010 7:28 am
Thus spake eric gisse [quote]Oh No wrote: [...] The phrase "dynamical stellar collapse model" does not explicitly include a statement of smoothness [...] This is not true. Smoothness of initial data is specifically assumed. [/quote] Then explain: precisely which one of the words "dynamical", "stellar", "collapse" and "model" is synonymous with the word "smooth"? [quote]In the model I have proposed space outside of a pointlike particle obeys the Einstein field equation; it is not meaningful to talk of a region inside a pointlike particle. This leads to a discontinuity in the metric at the position of a pointlike particle, such that the event horizon has the topology of a point. Only in Schwarzschild is the central singularity a point. In the Kerr solution, it is an annulus. [/quote] That is not important to the model under consideration, because both of those features (as well as the apparent singularities of the Kerr solution) are contained in the region which I have described as not meaningful. [quote] If this is correct, then when many particles are placed at the same point in order to create a massive black hole (neglecting degeneracy pressure), then the black hole also has the topological structure of a point, I see a rather large amount of people in this thread proposing various 'models' and whatnot, while not really knowing what a model _is_. A model is not taking a piece of general relativity (a black hole) and then demanding it be a point or some crap like that. A model _predicts_ this from its' founding postulates, which nobody in this thread who has a model has actually done. [/quote] Actually, if you were to have studied the detail of the model I have described, as given in the referenced paper RQG III at http://rqgravity.net/Papers (see also RQG I & II for a full development), then you might appreciate a) that my conclusion is the result of fundamental postulates, and b) to demand a full development in a newsgroup is unreasonable. [quote] and there is no region inside the event horizon. It is then not possible to say that known physics has been tested in a region containing the event horizon of a black hole, or that dynamical stellar collapse in this model would lead to the creation of an interior region. No. Theory can not dictate observation. [/quote] Sorry, but that remark seems rather incoherent. First, it appears to suggest that observation is independent of theory -- but it were, theory would be meaningless. Second, it has no direct application to what was said. Regards -- Charles Francis moderator sci.physics.foundations. charles (dot) e (dot) h (dot) francis (at) googlemail.com (remove spaces and braces) http://www.rqgravity.net
Oh No...
Posted: Tue Jun 29, 2010 7:30 am
Thus spake Daryl McCullough [quote]Oh No says... In the model I have proposed space outside of a pointlike particle obeys the Einstein field equation; it is not meaningful to talk of a region inside a pointlike particle. This leads to a discontinuity in the metric at the position of a pointlike particle, such that the event horizon has the topology of a point. If this is correct, then when many particles are placed at the same point in order to create a massive black hole (neglecting degeneracy pressure), then the black hole also has the topological structure of a point, and there is no region inside the event horizon. It is then not possible to say that known physics has been tested in a region containing the event horizon of a black hole, or that dynamical stellar collapse in this model would lead to the creation of an interior region. I can't remember if you ever responded to the point made originally by Stephen Carlip (I think) about an spherically symmetric collapse from the point of view of those inside the sphere. [/quote] I did. [quote] Imagine that a spherical shell of stars centered on our sun suddenly were deflected towards our sun (so that the velocity was purely radially inward). For definiteness, let's assume that this shell starts at a distance of 1 light-year from our sun, so we know that this shell will not bother us for at least a year. Let's assume that there is enough matter in this shell to produce a black hole with a radius of 1 light-year. [/quote] ok. [quote]From the point of view of observers outside this shell, the geometry of spacetime would approach that of the Schwarzschild solution as the shell of stars gets closer to its own Schwarzschild radius. But in this case, it is *clearly* the case that there is more going on inside the event horizon, because *we* are inside the event horizon. Our lives are going on as normal (at least for another year). To slice off the manifold at the event horizon makes no sense in this case, because the interior of the event horizon includes stuff that we know is there. You would need both an interior and an exterior solution to describe both the manifold viewed by observers outside the event horizon, and also the manifold viewed by us unfortunate souls inside the event horizon. [/quote] This is all very fine, but it has nothing to do with what I have described. The event horizon in the instance you describe is more like the Rindler horizon, in the sense that is an artefact of coordinate systems. To apply what I said above to it, one has to look at the geometry generated by many individual elementary point-like particles distributed at different positions in space. Regards -- Charles Francis moderator sci.physics.foundations. charles (dot) e (dot) h (dot) francis (at) googlemail.com (remove spaces and braces) http://www.rqgravity.net
Juan R. González-Álvarez...
Posted: Fri Jul 23, 2010 4:49 am
Igor Khavkine wrote on Tue, 22 Jun 2010 23:47:17 +0000: [quote]Gerry Quinn wrote: In article , jowr.pi.nospam at (no spam) gmail.com says... We know there has to be a modification close to the singularity but there is no reason to expect that there will be a modification near the event horizon except for the cases where the horizon and singularity are 'close together'. There is no reason *within GR* to suspect it. I assert that there are many reasons to strongly suspect it, when considerations other than GR are taken into account. Actually the statement is stronger than you want it to sound: there is no reason within known and tested physics to suspect breakdown of GR at the event horizon. [/quote] That is not at all right. First, GR is an approximated model of gravity, not a final theory valid everywhere. Second, when you consider the well-established *fact* that energy-matter is quantum, you can start to consider quantum corrections to GR. One of the oldest corrections known predicts that BH would emit radiation. But other more elaborated corrections predict significant deviations from a pure GR description. There was a large thread in this newsgroup where all this was disccused and recent literature references given. I recall a recent paper where authors showed that exist deviations from GR at R=2M when one does not rely in a purely classical model arXiv:0902.0346v2 There are also more sophisticated models build over the quantum field theory of gravity (FTG) that show that the GR description is based in approximations as ignoring the graviton component T_g in the source term. (...) -- http://www.canonicalscience.org/ BLOG: http://www.canonicalscience.org/publications/canonicalsciencetoday/canonicalsciencetoday.html
mathematician...
Posted: Fri Jul 23, 2010 7:50 am
On Jun 1, 7:00 am, carlip-nos... at (no spam) physics.ucdavis.edu wrote: [quote]Peter wrote: Dear all, onhttp://jvr.freewebpage.org/is a translation of Schwarzschild's paper http://de.wikisource.org/wiki/%C3%9Cber_das_Gravitationsfeld_eines_Ma... The translators remark that Schwarzschild's results provide not any hint towards an event horizon and black holes. IMHO, this is correct, see, in particular, eq.(14). Eq. (14) is what is now called the Schwarzschild metric. The event horizon is at R=\alpha. There is certainly a hint of black holes -- the radial acceleration necessary to hold an object at rest diverges as R->\alpha, for instance. I suspect your confusion (and that of the translators of the paper) comes from the equation R=(r^3+\alpha^3)^{1/3}. You can certainly define a quantity r this way. But the coordinate value r=0 is not the origin. This is easy to see -- a sphere of constant r and t has a surface area of A = 4\pi R^2 = 4 pi (r^3+\alpha^3)^{2/3} In particular, the sphere at r=0 is, in fact, a sphere, with an area 4\pi\alpha^2, and not a point. Of course, the coordinate system of eq. (14) breaks down at R=\alpha. We now understand that this has no deep significance* -- polar coordinates on the plane break down at r=0, but that doesn't mean the origin isn't a real point. To understand the behavior of a black hole at or beyond the horizon, one must change to coordinates that are well-defined everywhere. There are a number of possibilities; you should look up Kruskal-Szekeres coordinates and Painleve-Gullstrand coordinates, for example. (*There is, actually, some significance to the breakdown of Schwarzschild coordinates at the horizon. The Schwarzschild metric is derived as a static solution of the field equations. But in fact, the spacetime inside the horizon is not static, so the assumption used to define the coordinates breaks.) Steve Carlip [/quote] You said above that the spacetime inside the horizon is not static, so the assumption used to define the coordinates breaks. Please Steve could you comment following two questions: Q1. The vaccuum inside the horizon is different kind than vaccuum outside the horizon? Q2. The arrow of time chances opposite inside the horizon compared to the arrow of time outside the horizon? Hannu
eric gisse...
Posted: Sun Jul 25, 2010 7:26 am
Juan R. =?iso-8859-1?q?Gonz=E1lez-=C1lvarez?= wrote: [...] [quote]Actually the statement is stronger than you want it to sound: there is no reason within known and tested physics to suspect breakdown of GR at the event horizon. That is not at all right. First, GR is an approximated model of gravity, not a final theory valid everywhere. Second, when you consider the well-established *fact* that energy-matter is quantum, you can start to consider quantum corrections to GR. One of the oldest corrections known predicts that BH would emit radiation. But other more elaborated corrections predict significant deviations from a pure GR description. There was a large thread in this newsgroup where all this was disccused and recent literature references given. I recall a recent paper where authors showed that exist deviations from GR at R=2M when one does not rely in a purely classical model arXiv:0902.0346v2 There are also more sophisticated models build over the quantum field theory of gravity (FTG) that show that the GR description is based in approximations as ignoring the graviton component T_g in the source term. (...) [/quote] That's all well and good, but is there any _evidence_ for these alternative models? Especially you take into account observations of Sgr. A* which image the black hole down to a few Schwarzschild radii, and other observations which show 0 evidence for a compact surface.
Gerry Quinn...
Posted: Wed Jul 28, 2010 7:33 am
In article , jowr.pi.nospam at (no spam) gmail.com says... [quote]Juan R. =?iso-8859-1?q?Gonz=E1lez-=C1lvarez?= wrote: [...] There are also more sophisticated models build over the quantum field theory of gravity (FTG) that show that the GR description is based in approximations as ignoring the graviton component T_g in the source term. [..][/quote] [quote]That's all well and good, but is there any _evidence_ for these alternative models? Especially you take into account observations of Sgr. A* which image the black hole down to a few Schwarzschild radii, and other observations which show 0 evidence for a compact surface. [/quote] This is only relevant if the alternatives predict a surface of such a kind, and at non-gigantic red shifts. For theories in which GR remains a good approximation until very close to the Schwarzschild radius, collapsed objects will look from a distance very similar to GR black holes. The enormous red shift at which deviations from GR will appear will mean that the regions where such deviations are relevant are behind what might be called an "effective event horizon". They will leak a small amount of information in the form of red shifted thermal radiation, but their causal influence on the astronomically visible structures will be negligible. As for your first question, surely the strongest evidence is the generally accepted fact that GR is wrong (for a start, it predicts singularities), but is approximately correct in observed regions. Therefore some other theory that approximates to GR in observed regions must be correct. It is merely a matter of determining which! - Gerry Quinn
eric gisse...
Posted: Thu Jul 29, 2010 3:19 am
Gerry Quinn wrote: [quote]In article , jowr.pi.nospam at (no spam) gmail.com says... Juan R. =?iso-8859-1?q?Gonz=E1lez-=C1lvarez?= wrote: [...] There are also more sophisticated models build over the quantum field theory of gravity (FTG) that show that the GR description is based in approximations as ignoring the graviton component T_g in the source term. [..] That's all well and good, but is there any _evidence_ for these alternative models? Especially you take into account observations of Sgr. A* which image the black hole down to a few Schwarzschild radii, and other observations which show 0 evidence for a compact surface. This is only relevant if the alternatives predict a surface of such a kind, and at non-gigantic red shifts. For theories in which GR remains a good approximation until very close to the Schwarzschild radius, collapsed objects will look from a distance very similar to GR black holes. The enormous red shift at which deviations from GR will appear will mean that the regions where such deviations are relevant are behind what might be called an "effective event horizon". They will leak a small amount of information in the form of red shifted thermal radiation, but their causal influence on the astronomically visible structures will be negligible. [/quote] http://arxiv.org/abs/0903.1105 Such thermal radiation was looked for. That is how it has been determined that there is no compact surface at Sgr. A*. Read the whole paper - it is interesting. The existence of a surface would require some serious fine tuning, as the parameter space left behind is a bit small. [[Mod. note -- I strongly agree with the poster's suggestion that anyone interested read this paper -- it's very clearly written, and the paper's authors are experts in this area. Their basic argument point is that matter is falling into Sag A*, but Sag A* is quite faint -- it's total luminosity is less than 0.4% of the gravitational binding energy flux of the infalling matter. That is, at least 99.6% of the gravitational binding energy of the infalling matter is going somewhere *other* than outward electromagnetic radiation. The authors argue that by far the most plausible "other" route for that energy is that the infalling matter is falling in through an event horizon. -- jt]] [quote] As for your first question, surely the strongest evidence is the generally accepted fact that GR is wrong (for a start, it predicts singularities), but is approximately correct in observed regions. [/quote] It is _exactly_ correct in observed regions. The problem is, there is exceedingly little room for modification at currently observed length and energy scales. I'm sure GR isn't the final answer but I'm increasingly unsure how it can't be. Maybe GR is globally correct and the only thing that needs to be modified is how physics behaves locally. Who knows.. [quote]Therefore some other theory that approximates to GR in observed regions must be correct. It is merely a matter of determining which! - Gerry Quinn[/quote]
Gerry Quinn...
Posted: Thu Jul 29, 2010 7:24 am
Arnold Neumaier...
Posted: Thu Jul 29, 2010 9:08 am
Gerry Quinn wrote: [quote]If the deviation from GR were zero over an extended region, but non-zero at at least one other point, then it would have to have some non-differentiable derivative. [/quote] No. The function f from R^3 to R whose value f(x) is zero if x_1<=0 but exp(-1/x_1) if x_1>0 vanishes over an entier halfspace. But all its higher dierivatives are smooth. One can construct similar smooth functions whose support is a compact set of arbitrarily small diameter. Arnold Neumaier
Gerry Quinn...
Posted: Thu Jul 29, 2010 9:08 am
In article , gerryq at (no spam) indigo.ie says... [quote]I've proposed an alternative. What theoretical or observational problems do you see with it? [ Mod. note: Many problems with the above mentioned proposal were already pointed out in this very thread. Anyone looking to jog their memory is encouraged to reread the group archives. -ik ] [/quote] Are you sure you are not confusing it with some other proposal, perhaps that of Charles Francis? When I explained my proposal (in answer to a post of Tom Roberts), there were no posts whatsoever in response. I have just checked on Google and it is the same, so my news server is not to blame. The message on Google is: http://groups.google.ie/group/sci.physics.research/msg/1e7535c9af250239 ?hl=en If I missed something, please refresh my memory, if you will. I am very willing to listen to and discuss any arguments. - Gerry Quinn
Igor Khavkine...
Posted: Thu Jul 29, 2010 10:33 am
eric gisse...
Posted: Fri Jul 30, 2010 4:49 pm
Gerry Quinn...
Posted: Sat Jul 31, 2010 5:43 am
In article , jowr.pi.nospam at (no spam) gmail.com says... [quote]Gerry Quinn wrote: It is _exactly_ correct in observed regions. How can you *possibly* make such an assertion? Because I said 'observed', which is true. [/quote] You said "exactly correct in observed regions" which is most certainly not established (and is unlikely to be true if it is not exactly correct everywhere, even though Arnold Neumaier has pointed out that there are indeed some smooth functions that the physics could in principle obey). [quote]No measurement, direct or indirect, has been conducted to infinite precision. Nor did I say that an experiment has been. [/quote] Taking this and the following... [quote]And no logical argument has been proposed as to why it cannot be an approximation (in contrast to, say, the linearity of quantum theory, which causes theoretical problems if it is not exact.) [/quote] .....into account, your assertion has no basis other than your own prejudices - something you have regularly been accusing others of. [quote]Furthermore, I think the opposite case can in fact be made! Surely in GR spacetime is modelled as a smooth manifold? Doesn't that mean that if it is wrong anywhere, it must be wrong everywhere, or at least in any finite hypervolume? If the deviation from GR were zero over an extended region, but non-zero at at least one other point, then it would have to have some non-differentiable derivative. If GR is wrong, it is an approximation that covers an overwhelmingly large part of the universe to currently available precision. [/quote] That may be a correct statement. It was once a correct statement about Newtonian gravity. By no means does everyone think it is a correct statement, with ideas like dark energy and supposed anomalies in galactic gravitational fields floating around. But I remain highly agnostic on such matters, so I won't labour the point. In my opinion GR is indeed a very good approximation in most places. [quote]Maybe GR is globally correct and the only thing that needs to be modified is how physics behaves locally. Who knows.. I don't really understand what you are proposing here. Perhaps the only thing that needs to be modified is quantum theory and how it behaves on an arbitrary manifold. [/quote] But the singularity problem is not caused by other parts of physics - it emerges from GR itself. Even if you throw away all the rest of physics, you *still* have the same problem! [quote]I know there are problems with this, but the amount of things that annoy me about quantum theory are substantially larger than the amount of things in general relativity. [/quote] My feeling is the opposite - I find it very difficult to understand why so many people cling so strongly to the geometric theory of gravity. - Gerry Quinn
eric gisse...
Posted: Sun Aug 01, 2010 10:36 am
Gerry Quinn wrote: [quote]In article , jowr.pi.nospam at (no spam) gmail.com says... Gerry Quinn wrote: It is _exactly_ correct in observed regions. How can you *possibly* make such an assertion? Because I said 'observed', which is true. You said "exactly correct in observed regions" which is most certainly not established (and is unlikely to be true if it is not exactly correct everywhere, even though Arnold Neumaier has pointed out that there are indeed some smooth functions that the physics could in principle obey). [/quote] Exactly correct within all available precision. Not 'exactly correct to arbitrary precision', which is what you seem to think my statement implies. [...] [quote]Maybe GR is globally correct and the only thing that needs to be modified is how physics behaves locally. Who knows.. I don't really understand what you are proposing here. Perhaps the only thing that needs to be modified is quantum theory and how it behaves on an arbitrary manifold. But the singularity problem is not caused by other parts of physics - it emerges from GR itself. Even if you throw away all the rest of physics, you *still* have the same problem! [/quote] Why is it a problem? The nature of a black hole's interior is closed off from the external universe. Forever. There is no way - even in principle - for this information to be relevant. Not even if you hit GR with a little quantum field theory and allow for Hawking radiation - the singularity is hidden. Besides, leptons are points in quantum theory. Shared 'problem' that goes all the way back to Maxwell. [quote] I know there are problems with this, but the amount of things that annoy me about quantum theory are substantially larger than the amount of things in general relativity. My feeling is the opposite - I find it very difficult to understand why so many people cling so strongly to the geometric theory of gravity. - Gerry Quinn [/quote] Because it works. The theory predicts a wide range of non-intuitive effects that are very nonlinear and dis-similar to what simple theories like Newton predict. It has survived every test in the past century or so. Quantum theory, on the other hand, has had to be majorly re-adjusted every 20-30 years or so since 1930. Not that this is a bad thing, but makes it far less likely - in my eye - to be anywhere near 'the final answer'. Give it a half century of stability and then we'll see.
Gerry Quinn...
Posted: Mon Aug 02, 2010 7:32 am
https://mathoverflow.net/questions/373689/sobolev-inequalities-on-manifolds-dependence-of-the-constants-on-the-riemannian | Sobolev inequalities on manifolds: dependence of the constants on the Riemannian metric
Let $$g$$ be a smooth Riemannian metric on the 2-torus $$T^2$$. $$g$$ induces the Sobolev space $$W^{2,2}_g(T^2)$$ via the norm $$\|f\|_{W^{2,2}_g}^2 = \int_M |f|^2 + g(\nabla^2 f,\nabla^2 f)\, \text{vol}_g,$$ where $$g$$ is extended multi-linearly to all tensor bundles, $$\nabla$$ is the Levi-Civita connection of $$g$$, and $$\text{vol}_g$$ is the volume form. Since $$g$$ is equivalent to the flat metric on the torus, we have the Sobolev inequality $$\|f\|_{L^\infty} \le C \|f\|_{W^{2,2}_g}.$$
Question: Is there any reference to the dependence of $$C$$ on intrinsic properties of $$g$$ (e.g., its volume and curvature)?
We are also interested in this question for other closed manifolds, and other Sobolev inequalities.
For example, when the underlying manifold is one dimensional, that is, $$S^1$$, then the only intrinsic property of the metric is the total length $$\ell_g$$, and one can get $$\|f\|_{L^{\infty}}^2 \leq \left(\ell_g/2+ 2/\ell_g\right) \|f\|_{W^{1,2}(g)}^2.$$ This is shown in Lemma 2.14 of the article by Bruveris-Michor-Mumford, https://arxiv.org/pdf/1312.4995.pdf, or, more generally, for open curves, in Theorem 7.40 of Leoni's "A First Course in Sobolev Spaces," 2nd edition.
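(Added here for illustration; it is not part of the original question.) The quoted $$S^1$$ constant is easy to spot-check numerically. The sketch below, in Python, assumes a periodic finite-difference grid and an arbitrary smooth test function; it verifies the inequality for a few circle lengths but of course proves nothing.

```python
import numpy as np

def check(length, n=4096):
    """Spot-check ||f||_inf^2 <= (l/2 + 2/l) * ||f||_{W^{1,2}}^2 on a circle of length l."""
    s = np.linspace(0.0, length, n, endpoint=False)        # arc-length parameter
    f = np.exp(np.sin(2 * np.pi * s / length)) - 0.3 * np.cos(6 * np.pi * s / length)
    ds = length / n
    df = (np.roll(f, -1) - np.roll(f, 1)) / (2 * ds)       # periodic central difference
    sup_sq = np.max(np.abs(f)) ** 2
    sobolev_sq = np.sum(f ** 2 + df ** 2) * ds             # ||f||_{W^{1,2}}^2
    bound = (length / 2 + 2 / length) * sobolev_sq
    print(f"l = {length:6.2f}   sup^2 = {sup_sq:8.4f}   bound = {bound:10.4f}   ok = {sup_sq <= bound}")

for l in (0.5, 1.0, 2 * np.pi, 20.0):
    check(l)
```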
• I don't have time to look up the reference, but I recall that Taylor's three volumes on PDEs develops the theory over Riemannian manifolds and may have proofs that let you make dependence explicit.
– Neal
Oct 9, 2020 at 17:05
• Try looking at the book Sobolev Spaces on Riemannian Manifolds by Hebey. Oct 9, 2020 at 23:39
• @DeaneYang : thanks, though it seems to me that Hebey's books only deal with subcritical Sobolev embeddings, and the ones I'm interested in are mostly super-critical (Morrey-type) inequalities.
– C M
Oct 10, 2020 at 12:49
• You can probably get the $L^\infty$ bound from the sharp $L^2$ inequality using Moser iteration. My guess is that this is somewhere in a paper or book that requires elliptic estimates on a Riemannian manifold. If you know or learn the basic outline of what Moser iteration is, then you probably can work out the details yourself. You can see something like what you want in Appendix C of my paper Convergence of Riemannian manifolds with integral bounds on curvature. II. Ann. Sci. École Norm. Sup. (4) 25 (1992), no. 2, 179–199. Oct 10, 2020 at 18:41
• @DeaneYang Thanks a lot! I'll look further in this direction.
– C M
Oct 10, 2020 at 18:57
If you are interested in understanding arbitrary metrics $$g$$ on a 2-dimensional torus, you can proceed as follows. By the uniformization theorem -- or equivalent simpler arguments -- we can write $$g=\exp(2u) g_0$$ where $$g_0$$ is a flat metric. It is not difficult to do many such calculations explicitly for $$g_0$$. And one also sees: if you can control the function $$u$$ and its derivatives, then you can use Sobolev constants with respect to $$g_0$$ in order to get explicit, but in general not optimal Sobolev constants with respect to $$g$$.
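To make the first-order part of this concrete (my own addition, and only an illustration; the second-order terms need more care), recall that in dimension 2 the Dirichlet energy is conformally invariant. With $$\mathrm{vol}_g = e^{2u}\,\mathrm{vol}_{g_0}$$ and $$|\nabla f|^2_g = e^{-2u}|\nabla f|^2_{g_0}$$ one gets
$$\int_{T^2} |\nabla f|^2_g \,\mathrm{vol}_g = \int_{T^2} |\nabla f|^2_{g_0}\,\mathrm{vol}_{g_0}, \qquad e^{2\min u}\int_{T^2} f^2 \,\mathrm{vol}_{g_0} \le \int_{T^2} f^2\,\mathrm{vol}_g \le e^{2\max u}\int_{T^2} f^2\,\mathrm{vol}_{g_0}.$$
So first-order Sobolev constants for $$g$$ follow from those of $$g_0$$ up to factors controlled by $$\sup|u|$$, while the Hessian term in the $$W^{2,2}$$ norm additionally involves derivatives of $$u$$ through the Christoffel symbols; this is the quantitative content of "control $$u$$ and its derivatives".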
It remains to control $$u$$ and its derivatives in terms of geometric data. A method called potential analysis may be used for this. I once worked out (as a PhD student without knowing that other people had done similar calculations) how to control the oscillation of $$u$$, i.e. $$\mathrm{osc} u:= \mathrm{max} u- \mathrm{min} u$$, see Section 3 of [Bernd Ammann, The Willmore Conjecture for immersed tori with small curvature integral, Manuscripta Math. 101, no. 1, 1-22 (2000), also available http://www.mathematik.uni-regensburg.de/ammann/preprints/willflat.html]. Probably, the derivatives of $$u$$ can be controlled similarly, but I have no precise reference at hand.
http://mathhelpforum.com/pre-calculus/156008-how-emdas-logatirthm.html | # Math Help - how to apply EMDAS to a logarithm equation?
1. ## how to apply EMDAS to a logarithm equation?
can somebody show me the proper steps to solve for x?
50/(1 + e^-x) = 4
the first step would be to multiply the denominator over to the 4 . . .
the correct answer is somewhere around -2.44
thanks
2. You can do this: $50e^x = 4e^x + 4\; \Rightarrow \;46e^x = 4\; \Rightarrow \;e^x = \frac{2}{{23}}$
3. Originally Posted by aeroflix
can somebody show me the proper steps to solve for x?
50/(1 + e^-x) = 4
the first step would be to multiply the denominator over to the 4 . . .
the correct answer is somewhere around -2.44
thanks
Having first multiplied both sides by the denominator you have $50 = 4(1 + e^{-x}) = 4 + 4e^{-x}$, where I have used the "distributive law" to separate the term involving x from the constant term on the right. Now subtract 4 from both sides: $46 = 4e^{-x}$. Divide both sides by 4: $46/4 = 23/2 = e^{-x}$. To get rid of the exponential now, do the "opposite" - take the natural logarithm of both sides: $\ln(23/2) = -x$. Finally, multiply by -1.
Plato does it in a completely different way, first multiplying both numerator and denominator of the fraction on the left by e^x.
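(Not part of the original thread, just a quick numerical sanity check of both routes, using Python's standard library.)

```python
from math import log, exp

# HallsofIvy's route: e^(-x) = 23/2  =>  x = -ln(23/2)
# Plato's route:      e^(x)  = 2/23  =>  x =  ln(2/23)  (the same number)
x = -log(23 / 2)
print(x)                     # -2.4423...
print(log(2 / 23))           # identical value
print(50 / (1 + exp(-x)))    # 4.0, so x solves 50/(1 + e^(-x)) = 4
```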
https://amathew.wordpress.com/2009/11/09/symmetric-connections-corrected-version/ | My post yesterday on the torsion tensor and symmetry had a serious error. For some reason I thought that connections can be pulled back. I am correcting the latter part of that post (where I used that erroneous claim) here. I decided not to repeat the (as far as I know) correct earlier part.
Proposition 1 Let ${s}$ be a surface in ${M}$, and let ${\nabla}$ be a symmetric connection on ${M}$. Then $\displaystyle \frac{D}{\partial x} \frac{\partial}{\partial y} s = \frac{D}{\partial y} \frac{\partial}{\partial x} s.\ \ \ \ \ (1)$
Assume first ${s}$ is an immersion. Then at some fixed ${p \in U}$
$\displaystyle \frac{D}{\partial x} \frac{\partial}{\partial y} s = (\nabla_{X} Y) \circ s,$
if ${X}$ is a vector field on some neighborhood of ${s(p)}$ which is ${s}$-related to ${\frac{\partial}{\partial x}}$, and ${Y}$ similarly for ${\frac{\partial}{\partial y}}$. Similarly,
$\displaystyle \frac{D}{\partial y} \frac{\partial}{\partial x} s = (\nabla_{Y} X) \circ s.$
The difference between these two quantities is ${T(X,Y) \circ s=0}$, because $[X,Y] \circ s = 0$ by a general theorem about $f$-relatedness preserving the Lie bracket for $f$ a morphism. As before, we can approximate $s$ by an immersion at $p$, and we get the general case.
That was much easier than what I was trying to do earlier. Blogging is a learning experience.
https://occam.com.ua/physics-structure-formation/ | # The physics of structure formation
The entropy in equilibrium thermodynamics is defined as $$S \sim \ln(\Omega)$$, which always increases in closed systems. It is clearly a special case of Shannon entropy $$H = -\sum_i p_i \ln(p_i)$$. If the probabilities are uniform, $$p_i = const$$, then Shannon entropy boils down to thermodynamic entropy.
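Spelled out for the uniform case (a one-line check added here for convenience): with $$\Omega$$ equally probable microstates we have $$p_i = 1/\Omega$$, so
$$H = -\sum_{i=1}^{\Omega} \frac{1}{\Omega}\ln\frac{1}{\Omega} = \ln(\Omega),$$
which is exactly the Boltzmann form $$S \sim \ln(\Omega)$$ up to the constant $$k_B$$.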
A system can be called structured if some states are predicted with higher probability than others, which leads to lower entropy. As we have argued in a different post, the loss of the ability to predict the system is diagnosed by increasing entropy. In the extreme case, if microstates transition randomly with equal probability, the chances are higher of ending up in an unstructured state than in a structured one.
In order to describe both the structure and the transition laws, the concept of algorithmic complexity is needed. If the microstate $$i$$ is described by a set of numbers, say the speeds and positions of $$N$$ particles, then this set of numbers can be written as a sequence. A Kolmogorov complexity $$K(i)$$ can then be assigned to the state $$i$$. Since $$K(i)$$ is quite high for most sequences, starting out with low $$K$$ and transitioning randomly will increase $$K$$. Therefore the Kolmogorov complexity of the system will increase with time.
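Kolmogorov complexity itself is uncomputable, but a common rough proxy is the length of a losslessly compressed encoding of the state. The sketch below (my own illustration; the discretization and the update rule are arbitrary assumptions, not taken from the post) starts from a highly ordered microstate and applies random transitions; the compressed size, and hence the complexity proxy, grows.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

def compressed_size(state):
    # Proxy for K(state): byte length of a zlib-compressed encoding of the state.
    return len(zlib.compress(state.tobytes(), level=9))

# Highly ordered initial microstate: 2000 values arranged in a repetitive pattern.
state = np.repeat(np.arange(200), 10).astype(np.int16)

for step in range(5001):
    if step % 1000 == 0:
        print(f"step {step:5d}   compressed size ~ {compressed_size(state)} bytes")
    # Random transition: every component takes an unbiased random jump.
    state = state + rng.integers(-3, 4, size=state.shape, dtype=np.int16)
```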
Interestingly, a low $$K$$ allows a better (spatial) prediction of the sequence. However, one may also have a model, a "law" if you wish, that governs the time evolution of the system. One need not restrict oneself to a microscopic law; the law may be about some emergent variables of the system, even macroscopic ones like those of classical thermodynamics. The "length" of such laws is, in some sense, the Kolmogorov complexity of the system, neglecting all the micro- and mesoscopic details.
What happens when structures such as living systems or galaxies occur? The system evolves into a low complexity state. Moreover, it looks like there is scale-free structure in the universe. Why is it not possible nowadays to plug in the laws of physics and see chemistry and biology evolve? Because, even with today's computers, things would become too complicated.
***
Interestingly, the minimal entropy of a system corresponds roughly to the algorithmic complexity. Vitanyi and Li write on p. 187: “the interpretation is that $$H(X)$$ bits are on average sufficient to describe an outcome $$x$$. Algorithmic complexity says that an object $$x$$ has complexity, or algorithmic information, $$C(x)$$ equal to the minimum length of a binary program for $$x$$.”
What would be the holy grail?
The holy grail would be to extend statistical physics to phenomena that create low complexity, i.e. structures - a theory of self-organization, really. If we could state the conditions under which the complexity of a subsystem will drop below some bound, that would be a great thing. This may even become a theory of life.
The even holier grail would be to explain the emergence of intelligent life. To me, it seems sufficient to explain the emergence of subsystems that can develop compressed representations of some of their surroundings. For example, a frog that predicts the flight trajectory of a fly has achieved some degree of intelligence, since it compresses the trajectory in its "mind", which enables prediction in the first place.
What does that mean in physical terms? What is representation? In what sense is representation "about" something else? Somehow, it is the ability to create, to unpack, our representation into an image of the represented object, which is what it means to imagine something. Even if the frog does not imagine the future trajectory, it acts as if it knew how it would continue. Essentially, successful goal-directed action is possible only if you predict, that is, only if you compress. However, actions and goals are still not part of the physical vocabulary.
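To make the frog example concrete (an illustrative sketch of my own, not from the post): a low-order polynomial fit is a "compressed" description of a sampled trajectory, a handful of coefficients instead of all the samples, and it is exactly this compression that allows extrapolation.

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 noisy observations of the fly's (one-dimensional) position along a ballistic arc.
t_obs = np.linspace(0.0, 1.0, 30)
y_obs = 0.3 + 2.0 * t_obs - 4.9 * t_obs ** 2 + rng.normal(0.0, 0.02, t_obs.size)

# "Compression": three polynomial coefficients stand in for thirty samples.
coeffs = np.polyfit(t_obs, y_obs, deg=2)

# Prediction: evaluate the compressed model a little into the future.
print("fitted coefficients:", coeffs)
print("predicted position at t = 1.2:", np.polyval(coeffs, 1.2))
print("noise-free position at t = 1.2:", 0.3 + 2.0 * 1.2 - 4.9 * 1.2 ** 2)
```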
It will turn out that in order to maintain a low complexity state, the system will have to be open and exchange energy with the environment. After all, in a closed system, entropy and therefore complexity must increase. In other words, the animal has to eat and to shit 🙂
If you extract energy from your environment in such a way that you maintain your simple state, does that not imply intelligent action already? Don't you have to be fairly selective about what kind of energy you take in and what you reject? If a large stone flies toward you, you may want to avoid collision: that type of energy transfer is not welcome, since it does not help to maintain a low complexity state. However, a crystal also maintains its simplicity. Probably because it becomes so firm that after a while it just does not disintegrate under the influence of the environment. In any case, it does not represent anything about its surroundings. It does not react to the environment either.
***
From Jeremy England (2013).
Irreversible systems increase the entropy (and the heat) of the heat bath.
$$\mbox{Heat production} + \mbox{internal entropy change} - \mbox{irreversibility} \ge 0 \qquad (8)$$
If we want internal structure (dS should be negative) and high irreversibility, then a lot of heat has to be released into the bath. Can this result be transformed into an expression involving algorithmic complexity? If yes, and if we figure out how to construct a system that does create that, then we have figured out how to create structure.
We can also increase beta, which is done by lowering the temperature. Thus, unsurprisingly, freezing leads to structure formation. But that’s not enough for life. Freezing is also fairly irreversible. So, maybe, structure formation is not enough. What we need is structure representation! What does it mean to represent and to predict in physical terms?
Let's say a particle travels along a straight line. If a living organism can predict it, that means it has somehow internally found a short program that, when executed, can create the trajectory and also extend it further in time. It can compute the points and moments in the future where the particle is going to be. It is the birth of intentionality, of "aboutness". If there is an ensemble of particles with all their microstates, how can they be "about" some other, external particle?
The funny thing is, you need such representations in order to decrease entropy. After all, the more you compress, the fewer degrees of freedom remain, hence the state space is reduced and the entropy therefore decreases. There can also be hierarchical representations within a system, which means that there is an "internal aboutness" as well. Thus the internal entropy decreases once an ensemble of particles at one level is held in a macrostate determined by a higher-level ensemble! Hence, predicting the "outer" world may simply be a special case of predicting the "inner" world, your own macrostates. Thus, in order to decrease the state space in such a way, a few high-level macrostates have to physically determine all the microstates at lower levels. For example, in an autoencoder the hidden layer compresses the inputs and recreates them at the output layer. In the nervous system, neurons are either active or not active and therefore take up a large part of the entropy of the brain. The physical determination happens through the propagation of an electric potential through the axons and dendrites of neurons. But, especially in the beginning of life, things have to work out without a nervous system.
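As a toy version of the autoencoder remark (my own sketch, not from the post): a linear autoencoder with a two-unit hidden layer is trained to reconstruct twenty "sensor" values that are secretly driven by only two latent factors. The two hidden activations play the role of the macrostate that (approximately) determines all twenty inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Sensor" data: 20-dimensional observations generated from 2 latent factors plus noise.
n_samples, n_sensors, n_latent = 500, 20, 2
latent = rng.normal(size=(n_samples, n_latent))
mixing = rng.normal(size=(n_latent, n_sensors))
x = latent @ mixing + 0.01 * rng.normal(size=(n_samples, n_sensors))

# Minimal linear autoencoder: encoder W_e and decoder W_d trained by gradient descent
# on the mean squared reconstruction error ||x - (x W_e) W_d||^2.
W_e = 0.1 * rng.normal(size=(n_sensors, n_latent))
W_d = 0.1 * rng.normal(size=(n_latent, n_sensors))
lr = 2e-3
for epoch in range(2000):
    h = x @ W_e                   # 2-dimensional "macrostate" per sample
    err = h @ W_d - x             # reconstruction error of the 20 sensor values
    grad_W_d = h.T @ err / n_samples
    grad_W_e = x.T @ (err @ W_d.T) / n_samples
    W_d -= lr * grad_W_d
    W_e -= lr * grad_W_e

print("mean squared reconstruction error:", np.mean(err ** 2))
# A small error means that 2 numbers per sample nearly determine all 20 sensor values.
```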
Instead of thinking about a practical implementation, we could think of a theoretical description, such as formula (8). We imagine all the microstates of a level being partitioned, with a macrostate assigned to each subset of the partition. Those macrostates would correspond to the microstates of the level above. Now, there is a highly structured outside world, which means that the entropy is low and there are far fewer states than are theoretically possible. If you have sensors, some part of the outer world activates them. Their probability distribution is the one to be compressed. Which means that, if the world has been created by running a short program, your job as a living being is to find that program. And why should that happen? It would be very cool to show that our world is made in such a way that subsystems will emerge that try to represent it and ultimately find out the way it has been made. Predicting food trajectories may be a start toward doing exactly that.
So the goal is not just to decrease internal entropy, but to do it in such a way that it represents the outer world, the entropy of which is already decreased by the laws of nature. And what does "represent" mean in that sense? In the internal sense it means to physically determine one's own internal states. And for the outer world it means to have sensors, somehow, such that the states of the outer world are reflected by the states of your sensors. So we can imagine the lowest layer/level to encode the states of the outer world, at least a part of it. And it does so in a non-compressing way, hence it is a one-to-one map of a part of the world. Can we show that, under such circumstances, compression in the sense of algorithmic complexity is the best thing to be done? Basically, it means that some part of the animal is driven by outer influences and therefore cannot be changed, hence contributes majorly to the entropy of the animal. In order to decrease the entropy nevertheless, the animal has to find a way to recreate those same inputs.
Now we have to clarify what probability distributions are meant when we compute the entropy. In a deterministic world, probabilities are always the reflection of our – the scientists' – lack of knowledge. Those probabilities are different from the probabilities assigned by the animal: those are the animal's knowledge. We should treat them as the same.
A way to reduce internal entropy is to couple all remaining internal states to the sensor inputs. Which does not necessarily mean to compress. Well, it does decrease the entropy, since only the sensor entropy remains, but it does not decrease it even further! In order to decrease it even further, the sensor entropy has to depend on internal states, which should be fewer in number. They have to GENERATE the microstates of the sensors.
https://sppgway.jhuapl.edu/bibcite/reference/1257 | # Cross Helicity Reversals in Magnetic Switchbacks
Abstract
We consider 2D joint distributions of normalized residual energy, σr(s, t), and cross helicity, σc(s, t), during one day of Parker Solar Probe's (PSP's) first encounter as a function of wavelet scale s. The broad features of the distributions are similar to previous observations made by Helios in slow solar wind, namely well-correlated and fairly Alfvénic wind, except for a population with negative cross helicity that is seen at shorter wavelet scales. We show that this population is due to the presence of magnetic switchbacks, or brief periods where the magnetic field polarity reverses. Such switchbacks have been observed before, both in Helios data and in Ulysses data in the polar solar wind. Their abundance and short timescales as seen by PSP in its first encounter is a new observation, and their precise origin is still unknown. By analyzing these MHD invariants as a function of the wavelet scale, we show that magnetohydrodynamic (MHD) waves do indeed follow the local mean magnetic field through switchbacks, with a net Elsässer flux propagating inward during the field reversal and that they, therefore, must be local kinks in the magnetic field and not due to small regions of opposite polarity on the surface of the Sun. Such observations are important to keep in mind as computing cross helicity without taking into account the effect of switchbacks may result in spurious underestimation of σc as PSP gets closer to the Sun in later orbits.
Year of Publication: 2020
Journal: The Astrophysical Journal Supplement Series
Volume: 246
Number of Pages: 67
Date Published: 02/2020
URL: https://iopscience.iop.org/article/10.3847/1538-4365
DOI: 10.3847/1538-4365/ab6dce