http://inventtolearn.com/3n/ | # The 3N problem
SCENARIO
You and your noted mathematician colleagues convene in (virtual) Geneva to present brilliant theories pertaining to one of the world’s great mysteries, the elusive 3n Problem.
BACKGROUND
The 3N problem offers a fantastic world of exploration for learners of all ages. (I have done this with kids as young as the third grade.)
The problem is known by several other names, including Ulam's problem, the Hailstone problem, the Syracuse problem, Kakutani's problem, Hasse's algorithm, Thwaites' conjecture, the 3X+1 mapping, and the Collatz problem.
The 3N problem has a simple set of rules. Put a positive integer (1, 2, 3, etc.) into a "machine." If the number is even, cut it in half – if it is odd, multiply it by 3 and add 1. Then put the resulting value back through the machine. For example, 5 becomes 16, 16 becomes 8, 8 becomes 4, 4 becomes 2, 2 becomes 1, and 1 becomes 4. Mathematicians have observed that any number placed into the machine will eventually be reduced to a repeating pattern of 4…2…1…
This observation has yet to be proven since only a few billion integers have been tested. The 4…2…1… pattern therefore remains a conjecture.
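A minimal sketch of the machine in Python (names and test value are illustrative only):

```python
def machine(n):
    # If the number is even, cut it in half; if odd, multiply by 3 and add 1.
    return n // 2 if n % 2 == 0 else 3 * n + 1

def run(n):
    # Collect the values visited until the repeating 4...2...1... cycle appears.
    visited = [n]
    while n != 1:
        n = machine(n)
        visited.append(n)
    return visited

print(run(5))  # [5, 16, 8, 4, 2, 1]
```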
The computer will serve as your lab assistant – smart enough to work hard without sleep, food or pay, but not so smart that it does the thinking for you. It will collect data and represent it in three different ways for you.
https://www.saranextgen.com/homeworkhelp/doubts.php?id=50249 | # Two forces P and Q of magnitude 2F and 3F, respectively, are at an angle θ with each other. If the force Q is doubled, then their resultant also gets doubled. Then, the angle θ is: (a) 30° (b) 60° (c) 90° (d) 120°
## Question ID - 50249
With $P = 2F$ and $Q = 3F$ at angle $\theta$, the resultant $R$ satisfies

$$R^2 = 4F^2 + 9F^2 + 12F^2\cos\theta = 13F^2 + 12F^2\cos\theta$$

Doubling $Q$ to $6F$ doubles the resultant:

$$4R^2 = 4F^2 + 36F^2 + 24F^2\cos\theta$$

Substituting $4R^2 = 4(13F^2 + 12F^2\cos\theta) = 52F^2 + 48F^2\cos\theta$ gives

$$40F^2 + 24F^2\cos\theta = 52F^2 + 48F^2\cos\theta \implies \cos\theta = -\tfrac{1}{2}$$

so $\theta = 120°$, option (d).
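A quick numeric check of this result (a sketch; $F = 1$ is arbitrary, since $F$ cancels):

```python
import math

F = 1.0
theta = math.radians(120)
# Resultant of P = 2F and Q = 3F at angle theta:
R = math.sqrt((2 * F) ** 2 + (3 * F) ** 2 + 2 * (2 * F) * (3 * F) * math.cos(theta))
# Resultant after doubling Q to 6F:
R2 = math.sqrt((2 * F) ** 2 + (6 * F) ** 2 + 2 * (2 * F) * (6 * F) * math.cos(theta))
print(R2 / R)  # ~2.0: the resultant doubles at theta = 120 degrees
```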
Next Question :
The actual value of the resistance R, shown in the figure, is 30 Ω. It is measured in an experiment using the standard formula R = V/I, where V and I are the readings of the voltmeter and ammeter, respectively. If the measured value of R is 5% less, then the internal resistance of the voltmeter is: (a) 350 Ω (b) 570 Ω (c) 35 Ω (d) 600 Ω
https://courses.cs.cornell.edu/cs2800/wiki/index.php/SP20:Lecture_3_prep | # SP20:Lecture 3 prep
Please come to lecture 3 knowing the following definitions (you can click on the terms or symbols for more information, or you can review the entire lecture notes from last semester here):
Definition: Subset
If $A$ and $B$ are sets, then $A$ is a subset of $B$ (written $A \subseteq B$) if every $x \in A$ is also in $B$.
Definition: Power set
The power set of a set $A$ (written $2^A$) is the set of all subsets of $A$. Formally, $2^A := \{B \mid B \subseteq A\}$.
Definition: Union
If $A$ and $B$ are sets, then the union of $A$ and $B$ (written $A \cup B$) is given by $A \cup B := \{x \mid x \in A \text{ or } x \in B\}$.
Definition: Intersection
If $A$ and $B$ are sets, then the intersection of $A$ and $B$ (written $A \cap B$) is given by $A \cap B := \{x \mid x \in A \text{ and } x \in B\}$.
Definition: Set difference
If $A$ and $B$ are sets, then the set difference $A$ minus $B$ (written $A \setminus B$) is given by $A \setminus B := \{x \in A \mid x \notin B\}$.
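These definitions correspond directly to Python's built-in set operations; a small illustrative sketch (not part of the lecture notes):

```python
from itertools import chain, combinations

A = {1, 2}
B = {2, 3}

print(A <= {1, 2, 3})  # subset test: True
print(A | B)           # union: {1, 2, 3}
print(A & B)           # intersection: {2}
print(A - B)           # set difference: {1}

def power_set(s):
    # All subsets of s, from the empty set up to s itself.
    items = list(s)
    subsets = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return [set(c) for c in subsets]

print(power_set(A))    # [set(), {1}, {2}, {1, 2}]
```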
http://swmath.org/software/9154 | # CPsuperH
CPsuperH: a computational tool for Higgs phenomenology in the minimal supersymmetric standard model with explicit CP-violation. We provide a detailed description of the Fortran code CPsuperH, a newly-developed computational package that calculates the mass spectrum and decay widths of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation. The program is based on recent renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark Yukawa-coupling resummation effects and Higgs-boson pole-mass shifts. The code CPsuperH is self-contained (with all subroutines included), is easy and fast to run, and is organized to allow further theoretical developments to be easily implemented. The fact that the masses and couplings of the charged and neutral Higgs bosons are computed at a similar high-precision level makes it an attractive tool for Tevatron, LHC and LC studies, also in the CP-conserving case.
http://quant.stackexchange.com/questions/8738/constant-maturity-futures-price-methodology | # Constant maturity futures price methodology
What is the correct methodology to compute constant maturity futures prices?
I've seen some papers that do the following. To create constant-maturity synthetic futures prices with maturity $m = 30, 60, \ldots, 180$ days, take a pair of futures that straddle the chosen maturity $m$, with maturities $s < m < l$ measured in days until expiration.
Then the price is derived using the following formula: $$p_m = \alpha p_s + (1-\alpha)p_l,\, \alpha = \frac{l-m}{l-s}$$
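A minimal sketch of this interpolation (the prices and maturities below are made-up illustrative values):

```python
def constant_maturity_price(p_s, p_l, s, l, m):
    # Linear interpolation in time-to-maturity between the straddling contracts.
    alpha = (l - m) / (l - s)
    return alpha * p_s + (1 - alpha) * p_l

# e.g. a synthetic 30-day price from 20- and 48-day contracts
print(constant_maturity_price(p_s=101.2, p_l=102.8, s=20, l=48, m=30))  # ~101.77
```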
1. Should the maturity be rounded to days?
2. What happens when the shorter futures contract comes closer to expiration? On which date, and how, do we roll over the pair? Is it recommended to roll over futures several days before expiration? In this case we would have a negative $\alpha$.
3. What are the limitations of this methodology? What are the general assumptions?
4. Should we take only daily closing prices, or can we use more frequent data?
http://astrophysicsformulas.com/astronomy-formulas-astrophysics-formulas/temperature-in-kelvin-to-kev-conversion/ | # Temperature in Kelvin to keV Conversion
Practicing astrophysicists routinely refer to temperatures in units of eV or keV, even though this is wrong, because temperature is not dimensionally equivalent to energy. Nevertheless, they still do it, with the Boltzmann constant being implicitly included in the conversion. Here are formulas for temperature in Kelvin to keV conversion.
$E({\rm eV}) = \frac{kT}{e}$
because 1 eV is by definition the energy required to move a charge $e$ through a potential difference of 1 volt and is equal to $1.6 \times 10^{-19} \times 1$ Joules. Thus
$T \ ({\rm eV}) = 8.625 \times 10^{-5} \ T \ ({\rm Kelvin})$
and
$T \ ({\rm keV}) = 8.625 \ \times 10^{-4} \left(\frac{T \ ({\rm Kelvin})}{10^{4} \ {\rm Kelvin}} \right)$
where $k=1.3806504 \times 10^{-23}$ Joules/Kelvin is the Boltzmann constant (source: NIST).
Details:
With $k=1.38 \times 10^{-23} \ \rm J \ K^{-1}$, $kT({\rm Kelvin})$ is in Joules, divide by the electron charge, $e$, to get eV, then divide by 1000 to get keV:
$\frac{k}{1000 e} = \frac{1.38 \times 10^{-23}}{1000 \times 1.6 \times 10^{-19}} = 8.625 \times 10^{-8}$
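A small conversion sketch using the rounded constants quoted above:

```python
K_B = 1.3806504e-23  # Boltzmann constant, J/K (NIST value quoted above)
E = 1.6e-19          # electron charge in C, rounded as in the text

def kelvin_to_kev(t_kelvin):
    # kT in joules, divided by e for eV, then by 1000 for keV.
    return K_B * t_kelvin / E / 1000.0

print(kelvin_to_kev(1.0e7))  # ~0.86 keV for a 10 million Kelvin plasma
```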
https://toywiki.xyz/discrete_dlpp_dp.html | # Discrete directed last passage percolations and directed polymers
directed_last_passage_percolation directed_polymer
Definition. The directed last passage percolation (DLPP) $$Z_0(n, m)$$ on the integer lattice $$\mathbb{Z}_{>0} \times \mathbb{Z}_{>0}$$, where each vertex $$(i, j)$$ has weight $$a_{ij}$$, is defined as the maximum over directed paths from $$(1, 1)$$ to $$(n, m)$$:
$Z_0(n, m) = \max_{\pi: (1, 1) \to (n, m)} \sum_{(i, j) \in \pi} a_{ij}, \qquad (1)$
where recall from Greene's theorem that a directed path $$\pi: (1, 1) \to (n, m)$$ is a collection of coordinates $$(n_1, k_1) = (1, 1), (n_2, k_2), ..., (n_{r - 1}, k_{r - 1}), (n_r, k_r) = (n, m)$$ such that
$(n_i - n_{i - 1}, k_i - k_{i - 1}) \in \{(0, 1), (1, 0)\}.$
Definition. The directed polymer (DP) $$Z_1(n, m)$$ is defined as a geometric lifting of the $$Z_0(n, m)$$:
$Z_1(n, m) = \log \left(\sum_{\pi: (1, 1) \to (n, m)} \prod_{(i, j) \in \pi} e^{a_{ij}}\right). \qquad (2)$
The DLPP (or the Manhattan Tourist problem) is a typical example of dynamic programming, where one can rely on the recursive formula:
$Z_0(n, m) = (Z_0(n - 1, m) \vee Z_0(n, m - 1)) + a_{nm}. \qquad (3)$
Similarly for DP:
$Z_1(n, m) = \log(\exp(Z_1(n - 1, m)) + \exp(Z_1(n, m - 1))) + a_{nm}$
And these formulas in turn are alternative definitions of the DLPP and DP.
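A sketch of both recursions in Python (the weights below are arbitrary illustrative values):

```python
import math

def dlpp(a):
    # Z0 via (3): Z0(n, m) = max(Z0(n-1, m), Z0(n, m-1)) + a[n][m],
    # with missing neighbours treated as -infinity.
    N, M = len(a), len(a[0])
    Z = [[0.0] * M for _ in range(N)]
    for n in range(N):
        for m in range(M):
            up = Z[n - 1][m] if n > 0 else -math.inf
            left = Z[n][m - 1] if m > 0 else -math.inf
            best = 0.0 if (n, m) == (0, 0) else max(up, left)
            Z[n][m] = best + a[n][m]
    return Z[N - 1][M - 1]

def dp(a):
    # Z1: the same recursion with (max, +) replaced by (log-sum-exp, +).
    N, M = len(a), len(a[0])
    Z = [[0.0] * M for _ in range(N)]
    for n in range(N):
        for m in range(M):
            terms = []
            if n > 0: terms.append(Z[n - 1][m])
            if m > 0: terms.append(Z[n][m - 1])
            if not terms:
                Z[n][m] = a[n][m]
                continue
            hi = max(terms)
            Z[n][m] = hi + math.log(sum(math.exp(t - hi) for t in terms)) + a[n][m]
    return Z[N - 1][M - 1]

weights = [[1.0, 2.0, 0.5],
           [0.0, 3.0, 1.0],
           [2.0, 0.0, 4.0]]
print(dlpp(weights))  # 11.0, along (1,1) -> (1,2) -> (2,2) -> (2,3) -> (3,3)
```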
Let us focus on a square enclosed by vertices $$(n - 1, m - 1), (n - 1, m), (n, m - 1), (n, m)$$. By defining for $$b = 0, 1$$
\begin{align} U_b &= Z_b(n, m - 1) - Z_b(n - 1, m - 1) \\ V_b &= Z_b(n - 1, m) - Z_b(n - 1, m - 1) \\ X_b &= a_{n, m} \\ U_b' &= Z_b(n, m) - Z_b(n - 1, m) \\ V_b' &= Z_b(n, m) - Z_b(n, m - 1) \end{align}
By writing $$X_b' = U_b - U_b' + X_b$$ one can derive that $$U_b, V_b, X_b, U_b', V_b', X_b'$$ satisfy the Case 1 and 3 of the Burke property respectively when $$b = 0, 1$$.
Due to this link, we define the $$q$$- and $$qt$$-analog of the DLPP according to Case 4 and 5 of the Burke property:
Definition. We define the $$q$$- and $$qt$$-analog of the DLPP as follows:
\begin{align} Z (1, 1) &= a_{1, 1} \\ Z (n, 1) &= a_{n, 1} + Z (n - 1, 1), \qquad n > 1 \\ Z (1, m) &= a_{1, m} + Z (1, m - 1), \qquad m > 1 \\ Z (n, m) &= a_{n, m} + Z (n - 1, m) + Z (n, m - 1) - Z (n - 1, m - 1) - X'(n, m), \qquad m, n > 1. \end{align}
where
1. for $$q$$-analog: $X'(n, m) \sim q\text{Hyp}(Z(n, m - 1) - Z(n - 1, m - 1), \infty, Z(n - 1, m) - Z(n - 1, m - 1))$ where $$q$$Hyp is the q-hypergeometric distribution.
2. for $$qt$$-analog: $X'(n, m) \sim qt\text{IHyp}(Z(n, m - 1) - Z(n - 1, m - 1), Z(n - 1, m) - Z(n - 1, m - 1))$ where $$qt$$IHyp is the $$qt$$-infhypergeometric distribution defined in burke_property.
Remark. As per burke_property, the $$qt$$DLPP reduces to the $$q$$DLPP when $$t = 0$$, and to the usual DLPP when $$t = q$$.
Open. Find a global definition for the $$q$$- and $$qt$$-DLPP (like (1)(2) in the language of directed paths), i.e. analogues of greene_theorem restricted to the first row of the output tableaux.
The Burke property results in strong law of large numbers in the DLPP and DP models with equilibrium boundary conditions. Here we give the $$qt$$-version. The other cases are similar.
Definition. Let $$0 < \alpha, \beta < 1$$. The $$qt$$-deformed DLPP with equilibrium boundary condition is the one on the integer lattice $$\mathbb{Z}_{\geq 0} \times \mathbb{Z}_{\geq 0}$$ with the following weights:
\begin{align} a_{0, 0} &\equiv 0 \\ a_{n, 0} &\sim qt\text{Geom}(\alpha), \qquad n > 0 \\ a_{0, m} &\sim qt\text{Geom}(\beta), \qquad m > 0 \\ a_{n, m} &\sim qt\text{Geom}(\alpha\beta), \qquad m, n > 0. \end{align}
Claim. Consider the $$qt$$-deformed DLPP with equilibrium boundary condition, we have almost surely
$\lim_{N \to \infty} {Z(\lfloor N x \rfloor, \lfloor N y \rfloor) \over N} = x \gamma(\alpha) + y \gamma(\beta),$
where $$\gamma(\alpha)$$ is the first moment of $$qt\text{Geom}(\alpha)$$.
Proof. Similar to that of the version of the DLPP with geometric weights ($$q = t$$) in e.g. Theorem 4.12 of [romik14]. $$\square$$
## References
• [romik14] Dan Romik, The surprising mathematics of longest increasing subsequences, 2014.
https://www.tiredpixel.com/2013/index-4.html | # walking all combinations of infinite lists
#### Introduction
Some programming languages supply an interleave method for combining elements from infinite lists. The usual pattern for this is cycling through the set of lists, taking successive elements. When combination as a tuple is required, however, the problem becomes more complex, because of the need to ensure that every combination of elements is visited. Walking all combinations of finite lists is straightforward; it is possible to chain lists together and use an increment-and-carry method. This approach is unsuitable for infinite lists, as the first list to be incremented would never carry. It is thus necessary to weave the lists together without getting trapped in infinity.
We consider only indexed, random-access lists (that is, lists where the cost of fetching an element by its index is the same regardless of the index). For example, a list of the Fibonacci Sequence generated using a recursive method would not be suitable (regardless of the method of recursion), but the Fibonacci Sequence generated by Binet’s Formula would be acceptable. We also wish to be able to subdivide the walk of combinations in a parallelisable and state-recoverable manner (that is, we wish to be able to describe ranges of the walk in such a way that the walk can be started or continued from that point, without overlapping with unwanted combinations); thus, we do not consider methods employing nested interleaves. We show that it is possible to satisfy these properties and walk all combinations of infinite lists in a dense manner.
#### Reduction to 2 lists
We call our walk $W$ and represent the first combination walked as $W_1$, which will represent some address $(A_x, B_y)$ where $x, y$ are the indexes to be determined. The walked combinations are thus $W_1, W_2, \ldots$. Note $x, y \in \mathbb{N}$ (from which we always exclude $0$). We name the elements of List $L$ $[L_1, L_2, \ldots]$. Note that we use unity-indexing throughout.
It is sufficient to walk all combinations of only two infinite lists. That is because multiple lists can be combined in a binary tree, satisfying the required properties. For example, consider Lists $A$, $B$, $C$. Suppose it is possible to walk combinations of $A$ and $B$; then it is possible to combine these lists as List $AB$, where $AB_1$ addresses the first combination from Lists $A$ and $B$ (similar to how we use $W$ to represent our overall walk). We then walk Lists $AB$ and $C$. (This means we walk lists at different speeds for more than 2 lists, because of the binary tree. However, the walk is still dense and satisfies the required properties.)
With lists combined in this way, we consider only walking all combinations of 2 infinite lists.
#### Method for 2 lists
We claim the following satisfies the required properties for walking all combinations of 2 infinite Lists X and Y, defining the formulas thusly for subsequent convenience:
\begin{array}{rcl} m(n) & = & \left \lceil \frac{1 + \sqrt{1 + 8n}}{2} \right \rceil \newline x(n) & = & n - \frac{(m(n) - 1)(m(n) - 2)}{2} \newline y(n) & = & m(n) - x(n) \newline & where \newline n & \in & \mathbb{N} \end{array}
Using these formulas, we define the required function:
\begin{array}{rcl} f & : & \mathbb{N} \to \mathbb{Z}^2 \newline n & \mapsto & \left( x(n), y(n) \right) \end{array}
We show that the image of $f$ is in fact $\mathbb{N}^2$, that $f$ is both injective and surjective, and hence that $f$ is bijective $\mathbb{N} \to \mathbb{N}^2$. This is so we can walk $\mathbb{N}$, using $x(n)$ as the index in List X, and $y(n)$ as the index in List Y.
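A sketch of the walk in Python (the inverse anticipates the formula found in the surjectivity argument below; `math.ceil` on the float is exact for moderate $n$):

```python
import math

def walk(n):
    # n in {1, 2, ...} maps to the pair (x(n), y(n)) defined above.
    m = math.ceil((1 + math.sqrt(1 + 8 * n)) / 2)
    x = n - (m - 1) * (m - 2) // 2
    y = m - x
    return x, y

def unwalk(x, y):
    # The inverse derived in the surjectivity section.
    return x + (x + y - 1) * (x + y - 2) // 2

print([walk(n) for n in range(1, 7)])  # (1,1) (1,2) (2,1) (1,3) (2,2) (3,1)
assert all(unwalk(*walk(n)) == n for n in range(1, 10001))
```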
##### Image of $f$
$\forall n \in \mathbb{N}$, $m(n) \in \mathbb{Z}$. Thus, $x(n) \in \mathbb{Z}$ since the fraction is odd times even. Thus, $y(n) \in \mathbb{Z}$. We seek to show that $x(n), y(n) \geq 1$, meaning $x(n), y(n) \in \mathbb{N}$ and thus the image of $f$ is $\mathbb{N}^2$.
Let us define the subsets of $\mathbb{N}$:
\begin{array}{rcl} S_t & := & \left\lbrace n \in \mathbb{N} : \frac{(t - 1)(t - 2)}{2} < n \leq \frac{t(t - 1)}{2} \right\rbrace \newline & for \newline t & \in & \mathbb{N} \setminus \lbrace 1 \rbrace \end{array}
For example, $S_2 = \lbrace 1 \rbrace$, $S_3 = \lbrace 2, 3 \rbrace$, $S_4 = \lbrace 4, 5, 6 \rbrace$, etc. We note $S_2 \cup S_3 \cup S_4 \cup \dots = \mathbb{N}$, thus $S_t$ partitions $\mathbb{N}$. We observe that by construction, each $S_t$ contains precisely one triangle number.
We consider what happens when we put a triangle number into $m$:
\begin{array}{rcl} m\left( \frac{t(t - 1)}{2} \right) & = & \left \lceil \frac{1 + \sqrt{1 + 8\left( \frac{t(t - 1)}{2} \right)}}{2} \right \rceil \newline & = & \left \lceil \frac{1 + \sqrt{4\left( t - \frac{1}{2} \right)^2}}{2} \right \rceil \newline & = & \left \lceil \frac{1 + 2 \left( t - \frac{1}{2} \right)}{2} \right \rceil \newline & = & \lceil t \rceil \newline & = & t \end{array}
We observe that $m$ is non-strict increasing, since $\forall n \in \mathbb{N}$:
\begin{array}{rcl} \frac{1 + \sqrt{1 + 8(n + 1)}}{2} & > & \frac{1 + \sqrt{1 + 8n}}{2} \end{array}
and since $a < b \implies a < \lceil b \rceil \implies \lceil a \rceil \leq \lceil b \rceil$
\begin{array}{rcl} \left \lceil \frac{1 + \sqrt{1 + 8(n + 1)}}{2} \right \rceil & \geq & \left \lceil \frac{1 + \sqrt{1 + 8n}}{2} \right \rceil \newline & \implies \newline m(n+1) & \geq & m(n) \end{array}
We next take an arbitrary $s \in S_t$, and consider $m(s)$. By the construction of $S_t$:
\begin{array}{rcl} \frac{(t - 1)(t - 2)}{2} & < s & \leq \frac{t(t - 1)}{2} \newline & \implies \newline m \left( \frac{(t - 1)(t - 2)}{2} \right) & \leq m(s) & \leq m \left( \frac{t(t - 1)}{2} \right) \end{array}
since $m$ non-strict increasing, thus $t - 1 \leq m(s) \leq t$. But $m(s) \in \mathbb{Z}$, so $m(s) \in \lbrace t - 1, t \rbrace$.
We seek to exclude $m(s) = t - 1$. Let us imagine such a value is fine, and reach a contradiction.
\begin{array}{rcl} m(s) & = & \left \lceil \frac{1 + \sqrt{1 + 8s}}{2} \right \rceil \newline & \iff \newline m(s) - 1 & < \frac{1 + \sqrt{1 + 8s}}{2} & \leq m(s) \newline & \implies \newline t - 2 & < \frac{1 + \sqrt{1 + 8s}}{2} & \leq t - 1 \newline & \implies \newline 2(t - 2) - 1 & < \sqrt{1 + 8s} & \leq 2(t - 1) - 1 \newline & \implies \newline (2t - 5)^2 & < 1 + 8s & \leq (2t - 3)^2 \newline & \implies \newline \frac{(2t - 5)^2 - 1}{8} & < s & \leq \frac{(2t - 3)^2 - 1}{8} \newline & \implies \newline \frac{1}{2}t^2 - \frac{5}{2}t + 3 & < s & \leq \frac{1}{2}t^2 - \frac{3}{2}t + 1 \newline & \implies \newline \frac{(t - 2)(t - 3)}{2} & < s & \leq \frac{(t - 1)(t - 2)}{2} \end{array}
But by construction of $S_t$, $\frac{(t - 1)(t - 2)}{2} < s \leq \frac{t(t - 1)}{2}$, thus $s < s$ and we reach a contradiction.
Thus, $m(s) \in \lbrace t \rbrace$ so $m(s) = t$. Thus, for any $s \in S_t$, $m(s) = t$. This means that by partitioning $\mathbb{N}$ in this manner, we can consider each $S_t$ within which $m(s)$ is constant and the ceiling function no longer makes us anxious.
We now show that $x(n), y(n) \geq 1$. Instead of trying to show this directly, we show that it is true within every $S_t$. But since that partitions $\mathbb{N}$, it yields the required result.
Take $s \in S_t$. By construction of $S_t$:
\begin{array}{rcl} s & > & \frac{(t - 1)(t - 2)}{2} \newline & \implies \newline s - \frac{(t - 1)(t - 2)}{2} & > & 0 \newline & \implies \newline s - \frac{(t - 1)(t - 2)}{2} & \geq & 1 \newline & \implies \newline x(s) & \geq & 1 \end{array}
meaning every element in $S_t$ is strict positive, so $x(n) \geq 1 \forall n \in \mathbb{N}$.
Similarly, take $s \in S_t$. By construction of $S_t$:
\begin{array}{rcl} s & \leq & \frac{t(t - 1)}{2} \newline & = & \left( \frac{t(t - 1)}{2} + 1 \right) - 1 \newline & = & \left( \frac{(t - 1)(t - 2)}{2} + t \right) - 1 \newline & \leq & \frac{(m(s) - 1)(m(s) - 2)}{2} + m(s) - 1 \newline & \implies \newline 1 & \leq & m(s) - s + \frac{(m(s) - 1)(m(s) - 2)}{2} \newline & = & y(s) \end{array}
meaning every element in $S_t$ is strict positive, so $y(n) \geq 1 \forall n \in \mathbb{N}$.
Since we already know $f : \mathbb{N} \to \mathbb{Z}^2$, we observe that the image is in fact $\mathbb{N}^2$, so we can redefine $f : \mathbb{N} \to \mathbb{N}^2$ and say that the image and the codomain are the same.
##### Injectivity of $f$
Choose $(x_1, y_1) \in \mathbb{N}^2$ according to $f$ for some $n_1 \in \mathbb{N}$ such that:
\begin{array}{rcl} m_1 & = & \left \lceil \frac{1 + \sqrt{1 + 8n_1}}{2} \right \rceil \newline x_1 & = & n_1 - \frac{(m_1 - 1)(m_1 - 2)}{2} \newline y_1 & = & m_1 - x_1 \end{array}
Choose $(x_2, y_2) \in \mathbb{N}^2$ in a similar manner for some $n_2 \in \mathbb{N}$. Suppose $(x_1, y_1) = (x_2, y_2)$. Then $x_1 = x_2$ and $y_1 = y_2$.
Substituting, $y_2 = m_1 - x_2$. Since $y_2 = m_2 - x_2$, $m_1 = m_2$.
Also substituting, $x_1 = n_2 - \frac{(m_2 - 1)(m_2 - 2)}{2} = n_2 - \frac{(m_1 - 1)(m_1 - 2)}{2}$. Since $x_1 = n_1 - \frac{(m_1 - 1)(m_1 - 2)}{2}$, $n_1 = n_2$.
Thus $f$ is injective.
##### Surjectivity of $f$
Choose arbitrary $(x_1, y_1) \in \mathbb{N}^2$. We seek $n_1 \in \mathbb{N}$ such that $f$ is satisfied; that is:
\begin{array}{rcl} m(n_1) & = & \left \lceil \frac{1 + \sqrt{1 + 8n_1}}{2} \right \rceil \newline x_1 = x(n_1) & = & n_1 - \frac{(m(n_1) - 1)(m(n_1) - 2)}{2} \newline y_1 = y(n_1) & = & m(n_1) - x(n_1) \end{array}
It might seem straightforward to substitute and solve for $n_1$ explicitly, then calculate $m(n_1)$ explicitly, then show that $x(n_1) = x_1$ and $y(n_1) = y_1$. In this form, however, dancing with the ceiling function is tricksy. Instead, we view the problem as a system of constraints and rewrite it into a simpler form, finding $n_1$ along the way.
\begin{array}{rcl} m(n_1) & = & \left \lceil \frac{1 + \sqrt{1 + 8n_1}}{2} \right \rceil \newline x_1 & = & n_1 - \frac{(m(n_1) - 1)(m(n_1) - 2)}{2} \newline y_1 & = & m(n_1) - x_1 \end{array}
and substituting $m(n_1)$ throughout:
\begin{array}{rcl} x_1 + y_1 & = & \left \lceil \frac{1 + \sqrt{1 + 8n_1}}{2} \right \rceil \newline x_1 & = & n_1 - \frac{(x_1 + y_1 - 1)(x_1 + y_1 - 2)}{2} \end{array}
and rewriting the ceiling function as an inequality and rearranging:
\begin{array}{lcl} x_1 + y_1 - 1 < \frac{1 + \sqrt{1 + 8n_1}}{2} \leq x_1 + y_1 \newline n_1 = x_1 + \frac{(x_1 + y_1 - 1)(x_1 + y_1 - 2)}{2} \end{array}
Considering the inequality:
\begin{array}{lcl} x_1 + y_1 - 1 < \frac{1 + \sqrt{1 + 8n_1}}{2} \leq x_1 + y_1 \newline \implies 2(x_1 + y_1) - 3 < \sqrt{1 + 8n_1} \leq 2(x_1 + y_1) - 1 \newline \implies \left\lbrack 2(x_1 + y_1) - 3 \right\rbrack^2 < 1 + 8n_1 \leq \left\lbrack 2(x_1 + y_1) - 1 \right\rbrack^2 \newline \implies \frac{\left\lbrack 2(x_1 + y_1) - 3 \right\rbrack^2 - 1}{8} < n_1 \leq \frac{\left\lbrack 2(x_1 + y_1) - 1 \right\rbrack^2 - 1}{8} \newline \implies \frac{ 4(x_1 + y_1)^2 - 12(x_1 + y_1) + 8}{8} < n_1 \leq \frac{ 4(x_1 + y_1)^2 - 4(x_1 + y_1) }{8} \newline \implies \frac{1}{2}(x_1 + y_1)^2 - \frac{3}{2}(x_1 + y_1) + 1 < n_1 \leq \frac{1}{2}(x_1 + y_1)^2 - \frac{1}{2}(x_1 + y_1) \end{array}
Considering the equality:
\begin{array}{rcl} n_1 & = & x_1 + \frac{(x_1 + y_1 - 1)(x_1 + y_1 - 2)}{2} \newline & = & x_1 + \frac{1}{2}(x_1 + y_1)^2 - \frac{3}{2}(x_1 + y_1) + 1 \end{array}
Thus we can rewrite the system as:
\begin{array}{lcl} \frac{1}{2}(x_1 + y_1)^2 - \frac{3}{2}(x_1 + y_1) + 1 < n_1 \leq \frac{1}{2}(x_1 + y_1)^2 - \frac{1}{2}(x_1 + y_1) \newline n_1 = x_1 + \frac{1}{2}(x_1 + y_1)^2 - \frac{3}{2}(x_1 + y_1) + 1 \end{array}
and multiplying by $2$ and substituting $n_1$:
\begin{array}{lcl} (x_1 + y_1)^2 - 3(x_1 + y_1) + 2 < 2x_1 + (x_1 + y_1)^2 - 3(x_1 + y_1) + 2 \leq (x_1 + y_1)^2 - (x_1 + y_1) \end{array}
We now split the inequality. The LHS simplifies to $0 < x_1 \implies 1 \leq x_1$ which we know to be true. The RHS simplifies to $x_1 - (x_1 + y_1) + 1 \leq 0 \implies -y_1 \leq -1 \implies y_1 \geq 1$ which we know to be true.
So our candidate solution to the system, which must be unique if so, is:
\begin{array}{rcl} n_1 = x_1 + \frac{(x_1 + y_1 - 1)(x_1 + y_1 - 2)}{2} \end{array}
which was fairly clear at the beginning, but we have shown it satisfies all the constraints including the ceiling. Clearly, $n_1 \in \mathbb{Z}$ since the fraction is even times odd. But $x_1, y_1 \geq 1 \implies x_1 + y_1 \geq 2 \implies n_1 \geq 1$, so $n_1 \in \mathbb{N}$.
Thus, we have found $n_1 \in \mathbb{N}$ which satisfies the constraints and this $n_1 \mapsto (x_1, y_1)$, which was an arbitrary choice in $\mathbb{N}^2$. Thus $f$ is surjective.
#### Conclusion
Since $f : \mathbb{N} \to \mathbb{N}^2$ is both injective and surjective, it is bijective. We have also found explicit ways of converting between $\mathbb{N}$ and $\mathbb{N}^2$ in either direction. This $f$ satisfies the properties we required for walking all combinations of infinite Lists X and Y. We make ourselves anxious about the holes and mistakes in our argument, hope they are not too serious, and finish off the coffee.
https://www.physicsforums.com/threads/how-do-they-measure-quark-masses-quark-charges.834356/ | # How do they measure quark masses? Quark charges?
1. Sep 24, 2015
### H_Hernandez
Hello,
So, I know quarks are confined in baryons. In a proton, there are "3" quarks, but the sum of their masses is not the mass of the proton. This implies a major fraction of the proton mass comes from interactions. My question is, how then do they measure the u and d quark masses? And similarly, their charges?
2. Sep 24, 2015
### mathman
Google "quark mass measurement". You will get lots of hits, including old entries in this forum.
3. Sep 24, 2015
### Staff: Admin
4. Sep 24, 2015
### Avodyne
It is possible to compute how the masses of quark-containing particles (such as the proton, neutron, and pion, collectively known as hadrons) depend on the quark masses. Then, the observed values of the hadron masses can be used to determine the quark masses.
For complete details, see
http://pdg.lbl.gov/2012/reviews/rpp2012-rev-quark-masses.pdf
This is a technical document, intended for experts, but I think it's pretty readable nonetheless, and gives a good picture of what physicists actually do to figure these things out. One key point is that, because quarks are confined, their "mass" does not have as clear and simple a definition as it does for unconfined particles such as the electron.
http://mathhelpforum.com/algebra/179169-mixture-problem-help.html | # Math Help - Mixture Problem Help
1. ## Mixture Problem Help
I'm having trouble understanding mixture problems (and all the other ones, like distance and work-rate problems).
So my question is...
Victor is making orange juice. He has one pitcher that contains 2 quarts of 50% concentrate, and another pitcher that contains 4 quarts of 20% concentrate. If he combines the two pitchers, how many quarts of water must he add so that he ends up with a mixture containing 25%?
A) 0.5
B) 1
C) 1.2
D) 2.4
2. Originally Posted by seals123
let x = quarts of added water
2(.5) + 4(.2) + x(0) = (6+x)(.25)
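A quick check of that equation (a sketch):

```python
# concentrate on the left: 2(0.5) + 4(0.2) = 1.8 quarts
# total volume on the right: (6 + x) quarts at 25%
x = (2 * 0.5 + 4 * 0.2) / 0.25 - 6
print(x)  # 1.2, i.e. answer (C)
```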
3. Thanks!
I wasn't putting x for water.
http://www.mathportal.org/calculators/complex-numbers-calculator/complex-unary-operations-calculator.php | Math Calculators, Lessons and Formulas
It is time to solve your math problem
You are here:
# Calculators :: Complex numbers :: Operations with one complex number
This calculator extracts the square root, calculates the modulus, finds the inverse, finds the conjugate, and transforms a complex number to polar form. The calculator will generate a detailed explanation for each operation.
## Operations with one complex number
This calculator will find absolute value, inverse, conjugate or polar for a given complex number.
## Formulas for conjugate, modulus, inverse, polar form and roots
### Conjugate
The conjugate of the complex number $z = a + bi$ is: $\bar{z} = a - bi$
### Modulus (absolute value)
The absolute value of the complex number $z = a + bi$ is: $|z| = \sqrt{a^2 + b^2}$
### Inverse
The inverse of the complex number $z = a + bi$ is: $z^{-1} = \dfrac{a - bi}{a^2 + b^2}$
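The same operations map directly onto Python's complex type; a brief sketch:

```python
import cmath

z = 3 + 2j
print(z.conjugate())   # conjugate: (3-2j)
print(abs(z))          # modulus: ~3.606
print(1 / z)           # inverse: (~0.231 - 0.154j), i.e. (3 - 2i)/13
print(cmath.polar(z))  # polar form (r, theta): (~3.606, ~0.588 rad)
```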
https://www.physicsforums.com/threads/centripetal-acceleration.293149/ | # Centripetal Acceleration
1. Feb 17, 2009
### lauriecherie
1. The problem statement, all variables and given/known data
What would Earth's rotation period have to be for objects on the equator to have a centripetal acceleration equal to 9.8 m/s^2?
____ min
2. Relevant equations
Centripetal acceleration is equal to velocity * (2PI/T),
where T is the period in seconds.
3. The attempt at a solution
I set centripetal acceleration equal to 9.8m/s^2 and solved for T. Then I took T and divided it by 60 so I could get the answer in minutes. I came out with 4.97 min which Webassign says is incorrect. Any ideas?
Last edited: Feb 17, 2009
2. Feb 17, 2009
### LowlyPion
One way to look at it is that if the object is not to fall into the center then calculate the orbit about a mass of Earth at 1 earth radius.
So what is the period of such an orbit?
$v^2/R = GM/R^2 = \omega^2 R$
$\omega^2 = GM/R^3 = (2\pi/T)^2$
$T = 2\pi\,(R^3/GM)^{1/2}$
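Plugging in numbers (a sketch; the Earth radius value is assumed, since it is not given in the thread):

```python
import math

g = 9.8      # m/s^2, required centripetal acceleration
R = 6.37e6   # m, mean Earth radius (assumed)

# From a = (2*pi/T)**2 * R, equivalently the surface-orbit period above:
T = 2 * math.pi * math.sqrt(R / g)
print(T / 60)  # ~84.4 minutes
```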
3. Feb 17, 2009
### lauriecherie
What does GM stand for?
4. Feb 17, 2009
### LowlyPion
That's sometimes written as μ which is the standard gravitational parameter for earth.
It is the product of Earth's mass and the universal gravitational constant.
https://physics.stackexchange.com/questions/176760/problem-on-einstein-de-haas-experiment?noredirect=1 | # Problem on Einstein - de Haas experiment
I am a Physics student (4th year) and I'm trying to study the Einstein - de Haas effect in laboratory. That is what I got: a suspended Iron cylinder with about 5 cm height and a radius of 0.8 cm is put inside a solenoid which can create a magnetic field of 0.2 mT; we use a Titanium fiber of 0.05 mm radius to hold up the cylinder.
Now where is the problem? Well, we expected that when we turn on the current and create the magnetic field through the solenoid, the cylinder would rotate just a bit, for example 2 degrees, because that is what is written in the Einstein - de Haas article... but no! We got rotations of 70° and more (to measure such a large angle we used a camera which records the piece while rotating)! Measuring the initial angular velocity with a computer, and substituting it in the formula:
$$I \omega = -(2m/e) M$$
Where $I$ is the moment of inertia, $\omega$ is the angular velocity and $M$ is the magnetization of our material (which we can obtain), we get a value far from $(2m/e)\simeq10^{-11}rad / s \cdot T$; in fact we got something like $10^{-5}$. I know that we should obtain something like double $(2m/e)$, but we are not even close to it!
In conclusion: should we force ourselves to have an angle of $1°\sim5°$ and then (and only then) measure the angular velocity, because this effect is measurable only with small oscillations? Or did we get something strange, which means that we should change the setup because something is interfering? Obviously we have isolated the system as well as we can from air currents and ground vibrations.
If you need more information about what we are using in the experiment, just say it and I will provide them. Thanks!
New information (about the comment of HolgerFielder): the wire diameter I've already given: Titanium, 0.1 mm; if you mean the diameter of the solenoid's wire, that is almost 2 mm. Coil diameter = 4 cm. Number of loops = 200. Current $\simeq$ 1.5 Ampère. From these values and using this formula for the magnetic field of a solenoid with radius r and length l (on its central axis):
$$B = \mu _{0} (N/l) i \cdot l/\sqrt{l^{2} + 4 r^{2}}$$
one can obtain $B \simeq 2.2$ mT, which is close to the value that we measure with the Gaussmeter (0.2 mT as I mentioned before, at the top of the question).
https://www.physicsforums.com/threads/gaussean-beam.133829/ | # Gaussian beam
1. Sep 27, 2006
### Quasi Particle
hello.
I've got a Gaussian beam which is collimated with a diameter of 2 mm and a wavelength of 1112 nm. I need to focus it to a beam waist of 25 µm, but the lens with the smallest focal length I have is still f = 300 mm, so I need to build a telescope:
---collimated beam (w0)-----|lens f)-----beam with w'0-----|lens f')-----beam with w"0-----
I found the following formulae:
-focussing a collimated beam with a lens:
$$w'_{0}=\frac{\lambda}{\pi w_{0}} f$$
where $$w'_0$$ is the new and $$w_0$$ is the old beam waist.
-focussing a beam with the lens in the waist of the original beam:
w"0 = w'0 / [1+(z'0/f')^2]^-1/2
$$w"_0=\frac{w'_0}{\sqrt{1+{\left(\frac{\pi {w'_0}^2}{\lambda f'}\right)}^2}}$$
where w"0 is the second beam waist
now, from the second formula I wanted to calculate $$w'_0$$ so I can determine f (when f'=300) but ended up with
$$w'_0=\sqrt{\sqrt{\frac{{w'_0}^2}{a^2}+\frac{1}{4a^4}}-\frac{1}{2a^2}}$$
with $$a=\frac{\pi}{\lambda f}$$.
Is that right? It looks quite strange, and I get something on the order of $10^{-10}$, which seems very small for a beam waist.
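For reference, evaluating the first formula with the stated numbers (a sketch; $w_0$ = 1 mm is half the 2 mm collimated diameter) gives an intermediate waist on the order of 100 µm rather than $10^{-10}$ m:

```python
import math

lam = 1112e-9  # wavelength in m
w0 = 1e-3      # collimated beam waist in m (half the 2 mm diameter)
f1 = 0.3       # focal length of the f = 300 mm lens, in m

w0p = lam * f1 / (math.pi * w0)
print(w0p)     # ~1.06e-4 m, i.e. ~106 um
```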
Is it the right ansatz, in the first place? The aim is to calculate how to build the telescope, i.e. what focal length and distance the first lens must have. This is a real problem btw., but it's so much like textbook problems that I posted it here.
Any help is appreciated; I'd be happy to be pointed in the right direction or given some material on this. I searched the forum and the web without much success, but maybe I've been looking in the wrong places.
Thanks in advance for taking the time =D
____
EDIT: I don't seem to be able to make latex display some of the formulae. Sorry for the inconvenience.
Last edited: Sep 28, 2006
http://www.nag.com/numeric/CL/nagdoc_cl23/html/F11/f11mlc.html | f11 Chapter Contents
f11 Chapter Introduction
NAG C Library Manual
# NAG Library Function Document: nag_superlu_matrix_norm (f11mlc)
## 1 Purpose
nag_superlu_matrix_norm (f11mlc) computes the $1$-norm, the $\infty$-norm or the maximum absolute value of the elements of a real, square, sparse matrix which is held in compressed column (Harwell–Boeing) format.
## 2 Specification
#include <nag.h>
#include <nagf11.h>
void nag_superlu_matrix_norm (Nag_NormType norm, double *anorm, Integer n, const Integer icolzp[], const Integer irowix[], const double a[], NagError *fail)
## 3 Description
nag_superlu_matrix_norm (f11mlc) computes various quantities relating to norms of a real, sparse $n$ by $n$ matrix $A$ presented in compressed column (Harwell–Boeing) format.
## 4 References

None.
## 5 Arguments
1: norm – Nag_NormType – Input
On entry: specifies the value to be returned in anorm.
${\mathbf{norm}}=\mathrm{Nag_RealOneNorm}$
The $1$-norm ${‖A‖}_{1}$ of the matrix is computed, that is $\underset{1\le j\le n}{\mathrm{max}}\phantom{\rule{0.25em}{0ex}}\sum _{i=1}^{n}\left|{A}_{ij}\right|$.
${\mathbf{norm}}=\mathrm{Nag_RealInfNorm}$
The $\infty$-norm ${‖A‖}_{\infty }$ of the matrix is computed, that is $\underset{1\le i\le n}{\mathrm{max}}\phantom{\rule{0.25em}{0ex}}\sum _{j=1}^{n}\left|{A}_{ij}\right|$.
${\mathbf{norm}}=\mathrm{Nag_RealMaxNorm}$
The value $\underset{1\le i,j\le n}{\mathrm{max}}\phantom{\rule{0.25em}{0ex}}\left|{A}_{ij}\right|$ (not a norm).
Constraint: ${\mathbf{norm}}=\mathrm{Nag_RealOneNorm}$, $\mathrm{Nag_RealInfNorm}$ or $\mathrm{Nag_RealMaxNorm}$.
2: anorm – double * – Output
On exit: the computed quantity relating the matrix.
3: n – Integer – Input
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.
4: icolzp[$\mathit{dim}$] – const Integer – Input
Note: the dimension, dim, of the array icolzp must be at least ${\mathbf{n}}+1$.
On entry: ${\mathbf{icolzp}}\left[i-1\right]$ contains the index in $A$ of the start of a new column. See Section 2.1.3 in the f11 Chapter Introduction.
5: irowix[$\mathit{dim}$] – const Integer – Input
Note: the dimension, dim, of the array irowix must be at least ${\mathbf{icolzp}}\left[{\mathbf{n}}\right]-1$, the number of nonzeros of the sparse matrix $A$.
On entry: the row index array of the sparse matrix $A$.
6: a[$\mathit{dim}$] – const double – Input
Note: the dimension, dim, of the array a must be at least ${\mathbf{icolzp}}\left[{\mathbf{n}}\right]-1$, the number of nonzeros of the sparse matrix $A$.
On entry: the array of nonzero values in the sparse matrix $A$.
7: fail – NagError * – Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument number $〈\mathit{\text{value}}〉$ had an illegal value.
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 0$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
## 7 Accuracy

Not applicable.
## 8 Further Comments

None.
## 9 Example
This example computes norms and maximum absolute value of the matrix $A$, where
$$A = \begin{pmatrix} 2.00 & 1.00 & 0 & 0 & 0 \\ 0 & 0 & 1.00 & -1.00 & 0 \\ 4.00 & 0 & 1.00 & 0 & 1.00 \\ 0 & 0 & 0 & 1.00 & 2.00 \\ 0 & -2.00 & 0 & 0 & 3.00 \end{pmatrix}.$$
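A rough Python model of the three returned quantities (illustrative only, not the NAG implementation), using this matrix in one-based compressed column form:

```python
def csc_norms(n, icolzp, irowix, a):
    # icolzp/irowix use one-based indices, as in the Harwell-Boeing scheme.
    col_sums = [0.0] * n
    row_sums = [0.0] * n
    for j in range(n):
        for k in range(icolzp[j] - 1, icolzp[j + 1] - 1):
            v = abs(a[k])
            col_sums[j] += v
            row_sums[irowix[k] - 1] += v
    return max(col_sums), max(row_sums), max(abs(x) for x in a)

icolzp = [1, 3, 5, 7, 9, 12]
irowix = [1, 3, 1, 5, 2, 3, 2, 4, 3, 4, 5]
a = [2.0, 4.0, 1.0, -2.0, 1.0, 1.0, -1.0, 1.0, 1.0, 2.0, 3.0]
print(csc_norms(5, icolzp, irowix, a))  # (6.0, 6.0, 4.0)
```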
### 9.1 Program Text
Program Text (f11mlce.c)
### 9.2 Program Data
Program Data (f11mlce.d)
### 9.3 Program Results
Program Results (f11mlce.r)
https://brilliant.org/problems/simple-isnt-enough/ | # Simple isn't enough
Calculus Level 5
$\int_{0}^{16} { \arctan{(\sqrt{\sqrt{z} -1})} \ dz}$
If the complex integral above can be expressed as $$a + ib$$, find the value of $$\left \lfloor{a + 10b}\right \rfloor$$.
Clarification: $$i = \sqrt{-1}$$.
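A numeric check is possible with arbitrary-precision arithmetic; a sketch using mpmath (splitting at $$z = 1$$, below which the integrand is genuinely complex; the endpoint singularity at $$z = 0$$ is integrable):

```python
import mpmath as mp

f = lambda z: mp.atan(mp.sqrt(mp.sqrt(z) - 1))  # mpmath gives complex sqrt for z < 1
val = mp.quad(f, [0, 1, 16])
print(val)                                 # a + b*i
print(mp.floor(val.real + 10 * val.imag))  # the requested floor
```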
https://crazyproject.wordpress.com/2011/06/22/decide-whether-a-given-linear-congruence-over-an-algebraic-integer-ring-has-a-solution/ | ## Decide whether a given linear congruence over an algebraic integer ring has a solution
Consider $A = (3\sqrt{-5}, 10+\sqrt{-5})$ as an ideal in $\mathcal{O} = \mathbb{Z}[\sqrt{-5}]$. Does $3\xi \equiv 5$ mod $A$ have a solution in $\mathcal{O}$?
Let $D = ((3),A) = (3,1+\sqrt{-5})$. By Theorem 9.13, our congruence has a solution if and only if $5 \equiv 0$ mod $D$. But as we saw in a previous exercise, $5 \equiv 2 \not\equiv 0$ mod $D$; so this congruence does not have a solution in $\mathcal{O}$.
https://physics.stackexchange.com/questions/314638/variational-derivative-of-function-with-respect-to-its-derivative/314640 | # Variational derivative of function with respect to its derivative [closed]
What is $$\frac{\delta f(t)}{\delta \dot{f}(t)}~?$$
Where $\dot{f}(t) = df/dt$.
• could you provide some context please? – ZeroTheHero Feb 25 '17 at 3:32
• It is purely a math question ,so better to ask in mathematics stack exchange – Lapmid Feb 25 '17 at 4:00
• @SherlockHolmes yeah and all possible answers are purely math too. – ZeroTheHero Feb 25 '17 at 6:05
• Related question by OP: physics.stackexchange.com/q/263261/2451 – Qmechanic Feb 25 '17 at 7:36
The definition of the functional derivative of a functional $I[g]$ is the distribution $\frac{\delta I}{\delta g}(\tau)$ such that $$\left\langle \frac{\delta I}{\delta g}, h\right\rangle := \frac{d}{d\alpha}\bigg\rvert_{\alpha=0} I[g+ \alpha h]$$ for every test function $h$. In our case, assuming we deal with functions which suitably vanish before reaching $\pm \infty$, $$I[g] = \int_{-\infty}^t g(x)dx$$ so that $$I[\dot{f}]= f(t)$$ as requested. Going on with the procedure $$\left\langle \frac{\delta I}{\delta g}, h\right\rangle = \frac{d}{d\alpha}\bigg\rvert_{\alpha=0} \int_{-\infty}^t(g(\tau)+ \alpha h(\tau)) d\tau = \int_{-\infty}^t h(\tau) d\tau = \int_{-\infty}^{+\infty} \theta(t-\tau)h(\tau) d\tau$$ where $\theta(\tau)=1$ for $\tau\geq 0$ and $\theta(\tau)=0$ for $\tau<0$, and so $$\frac{\delta f(t)}{\delta \dot{f}}(\tau) = \frac{\delta I}{\delta g}(\tau)= \theta(t-\tau)$$
• Just a LaTeX tip: you can use \big, \bigg and so forth to have a larger vertical line, \rvert. (See edit.) – JamalS Feb 25 '17 at 9:54
• Thanks. I usually use $\left.$ $\right|$ but I did not exploit them here. – Valter Moretti Feb 25 '17 at 9:56
• Great explanation, thanks! I guess it makes physical sense too - varying at $\tau$ you only expect any effect at $t\geq\tau$ – smörkex Feb 26 '17 at 3:43
The important thing to keep in mind is that a functional derivative is more like a gradient than an ordinary derivative. The reason that this is an important consideration is because, practically, we always specify functions with (possibly infinite) lists of numbers, be they: Taylor series coefficients, continued fraction constants, a list of constant values (approximating with boxcars), a list of points (connect the dots), Fourier series coefficients, or etc.
The important part of this consideration is that the function's derivative doesn't carry any information about a constant vertical offset. Thus, because any function of the form $f(t) + c$ has the same derivative, $\dot{f}(t)$, the functional derivative in the question will not be defined in the "direction" that corresponds to the degree of freedom defined by $c$.
In equations, let \begin{align} g(t) &\equiv \dot{f}(t) \Rightarrow \\ f(t) - f(t_0) & = \int_{t_0}^t g(t') \operatorname{d} t'\end{align} From there: \begin{align} \frac{\delta f(t)}{ \delta \dot{f}(\tau)} - \frac{\delta f(t_0)}{ \delta \dot{f}(\tau)} & = \frac{\delta \int_{t_0}^t g(t') \operatorname{d}t'}{ \delta g(\tau)} \\ & = \int_{t_0}^t \delta(t' - \tau) \operatorname{d}t' \\ & = \Theta(t-\tau) \, \Theta(\tau - t_0) - \Theta(t_0 - \tau)\, \Theta(\tau - t). \end{align}
This now satisfies: $$\frac{\partial}{\partial t} \left(\frac{\delta f(t)}{\delta \dot{f}(\tau)}\right) = \delta(t - \tau),$$ as expected. Because $\dot{f}(t)$ doesn't carry any information about the vertical offset of $f(t)$, only differences of the functional derivative, like above, are well defined.
If the space of functions is limited to those that satisfy $\lim_{t\rightarrow -\infty} f(t) = 0$, then we can take $t_0\rightarrow -\infty$ to get the expression from Valter Moretti's answer.
• Since the choice of $t_0$ is arbitrary, your calculation seems to suggest that this variation $\frac{\delta f(t)}{\delta \dot{f}(t)}$ is not well defined. – taper Feb 25 '17 at 4:15
• @taper I have now addressed that, and you're right, only differences in that functional derivative are well defined. – Sean E. Lake Feb 25 '17 at 19:35
• Thanks for this insight. So combining with Valter Moretti's answer, the full solution is then $\delta f(t) / \delta \dot{f}(\tau) = \theta(t-\tau) + c(\tau)$. But what kinds of initial conditions would determine $c(\tau)$ - it seems like for this problem these should be context-independent of what $f$ actually is. One natural condition seems to be $\delta f(a) / \delta \dot{f}(b) = 0$ where $b>a$. Then $0 = \delta f(a) / \delta \dot{f}(b) = \theta(a-b) + c(b) = c(b)$, so the full solution is just $\delta f(t) / \delta \dot{f}(\tau) = \theta(t-\tau)$ - is this true? – smörkex Feb 26 '17 at 3:53
• I would disagree with that. Looking at your other questions, related to Euler-Lagrange equations, this isn't relevant anyway. There are two main ways to do that problem. First, variational derivative of the action w.r.t. $x(t)$ witch uses chain rules and $\frac{\delta x(t)}{\delta x(t')} = \delta(t-t')$. The second is to use partial derivatives in which $x$, $\dot{x}$, $\ddot{x}$, etc are all treated as independent variables. – Sean E. Lake Feb 26 '17 at 5:59
• @Kurt In either case, the result is: \begin{align} \frac{\delta S[x]}{\delta x(t)} &= \int \left(\frac{\partial L}{\partial x} \delta(t'-t) + \frac{\partial L}{\partial \dot{x}} \delta'(t'-t) + \frac{\partial L}{\partial \ddot{x}} \delta''(t'-t) + \ldots \right) \operatorname{d}t' \\ & = \sum_{n=0}^\infty (-1)^n \frac{\operatorname{d}^n}{\operatorname{d}t^n} \left(\frac{\partial L}{\partial \frac{\mathrm{d}^n\, x}{\mathrm{d}\, t^n}}\right) \end{align} – Sean E. Lake Feb 26 '17 at 6:03
https://pdglive.lbl.gov/Particle.action?init=0&node=M059&home=MXXX025 | $c\overline{c}$ MESONS (including possibly non-$q\overline{q}$ states)

#### $\eta_c(2S)$

$I^G(J^{PC}) = 0^+(0^{-+})$

Quantum numbers are quark model predictions.

$\eta_c(2S)$ MASS: $3637.5 \pm 1.1$ MeV (S = 1.2)

$\eta_c(2S)$ WIDTH: $11.3^{+3.2}_{-2.9}$ MeV
Decay modes (p is the momentum of the decay products in the $\eta_c(2S)$ rest frame):

| | Mode | Fraction ($\Gamma_i/\Gamma$) | p (MeV/c) |
|---|---|---|---|
| $\Gamma_{1}$ | hadrons | not seen | |
| $\Gamma_{2}$ | $K\overline{K}\pi$ | $(1.9\pm1.2)\%$ | 1729 |
| $\Gamma_{3}$ | $K\overline{K}\eta$ | $(5\pm4)\times 10^{-3}$ | 1637 |
| $\Gamma_{4}$ | $2\pi^{+}2\pi^{-}$ | not seen | 1792 |
| $\Gamma_{5}$ | $\rho^{0}\rho^{0}$ | not seen | 1645 |
| $\Gamma_{6}$ | $3\pi^{+}3\pi^{-}$ | not seen | 1749 |
| $\Gamma_{7}$ | $K^{+}K^{-}\pi^{+}\pi^{-}$ | not seen | 1700 |
| $\Gamma_{8}$ | $K^{*0}\overline{K}^{*0}$ | not seen | 1585 |
| $\Gamma_{9}$ | $K^{+}K^{-}\pi^{+}\pi^{-}\pi^{0}$ | $(1.4\pm1.0)\%$ | 1667 |
| $\Gamma_{10}$ | $K^{+}K^{-}2\pi^{+}2\pi^{-}$ | not seen | 1627 |
| $\Gamma_{11}$ | $K_S^0 K^{-}2\pi^{+}\pi^{-}$ + c.c. | seen | 1666 |
| $\Gamma_{12}$ | $2K^{+}2K^{-}$ | not seen | 1470 |
| $\Gamma_{13}$ | $\phi\phi$ | not seen | 1506 |
| $\Gamma_{14}$ | $p\overline{p}$ | seen | 1558 |
| $\Gamma_{15}$ | $p\overline{p}\pi^{+}\pi^{-}$ | seen | 1461 |
| $\Gamma_{16}$ | $\gamma\gamma$ | $(1.9\pm1.3)\times 10^{-4}$ | 1819 |
| $\Gamma_{17}$ | $\gamma J/\psi(1S)$ | $<1.4\%$ (CL=90%) | 500 |
| $\Gamma_{18}$ | $\pi^{+}\pi^{-}\eta$ | not seen | 1766 |
| $\Gamma_{19}$ | $\pi^{+}\pi^{-}\eta'$ | not seen | 1680 |
| $\Gamma_{20}$ | $\pi^{+}\pi^{-}\eta_c(1S)$ | $<25\%$ (CL=90%) | 537 |
https://www.zbmath.org/?q=an%3A0958.30029
Metrics of constant curvature 1 with three conical singularities on the 2-sphere. (English) Zbl 0958.30029
Summary: Let $$\text{Met}_1(\Sigma)$$ be the set of positive semi-definite conformal metrics of constant curvature 1 with conical singularities on a compact Riemann surface $$\Sigma$$. Suppose that $$d\sigma^2 \in\text{Met}_1 (\Sigma)$$ has conical singularities at points $$p_j\in\Sigma (j=1,\dots,n)$$ with order $$\beta_j(>-1)$$, that is, it admits a tangent cone of angle $$2\pi(\beta_j +1)$$ at each $$p_j$$. A formal sum $$D=\sum^n_{j=1} \beta_jp_j$$ is called the divisor of $$d\sigma^2$$. Then the Gauss-Bonnet formula implies that $$\chi (\Sigma, D):= \chi(\Sigma) +\sum^n_{j=1} \beta_j>0$$. The divisor $$D$$ is called subcritical, critical, or supercritical when $$\delta(\Sigma,D): = \chi (\Sigma,D) -2\,\text{Min}_{j=1, \dots,n} \{1,\beta_j +1\}$$ is negative, zero, or positive, respectively. M. Troyanov [Trans. Am. Math. Soc. 324, No. 2, 793-821 (1991; Zbl 0724.53023)] showed that if $$\chi(\Sigma,D)>0$$, there exists a pseudometric in $$\text{Met}_1(\Sigma)$$ with divisor $$D$$ whenever it is subcritical. On the other hand, for the supercritical case several obstructions are known and the existence problem for the metrics is difficult: M. Troyanov [Lect. Notes Math. 1410, 296-306 (1989; Zbl 0697.53037)] gave a classification of metrics of constant curvature 1 with at most two conical singularities on the 2-sphere. In the paper, the authors give a necessary and sufficient condition for the existence and uniqueness of a metric with three conical singularities of given order on the 2-sphere. As shown by the authors, there is a one-to-one correspondence between the set $$\text{Met}_1(\Sigma)$$ and the set of branched CMC-1 (constant mean curvature one) immersions of $$\Sigma$$ minus finitely many points into hyperbolic 3-space with given hyperbolic Gauss map. In the proof of the theorem, this correspondence plays an important role. It should be remarked that classical work of F. Klein [Vorlesungen über die hypergeometrische Funktion (1933; Zbl 0461.33001)] is related to the paper.
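The subcritical/critical/supercritical trichotomy uses only the two quantities defined above, so it is easy to tabulate. A minimal sketch (an editorial addition) for divisors on the 2-sphere, where $\chi(S^2)=2$:

```javascript
// chi(Sigma, D) = chi(Sigma) + sum(beta_j);  delta = chi(Sigma, D) - 2 * min{1, beta_j + 1}
function classify(betas, chiSigma = 2) {
  const chi = chiSigma + betas.reduce((a, b) => a + b, 0);
  const delta = chi - 2 * Math.min(1, ...betas.map(b => b + 1));
  if (chi <= 0) return "chi(Sigma, D) <= 0: Gauss-Bonnet obstruction";
  // exact float comparison is fine for a sketch; delta = 0 is the critical borderline
  return delta < 0 ? "subcritical" : delta === 0 ? "critical" : "supercritical";
}
console.log(classify([-0.5, -0.5, -0.5])); // "subcritical"  (three cone angles pi)
console.log(classify([1, 1, 1]));          // "supercritical" (three cone angles 4*pi)
```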
MSC:
30F10 Compact Riemann surfaces and uniformization
53C21 Methods of global Riemannian geometry, including PDE methods; curvature restrictions
53A10 Minimal surfaces in differential geometry, surfaces with prescribed mean curvature
http://math.stackexchange.com/questions/209654/proof-of-continuity-for-a-real-function/210006 | # Proof of continuity for a real function!
Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a real function satisfying, for all $x\in\mathbb{R}$, $$\lim_{r\to x,r\in\mathbb{Q}}f(r)=f(x).$$ Show that $f$ is continuous on $\mathbb{R}.$
you have continuous function $\tilde{f}$ on $\mathbb Q$. Now the question is: what is extension of this functon on $\mathbb R$. Try this physicsforums.com/showthread.php?t=430083 – Nikita Evseev Oct 9 '12 at 5:58
For a fixed $x$, and a given $\epsilon>0$, first find a $\delta$ so that, if $r$ is rational and $|r-x|<\delta$ then $|f(r)-f(x)|<\epsilon$. Then if $t$ is any real with say $|t-x|<\delta/2$ we can pick a rational $r$ very close to $t$ such that $r$ is also within $\delta$ of $x$. Now apply the triangle inequality to $(f(t)-f(x)) = (f(t)-f(r)) + (f(r)-f(x))$ and obtain
$|f(t)-f(x)|\le|f(t)-f(r)|+|f(r)-f(x)|.$
Since $t$ is close to $r$ the first term is small, and since $r$ is close to $x$ the second term is small. The idea is that in each separate term one rational and one real occurs, so that your assumption about convergence of limits through rational values converging to any real applies. I didn't fill in all the details, but with some manipulation one can get the thing less than $2\epsilon$.
EDIT: This needs to be thought through more. I agree with the OP that it's not clear how the details should go. But at least it seems to me it will go through...
Another try, more details: Given the fixed real $x$ and some $\epsilon>0$. First we can pick $\delta_1>0$ so that for rational $r$ we have
$|r-x|<\delta_1$ implies $|f(r)-f(x)|<\epsilon/2$.
Define $\delta=\delta_1/2$
Suppose $t$ is real with $|t-x|<\delta = \delta_1/2$.
We can now pick $\delta'>0$ so that for rational $r$ we have
$|r-t|<\delta'$ implies $|f(r)-f(t)|<\epsilon/2$.
Now put $\delta_2=\min(\delta,\delta')$ and pick a rational $r$ with $|r-t|<\delta_2$.
Then we have $|r-x|\le|r-t|+|t-x|<\delta_1/2+\delta_1/2=\delta_1$ so that $|f(r)-f(x)|<\epsilon/2$, and from $|r-t|<\delta_2$ we also have $|r-t|<\delta'$ so that $|f(r)-f(t)|<\epsilon/2$. We finally arrive at $|f(t)-f(x)|<\epsilon$ on applying the triangle inequality.
you can have a try. There will be something wrong when you take $\delta$ – Riemann Oct 9 '12 at 8:02
Riemann: I think you'll find the writeup is now in standard "epsilon delta" format for continuity at x. Thanks for the note, as before it wasn't completely clear. – coffeemath Oct 9 '12 at 19:53
Fix a $\xi\in{\mathbb R}$. If $f$ were not continuous at $\xi$ we could find an $\epsilon_0>0$ and for each $\delta>0$ a point $x_\delta\in U_\delta(\xi)$ with $$|f(x_\delta)- f(\xi)|\geq\epsilon_0\ .$$ Consider such an $x_\delta$. As $$\lim_{q\to x_\delta, \ q\in{\mathbb Q}} f(q)=f(x_\delta)$$ we can find a $q_\delta\in{\mathbb Q}$ with $|q_\delta-x_\delta|<\delta$ such that $$|f(q_\delta)-f(x_\delta)|<{\epsilon_0\over2}\ .$$
It follows that there is for each $\delta>0$ a point $q_\delta\in U_{2\delta}(\xi)\cap{\mathbb Q}$ such that $$|f(q_\delta)-f(\xi)|\geq{\epsilon_0\over2}\ .$$ This contradicts $\lim_{q\to \xi, \ q\in{\mathbb Q}} f(q)=f(\xi)$.
It looks like Rudin's theorem 4.6 (in the Third edition) is almost exactly this. To paraphrase it to make it match the situation:
If $x$ is a limit point of $\mathbb Q$, then $f$ is continuous at $x$ if and only if $\lim_{r\rightarrow x,r\in \mathbb Q}f(r)=f(x)$.
Clearly, all $x\in \mathbb R$ are limit points of $\mathbb Q$. Sadly, Rudin's proof is, and I quote exactly,
"This is clear if we compare Definitions 4.1 and 4.5".
The former being the definition of the limit and the latter the definition of continuity. Maybe what he's getting at is that being a limit point guarantees that a $\delta$ neighborhood of $x$ will always exist that contains an appropriate $r$, and we can use the exact same $\delta$ for any corresponding $\epsilon$ used to establish the limit in the hypothesis.
Here is my own answer. (right or not)
Proof: We will use the Heine criterion (the sequential characterization of continuity): suppose $x_n\to x_0$ as $n\to \infty$; we want to prove that $$\lim_{n\to \infty}f(x_n)=f(x_0).$$ Due to $$\lim_{r\to x,r\in\mathbb{Q}}f(r)=f(x)\ \ (*)$$ we know that for any given $\epsilon>0$ there exists $\delta>0$ s.t. when $|r-x_n|<\delta$ and $r\in\mathbb{Q}$, $$|f(r)-f(x_n)|<\epsilon.$$ Now take $\epsilon_n=\frac{1}{n}$; then $\exists\ \delta_n'>0$ s.t. when $|r-x_n|<\delta_n'$ and $r\in\mathbb{Q}$, $$|f(r)-f(x_n)|<\epsilon_n=\frac{1}{n}.$$ Let $\delta_n=\min\{\delta_n',1/n\}$; obviously $\delta_n\leq\frac{1}{n}$. Then take $r_n$ such that $|r_n-x_n|<\delta_n\leq\frac{1}{n}$, which satisfies $$|f(r_n)-f(x_n)|<\epsilon_n=\frac{1}{n}.$$ By $|r_n-x_n|<\frac{1}{n}$ and $x_n\to x_0$, we know that $r_n\to x_0.$ Combining $(*)$, we get $$\lim\limits_{n\to \infty}f(r_n)=f(x_0).$$ For any given $\epsilon>0$, $\exists N>0$ s.t. when $n>N$, $$\frac{1}{n}<\frac{\epsilon}{2}\ \text{and}\ |f(r_n)-f(x_0)|<\frac{\epsilon}{2}.$$ So $|f(x_n)-f(x_0)|\leq|f(x_n)-f(r_n)|+|f(r_n)-f(x_0)|<\frac{1}{n}+\frac{\epsilon}{2}<\epsilon.$ The result follows.
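A small numerical illustration of this construction (an editorial addition): take $f(x)=x^2$, $x_0=\sqrt{2}$, any sequence $x_n\to x_0$, and rationals $r_n$ with $|r_n-x_n|<1/n$; both printed differences shrink, so $f(x_n)\to f(x_0)$ by the triangle inequality.

```javascript
const f = x => x * x, x0 = Math.SQRT2;
for (let n = 1; n <= 6; n++) {
  const xn = x0 + 1 / (n * n);                    // any sequence converging to x0
  const rn = Math.round(xn * 10 ** n) / 10 ** n;  // a rational with |rn - xn| <= 0.5 * 10^-n < 1/n
  console.log(n, Math.abs(f(rn) - f(xn)), Math.abs(f(rn) - f(x0)));
}
```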
http://project-navel.com/navel/news/magazines/2003-06.html | # ŐV - fڎ
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-9-quadratic-relations-and-conic-sections-9-7-solve-quadratic-systems-9-7-exercises-skill-practice-page-661/15 | ## Algebra 2 (1st Edition)
Published by McDougal Littell
# Chapter 9 Quadratic Relations and Conic Sections - 9.7 Solve Quadratic Systems - 9.7 Exercises - Skill Practice - Page 661: 15
#### Answer
$(-1,-4),(-6.5,7)$
#### Work Step by Step
Substituting the second equation ($y=-6-2x$) into the first one we get:
$$4x^2-5(-6-2x)^2=-76$$
$$4x^2-5(36+24x+4x^2)=-76$$
$$-16x^2-120x-104=0$$
$$2x^2+15x+13=0$$
$$(x+1)(2x+13)=0$$
Thus $x=-1$ or $x=-6.5$. If $x=-1$, then $y=-4$, and if $x=-6.5$, then $y=7$. Thus the solutions are: $(-1,-4),(-6.5,7)$
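A quick numeric check of both pairs (added for illustration, not part of the original solution):

```javascript
// Verify each candidate solution against 4x^2 - 5y^2 = -76 and y = -6 - 2x
for (const [x, y] of [[-1, -4], [-6.5, 7]]) {
  console.log(4 * x * x - 5 * y * y === -76, y === -6 - 2 * x); // true true for both pairs
}
```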
https://tclcertifications.com/my-big-gty/237f36-sum-of-exponential-distribution | Jan 02, 2021
sum of exponential distribution
If $X_1,\dots,X_n$ are independent, identically distributed exponential random variables with mean $1/\lambda$, their sum is a Gamma random variable: $Z=\sum_{i=1}^{n}X_{i}$, where $Z$ is the gamma random variable with rate parameter $\lambda$. An interesting property of the exponential distribution is that it can be viewed as a continuous analogue of the geometric distribution; typical examples are $X=$ the lifetime of a radioactive particle and $X=$ how long you have to wait for an accident to occur at a given intersection. Defining $S_n$ as the waiting time for the nth event, i.e., the arrival time of the nth event, $E(S_n)=\sum_{i=1}^{n}E(T_i)=n/\lambda$.

Now let $X_1,\dots,X_n$ instead be independent exponential random variables with pairwise distinct parameters. A paper on this same topic has been written by Markus Bibinger; you can read about it, together with further references, in "Notes on the sum and maximum of independent exponentially distributed random variables with different scale parameters" by Markus Bibinger.

This has been the quality of my life for most of the last two decades. Desperately searching for a cure. Then, some days ago, the miracle happened again and I found myself thinking about a theorem I was working on in July. And once more, with a great effort, my mind, which is not so young anymore, started her slow process of recovery. I concluded this proof last night.
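A Monte Carlo sketch of the identically-distributed case (an editorial addition; variable names are mine): the empirical mean of $S_n=\sum_{i=1}^{n}X_i$ matches $n/\lambda$.

```javascript
const lambda = 2, n = 5, trials = 200000;
const expDraw = () => -Math.log(1 - Math.random()) / lambda; // inverse-CDF sampling of Exp(lambda)
let total = 0;
for (let t = 0; t < trials; t++) {
  let s = 0;
  for (let i = 0; i < n; i++) s += expDraw();
  total += s;
}
console.log(total / trials, n / lambda); // both close to 2.5
```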
https://rd.springer.com/article/10.1007%2FBF00276205
# On a transcendental equation in the stability analysis of a population growth model
## Summary
We consider the rate equation $\dot n = rn$ for the density $n$ of a single-species population in a constant environment. We assume only that there is a positive constant solution $n^*$, that the rate of increase $r$ depends on the history of $n$, and that $r$ decreases for large $n$. The stability properties of the solution $n^*$ depend on the location of the eigenvalues of the linearized functional differential equation. These eigenvalues are the complex solutions $\lambda$ of the equation $$\lambda + \alpha\int_{-1}^{0} e^{\lambda a}\, ds(a) = 0$$ with $\alpha>0$ and $s$ increasing, $s(-1)=0$, $s(0)=1$. We give conditions on $\alpha$ and $s$ which ensure that all eigenvalues have negative real part, or that there are eigenvalues with positive real part. In the case of the simplest smooth function $s$ ($s=\mathrm{id}+1$), we obtain a theorem which describes the distribution of all eigenvalues in the complex plane for every $\alpha>0$.
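One concrete instance (an editorial sketch, not from the paper): when all of the mass of the Stieltjes measure $ds$ sits at $a=-1$, the equation reduces to $\lambda+\alpha e^{-\lambda}=0$, the characteristic equation of the linear delay equation $\dot x(t)=-\alpha x(t-1)$ studied by Hutchinson and Wright, whose zero solution is stable exactly for $0<\alpha<\pi/2\approx 1.5708$. A forward-Euler simulation shows the loss of stability:

```javascript
// Euler integration of x'(t) = -alpha * x(t - 1) with constant history x = 1 on [-1, 0]
function lateAmplitude(alpha, tMax = 60, dt = 0.01) {
  const delay = Math.round(1 / dt);
  const x = new Array(delay + 1).fill(1); // history samples on [-1, 0]
  let peak = 0;
  for (let i = 0, steps = Math.round(tMax / dt); i < steps; i++) {
    x.push(x[x.length - 1] - dt * alpha * x[x.length - 1 - delay]);
    if (i * dt > tMax - 10) peak = Math.max(peak, Math.abs(x[x.length - 1]));
  }
  return peak; // amplitude over the last 10 time units
}
console.log(lateAmplitude(1.4)); // small: all eigenvalues in the left half plane, decay
console.log(lateAmplitude(1.7)); // large: an eigenvalue pair has crossed to Re(lambda) > 0
```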
Walther, H.-O.: On a transcendental equation in the stability analysis of a population growth model. J. Math. Biology 3, 187–195 (1976). https://doi.org/10.1007/BF00276205
https://www.compadre.org/portal/../picup/exercises/exercise.cfm?A=ParticleAccelerator | +
Particle Accelerator!
Developed by Christopher Orban - Published May 23, 2017
This exercise illustrates a charged particle being accelerated through two charged plates. The student will explore how changing the mass, charge, and the spacing between the plates affects the final velocity of the particle. Although looking at the code is an important part of this exercise, there is only a small amount of coding involved. The student will change the values of a few different variables. Much of the work of this exercise is in doing analytic calculations for the final speed of the particle. Coding is still important to this exercise because there is a "Particle repulsion" exercise that will follow this one in which the student will modify the code to allow two particles to repel from each other.

This exercise will use a javascript based programming language called [p5.js](http://p5js.org) that is very similar to C and C++ programming. (Note: If you are familiar with C or C++ the main difference you will see is that there is no main() function and instead the draw() function serves this role.) **Importantly, this exercise can be completed using any computer or chromebook without downloading any software!**

This exercise is broken up into two parts because there are two different, equivalent ways to think about the acceleration of a proton from two charged plates: you can think of the proton as having a constant acceleration due to the electric field until it leaves the plates, or you can think of the proton as being accelerated over a "potential" that increases its kinetic energy by an amount that depends on the electric field and the spacing between the plates. Either way you get the same answer, but it's interesting to think about it from two different points of view.

This exercise is designed for an algebra-based physics class at the college or high school level. It may also be useful for calculus-based physics for non-majors (e.g. engineering & science majors). This exercise is part of a series of exercises developed by Prof. Chris Orban. The next exercise is on the [Repulsion between two charges (with application to fusion!)](http://www.compadre.org/PICUP/exercises/exercise.cfm?I=253&A=ParticleRepulsion)

There are pre-and-post assessment questions associated with this exercise (not available here) that are being used in an educational research study. If interested to collaborate on that study please e-mail Prof. Chris Orban ([email protected]). The first paper from this study [is available at this link](https://doi.org/10.1119/1.5058449), and the second paper, which discusses the electromagnetism exercises, [is available at this link](http://dx.doi.org/10.1119/perc.2017.pr.067).
Subject Area: Electricity & Magnetism
Level: High School and First Year
Language: Javascript
Learning Objectives:
1. Students will gain experience applying kinematics equations from classical mechanics to a situation with a proton being accelerated from between two charged plates. This will involve performing an analytic calculation for the final speed of the proton that should closely match the result of the simulation.
2. Students will gain intuition on how the charge and mass of a particle affects its behavior in an electric field by modifying the code to change the charge, mass and direction of the electric field and seeing what happens. After making these changes students will also perform analytic calculations for the final speed of the particle that should match the result of the simulation.
3. Students will also learn how to think about charged plates in terms of potential difference. Students will perform analytic calculations that use potential difference to determine the final speed of the particle.
Time: 60 min
These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use (e.g., Excel, Python, MATLAB, etc.).
### Part 1. Electric fields and acceleration!
In this exercise we will make a simulation of a particle being accelerated between two plates. The relevant equations in this case are these:

$$v_{xf} = v_{xi} + a_x t$$

$$v_{xf}^2 = v_{xi}^2 + 2 a_x \Delta x$$

$$F_x = m a_x$$

$$F_x = q E$$

We will use some [unusual force and electric field units](http://www.physics.ohio-state.edu/~orban/physics_coding/units.html) in this exercise, but other things should be more familiar. Specifically, the variable q will be in terms of the elementary charge, so a proton will be q = 1.0;. The variable mass will be in atomic mass units, so a proton would be mass = 1.0;.

Step 1. Check out this nice animation of a [proton being accelerated through two charged plates](https://www.asc.ohio-state.edu/orban.14/stemcoding/accelerator2/accelerator.html). In the animation, notice that the initial x velocity ($v_{xi}$) is non-zero.

Step 2. Try out the accelerator code in an editor. [Click on this link to open the accelerator code in a p5.js editor](https://editor.p5js.org/ChrisOrban/sketches/1selx5sR) and press play there to run the code. It should behave the same way it did [with the link you were given in Step 1](https://www.asc.ohio-state.edu/orban.14/stemcoding/accelerator2/accelerator.html). Important! Create an account with the editor or sign in to your account, then click "Duplicate" so you can have your own version of the code!

Step 3. Try to make sense of the code behind the animation. Think especially about this section:
if ( ( x > x_plate_left) & (x < x_plate_right))
{
deltaVx = (q*E/mass)*dt;
t += dt;
}
This is the change in velocity each timestep (deltaVx) when the particle is in between the two plates. The quantity in the parentheses (q*E/mass) is the acceleration.

Optional Step: Plot $v_x$ versus time by adding this code after display(); but before the end of draw():
graph1.addPoint(vx);
graph1.display();
This should produce a plot of vx versus time in the top right corner of the simulation.

Step 4. Calculate the acceleration. The final velocity at the end of the animation is 55.5 meters per second. (Ok, really it's pixels per second, but let's just think about it as meters per second. The width of the screen would be 750 meters.) The particle spends t = 9.1 seconds in the electric field. If we can just figure out the acceleration, we should be able to use this formula to relate the initial velocity to the final velocity:

$$v_{xf} = v_{xi} + a_x t$$

What should we use for $a_x$ in this case? Use q = 1, E = 5, and m = 1 to figure it out. You should be able to come up with 55.5 meters per second for $v_{xf}$ with the correct value for $a_x$. Do not simply use the 55.5 meters per second result to figure out what the acceleration was! We are doing a consistency check on the code! Consistency is key!

Step 5. Imagine you didn't know the time. In a laboratory setting it is often hard to figure out exactly how much time a particle spends in the electric field. But we still know the initial velocity, the strength of the electric field, the mass of the particle and the separation ($\Delta x$) between the two plates, which in this case is $\Delta x = 500-200 = 300$ meters. Use this information with this equation to come up with the 55.5 meters per second result for $v_{xf}$:

$$v_{xf}^2 = v_{xi}^2 + 2 a_x \Delta x$$

Show that you can get 55.5 meters per second for $v_{xf}$ from this equation. This is another consistency check for the code. (A short code sketch checking this calculation appears after the Step 9 suggestion list below.)

Step 6. See what happens if the charge of the particle is doubled. Set q = 2.0 instead of 1.0. Does the charge of the particle affect the final velocity? Why or why not?

Step 7. See what happens if the mass of the particle is 2.0 instead of 1.0. Change the charge of the particle back to 1.0 so that the simulation is like accelerating a Deuteron instead of a proton. (Note: Deuterons have about twice the mass of protons because a Deuteron is a proton and a neutron that are stuck together by nuclear forces. Protons and neutrons have roughly the same mass, so the total mass of a Deuteron is about twice that of the proton. The net charge of the Deuteron is the same as the proton because neutrons are electrically neutral particles with no charge.) Predict the final velocity of the Deuteron and check to see if your expectation is proven right! Show your calculation, prediction and measurement in what you turn in for this lab.

Step 8. What happens if you change the electric field from 5 (the default value) to -5? Notice that the direction of the field lines changes when you do this. How fast does a Deuteron need to be traveling in order to get through the plate? Calculate why it has to be this fast!

Optional: Step 9. (Extra Credit) Modify the program in some way (choose one or more). Suggestions/inspiration for modifying the program:
• add a component of the initial velocity in the y direction and predict the final speed
• Make the code smart enough to use negative(x,y) if the charge is less than zero and positive(x,y) if the charge is greater than zero.
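For reference, here is a short analytic check of Steps 4 and 5 (added for illustration with the same numbers as above; it is not part of the original exercise):

```javascript
const q = 1, E = 5, mass = 1, vxi = 10, dx = 300; // Part 1 values; plates at x = 200 and 500
const a = q * E / mass;                           // acceleration between the plates
const vxf = Math.sqrt(vxi * vxi + 2 * a * dx);    // kinematics, no time needed
console.log(vxf.toFixed(1));                      // ~55.7 m/s, close to the 55.5 reading
console.log(((vxf - vxi) / a).toFixed(1));        // ~9.1 s spent between the plates
```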
### How to get full credit for Part 1!!!
In what you turn in you should answer the questions asked in this programming lab:

1. Make sure to explain why the final velocity turned out to be 55.5 meters per second (Steps 4 & 5). As best you can, write down the equations that you used to calculate your number and write down the number you got. You may not get exactly 55.5 but that's ok. Try to get within 10% of that number.

2. Say in words whether increasing the charge of the particle from 1.0 to 2.0 affects the final velocity (Step 6). Just write a sentence. Say whether the final velocity increases, decreases or stays the same. No calculation necessary.

3. Change the mass to 2.0 and the charge back to 1.0 so that the particle is a Deuteron. Predict the final velocity and measure it (Step 7). Make sure your calculation matches the measured result to 10%.

4. Describe what happens when the electric field is negative and figure out how fast the Deuteron needs to be traveling (Step 8). Just change the initial velocity of the Deuteron until it passes through. Then calculate why it had to be this way. Write down the number for how fast it should be going. It may not match your empirical result exactly, but it should agree to maybe 10%.

5. The extra credit really is optional. You can still get full credit without doing the extra credit as long as you've done everything else correctly.
### Part 2. Electric fields and electric potential!
Thinking about the problem in terms of potential difference, the relevant equations are:

$$\Delta V = E d$$

$$\Delta KE = q \Delta V$$

$$KE = \frac{1}{2} m v_x^2$$

$$\Delta KE = KE_f - KE_i$$

where $d$ is the distance between the plates. It should be clear from the animation that the initial kinetic energy is non-zero (because $v_{xi}$ is non-zero). The purpose of this programming lab is to show that the potential difference way of thinking about the problem is just as useful as thinking about the problem in terms of forces, and maybe even more useful, if the energy is what we care about!!! You can also use your code from the previous exercise so long as you **change the charge of the particle back to +1 and the mass of the particle back to 1.0!!!**

Step 3. The final velocity at the end of the animation is 55.5 meters per second. The initial velocity was 10 meters per second and the acceleration occurs over 300 meters. Go back to the previous programming lab and remember how you were able to explain why the final velocity is 55.5 meters per second. In the last programming lab we thought about the problem in terms of forces. Write down the equations that explained the 55.5 meters per second in what you turn in for this lab.

Step 4. The potential difference between the plates is $\Delta V = E d$ where $E$ is 5 and $d$ is 300 meters. The potential difference is therefore $\Delta V = E d = 1500$. Choose a different value for $E$, and choose a different value for $d$ by changing the variables x_plate_left and x_plate_right. Make sure that the new values of $E$ and $d$ multiply to $\Delta V = 1500$, and make sure $d < 750$ meters or else one of the plates will be off-screen. Check to see if approximately the same final velocity (55.5 meters per second) is achieved. (The final velocity should be the same because the potential difference is the same. This is one reason why the potential difference is such a useful concept.) When you turn in this lab make sure your code has the values of $E$ and $d$ that you chose and write down the measured value of the velocity to confirm that this worked.

Step 5. With your new values for $E$ and $d$, check to see what happens if the charge of the particle is doubled. Set q = 2.0 instead of 1.0. Does the charge of the particle affect the final velocity? Why or why not? Do you get the same final velocity as with the original values of $E$ and $d$? Is it faster or slower?

Step 6. With your new values for $E$ and $d$, check to see what happens if the mass of the particle is 2.0 instead of 1.0, and change the charge of the particle back to 1.0. As mentioned in the last programming lab, this is like changing the particle from a proton to a Deuteron. Predict the final velocity and check to see if your expectation is proven right. Do this calculation three different ways: (1) thinking about the problem in terms of acceleration and the time, as in Part 1, (2) thinking about the problem in terms of acceleration and the distance without knowing the time, as in Part 1, and (3) thinking about the problem in terms of the potential difference and the change in kinetic energy. You should be able to show that all three approaches give essentially the same answer.
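And the matching check from the potential-difference viewpoint (again an added sketch; the pair $E=10$, $d=150$ is just one example with $Ed=1500$):

```javascript
const q = 1, mass = 1, vxi = 10;
const E = 10, d = 150;                                    // any pair with E * d = 1500
const deltaV = E * d;                                     // potential difference between the plates
const vxf = Math.sqrt(vxi * vxi + 2 * q * deltaV / mass); // from deltaKE = q * deltaV
console.log(vxf.toFixed(1));                              // ~55.7 m/s again: only E*d matters
```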
### How to get full credit for Part 2!!!
In what you turn in you should answer the questions asked in this programming lab:

1. Write down the equations that gave 55.5 meters per second from Part 1. Feel free to look back at Part 1 and just put these same equations here.

2. Choose new values for $E$ and $d$ (Step 4a). Make sure that the code you submit contains the new values for $E$ and $d$. These should multiply to $\Delta V = Ed = 1500$, and make sure $d < 750$ meters or else one of the plates will be off screen.

3. State whether you get approximately the same final velocity with the new values of $E$ and $d$ (Step 4b). This is just a yes or no question. The answer should be yes (as mentioned in Step 4) or else you've done something wrong.

4. Say whether increasing the charge of the particle increases the final speed (Step 5). This is just a yes/no question. Make sure you change $E$ and $d$ before you test this.

5. Change the mass and show three different ways to calculate the final velocity (Step 6). You can calculate the final velocity using acceleration with time, or acceleration with distance, or using the change in electric potential and the change in kinetic energy. Write down the equations as best you can in the comments. Don't just show the result. You should get approximately the same answer.
http://itfeature.com/statistics/measure-of-dispersion/descriptive-statistics-multivariate-data-set | # Descriptive Statistics Multivariate Data set
Much of the information contained in the data can be assessed by calculating certain summary numbers, known as descriptive statistics: for example, the arithmetic mean (a measure of location) and the average of the squares of the distances of all of the numbers from the mean (the variance, a measure of spread or variation). Here we discuss descriptive statistics for a multivariate data set.

We shall rely most heavily on descriptive statistics that measure location, variation, and linear association.
## Measure of Location
The arithmetic average of $n$ measurements $(x_{11}, x_{21}, \ldots, x_{n1})$ on the first variable (defined in Multivariate Analysis: An Introduction) is

Sample mean $= \bar{x}_{1}=\frac{1}{n} \sum _{j=1}^{n}x_{j1}$
The sample mean for $n$ measurements on each of the p variables (there will be p sample means)
$\bar{x}_{k} =\frac{1}{n} \sum _{j=1}^{n}x_{jk} \mbox{ where } k = 1, 2, \cdots , p$
Measure of spread (variance) for n measurements on the first variable can be found as
$s_{1}^{2} =\frac{1}{n} \sum _{j=1}^{n}(x_{j1} -\bar{x}_{1} )^{2}$ where $\bar{x}_{1}$ is the sample mean of the $x_{j1}$'s.
Measure of spread (variance) for n measurements on all variable can be found as
$s_{k}^{2} =\frac{1}{n} \sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2} \mbox{ where } k=1,2,\dots ,p$
The square root of the sample variance is the sample standard deviation, i.e.
$s_{k} =\sqrt{s_{kk}} =\sqrt{\frac{1}{n} \sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2}} \mbox{ where } k=1,2,\cdots ,p$
Sample Covariance
Consider $n$ pairs of measurements on each of Variable 1 and Variable 2
$\left[\begin{array}{c} {x_{11} } \\ {x_{12} } \end{array}\right],\left[\begin{array}{c} {x_{21} } \\ {x_{22} } \end{array}\right],\cdots ,\left[\begin{array}{c} {x_{n1} } \\ {x_{n2} } \end{array}\right]$
That is $x_{j1}$ and $x_{j2}$ are observed on the jth experimental item $(j=1,2,\cdots ,n)$. So a measure of linear association between the measurements of $V_1$ and $V_2$ is provided by the sample covariance
$s_{12} =\frac{1}{n} \sum _{j=1}^{n}(x_{j1} -\bar{x}_{1} )(x_{j2} -\bar{x}_{2} )$
(the average of the products of the deviations from their respective means). More generally,

$s_{ik} =\frac{1}{n} \sum _{j=1}^{n}(x_{ji} -\bar{x}_{i} )(x_{jk} -\bar{x}_{k} )$ for $i=1,2,\ldots,p$ and $k=1,2,\ldots,p$.

It measures the association between the $i$th and $k$th variables.
Variance is the most commonly used measure of dispersion (variation) in the data and it is directly proportional to the amount of variation or information available in the data.
## Sample Correlation Coefficient
The sample correlation coefficient for the $i$th and $k$th variables is

$r_{ik} =\frac{s_{ik} }{\sqrt{s_{ii} } \sqrt{s_{kk} } } =\frac{\sum _{j=1}^{n}(x_{ji} -\bar{x}_{i} )(x_{jk} -\bar{x}_{k} ) }{\sqrt{\sum _{j=1}^{n}(x_{ji} -\bar{x}_{i} )^{2} } \sqrt{\sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2} } }$

$\mbox{ where } i=1,2,\ldots,p \mbox{ and } k=1,2,\ldots,p$

Note that $r_{ik} =r_{ki}$ for all $i$ and $k$, and $r$ lies between -1 and +1. $r$ measures the strength of the linear association. If $r=0$, there is no linear association between the components. The sign of $r$ indicates the direction of the association.
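The formulas above translate directly into code. A compact sketch (added for illustration; the small data matrix is invented):

```javascript
// X is an n x p data matrix; all divisors are n, matching the formulas above
function colMeans(X) {
  const n = X.length, p = X[0].length, m = new Array(p).fill(0);
  for (const row of X) for (let k = 0; k < p; k++) m[k] += row[k] / n;
  return m;
}
function covariance(X) {
  const n = X.length, p = X[0].length, m = colMeans(X);
  const S = Array.from({ length: p }, () => new Array(p).fill(0));
  for (const row of X)
    for (let i = 0; i < p; i++)
      for (let k = 0; k < p; k++) S[i][k] += (row[i] - m[i]) * (row[k] - m[k]) / n;
  return S;
}
function correlation(X) {
  const S = covariance(X);
  return S.map((row, i) => row.map((s, k) => s / Math.sqrt(S[i][i] * S[k][k])));
}
const X = [[42, 4.0], [52, 5.5], [48, 4.5], [58, 3.5]];
console.log(colMeans(X));    // sample mean of each variable
console.log(covariance(X));  // diagonal: s_kk; off-diagonal: s_ik
console.log(correlation(X)); // r_ik in [-1, 1], with r_ii = 1
```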
https://oxfordre.com/physics/view/10.1093/acrefore/9780190871994.001.0001/acrefore-9780190871994-e-20?rskey=I1Fbul
# Solar Flares
## Summary and Keywords
A solar flare is a transient increase in solar brightness powered by the release of magnetic energy stored in the Sun’s corona. Flares are observed in all wavelengths of the electromagnetic spectrum. The released magnetic energy heats coronal plasma to temperatures exceeding ten million Kelvins, leading to a significant increase in solar brightness at X-ray and extreme ultraviolet wavelengths. The Sun’s overall brightness is normally low at these wavelengths, and a flare can increase it by two or more an orders of magnitude. The size of a given flare is traditionally characterized by its peak brightness in a soft X-ray wavelength. Flares occur with frequency inversely related to this measure of size, with those of greatest size occuring less than once per year. Images and light curves from different parts of the spectrum from many different flares have led to an accepted model framework for explaining the typical solar flare. According to this model, a sheet of electric current (a current sheet) is first formed in the corona, perhaps by a coronal mass ejection. Magnetic reconnection at this current sheet allows stored magnetic energy to be converted into bulk flow energy, heat, radiation, and a population of non-thermal electrons and ions. Some of this energy is transmitted downward to cooler layers, which are then evaporated (or ablated) upward to fill the coronal with hot dense plasma. Much of the flares bright emission comes from this newly heated plasma. Theoretical models have been proposed to describe each step in this process.
# Observation and Overview
## Light Curves
A solar flare is a sudden brightening of a small portion of the Sun's surface, powered by the release of stored magnetic energy. In current practice, a flare is identified and classified as a peak in the Sun's total brightness in the 1–8Å soft X-ray (SXR) band of NOAA's GOES satellite (The Geostationary Operational Environmental Satellite of the US National Oceanic and Atmospheric Administration). During a strong flare the Sun's brightness in this particular band can increase more than 100-fold as shown in Figure 1a. A peak exceeding $10^{-4}$, $10^{-5}$, or $10^{-6}\,\mathrm{W/m^2}$ is categorized as a flare of class X, M, or C respectively. The example flare peaks at $7\times10^{-5}\,\mathrm{W/m^2}$, designating it class M7. While describing flares in generality, the well-observed example shown in Figure 1 will be used for concrete illustration.
Figure 1. Light curves from an M7 flare on April 18, 2014. (a) SXR emission in the 1–8Å band from GOES, on a logarithmic scale. The left axis gives the intensity as a fraction of pre-flare levels, while the right gives the intensity in $\mathrm{W/m^2}$, along with ranges for X, M, and C flares. (b) Integrated intensities from ions of iron: Fe xx (red, 133Å, $T=9$ MK), Fe xviii (magenta, 94Å, $T=6$ MK), Fe xvi (green, 335Å, $T=3$ MK), from SDO/EVE, and Fe ix (blue, 171Å, $T=0.6$ MK) from SDO/AIA. All curves show the relative difference from pre-flare level on a logarithmic scale. (c) The 1600Å bandpass of SDO/AIA, showing relative difference from pre-flare on a linear scale. (d) Hard X-rays in 25–50 keV (blue) and 50–100 keV (red) band, by RHESSI, on a linear scale. (e) The 1–8Å curve from (a), now on a linear scale (blue), and its time derivative in black. The bottom axis is UTC, the top is in minutes from peak in 1–8Å. Diamonds on (b), (c) and (d) mark the times of the images in Figure 2.
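The classification rule above is simple enough to state as code. A minimal sketch (an editorial addition; only the X/M/C classes mentioned in the text are included):

```javascript
// Peak 1-8 Angstrom flux in W/m^2 -> GOES class, e.g. 7e-5 -> "M7.0"
function goesClass(peakFlux) {
  const bands = [["X", 1e-4], ["M", 1e-5], ["C", 1e-6]];
  for (const [letter, threshold] of bands)
    if (peakFlux >= threshold) return letter + (peakFlux / threshold).toFixed(1);
  return "below C1.0";
}
console.log(goesClass(7e-5)); // "M7.0", the example flare of Figure 1
```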
A flare includes simultaneous brightening, to some degree, in virtually every wavelength in the spectrum. The extreme ultraviolet (EUV) spectral lines shown in Figure 1b brighten by anywhere from 3% (Fe ix) to 1,000% (Fe xx) above pre-flare levels. The wide range is due mostly to the wide range of quiescent values on which the comparison is based. The Sun is brightest in the visible, around 5,000Å, and even a large flare will brighten these wavelengths by only about $0.01\%$ (Kopp, Lawrence, & Rottman, 2005; Kretzschmar et al., 2010). Therefore, only extremely large flares are detectable as increases in the overall visible brightness of the Sun, or as comparable increases in its bolometric luminosity. It is easier to detect a localized increase in visible surface brightness; a flare large enough to show such an increase is termed a white light flare.
Light curves of different wavelengths from a given flare tend toward one of two characteristic behaviors. Some radiation, such as microwaves, gamma rays, or the hard X-ray (HXR) curve shown in Figure 1d, originates from the flare’s footpoints, and has a brief, impulsive light curve. In general, these persist only for the initial 1–10 minutes, known as the flare’s impulsive (or rise) phase. Other radiation, such as the SXR and EUV curves from Figures 1a and 1b, originates from the coronal plasma, and evolves more gradually. These light curves rise during the impulsive phase, and then decay for times ranging from 10 minutes to over 10 hours, known as the gradual (or main) phase.
The time derivative of a coronal light curve, most commonly the GOES 1–8Å curve shown in Figure 1e, tends to resemble the more impulsive footpoint light curves. This resemblance is known as the Neupert effect (Dennis & Zarro, 1993; Neupert, 1968), and is taken as evidence that energy deposited in the lower atmosphere heats and ablates material upward into the corona. This ablation process is commonly called chromospheric evaporation (Canfield et al., 1980; Antonucci et al., 1999), even though it is related to familiar chemical evaporation only by analogy.
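The Neupert effect lends itself to a simple quantitative test: differentiate the SXR light curve and compare it with an impulsive-phase light curve. A minimal sketch, assuming both curves have already been resampled onto a common time grid (the array names are placeholders):

```python
import numpy as np

def neupert_correlation(t, sxr, hxr):
    """Pearson correlation between d(SXR)/dt and the HXR light curve,
    both sampled on the common time grid t [s]. A strong correlation
    during the impulsive phase is the signature of the Neupert effect."""
    dsxr_dt = np.gradient(sxr, t)      # numerical time derivative
    return np.corrcoef(dsxr_dt, hxr)[0, 1]
```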
The gradually evolving coronal plasma has a temperature typically ranging from 1–30 MK. In the example from Figure 1, a ratio of SXR bands shows a temperature peaking at $T_{\max}\simeq17$ MK, and decreasing gradually thereafter. This cooling behavior is generally reflected in the light curves from progressively cooler ion species peaking at progressively later times (Aschwanden & Alexander, 2001; Qiu & Longcope, 2016; Warren, Mariska, & Doschek, 2013); compare the four curves in Figure 1b.
Emission during the impulsive phase from microwaves and HXR often appears to originate from a population of energetic electrons interacting with the coronal plasma or the footpoints (Krucker et al., 2008). Spectra show this population to have a non-Maxwellian (i.e., non-thermal) distribution of energies—typically a power law, above some lower cutoff, $E_c$. The flare process evidently produces such a non-thermal electron population, at least during its impulsive phase.
Photons with energies above 1 MeV, that is, gamma-rays, are observed during the impulsive phases of some large flares (Murphy, 2007). These sometimes show evidence of spectral lines from nuclear processes, providing clear evidence of non-thermal ions of high energies. It is generally believed that ions are being accelerated in many, if not all, flares, even though gamma ray signatures can be observed in only the largest ones.
Flares, especially large ones, are frequently associated with the eruption of mass-loaded flux ropes, known as coronal mass ejections (CMEs). Several investigators have attempted to determine whether flares cause, or at least precede, CMEs or vice versa (Zhang, Dere, Howard, Kundu, & White, 2001). There are, however, numerous well-studied cases of flares occurring without CMEs (Chen et al., 2015b; Yashiro, Gopalswamy, Akiyama, Michalek, & Howard, 2005), called compact or confined flares, and of CMEs occurring without flares (Munro et al., 1979). This makes it clear that there can be no invariable cause–effect relation between these two distinct phenomena (Gosling, 1993). They are simply associated phenomena. Their association is most likely when the flare is large and has an extended gradual phase; such cases are known as eruptive flares or long-duration events.
In all cases, the flare is the component that produces the enhanced brightness, intimately related to the Sun’s lower atmosphere. The enhancement of EUV and X-rays increases the rate of ionization in the upper atmosphere of the Earth and other planets (Chamberlin, Woods, & Eparvier, 2008; Fuller-Rowell & Solomon, 2010). A flare thereby affects the ionosphere immediately, while a CME can have more varied effects at Earth, but only when the magnetized mass impacts its magnetosphere.
The brightness enhancement of a flare demands that it be associated with the release of energy beyond the Sun’s steady luminous radiation. It is relatively straightforward to compute the total radiative loss from the hot coronal plasma, since this is readily observed in EUV and SXR. The two SXR bands from GOES provide an estimate of temperature and emission measure from which total radiative losses can be computed. Doing so for the example in Figure 1 reveals a peak coronal radiation power, $P_{c,r}\simeq10^{27}\ \mathrm{erg/s}$, roughly 20 times greater than the power in the narrow 1–8Å bandpass of GOES. Over its multi-hour duration the coronal plasma of this particular flare radiates $\Delta E_{c,r}\simeq5\times10^{30}$ erg. These values are fairly average, and X flares will typically show coronal radiation up to $P_{c,r}\sim10^{29}\ \mathrm{erg/s}$ and total energies up to a few times $10^{32}$ erg. More careful computations, made using the full differential emission measure of the coronal plasma, yield comparable values in general.
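The quoted coronal power follows from the GOES-derived temperature $T$ and volumetric emission measure $\mathrm{EM}=\int n_e^2\,dV$ through $P_{c,r}=\mathrm{EM}\,\Lambda(T)$, where $\Lambda(T)$ is the optically thin radiative loss function. A sketch using a crude single power-law fit for $\Lambda$ (the coefficient is an assumed approximation, roughly valid above a few MK):

```python
def radiative_power(em, T):
    """Optically thin radiative power [erg/s] from plasma of emission
    measure em [cm^-3] at temperature T [K], using the rough fit
    Lambda(T) ~ 1.9e-19 * T**-0.5 erg cm^3/s (an assumption)."""
    return em * 1.9e-19 * T**-0.5

# An emission measure of ~2e49 cm^-3 at 17 MK yields ~9e26 erg/s,
# comparable to the peak coronal power quoted for the M7 example.
print(radiative_power(2e49, 1.7e7))
```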
The coronal plasma is only part of the flare, so the power it radiates accounts for only a fraction of the flare’s total energy. That total has proven difficult to compute, although several serious attempts have been made to do so in particular cases (Emslie et al., 2004, 2005) or for collections (Emslie et al., 2012). Energy radiated from the lower atmosphere, while at lower temperature, will usually exceed, even far exceed, the coronal losses. This is, however, a much smaller fraction of the pre-flare losses at those wavelengths, and is thus far more difficult to measure. Other contributions cannot be quantified without a good model for the flare process: the power deposited in the chromosphere by non-thermal electrons can often exceed the losses to coronal radiation, but it is not entirely clear whether the deposited energy is ultimately radiated from the corona or chromosphere (and is therefore already counted) or is lost in some other way. A CME will also carry away energy, but its source may be from the flare, or from the much larger coronal volume it affects. In light of these factors, there is no simple relationship between the peak X-ray flux, or flare class, and the total energy powering a solar flare.
## Morphology
Solar flares almost always occur in active regions (ARs) of relatively strong, complex magnetic field (see Figure 2a). The most intense chromospheric emission is generally organized into elongated structures, called flare ribbons. In the prototypical case, called a two-ribbon flare, there is one ribbon in each of the AR’s magnetic polarities, and they are separated by the polarity inversion line (PIL; see Figure 2b). The ribbons generally move apart slowly over the course of the flare, providing evidence of progressing magnetic reconnection. There is also an apparent motion, especially early on, related to the formation and elongation of the ribbons (Fletcher, Pollock, & Potts, 2004; Qiu, 2009). Some ribbons can have very complex structure on their finest scales and this may undergo motions more disorderly than the foregoing description implies (Fletcher & Hudson, 2001).
Coronal wavelengths show loops of hot plasma tracing out field lines interconnecting the magnetic polarities. The loops often connect points on the opposing ribbons, thereby forming an elongated arcade: the flare arcade (see Figures 2c and 2d). The loops appear later in images from progressively cooler ions, consistent with the cooling plasma scenario (Aschwanden & Alexander, 2001; Warren et al., 2013). As the ribbons spread apart, the loops anchored to them appear to be rising upward. It is generally accepted, however, that individual loops are relatively stationary, or may even be contracting downward (Forbes & Acton, 1996). The apparent upward motion is thus ascribed to the appearance of new loops piling on top of older loops as magnetic reconnection proceeds.
When images are made from HXR emission, they typically show one or more concentrated sources. Often the sources fall at points along the ribbons, but not along the entire extended ribbon (Sakao, 1994). Such sources are attributed to electron deposition at the footpoint(s) of flare loop(s). The yellow contours on Figure 2b show one source on each ribbon, presumably from both footpoints of a single loop, such as the one appearing in Figure 2c. HXR footpoint sources from a single loop can appear of differing strength, as in Figure 2b. One interpretation is that the footpoint with stronger magnetic field mirrors a larger fraction of the precipitating electrons (Goff, Matthews, van Driel-Gesztelyi, & Harra, 2004; Sakao, 1994). The sources can appear to move along the ribbon, probably showing footpoints of different loops in succession. The direction and speed of this motion have been used to infer aspects of the magnetic reconnection, and the electron acceleration (Bogachev, Somov, Kosugi, & Sakao, 2005).
Figure 2. Images of the solar flare on April 18, 2014. All four panels show the same $180''\times150''$ field of view. (a) Line-of-sight magnetogram from SDO/HMI on a linear grey scale. Blue and cyan curves are the leading edge of the ribbons from (b), and magenta curve is the PIL. (b)–(d) show different SDO/AIA images using inverse logarithmic color scales.
Figure 2(b) shows the flare ribbons in a 1600Å image from 12:50. An image made at the same time from the 25–50 keV band of RHESSI is overplotted as yellow contours at 60%, 75%, and 90% of maximum. The magenta curve is the PIL.
Figure 2(c). 94Å (Fe xviii) from 12:55.
Figure 2(d). 171Å image from 13:09. The times of each AIA image are marked by a diamond on the corresponding curves of Figure 1.
HXR images sometimes show a concentrated source between the footpoints, where the loop’s apex is expected to be (Masuda, Kosugi, Hara, Tsuneta, & Ogawara, 1994). Flares at the limb show the source to be located just above the hottest loop visible in softer wavelengths, leading to the term above-the-looptop source for such features (see yellow contours in Figure 3a). Apparent motions of such sources have also been interpreted in terms of the time-dependence of the reconnection process (Sui & Holman, 2003).
## The Standard Model Framework
The various observations have led to a model, or framework, of a generic flare. The model, shown in Figure 3b, is typically cast in terms of an eruptive flare (see Figure 3a), but most features are expected to have counterparts in compact flares. The earliest version is attributed to Carmichael (1964), Sturrock (1968), Hirayama (1974), and Kopp and Pneuman (1976), and is called the CSHKP model. Since then the basic model has been extended and augmented to accommodate new theoretical understanding and new observed features.
Figure 3. The standard flare model. (a) Three images of an eruptive flare on the west limb on September 10, 2017, made by SDO/AIA in its 193Å band. Cyan curves mark the solar limb, with north to the left. Each image shows the same $90''\times270''$ field of view. Yellow curves in the final image are 60%, 75%, and 90% contours from RHESSI’s 25–50 keV image.
Figure 3(b). The geometry of the standard flare model in the same orientation. Blue lines are magnetic field lines, and the red curve is the separatrix field line. A cyan shaded circle is the erupting flux rope, and the red shaded regions are outflows originating from the diffusion region (green ellipse). A blue region shows the most recently closed flux tube which forms the flare loop together with its feet, the two flare ribbons (magenta squares), separated by the PIL.
The model flare is initiated when an erupting flux rope (cyan circle in Figure 3b) pulls open the flux on either side of the PIL, creating a current sheet separating upward from downward open field. Reconnection occurs at some point in the current sheet, designated by a green ellipse labeled X. Open flux is swept inward, and reconnected to form closed field lines which are then swept out by long narrow outflow jets. Loop retraction stops abruptly at a point labeled termination, at the end of the jet. The fully retracted loop becomes the flare loop and its feet form the ribbons, which are seen end-on as magenta boxes in Figure 3b.
According to the foregoing two-dimensional model, the outermost, or leading, edge of the flare ribbon anchors the separatrix (red curve in Figure 3b) which connects to the X-point. The amount of flux reconnected, $\varphi_{rx}$, can be computed by integrating the vertical magnetic flux over which the ribbon appears to sweep (Forbes & Priest, 1984; Poletto & Kopp, 1986; Qiu, Lee, Gary, & Wang, 2002). Such measurements have become reasonably routine and provide reconnection rates typically peaking in the range $\dot\varphi_{rx}\sim3\times10^{17}\ \mathrm{Mx/s}$ in small flares to $3\times10^{19}\ \mathrm{Mx/s}$ for the largest (Tschernitz, Veronig, Thalmann, Hinterreiter, & Pötzi, 2018).
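In practice, $\varphi_{rx}$ is accumulated by summing the unsigned vertical flux of pixels newly swept up by the ribbon. A minimal sketch, assuming a co-aligned magnetogram and a time series of boolean ribbon masks (all variable names are placeholders):

```python
import numpy as np

def reconnected_flux(bz, ribbon_masks, pixel_area):
    """Cumulative reconnected flux [Mx] from a line-of-sight magnetogram
    bz [G] and boolean ribbon masks on the same grid; pixel_area in cm^2.
    Each pixel is counted once, when the ribbon first covers it."""
    counted = np.zeros(bz.shape, dtype=bool)
    total, history = 0.0, []
    for mask in ribbon_masks:
        new = mask & ~counted                     # newly swept pixels
        total += np.sum(np.abs(bz[new])) * pixel_area
        counted |= new
        history.append(total)
    return np.array(history)     # differentiate in time for phi-dot
```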
The reconnection in the standard flare model reflects the structure found in studies of reconnection in generic contexts, and has been verified to some extent through observation (McKenzie, 2002). The diffusion region (green ellipse in Figure 3b) occupies only a small portion of the current sheet, thereby permitting reconnection of the fast, Petschek variety (Forbes, Priest, Seaton, & Litvinenko, 2013; Petschek, 1964). In this mode of reconnection the outflow jets are bounded by slow magnetosonic shocks, shown in dark red in Figure 3b, which accelerate, compress, and heat the jet’s plasma (Forbes & Priest, 1983). If accelerated to a speed above the local fast magnetosonic speed, there will be a fast magnetosonic shock at its termination (Forbes, 1986). Some evidence has been found for such a structure in radio observations (Aurass, Vršnak, & Mann, 2002; Chen et al., 2015a).
The model offers several possibilities by which the energy released by reconnection could generate a population of non-thermal electrons. Various theoretical models under study predict electron acceleration occurring within the diffusion region, in the outflow jet, or at the termination shock. Each of these possibilities offers an explanation for the population of non-thermal electrons observed in microwaves and HXRs, and each produces a distribution consistent with a power law. The electrons may be trapped near the termination, to produce an above-the-looptop source (shown in orange in Figure 3b), or they could precipitate along the flare loop to produce the footpoint sources.
Some of the energy released by magnetic reconnection will be guided downward by the magnetic field to the chromospheric flare ribbons. The energy could be transported by the non-thermal electrons or it could be transported through conventional thermal conduction, which is directed almost entirely along magnetic field lines. Once the energy reaches the feet, it will raise the temperature of the chromospheric plasma, and drive upward evaporation. This scenario nicely explains the Neupert effect where coronal emission appears as a response to the chromosphere. Spectroscopic measurements confirm the fast upflow of material within the flare ribbons (Antonucci & Dennis, 1983; Milligan & Dennis, 2009).
As reconnection proceeds, new loops will be moved through the outflow to lie atop the arcade. This will have the effect of moving the reconnection point upward and moving the separatrix, and hence the flare ribbons, outward. It will also produce a series of ever higher flare loops. Both effects are consistent with the observed evolution.
## The Flare Population
While every flare is unique, there is a strong tendency for extensive quantities to scale together: a large flare is usually large in every measure (known as “big flare syndrome”; Kahler, 1982). It is therefore common to characterize a flare’s size by a single measure, usually its peak flux in GOES 1–8Å, designated here by $F_{1-8}$. As was mentioned, no rigorous relation exists between this measure and any other characteristic of a flare. Nevertheless, $F_{1-8}$ is readily measured and scales with the flare’s size.
Flares occur at all sizes with frequency inversely dependent on size (Crosby, Aschwanden, & Dennis, 1993). Figure 4 summarizes the flare activity, as characterized by $F_{1-8}$, over three solar cycles (from 1986 to 2016). Since flares are associated with ARs, they occur with highest frequency around solar maxima (1991, 2001, and 2014). At these times flaring rates can increase to over 8 C-class flares and one M-class flare per day, as shown in Figure 4b. These rates drop by more than two orders of magnitude during solar minimum. Averaged over all three cycles, M-flares occur at a mean rate of $0.33$/day, or about 1,300 over an 11-year cycle. (There were 2,047, 1,441, and 681 M-flares in these three solar cycles, whose amplitudes clearly decrease.)
Figure 4. Summary of flaring activity from 1986 to 2016, characterized by flare magnitude $F_{1-8}$. (a) The number of flares vs. date and magnitude, using an inverse color scale—darker for more flares. Blue, green, and red dashed lines mark the levels for C, M, and X flares respectively. A flare occurs above the cyan curve once per day. Red crosses show the largest flare over a 90-day window. (b) The mean frequency, averaged over 90-day windows, of C-class flares (blue) and M-class flares (green). Blue and green ticks to the left of the $y$ axis show the rates averaged over the 31-year period. The cyan dashed line is for reference to the cyan curve in (a). (c) The international sunspot number, for reference. (d) The frequency distribution for the entire 31-year interval, plotted on its side, with magnitude on the vertical axis to match panel (a) to its left. The magenta line is a power-law fit, $dN/dF\sim F^{-2.14}$.
The average flaring rate over the entire three-cycle interval can be formed into a frequency histogram, shown in Figure 4d. This clearly shows a power-law behavior, $dN/dF\sim F^{-2.14}$. This means that on average an X-flare is less probable than an M-flare by a factor of $10^{-1.14}=0.072$. They occur at an average rate of $8.7$/year, compared to 120 M-flares per year.
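These numbers follow directly from the fitted power law: for $dN/dF\propto F^{-\alpha}$ the rate of flares above a threshold scales as $N(>F)\propto F^{1-\alpha}$, so each decade in peak flux reduces the rate by a factor $10^{\alpha-1}\simeq14$. A one-line check:

```python
alpha = 2.14                          # fitted power-law index (Figure 4d)
rate_M = 120.0                        # M-flares per year, from the text
rate_X = rate_M * 10**(1.0 - alpha)   # one decade higher in peak flux
print(f"{rate_X:.1f} X-flares per year")  # ~8.7, matching the quoted rate
```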
The power-law distribution of solar flares is reminiscent of the distributions of earthquakes energies, or avalanche sizes (Bak, Tang, & Wiesenfeld, 1988). This resemblance has led some investigators to propose analogous models to explain flare-size distributions (Aschwanden et al., 2018; Lu & Hamilton, 1991). It is also possible to use the empirical relationship along with observations of the current rate of small flares to forecast the likelihood of a larger flare in the near future (Wheatland, 2004).
Investigations reveal that the power-law distribution continues in the direction of smaller flares, in spite of the apparent roll-off evident in Figure 4d. That is caused by the systematic undercounting of small flares when the activity, and related background, rises. (Their absence is clear as light voids in the $F_{1-8}<10^{-6}\ \mathrm{W/m^2}$ regions at solar maximum.) This confusion limit can be overcome using spatially-resolved measurements over more limited fields of view. Such measurements reveal a continuation to ever smaller flares, and perhaps still farther to yet-unobserved nano-flares.
The distribution may also extend to extremely large flares, but small numbers make this unclear. Based on the power law, flares above X10 should occur at a rate of $0.63$ per year, which is 7.2% of the rate for flares X1–X10. This would amount to 19 in the 31-year sample. In actuality 16 flares were observed in this class (10 in cycle 22 and 6 in cycle 23), which is within expectations for such a small sample. Thus we cannot rule out the extension of the power law to that and still higher levels. If it applies out to $F_{1-8}>0.1\ \mathrm{W/m^2}$ (i.e., X1000), we would expect one such super-flare every 300 years, on average.
The physics of flares shows that larger flares require larger active regions and the reconnection of more flux from them. Any of these factors may turn out to have an upper bound, which would in turn place a limit on the possible size of a solar flare. Several efforts have been made to estimate that upper limit (Aulanier et al., 2013; Schrijver et al., 2012; Shibata et al., 2013), but no consensus has been reached.
## Models and Theories
There have been many attempts to understand solar flares theoretically and to incorporate this understanding into models. Most efforts have tended to focus on one specific aspect, although a few notable efforts have attempted to combine multiple aspects into a single model. One focus has been on the large-scale dynamics of a solar flare, often including the CME. A second has been on the dynamics of the plasma flowing within the flare loop, with particular attention to the process of chromospheric evaporation. A final set of studies has focused on the generation of non-thermal particles and their propagation along the flare loop.
# Large-Scale Models
## Triggering and Eruption
Under the standard model framework, flares and CMEs begin together, so their earliest phases are generally combined into a single flare/CME model. This initial evolution occurs on a very large scale and is almost always modeled using the single-fluid equations of MHD (see “Magnetohydrodynamics—Overview,” Priest, 2019). A successful model must explain the sudden onset of fast eruption after an extended period of slow evolution. In a number of models the eruption is initiated through a large-scale, current-driven MHD instability, triggered when slow (quasi-static) evolution, driven from the lower boundary, brings the system past the instability threshold. Every active region, and thus every flare, has a different geometry. Models typically use a simplified, generic geometry, in an effort to study flare evolution in general.
In one model (Biskamp & Welter, 1989; Hood & Priest, 1980; Mikic, Barnes, & Schnack, 1988) the initial magnetic field forms an equilibrium arcade across the PIL, which is sheared by slow motions of the lower boundary. This slow shearing causes a proportionately slow upward expansion of the equilibrium arcade. In some versions of this model there is a shear threshold beyond which equilibria are unstable (Kusano, Maeshiro, Yokoyama, & Sakurai, 2004). Once this threshold is crossed, the upward expansion becomes dynamic, rather than quasi-static. Other versions lack a genuine threshold, but instead exhibit an expansion of increasing speed until the system must behave dynamically, regardless of how slowly the boundary moves (Mikic & Linker, 1994). In either case, the rapid upward expansion creates a current sheet at which reconnection occurs to form the erupting flux rope and thereafter the flare and CME.
Several other models assume a twisted (i.e., current-carrying) flux rope exists in equilibrium prior to eruption. The rope’s twist can be increased either by slow boundary motions or by reconnection (Amari, Luciani, Mikic, & Linker, 2000). Such equilibria are subject to large-scale instability for sufficient levels of twist. In one instability, the kink mode, the previously smooth axis of the flux rope develops a helical pitch, reducing the total magnetic energy. The equilibrium becomes unstable once the field lines wrap around the straight axis by more than a critical angle, generally $2\pi$–$3.5\pi$ depending on the particular equilibrium (Hood & Priest, 1981).
A second instability concerns a twisted flux rope overlain by an external field required to balance the rope’s outward hoop force. The hoop force is a repulsion between the current in a section of the rope and its image current below the boundary. This force decreases as the flux rope moves upward, away from the boundary. A stable equilibrium requires that the overlying field, and the balancing force it supplies, decrease less rapidly than the hoop force, in order that a balance can always be achieved. If this is not the case, that is, if the overlying field decreases too rapidly with height, the net upward force will increase with height leading to a run-away expansion: an eruption. This is a form of lateral kink instability known as the torus instability, and it is triggered once the flux rope enters a region where the overlying field strength decreases sufficiently rapidly with height (Kliem & Török, 2006). Both kink and torus instabilities are considered viable mechanisms to produce a CME and associated flare (Démoulin & Aulanier, 2010). No consensus has yet been reached as to whether one is invariably responsible, or simply more frequently responsible, for observed eruptions.
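The torus-instability criterion is commonly phrased in terms of the decay index $n=-d\ln B/d\ln h$ of the overlying field with height $h$, with eruption expected where $n$ exceeds a critical value often quoted near 1.5. A sketch evaluating $n(h)$ for a toy field profile (the profile, its scale height, and the threshold below are illustrative assumptions):

```python
import numpy as np

def decay_index(h, B):
    """Decay index n = -dlnB/dlnh from overlying field strength B(h)
    sampled at heights h; torus instability is expected where n > ~1.5."""
    return -np.gradient(np.log(B), np.log(h))

h = np.linspace(2e9, 2e10, 200)      # heights [cm], illustrative
B = 100.0 / (1.0 + (h / 5e9)**3)     # toy overlying field profile [G]
n = decay_index(h, B)
h_crit = h[np.argmax(n > 1.5)]       # first height exceeding threshold
print(f"critical height ~ {h_crit / 1e9:.1f} x 10^9 cm")
```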
In a separate class of models, called loss of equilibrium, the equilibrium contains a flux rope as well as one or two current sheets. Magnetic reconnection is assumed to occur at the current sheet, but slowly enough to drive quasi-static evolution (Moore & Sterling, 2006). This evolution can reach a point beyond which no neighboring equilibrium exists—a loss of equilibrium or a catastrophe—requiring rapid dynamical evolution to a new equilibrium (Forbes & Isenberg, 1991; Longcope & Forbes, 2014; Priest & Forbes, 1990; Yeates & Mackay, 2009). In one such scenario, called tether-cutting, reconnection occurs at a current sheet beneath the flux rope, causing it to rise slowly until equilibrium is lost and the flux rope rises dynamically: eruption (Moore, Sterling, Hudson, & Lemen, 2001). In an alternative scenario, called break-out, the initial equilibrium includes a horizontal current sheet above the flux rope, as well as the vertical sheet beneath it (Antiochos, Devore, & Klimchuk, 1999). Slow reconnection at the upper sheet causes the flux rope to rise slowly, until equilibrium is lost and it erupts dynamically. Under this scenario, the lower current sheet stretches and thins, first slowly under quasi-static evolution, and then more rapidly as a result of eruption. The latter phase is accompanied by rapid reconnection at the lower sheet, termed flare reconnection, which produces the flare itself according to the standard model.
## Reconnection in a Flare
The flare itself is largely a consequence of the magnetic reconnection occurring at the current sheet beneath the erupting flux rope. At least in the standard model, the current sheet has global scale so reconnection there is typically modeled using MHD equations (the vertical structure evident in Figure 3a is more than 100 Mm long). In general such studies conform to results of more generic studies of reconnection in the MHD regime (Forbes & Priest, 1983). To obtain steady Petschek reconnection of the form depicted in the standard model, it is necessary that the reconnection electric field be somehow localized within the large-scale current sheet (Biskamp & Schwarz, 2001; Kulsrud, 2001). This localization will not occur if the magnetic induction equation includes only a uniform resistivity. For this reason, it is common for models to use a current-dependent anomalous resistivity (Magara, Mineshige, Yokoyama, & Shibata, 1996; Ugai & Tsuda, 1977). Doing so yields numerical results closely resembling the standard model cartoon, that is, Figure 3b, including a fast magnetosonic shock at the termination (Forbes & Malherbe, 1986), although exhibiting its own dynamics (Takasao, Matsumoto, Nakamura, & Shibata, 2015).
Other models have used uniform resistivity, or no resistivity at all, and cannot therefore have fast, steady reconnection. Instead they exhibit reconnection of an unsteady variety, including multiple, evolving magnetic islands (Karpen, Antiochos, & DeVore, 2012). Such reconnection is also found in more generic studies (Bhattacharjee, Huang, Yang, & Rogers, 2009; Loureiro, Schekochihin, & Cowley, 2007). These evolving islands have been invoked to explain certain features observed in the context of solar flares, such as supra-arcade downflows (McKenzie & Hudson, 1999; Savage, McKenzie, & Reeves, 2012).
Flare reconnection differs from more generic varieties owing to the significant role played by the cool chromosphere to which the reconnecting field lines are anchored. This layer is responsible for many of the observational signatures of a solar flare. Several numerical and analytic models have examined the role of field-aligned thermal conductivity by which energy may be transported to the chromosphere (Chen, Fang, Tang, & Ding, 1999; Forbes, Malherbe, & Priest, 1989; Yokoyama & Shibata, 1997). These exhibit a layer of hot plasma surrounding the outflow jet and chromospheric evaporation, but the most detailed studies of the latter process remain those using a class of one-dimensional flare loop models.
# Flare Loop Models
Flare loop models consider, for the most part, the plasma dynamics in a static, closed magnetic loop. Magnetic evolution is neglected, or assumed to be complete, leaving a stationary curved tube. Plasma flows only along this static loop, with velocity parallel to the axis. The loop is assumed thin enough to reduce the problem to a single spatial dimension. Mass density, plasma velocity (parallel), and plasma pressure are all functions of axial position, and evolve in time as required by the conservation of mass, momentum, and energy. Assuming a static loop obviates the need for an equation governing magnetic field evolution, leaving a system of gas dynamic equations, rather than MHD. Restriction to a single spatial dimension permits numerical solutions to resolve scales as small as meters—scales which can develop in a flare’s low atmosphere (Fisher, Canfield, & McClymont, 1985a; MacNeice, Burgess, McWhirter, & Spicer, 1984).
The energy equation typically includes radiative transport, including optically thin losses from the corona and thermal conduction along the tube’s axis. It also includes an energy source term representing the magnetic energy released to produce the flare. In some versions the source term is an ad hoc function of space and time representing a generic dissipation, and typically concentrated around the loop top (Cheng, Oran, Doschek, Boris, & Mariska, 1983; MacNeice, 1986). In others it is taken to be the energy deposition from non-thermal electrons, which had originated at the loop top with a specified flux and energy spectrum (Emslie & Nagai, 1985; Fisher, Canfield, & McClymont, 1985b). Both versions have been extensively studied, and produce broadly similar evolution, largely conforming to observations.
## Integrated (0D) Models
The one-dimensional gas dynamic equations can be simplified further by integrating them over the loop’s coronal section. This yields two ordinary differential equations governing the time evolution of total coronal mass and energy, or equivalently, average coronal density and temperature (Antiochos & Sturrock, 1978). A few assumptions are made about the spatial profiles of the primitive quantities, but the result is a robust zero-dimensional system based on global conservation laws. The equations may be solved numerically, or analytically after a few more assumptions; examples of each are shown in Figure 5. The former approach generally shows evolution in three phases, as illustrated in Figure 5c and 5d. Analytical approaches assume this three-phase evolution (Cargill, Mariska, & Antiochos, 1995).
Figure 5. Zero-dimensional models of a flare loop of full length $L=40$ Mm to which $2\times10^{11}\ \mathrm{erg/cm^2}$ is added over 1 s. The left column, (a) and (c), shows the evolution of average coronal density (blue), along the left axis, and average coronal temperature (red), along the right axis, against a logarithmic time axis. The right column, (b) and (d), shows the evolution in temperature/density space. These curves progress clockwise, as indicated by arrows on (d). Magenta dashed curves show the line along which the radiative time scale equals the conductive time scale. Violet dashed lines mark a line of constant pressure equal to the total energy input uniformly distributed over the loop volume. The top row, (a) and (b), shows the numerical solution of EBTEL (Klimchuk, Patsourakos, & Cargill, 2008), while the bottom row, (c) and (d), is from the analytic model of Cargill, Mariska, and Antiochos (1995). For comparison, a grey curve in (b) shows the evolution when the same total energy is added over 10 s rather than 1 s.
During the heating phase (see Figure 5a) energy is added and the coronal temperature rises more rapidly than density can respond. This phase is particularly distinct if the duration of energy input is short, as in the example, otherwise it overlaps the next phase. In the next phase, coronal heat is transported to the chromosphere where it drives evaporation, carrying much of the energy back into the corona (i.e., by enthalpy flux). Evaporation thus increases the corona’s mass, but keeps its energy, and thus its pressure, largely constant: the evaporative phase proceeds along a line of constant $n_eT$, as shown by violet dashed lines in Figures 5b and 5d. The time scale for optically thin radiative losses, which scales inversely with the density (the loss rate itself scaling with its square), is very long at the high temperatures and low densities found at the end of the heating phase. Evaporation thus proceeds on the shorter conductive time scale, until the coronal density has increased enough to make the radiative time-scale comparable. The corona then begins to lose energy through radiation—its final phase. It is no coincidence that the equality of these two time scales is also the condition for mechanical equilibrium, so the cooling occurs through a series of loop equilibria.
This final, radiative, phase is the longest and thus dictates the overall lifetime of the flare loop: $2000$ s in Figure 5. Unless the heating persists throughout this phase, models generally find loop cooling times shorter than the gradual phases of flares with properties similar to simulated loops.¹
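The three phases can be illustrated with a heavily simplified sketch of such a zero-dimensional model. This is not EBTEL; the loss function, the conductivity coefficient, and the assumption that conductive losses are fully recycled into evaporation are all crude simplifications, and draining of the loop during the radiative phase is neglected:

```python
import numpy as np

kB = 1.38e-16        # Boltzmann constant [erg/K]
kappa0 = 1.0e-6      # Spitzer conductivity coefficient [cgs], approximate

def radloss(T):
    """Single power-law fit to the optically thin loss function
    Lambda(T) [erg cm^3/s]; an assumption, roughly valid above ~3 MK."""
    return 1.9e-19 / np.sqrt(T)

def evolve(L=2e9, H0=1e11, tau=1.0, n0=1e9, T0=2e6, dt=0.05, t_end=3000.0):
    """Mean coronal density n [cm^-3] and temperature T [K] of a loop of
    half-length L [cm], heated by H0 [erg/cm^2] over tau seconds."""
    n, T, out = n0, T0, []
    for t in np.arange(0.0, t_end, dt):
        H = H0 / (tau * L) if t < tau else 0.0     # heating [erg/cm^3/s]
        Fc = (2.0 / 7.0) * kappa0 * T**3.5 / L     # conductive flux estimate
        # Thermal energy density changes by heating minus radiation only:
        # conductive losses are assumed returned as evaporated enthalpy.
        E = 3.0 * n * kB * T + (H - n * n * radloss(T)) * dt
        n += (2.0 / 5.0) * Fc / (kB * T * L) * dt  # evaporated mass
        T = max(E / (3.0 * n * kB), 1e4)           # new mean temperature
        out.append((t, n, T))
    return np.array(out)
```

Run with its defaults, the sketch reproduces the qualitative sequence of Figure 5: a rapid temperature spike, density growth at nearly constant pressure, then a slow radiative decline.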
Coronal emission, including the GOES 1–8Å band, peaks when the density peaks at the end of the evaporative phase. The zero-dimensional model can relate that emission peak to the total energy input and other parameters of the loop. Warren and Antiochos (2004) followed this approach to obtain an expression for peak emission $F_{1-8}\simeq(4\times10^{-5}\ \mathrm{W/m^2})\,E_{30}^{7/4}\,L_9^{-1}\,A_{18}^{-3/4}$, where $E_{30}$ is the total energy added to the loop in units of $10^{30}$ erg, and $L_9$ and $A_{18}$ are the loop’s full length and cross sectional area in units of $10^9$ cm and $10^{18}\ \mathrm{cm^2}$ respectively. This would relate the observed SXR peak to the energy of a flare, provided the flare behaved as a single loop. However, a large flare is not well described as a single loop, so the above relation is only approximate.
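Evaluating this scaling for a loop like that of Figure 5 (the cross-sectional area is an assumed value):

```python
def peak_flux(E30, L9, A18):
    """Warren & Antiochos (2004) single-loop scaling for the GOES 1-8 A
    peak flux [W/m^2]: E30 = energy/1e30 erg, L9 = full length/1e9 cm,
    A18 = cross-sectional area/1e18 cm^2."""
    return 4e-5 * E30**1.75 / L9 / A18**0.75

# Figure 5 loop: L = 40 Mm, with 2e11 erg/cm^2 over an assumed
# A = 1e18 cm^2, giving 2e29 erg = 0.2 in units of 1e30 erg.
print(peak_flux(E30=0.2, L9=4.0, A18=1.0))  # ~6e-7 W/m^2: a small C flare
```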
## Gas-Dynamic (1D) Models
Solutions to the full gas dynamic equations generally corroborate the conclusion of zero-dimensional models that radiative cooling occurs through a sequence of equilibria. Conversely, the processes of heating and evaporation are found to be very dynamic, with flow speeds at or above the sound speed, as shown in the $t\le40$ s curves of Figure 6. These phases are therefore better studied using the full one-dimensional gas dynamic models (Mariska, Doschek, Boris, Oran, & Young, 1982; Nagai, 1980; Pallavicini et al., 1983).
Figure 6. Evolution of a one-dimensional model of a flare loop like that shown in Figure 5: $L=40$ Mm to which $2\times10^{11}\ \mathrm{erg/cm^2}$ is added over 2 seconds. The simulation includes thermal conduction, but no non-thermal particles. The four rows show pressure, velocity, temperature, and density, reading down. The left column shows the left half of the loop’s coronal section. The right column zooms in on the left footpoint, including a crude chromospheric section ($l<0$). The colors represent times, $t=0$ s (black), 5 (red), 10 (violet), 20 (green), 40 (magenta), and 120 s (yellow). The axis atop the upper left panel shows the integrated column for the initial loop, in units of $10^{19}\ \mathrm{cm^{-2}}$.
Energy added to the chromosphere, either deposited by non-thermal electrons or conducted from the corona, creates a pressure peak that drives material upward as evaporation (see the velocity plots in Figure 6). This occurs in an ablative rarefaction wave with a shock at its front. The upflow speeds in the models are several hundred km/s, which is typically supersonic. Fisher, Canfield, and McClymont (1984) placed an upper bound on the evaporation speed of two to three times the isothermal sound speed. Longcope (2014) found that evaporation driven by a thermal conduction flux $F_c$ reached a velocity $v_e\sim F_c^{1/3}$.
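Both bounds are easily evaluated. For plasma heated to $10^7$ K the isothermal sound speed is a few hundred km/s, so the Fisher, Canfield, and McClymont bound caps evaporation speeds near 1,000 km/s (a sketch; the mean molecular weight and the exact factor within the quoted two-to-three range are assumed values):

```python
import math

kB, mp = 1.38e-16, 1.67e-24    # Boltzmann constant [erg/K], proton mass [g]

def evap_speed_bound(T, mu=0.6, factor=2.35):
    """Upper bound on evaporation speed [cm/s] as a multiple of the
    isothermal sound speed sqrt(kB*T/(mu*mp)) of the heated plasma."""
    return factor * math.sqrt(kB * T / (mu * mp))

print(evap_speed_bound(1e7) / 1e5)   # ~870 km/s for T = 10 MK
```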
Energy deposited by non-thermal electrons results in chromospheric evaporation classified as either gentle or explosive. The electrons deposit energy in cooler denser chromospheric layers, where radiative loss is particularly effective and becomes more so as heating drives up the temperature there. It is therefore possible for the deposited energy to be immediately radiated with only minor effects; this is called gentle evaporation. Optically thin losses increase with temperature until peaking at a maximum volumetric rate at around 150,000 K. If the rate of deposition exceeds this maximum the radiation will be unable to compensate, allowing temperature and pressure to rise explosively: this is explosive evaporation. The low-lying pressure peak drives plasma upward (evaporation) as well as downward. This downward motion, essentially a back-reaction to the evaporation, is called chromospheric condensation, and has been observed (Canfield, Metcalf, Strong, & Zarro, 1987; Graham & Cauzzi, 2015).
Radiation cannot be assumed optically thin in deeper layers of the low chromosphere, or for wavelengths around very strong spectral lines. Accurate treatment of these cases requires an explicit treatment of the radiative transfer. This is currently the state of the art for flare modeling (Allred, Hawley, Abbett, & Carlsson, 2005; McClymont & Canfield, 1983), and is essential in order to accurately model the form of strong, optically thick spectral lines in a flare.
## Multi-Loop Models
Investigators have encountered some difficulties modeling the full light-curves from a given flare as a single loop. A particularly vexing difficulty is that gradual phases tend to last longer than the radiative cooling time of a characteristic loop (Qiu & Longcope, 2016; Warren, 2006). This has led to the conclusion that a single flare consists of many distinct loops evolving independently after being energized at different times (Hori, Yokoyama, Kosugi, & Shibata, 1997; Qiu et al., 2012; Reeves & Warren, 2002; Warren, 2006). This means the phases of flare evolution, impulsive and gradual, do not map simply onto the phases of flare loop evolution. Energy release is not restricted to the impulsive phase, but may continue through much or all of the gradual phase. This is consistent with many observations showing ribbons continuing their spreading motion during this phase (Longcope, Qiu, & Brewer, 2016). Nor is the gradual phase entirely equivalent to the cooling phase of a single loop. Figure 1b shows ample emission by 10 MK Fe xx throughout most of the gradual phase, so the plasma cannot be cooling everywhere.
This understanding has led to models capable of reproducing with reasonable fidelity most of a flare’s myriad light curves (Liu, Qiu, Longcope, & Caspi, 2013; Qiu, Sturrock, Longcope, Klimchuk, & Liu, 2013). The flare is synthesized from a set of loops, initiated in sequence, and the flare’s light curve is a super-position of the light curves of the loop sequence. The flare’s time scale is therefore set by the loop initialization sequence and not by the radiative cooling of a single loop. Moreover, the measured flux transfer rate, $\dot\varphi_{rx}$, reflects the rate of loop creation, rather than the X-point reconnection electric field, as it would in steady models (Longcope, Des Jardins, Carranza-Fulmer, & Qiu, 2010; Longcope, Qiu, & Brewer, 2016).
# Non-Thermal Particle Models
A population of particles can be described by its distribution function, $f(x,E,\mu)$, depending on particle energy $E$, and the pitch-angle cosine $\mu=\cos\alpha$. Coulomb collisions among charged particles will drive their distribution toward a Maxwellian, $f\sim\sqrt{E}\,\exp(-E/k_BT)$. This limiting form is achieved after a few dozen collisions. At particle densities typical of a flare ($n_e\sim10^{10}\ \mathrm{cm^{-3}}$) 1 keV electrons will collide with frequency $\nu_c\sim10$ Hz, and thereby remain approximately Maxwellian during a flare. The Coulomb collision frequency scales inversely with particle energy, $\nu_c\sim E^{-3/2}$, so electrons over $E\sim100$ keV collide rarely and can travel more than 100 Mm before colliding once. It is these electrons which travel unimpeded along the flare loop to deposit energy at the feet and create the flare ribbons and the footpoint HXR sources.
Lacking frequent collisions, the distribution function does not need to form a Maxwellian at high energies. Instead it is often observed to be better described by a power law, $f\sim E^{-\delta'}$.² The entire distribution function is most often written as a sum of a Maxwellian, called the thermal component, and a non-thermal component whose distribution function is a power law restricted to $E>E_c$. The low-energy cut-off $E_c$ is formally required for normalizability, but more physically needed so that non-thermal collision rates are low enough to justify a departure from Maxwellian. The cut-off is generally expected at energies where the thermal component dominates the sum, and thus proves extremely difficult to constrain well by observation.
The lowest three moments of the distribution function correspond to density, fluid velocity, and pressure. A Maxwellian distribution is completely determined by these moments alone, but any other distribution requires more moments, or the entire function, to be specified. Fluid equations, such as assumed in previous sections (see also “Magnetohydrodynamics—Overview,” Priest, 2019), describe the evolution of the lowest three moments and can therefore be considered a valid, but partial, description of plasma evolution. Their description is reasonably complete provided energies are low enough, and collisions frequent enough, to keep the distribution close to Maxwellian, and thus fully described by its lowest moments.
When collisions are not frequent enough, the evolution of the distribution function must be followed using the Fokker–Planck equation. This includes effects of single-particle motion such as propagation along the field line, and mirroring from points of strong field (Parker, 1958; Rosenbluth, MacDonald, & Judd, 1957). It also includes a velocity–space diffusion arising from the average effect of random, high-frequency electric and magnetic fields. These high-frequency fields can arise from Coulomb collisions or from randomly phased plasma waves, of various kinds. The Coulomb contribution will, as mentioned above, cause the distribution function to relax toward a Maxwellian. The other contributions, however, can drive evolution in other directions, and can thus contribute to the creation of a non-thermal component.
## Models of Particle Acceleration
The process of generating the non-thermal component from an erstwhile Maxwellian distribution is known as particle acceleration. A wide variety of models have been proposed for the acceleration process in flares. All are set within the standard flare model framework and produce distributions resembling power laws. It has thus proven difficult to reach a consensus on which mechanism is at work in a solar flare. This question remains open.
A number of models, collectively known as second-order Fermi or stochastic acceleration (SA) models, focus on the velocity-space diffusion from a spectrum of randomly-phased plasma waves. Waves of various kinds can be generated by the MHD turbulence expected within the reconnection outflow jet, featured in Figure 3b. Velocity diffusion is dominated by resonant interactions between the waves and the particles. This poses a challenge for SA models in general since plasma waves often have phase speeds much higher than thermal particles. Many investigators have, however, been able to show that resonances can occur with different wave modes under reasonable assumptions. Some even follow the evolution of the wave spectrum (Miller, Larosa, & Moore, 1996; Petrosian, Yan, & Lazarian, 2006). Stochastic acceleration therefore remains a viable explanation for high-energy flare particles.
A related model considers the effects of an MHD shock along with turbulence capable of repeatedly scattering the particles back to the shock (i.e., effective pitch-angle scattering). These elements combine in a process known as first-order Fermi or diffusive shock acceleration (DSA), which has been extensively studied and observed in other astrophysical and space plasmas (see Blandford & Eichler, 1987, for a review). It results in a power-law distribution whose index, $δ$, is related to the plasma compression ratio across the shock. The fast magnetosonic shock predicted at the termination point (see Figure 3) is an ideal location for DSA (Tsuneta & Naito, 1998; Mann, Aurass, & Warmuth, 2006), and some observations suggest acceleration is indeed occurring there (Chen et al., 2015a; Sui & Holman, 2003).
Charged particles can be temporarily confined either on closed field lines or between magnetic mirror points. As the magnetic field changes, it can add energy to the trapped particles through the betatron term, curvature-drift, or head-on reflection from a moving mirror point. A certain class of models invokes these effects to explain particle acceleration. The magnetic field strength will have a local minimum at the end of the outflow region. This can serve as a magnetic trap, and if it shrinks in size, the particles trapped there can gain energy. This is the basis of the collapsing trap model (Somov & Kosugi, 1997; Karlický & Kosugi, 2004). Alternatively, MHD turbulence in the outflow jet could feature closed magnetic islands, often called plasmoids in reconnection models (Loureiro, Schekochihin, & Cowley, 2007; Shibayama, Kusano, Miyoshi, Nakabou, & Vekstein, 2015). These islands will tend to evolve from elongated to circular, and in so doing accelerate the particles trapped within them (Drake, Swisdak, Schoeffler, Rogers, & Kobayashi, 2006).
Finally it is possible for the large-scale electric field, the defining feature of magnetic reconnection, to accelerate charged particles directly, in so-called direct acceleration. It has already been noted that reconnection is observed at rates $\dot\varphi\sim10^{18}$ Mx/s. If this occurred as a steady, large-scale electric field along an X-line, there would be a voltage drop $V\sim10^{10}$ V along it—corresponding to far more energy than observed in any flare electron. Simple as this seems, a detailed model faces several challenges. A plasma tends to screen out any electric field component parallel to the magnetic field, and undergoes a dramatic response if subjected to a field in excess of the so-called Dreicer field—$E_D\sim10^{-2}$ V/m for a typical flare plasma. Moreover, the simplest scenario would predict all particles of a given charge to be accelerated in the same direction, in apparent contradiction to observations showing electron precipitation at both feet of a loop (see Figure 2b). Several investigators have produced models which overcome these challenges, demonstrating the viability of direct acceleration (Emslie & Hénoux, 1995; Holman, 1985; Litvinenko, 1996; Martens, 1988).
## Models of Non-Thermal Particle Propagation
In most of these models, charged particles are energized within a certain region of the solar flare, such as the reconnection site, the outflow jet, or the termination shock. From there the particles propagate along magnetic field lines, until they have dissipated their energy and rejoined the thermal population. An electron with energy $E_{\mathrm{keV}}$ (in keV) and pitch-angle cosine $\mu$ will traverse a total column $N=\int n\,dl\simeq(10^{17}\ \mathrm{cm^{-2}})\,\mu E_{\mathrm{keV}}^2$ before stopping. Electrons leaving the acceleration region roughly parallel ($\mu\simeq1$) with $E\ge10$ keV will not stop until they have reached the chromosphere where $N\simeq10^{19}\ \mathrm{cm^{-2}}$ (see the upper left axis of Figure 6). They will lose the vast majority of this energy at the very end of their journey, leading to chromospheric energy deposition.
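The stopping column converts directly into a penetration criterion. A sketch of the bookkeeping:

```python
import math

def stopping_column(E_keV, mu=1.0):
    """Column depth [cm^-2] an electron of energy E_keV and pitch-angle
    cosine mu traverses before collisional stopping: N ~ 1e17 mu E^2."""
    return 1e17 * mu * E_keV**2

def min_energy_to_reach(N):
    """Minimum field-aligned (mu = 1) electron energy [keV] required to
    penetrate to column depth N [cm^-2]."""
    return math.sqrt(N / 1e17)

print(stopping_column(25.0))       # 6.3e19: a 25 keV electron easily
                                   # reaches the chromosphere (N ~ 1e19)
print(min_energy_to_reach(1e19))   # 10 keV, as stated above
```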
Virtually all the energy lost by propagating particles goes into heating the background plasma: a colder thermal plasma. For electrons, a very small fraction (typically $10^{-5}$) is converted to photons via bremsstrahlung, which thus provides the most direct diagnostic of the non-thermal electron population. (Ions lose a far smaller fraction to bremsstrahlung, making their detection far more challenging.) An electron with energy $E$ can emit photons of energy $\varepsilon\le E$. A single electron will emit a spectrum of bremsstrahlung photons before ultimately joining the thermal population; the complete process is known as thick-target emission. A power-law distribution of electrons, $F(E)\sim E^{-\delta}$, will thereby produce a power-law distribution of photons, $I(\varepsilon)\sim\varepsilon^{-\gamma}$, with $\gamma=\delta-1$ in this thick-target process (Tandberg-Hanssen & Emslie, 1988). Hard X-ray spectra from flare footpoints generally exhibit power laws with $\gamma\ge2$, corresponding to electron distributions with $\delta\ge3$.
The coronal column $N\ll10^{19}\ \mathrm{cm^{-2}}$ will have little effect on the energy of the electrons propagating through it. Bremsstrahlung emission under this condition, called thin-target emission, will reflect the distribution of energies at which electrons are produced (accelerated). The resulting photon spectrum will therefore have a power-law index, $\gamma=\delta+1$, considerably softer than for thick-target emission (Tandberg-Hanssen & Emslie, 1988).
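The two target regimes thus give a simple dictionary between the observed photon index and the inferred electron index (a sketch of the bookkeeping only; real spectral inversion is considerably more involved):

```python
def electron_index(gamma, target="thick"):
    """Electron power-law index delta inferred from photon index gamma:
    delta = gamma + 1 (thick target) or gamma - 1 (thin target)."""
    return gamma + 1 if target == "thick" else gamma - 1

print(electron_index(3.0, "thick"))   # delta = 4, footpoint (thick) case
print(electron_index(3.0, "thin"))    # delta = 2, coronal (thin) case
```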
## References
Allred, J. C., Hawley, S. L., Abbett, W. P., & Carlsson, M. (2005). Radiative Hydrodynamic Models of the Optical and Ultraviolet Emission from Solar Flares. The Astrophysical Journal, 630, 573.

Amari, T., Luciani, J. F., Mikic, Z., & Linker, J. (2000). A Twisted Flux Rope Model for Coronal Mass Ejections and Two-Ribbon Flares. The Astrophysical Journal, 529, L49.

Antiochos, S. K., Devore, C. R., & Klimchuk, J. A. (1999). A Model for Solar Coronal Mass Ejections. The Astrophysical Journal, 510, 485.

Antiochos, S. K., & Sturrock, P. A. (1978). Evaporative cooling of flare plasma. The Astrophysical Journal, 220, 1137.

Antonucci, E., Alexander, D., Culhane, J. L., de Jager, C., MacNeice, P., Somov, B. V., & Zarro, D. M. (1999). Flare dynamics. In K. T. Strong, J. L. R. Saba, B. M. Haisch, & J. T. Schmelz (Eds.), The many faces of the sun: A summary of the results from NASA’s Solar Maximum Mission (p. 331). New York, NY: Springer.

Antonucci, E., & Dennis, B. R. (1983). Observation of chromospheric evaporation during the Solar Maximum Mission. Solar Physics, 86, 67.

Aschwanden, M. J., & Alexander, D. (2001). Flare Plasma Cooling from 30 MK down to 1 MK modeled from Yohkoh, GOES, and TRACE observations during the Bastille Day Event (14 July 2000). Solar Physics, 204, 91.

Aschwanden, M. J., Scholkmann, F., Béthune, W., Schmutz, W., Abramenko, V., Cheung, M. C. M., Müller, D., Benz, A., Chernov, G., Kritsuk, A. G., Scargle, J. D., Melatos, A., Wagoner, R. V., Trimble, V., & Green, W. H. (2018). Order out of Randomness: Self-Organization Processes in Astrophysics. Space Science Reviews, 214, 55.
Aulanier, G., Démoulin, P., Schrijver, C. J., Janvier, M., Pariat, E., & Schmieder, B. (2013). The standard flare model in three dimensions. II. Upper limit on solar flare energy. Astronomy & Astrophysics, 549, A66.

Aurass, H., Vršnak, B., & Mann, G. (2002). Shock-excited radio burst from reconnection outflow jet? Astronomy & Astrophysics, 384, 273.

Bak, P., Tang, C., & Wiesenfeld, K. (1988). Self-organized criticality. Physical Review A, 38, 364.

Bhattacharjee, A., Huang, Y.-M., Yang, H., & Rogers, B. (2009). Fast reconnection in high-Lundquist-number plasmas due to the plasmoid instability. Physics of Plasmas, 16, 112102.

Biskamp, D., & Schwarz, E. (2001). Localization, the clue to fast magnetic reconnection. Physics of Plasmas, 8, 4729.

Biskamp, D., & Welter, H. (1989). Magnetic arcade evolution and instability. Solar Physics, 120, 49.

Blandford, R., & Eichler, D. (1987). Particle acceleration at astrophysical shocks: A theory of cosmic ray origin. Physics Reports, 154, 1.

Bogachev, S. A., Somov, B. V., Kosugi, T., & Sakao, T. (2005). The Motions of the Hard X-Ray Sources in Solar Flares: Images and Statistics. The Astrophysical Journal, 630, 561.

Canfield, R. C., Brown, J. C., Brueckner, G. E., Cook, J. W., Craig, I. J. D., Doschek, G. A., Emslie, A. G., Henoux, J.-C., Lites, B. W., Machado, M. E., & Underwood, J. H. (1980). The Chromosphere and Transition Region. In P. A. Sturrock (Ed.), Solar flares. A monograph from Skylab Solar Workshop II (p. 231). Boulder: Colorado Associated University Press.

Canfield, R. C., Metcalf, T. R., Strong, K. T., & Zarro, D. M. (1987). A novel observational test of momentum balance in a solar flare. Nature, 326, 165.

Cargill, P. J., Mariska, J. T., & Antiochos, S. K. (1995). Cooling of solar flare plasmas. 1: Theoretical considerations. The Astrophysical Journal, 439, 1034.
Carmichael, H. (1964). A Process for Flares. In W. N. Hess (Ed.), AAS-NASA Symposium on the Physics of Solar Flares (p. 451). Washington, DC: NASA.

Chamberlin, P. C., Woods, T. N., & Eparvier, F. G. (2008). Flare Irradiance Spectral Model (FISM): Flare component algorithms and results. Space Weather, 6, S05001.

Chen, B., Bastian, T. S., Shen, C., Gary, D. E., Krucker, S., & Glesener, L. (2015a). Particle acceleration by a solar flare termination shock. Science, 350, 1238.

Chen, H., Zhang, J., Ma, S., Yang, S., Li, L., Huang, X., & Xiao, J. (2015b). Confined Flares in Solar Active Region 12192 from 2014 October 18 to 29. The Astrophysical Journal, 808, L24.

Chen, P. F., Fang, C., Tang, Y. H., & Ding, M. D. (1999). Simulation of Magnetic Reconnection with Heat Conduction. The Astrophysical Journal, 513, 516.

Cheng, C.-C., Oran, E. S., Doschek, G. A., Boris, J. P., & Mariska, J. T. (1983). Numerical simulations of loops heated to solar flare temperatures I. The Astrophysical Journal, 265, 1090.

Crosby, N. B., Aschwanden, M. J., & Dennis, B. R. (1993). Frequency distributions and correlations of solar X-ray flare parameters. Solar Physics, 143, 275.
Démoulin, P., & Aulanier, G. (2010). Criteria for Flux Rope Eruption: Non-equilibrium Versus Torus Instability. The Astrophysical Journal, 718, 1388.

Dennis, B. R., & Zarro, D. M. (1993). The Neupert effect - What can it tell us about the impulsive and gradual phases of solar flares? Solar Physics, 146, 177.

Drake, J. F., Swisdak, M., Schoeffler, K. M., Rogers, B. N., & Kobayashi, S. (2006). Formation of secondary islands during magnetic reconnection. Geophysical Research Letters, 33, 13105.

Emslie, A. G., Dennis, B. R., Holman, G. D., & Hudson, H. S. (2005). Refinements to flare energy estimates: A followup to “Energy partition in two solar flare/CME events” by A. G. Emslie et al. Journal of Geophysical Research, 110, A11103.

Emslie, A. G., Dennis, B. R., Shih, A. Y., Chamberlin, P. C., Mewaldt, R. A., Moore, C. S., Share, G. H., Vourlidas, A., & Welsch, B. T. (2012). Global Energetics of Thirty-eight Large Solar Eruptive Events. The Astrophysical Journal, 759, 71.

Emslie, A. G., & Hénoux, J.-C. (1995). The electrical current structure associated with solar flare electrons accelerated by large-scale electric fields. The Astrophysical Journal, 446, 371.

Emslie, A. G., Kucharek, H., Dennis, B. R., Gopalswamy, N., Holman, G. D., Share, G. H., Vourlidas, A., Forbes, T. G., Gallagher, P. T., Mason, G. M., Metcalf, T. R., Mewaldt, R. A., Murphy, R. J., Schwartz, R. A., & Zurbuchen, T. H. (2004). Energy partition in two solar flare/CME events. Journal of Geophysical Research, 109, 10104.

Emslie, A. G., & Nagai, F. (1985). Gas dynamics in the impulsive phase of solar flares. II - The structure of the transition region - A diagnostic of energy transport processes. The Astrophysical Journal, 288, 779.

Fisher, G. H., Canfield, R. C., & McClymont, A. N. (1984). Chromospheric evaporation velocities in solar flares. The Astrophysical Journal, 281, L79.

Fisher, G. H., Canfield, R. C., & McClymont, A. N. (1985a). Flare loop radiative hydrodynamics. V - Response to thick-target heating. The Astrophysical Journal, 289, 414.

Fisher, G. H., Canfield, R. C., & McClymont, A. N. (1985b). Flare Loop Radiative Hydrodynamics. VI - Chromospheric Evaporation due to Heating by Nonthermal Electrons. The Astrophysical Journal, 289, 425.

Fletcher, L., & Hudson, H. (2001). The Magnetic Structure and Generation of EUV Flare Ribbons. Solar Physics, 204, 69.

Fletcher, L., Pollock, J. A., & Potts, H. E. (2004). Tracking of TRACE Ultraviolet Flare Footpoints. Solar Physics, 222, 279.

Forbes, T. G. (1986). Fast-shock formation in line-tied magnetic reconnection models of solar flares. The Astrophysical Journal, 305, 553.
Forbes, T. G., & Acton, L. W. (1996). Reconnection and Field Line Shrink age in Solar Flares. The Astrophysical Journal, 459, 330.Find this resource:
Forbes, T. G., & Isenberg, P. A. (1991). A catasrophe mechanism for coronal mass ejections. The Astrophysical Journal, 373, 294.Find this resource:
Forbes, T. G., & Malherbe, J. M. (1986). A shock condensation mechanism for loop prominences. The Astrophysical Journal, 302, L67.Find this resource:
Forbes, T. G., Malherbe, J. M., & Priest, E. R. (1989). The formation flare loops by magnetic reconnection and chromospheric ablation. Solar Physics, 120, 285.Find this resource:
Forbes, T. G., & Priest, E. R. (1983). A numerical experiment relevant to line-tied reconnection in two-ribbon flares. Solar Physics, 84, 169.Find this resource:
Forbes, T. G., & Priest, E. R. (1984). Reconnection in Solar Flares. In D. Butler & K. Papadopoulos (Eds.), Solar terrestrial physics: Present and future (p. 35). Washington, DC: NASA.Find this resource:
Forbes, T. G., Priest, E. R., Seaton, D. B., & Litvinenko, Y. E. (2013). Indeterminacy and instability in Petschek reconnection. Physics of Plasmas, 20, 052902.Find this resource:
Fuller-Rowell, T., & Solomon, S. C. (2010). Flares, coronal mass ejections, and atmopsheric responses. In C. J. Schrijver & G. Siscoe (Eds.), Heliophysics II. Space storms and radiation: Causes and effects (p. 321). Cambridge, UK: Cambridge University Press.Find this resource:
Goff, C. P., Matthews, S. A., van Driel-Gesztelyi, L., & Harra, L. K. (2004). Relating magnetic field strengths to hard X-ray emission in solar flares. Astronomy & Astrophysics, 423, 363.Find this resource:
Gosling, J. T. (1993). The solar flare myth. Journal of Geophysical Research, 98, 18937.Find this resource:
Graham, D. R., & Cauzzi, G. (2015). Temporal Evolution of Multiple Evaporating Ribbon Sources in a Solar Flare. The Astrophysical Journal, 807, L22.Find this resource:
Hirayama, T. (1974). Theoretical Model of Flares and Prominences. I: Evaporating Flare Model. Solar Physics, 34, 323.Find this resource:
Holman, G. D. (1985). Acceleration of runaway electrons and Joule heating in solar flares. The Astrophysical Journal, 293, 584.Find this resource:
Hood, A. W., & Priest, E. R. (1980). Magnetic instability of coronal arcades as the origin of two-ribbon flares. Solar Physics, 66, 113.Find this resource:
Hood, A. W., & Priest, E. R. (1981). Critical conditions for magnetic instabilities in force-free coronal loops. Geophysical and Astrophysical Fluid Dynamics, 17, 297.Find this resource:
Hori, K., Yokoyama, T., Kosugi, T., & Shibata, K. (1997). Pseudo–Two- dimensional Hydrodynamic Modeling of Solar Flare Loops. The Astrophysical Journal, 489, 426.Find this resource:
Kahler, S. W. (1982). The role of the big flare syndrome in correlations of solar energetic proton fluxes and associated microwave burst parameters. Journal of Geophysical Research, 87, 3439.Find this resource:
Karlický, M., & Kosugi, T. (2004). Acceleration and heating processes in ay collapsing magnetic trap. Astronomy & Astrophysics, 419, 1159.Find this resource:
Karpen, J. T., Antiochos, S. K., & DeVore, C. R. (2012). The Mechanisms for the Onset and Explosive Eruption of Coronal Mass Ejections and Eruptive Flares. The Astrophysical Journal, 760, 15.Find this resource:
Kliem, B., & Török, T. (2006). Torus Instability. Physical Review Letters, 96, 255002.Find this resource:
Klimchuk, J. A., Patsourakos, S., & Cargill, P. J. (2008). Highly Efficient Modeling of Dynamic Coronal Loops. The Astrophysical Journal, 682, 1351.Find this resource:
Kopp, G., Lawrence, G., & Rottman, G. (2005). The Total Irradiance Monitor (TIM): Science Results. Solar Physics, 230, 129.Find this resource:
Kopp, R. A., & Pneuman, G. W. (1976). Magnetic reconnection in the corona and the loop prominence phenomenon. Solar Physics, 50, 85.Find this resource:
Kretzschmar, M., de Wit, T. D., Schmutz, W., Mekaoui, S., Hochedez, J.-F., & Dewitte, S. (2010). The effect of flares on total solar irradiance. Nature Physics, 6, 690.Find this resource:
Krucker, S., Battaglia, M., Cargill, P. J., Fletcher, L., Hudson, H. S., MacKinnon, A. L., Masuda, S., Sui, L., Tomczak, M., Veronig, A. L., Vlahos, L., & White, S. M. (2008). Hard X-ray emission from the solar corona. Astronomy and Astrophysics Review, 16, 155.Find this resource:
Kulsrud, R. M. (2001). Magnetic reconnection: Sweet-Parker vs. Petscheck. Earth, Planets and Space, 53, 417.Find this resource:
Kusano, K., Maeshiro, T., Yokoyama, T., & Sakurai, T. (2004). The Trigger Mechanism of Solar Flares in a Coronal Arcade with Reversed Magnetic Shear. The Astrophysical Journal, 610, 537.Find this resource:
Litvinenko, Y. E. (1996). Particle Acceleration in Reconnecting Current Sheets with a Nonzero Magnetic Field. The Astrophysical Journal, 462, 997.Find this resource:
Liu, W.-J., Qiu, J., Longcope, D. W., & Caspi, A. (2013). Determining Heating Rates in Reconnection Formed Flare Loops of the M8.0 Flare on 2005 May 13. The Astrophysical Journal, 770, 111.Find this resource:
Longcope, D. W. (2014). A Simple Model of Chromospheric Evaporation and Condensation Driven Conductively in a Solar Flare. The Astrophysical Journal, 795, 10.Find this resource:
Longcope, D. W., Des Jardins, A. C., Carranza-Fulmer, T., & Qiu, J. (2010). A Quantitative Model of Energy Release and Heating by Time-dependent, Localized Reconnection in a Flare with a Thermal Looptop X-ray Source. Solar Physics, 267, 107.Find this resource:
Longcope, D. W., & Forbes, T. G. (2014). Breakout and tether-cutting eruption models are both catastrophic (sometimes). Solar Physics, 6, 2091.Find this resource:
Longcope, D. W., Qiu, J., & Brewer, J. (2016). A reconnection-driven model of the hard X-ray loop-top source from flare 2004-Feb-26. The Astrophysical Journal, 833, 211.Find this resource:
Loureiro, N. F., Schekochihin, A. A., & Cowley, S. C. (2007). Instability of current sheets and formation of plasmoid chains. Physics of Plasmas, 14, 100703.Find this resource:
Lu, E. T., & Hamilton, R. J. (1991). Avalanches and the distribution of solar flares. The Astrophysical Journal, 380, L89.Find this resource:
MacNeice, P. (1986). A numerical hydrodynamic model of a heated coronal loop. Solar Physics, 103, 47.Find this resource:
MacNeice, P., Burgess, A., McWhirter, R. W. P., & Spicer, D. S. (1984). A numerical model of a solar flare based on electron beam heating of the chromospheres. Solar Physics, 90, 357.Find this resource:
Magara, T., Mineshige, S., Yokoyama, T., & Shibata, K. (1996). Numerical Simulation of Magnetic Reconnection in Eruptive Flares. The Astrophysical Journal, 466, 1054.Find this resource:
Mann, G., Aurass, H., & Warmuth, A. (2006). Electron acceleration by the reconnection outflow shock during solar flares. Astronomy & Astrophysics, 454, 969.Find this resource:
Mariska, J. T., Doschek, G. A., Boris, J. P., Oran, E. S., & Young, T. R., Jr. (1982). Solar transition region response to variations in the heating rate. The Astrophysical Journal, 255, 783.Find this resource:
Martens, P. C. H. (1988). The generation of proton beams in two-ribbon flares. The Astrophysical Journal, 330, L131.Find this resource:
Masuda, S., Kosugi, T., Hara, H., Tsuneta, S., & Ogawara, Y. (1994). A loop-top hard X-ray source in a compact solar flare as evidence for magnetic reconnection. Nature, 371, 495.Find this resource:
McClymont, A. N., & Canfield, R. C. (1983). Flare loop radiative hydrodynamics. I - Basic methods. The Astrophysical Journal, 265, 483.Find this resource:
McKenzie, D. E. (2002). Signatures of Reconnection in Eruptive Flares. In P. C. H. Martens & D. Cauffman (Eds.), Multi-wavelength observations of coronal structure and dynamics—Yohkoh 10th anniversary meeting (COSPAR Colloquia Series, p. 155). Elsevier.Find this resource:
McKenzie, D. E., & Hudson, H. S. (1999). X-Ray Observations of Motions and Structure above a Solar Flare Arcade. The Astrophysical Journal, 519, L93.Find this resource:
Mikic, Z., Barnes, D. C., & Schnack, D. D. (1988). Dynamical evolution of a solar coronal magnetic field arcade. The Astrophysical Journal, 328, 830.Find this resource:
Mikic, Z., & Linker, J. A. (1994). Disruption of coronal magnetic field arcades. The Astrophysical Journal, 430, 898.Find this resource:
Miller, J. A., Larosa, T. N., & Moore, R. L. (1996). Stochastic Electron Acceleration by Cascading Fast Mode Waves in Impulsive Solar Flares. The Astrophysical Journal, 461, 445.Find this resource:
Milligan, R. O., & Dennis, B. R. (2009). Velocity Characteristics of Evaporated Plasma Using Hinode/EUV Imaging Spectrometer. The Astrophysical Journal, 699, 968.Find this resource:
Moore, R. L., & Sterling, A. C. (2006). Initiation of Coronal Mass Ejections. In E. Robbrecht & D. Berghmans (Eds.), Solar Eruptions and Energetic Particles (Vol. 165, p. 43). Washington DC: American Geophysical Union Geophysical Monograph Series.Find this resource:
Moore, R. L., Sterling, A. C., Hudson, H. S., & Lemen, J. R. (2001). Onset of the Magnetic Explosion in Solar Flares and Coronal Mass Ejections. The Astrophysical Journal, 552, 833.Find this resource:
Munro, R. H., Gosling, J. T., Hildner, E., MacQueen, R. M., Poland, A. I., & Ross, C. L. (1979). The association of coronal mass ejection transients with other forms of solar activity. Solar Physics, 61, 201.Find this resource:
Murphy, R. J. (2007). Solar Gamma-Ray Spectroscopy. Space Science Reviews, 130, 127.Find this resource:
Nagai, F. (1980). A model of hot loops associated with solar flares. I - Gas- dynamics in the loops. Solar Physics, 68, 351.Find this resource:
Neupert, W. M. (1968). Comparison of Solar X-Ray Line Emission with Microwave Emission during Flares. The Astrophysical Journal, 153, L59.Find this resource:
Pallavicini, R., Peres, G., Serio, S., Vaiana, G., Acton, L., Leibacher, J., & Rosner, R. (1983). Closed coronal structures. V - Gasdynamic models of flaring loops and comparison with SMM observations. The Astrophysical Journal, 270, 270.Find this resource:
Parker, E. N. (1958). Suprathermal Particle Generation in the Solar Corona. The Astrophysical Journal, 128, 677.Find this resource:
Petrosian, V., Yan, H., & Lazarian, A. (2006). Damping of Magnetohydrodynamic Turbulence in Solar Flares. The Astrophysical Journal, 644, 603.Find this resource:
Petschek, H. E. (1964). Magnetic field annihilation. In W. N. Hess (Ed.), AAS-NASA Symposium on the physics of solar flares (p. 425). Washington, DC: NASAFind this resource:
Poletto, G., & Kopp, R. A. (1986). Macroscopic electric fields during two-ribbon flares. In D. F. Neidig (Ed.), The lower atmospheres of solar flares (p. 453). National Solar Observatory.Find this resource:
Priest, E. R. (2019). Magnetohydrodynamics – Overview. In B. Foster (Ed.), Oxford Research Encyclopedia of Physics. Oxford, UK: Oxford University Press.Find this resource:
Priest, E. R., & Forbes, T. G. (1990). Magnetic field evolution during prominence eruptions and two-ribbon flares. Solar Physics, 126, 319.Find this resource:
Qiu, J. (2009). Observational Analysis of Magnetic Reconnection Sequence. The Astrophysical Journal, 692, 1110.Find this resource:
Qiu, J., Lee, J., Gary, D. E., & Wang, H. (2002). Motion of Flare Footpoint Emission and Inferred Electric Field in Reconnecting Current Sheets. The Astrophysical Journal, 565, 1335.Find this resource:
Qiu, J., Liu, W.-J., & Longcope, D. W. (2012). Heating of Flare Loops With Observationally Constrained Heating Functions. The Astrophysical Journal, 752, 124.Find this resource:
Qiu, J., & Longcope, D. W. (2016). Long Duration Flare Emission: Impulsive Heating or Gradual Heating? The Astrophysical Journal, 820, 14.Find this resource:
Qiu, J., Sturrock, Z., Longcope, D. W., Klimchuk, J. A., & Liu, W.-J. (2013). Ultraviolet and Extreme-ultraviolet Emissions at the Flare Footpoints Observed by Atmosphere Imaging Assembly. The Astrophysical Journal, 774, 14.Find this resource:
Reeves, K. K., & Warren, H. P. (2002). Modeling the Cooling of Postflare Loops. The Astrophysical Journal, 578, 590.Find this resource:
Rosenbluth, M. N., MacDonald, W. M., & Judd, D. L. (1957). Fokker-Planck Equation for an Inverse-Square Force. Physical Review, 107, 1.Find this resource:
Sakao, T. (1994). Characteristics of solar flare hard X-ray sources revealed with the hard X-ray telescope aboard the Yohkoh satellite. PhD thesis, University of Tokyo.Find this resource:
Savage, S. L., McKenzie, D. E., & Reeves, K. K. (2012), Re-interpretation of Supra-arcade Downflows in Solar Flares. The Astrophysical Journal, 747, L40.Find this resource:
Schrijver, C. J., et al. (2012). Estimating the frequency of extremely energetic solar events, based on solar, stellar, lunar, and terrestrial records. Journal of Geophysical Research, 117, A08103.Find this resource:
Shibata, K., et al. (2013). Can Superflares Occur on Our Sun? Publications of the Astronomical Society of Japan, 65, 49.Find this resource:
Shibayama, T., Kusano, K., Miyoshi, T., Nakabou, T., & Vekstein, G. (2015). Fast magnetic reconnection supported by sporadic small-scale Petschek type shocks. Physics of Plasmas, 22, 100706.Find this resource:
Somov, B. V., & Kosugi, T. (1997). Collisionless Reconnection and High-Energy Particle Acceleration in Solar Flares. The Astrophysical Journal, 485, 859.Find this resource:
Sturrock, P. A. (1968). A Model of Solar Flares in IAU Symp. 35: Structure and Development of Solar Active Regions, 471.Find this resource:
Sui, L., & Holman, G. D. (2003). Evidence for the Formation of a Large- Scale Current Sheet in a Solar Flare. The Astrophysical Journal, 596, L251.Find this resource:
Takasao, S., Matsumoto, T., Nakamura, N., & Shibata, K. (2015). Magnetohydrodynamic Shocks in and above Post-flare Loops: Two-dimensional Simulation and a Simplified Model. The Astrophysical Journal, 805, 135.Find this resource:
Tandberg-Hanssen, E., & Emslie, A. G. (1988). The physics of solar flares (Cambridge Astrophysics Series). Cambridge, UK: Cambridge University Press.Find this resource:
Tschernitz, J., Veronig, A. M., Thalmann, J. K., Hinterreiter, J., & Pötzi, W. (2018). Reconnection Fluxes in Eruptive and Confined Flares and Implications for Superflares on the Sun. The Astrophysical Journal, 853, 41.Find this resource:
Tsuneta, S., & Naito, T. (1998). Fermi Acceleration at the Fast Shock in a Solar Flare and the Impulsive Loop-Top Hard X-Ray Source. The Astrophysical Journal, 495, L67.Find this resource:
Ugai, M., & Tsuda, T. (1977). Magnetic field line reconnection by localized enhancement of resistivity. I. Evolution in a compressible MHD fluid. Journal of Plasma Physics, 17, 337.Find this resource:
Warren, H. P. (2006). Multithread Hydrodynamic Modeling of a Solar Flare. The Astrophysical Journal, 637, 522.Find this resource:
Warren, H. P., & Antiochos, S. K. (2004). Thermal and Nonthermal Emission in Solar Flares. The Astrophysical Journal, 611, L49.Find this resource:
Warren, H. P., Mariska, J. T., & Doschek, G. A. (2013). Observations of Thermal Flare Plasma with the EUV Variability Experiment. The Astrophysical Journal, 770, 116.Find this resource:
Wheatland, M. S. (2004). A Bayesian Approach to Solar Flare Prediction. The Astrophysical Journal, 609, 1134.Find this resource:
Yashiro, S., Gopalswamy, N., Akiyama, S., Michalek, G., & Howard, R. A. (2005). Visibility of coronal mass ejections as a function of flare location and intensity. Journal of Geophysical Research, 110, A12S05.Find this resource:
Yeates, A. R., & Mackay, D. H. (2009). Initiation of Coronal Mass Ejections in a Global Evolution Model. The Astrophysical Journal, 699, 1024.Find this resource:
Yokoyama, T., & Shibata, K. (1997). Magnetic Reconnection Coupled with Heat Conduction. The Astrophysical Journal, 474, L61.Find this resource:
Zhang, J., Dere, K. P., Howard, R. A., Kundu, M. R., & White, S. M. (2001), On the Temporal Relationship between Coronal Mass Ejections and Flares. The Astrophysical Journal, 559, 452.Find this resource:
## Notes:
(1.) The loop in Figure 5 is chosen to resemble one of those found near the south end of the arcade in Figure 2c. It has similar length, and the energy used results in a coronal density $n_e \simeq 2\times 10^{10}\ \mathrm{cm}^{-3}$ when $T \simeq 6$ MK.
(2.) Measurements tend to work with the particle flux distribution, $F(E) \sim E\,f(E)$, whose power law is traditionally written $F \sim E^{-\delta}$. The power in the latter is $\delta = \delta' + 1/2$.
https://www.khanacademy.org/science/physics/mechanical-waves-and-sound/simple-harmonic-motion-with-calculus/v/harmonic-motion-part-2-calculus
# Harmonic motion part 2 (calculus)
We test whether Acos(wt) can describe the motion of the mass on a spring by substituting into the differential equation F=-kx. Created by Sal Khan.
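As a supplement (my own sketch, not part of the original video page), this substitution can be checked symbolically. Assuming SymPy is available and taking ω = √(k/m):

```python
import sympy as sp

A, k, m, t = sp.symbols('A k m t', positive=True)
w = sp.sqrt(k / m)                 # angular frequency omega
x = A * sp.cos(w * t)              # trial solution x(t) = A cos(wt)

# Newton's second law for the spring, m x'' = -k x, means this must vanish:
print(sp.simplify(m * sp.diff(x, t, 2) + k * x))   # prints 0
```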
## Want to join the conversation?
• In some books it's given as x = A sin(wt) and in others as x = A cos(wt). Which one is right, or are both of them correct?
• The thing is, the only difference between the two is where you start. The function A sin wt is just the function A cos wt displaced by 90 degrees (graph it on a calculator, you'll see). So, both are right. It just depends on how you decide to graph it.
If you start the oscillation by compressing a spring some distance and then releasing it, then x = A cos wt, because at time t=0, x=A (A being whatever distance you compressed the spring). But if you start the oscillation by suddenly applying a force to the spring at rest and then letting it oscillate, at t=0, x must equal 0. So in that case, x = A sin wt (sin0 = 0)
• what is the purpose of omega in the equation?
x(t) = A cos(wt)
• The omega is a constant in the equation that stretches the cosine wave left and right (along the x axis), just as the A at the front of equation scales the cosine wave up and down. The bigger the omega, the more squashed the cosine wave showing the spring's position (and thus quicker the spring's movement).
• If my knowledge of Calculus is correct, derivatives lower the power of a function and integrals raise the power of a function. In this lesson, a (acceleration) is depicted as a second derivative, which is true, but the actual work appears to be that of an integral equation. In short, my question is this:
Why would the acceleration equation have a higher power than the distance equation?
• In addition, the power of a function is given by the exponent on the variable (in this case, x). The w, on the other hand, is a constant, so the power of w isn't related to the power of the function.
• I can understand the fact that to make the position-time equation dimensionally correct you introduce a term w (omega) multiplied by t in 'sin(wt)'. But I can't justify the appearance of a rotational quantity 'w' in this equation, which is actually the representation of the linear motion of a spring-mass system performing SHM.
Thanking You.
• The omega (curvy w) comes from the explanation of SHM using a reference circle. Omega is the angular velocity of the displacement phasor as it travels in a circular motion. This velocity is constant, unlike the linear motion of SHM. wt (omega x time) therefore equals the angular displacement, represented by the letter theta. For a little clearer understanding, the diagrams on this page may help. http://www.tutorvista.com/content/physics/physics-iii/oscillations/circle-reference.php
• Why is w = the square root of k/m, and not plus or minus the square root of k/m? Isn't angular velocity a vector?
• Angular velocity is a vector, but w is its magnitude.
• Why x(t) = A*cos(omega*t)? And what is 'T'?
• x(t) = A*cos(omega*t) represents the function of the SHM, and T is the time period of the SHM, that is, the time taken by the system to complete one cycle: from A to O, to -A, back to O, and again to A.
• If I keep a block attached to a horizontal spring on the floor of an elevator going up with an acceleration ‘a’, and then displace the block slightly by stretching the spring-block system horizontally, will the time period of oscillation change as compared to the (T=2π√m/k)?
I don't think it should, because the time period depends on the horizontal force, and the elevator changes it in the vertical direction, but my friend asserts that it should change, though she can't explain why. Which of us, if either, is correct?
• Here is a simulation of a mass on a vertical spring
Go to the simulation and try it out. You can measure the period of oscillation by clicking on the box for "stopwatch". To measure the period, measure the time for 10 or 20 bounces and divide by 10 or 20.
Now look over on the right side and you will see that you can change gravity from that of Earth to that of Jupiter. Go ahead and do that and see if gravity makes a difference to the period.
Now how does this apply to acceleration? When you are in an elevator accelerating up at a rate of a, that is exactly the same as if gravity increased from g to g+a. You can tell this by figuring out what your weight would be in the elevator - if you work it out you will see it will be m(g+a).
So if the period is the same on jupiter, that means it will also be the same in an accelerating elevator, and if it is different on jupiter, that means it will also be different in the elevator.
Once you know the answer by experiment, see if you can figure out why the answer is what it is.
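To complement the simulation experiment suggested above, here is a small numerical sketch (my own illustration, plain Python): integrate a vertical spring-mass system under two different gravities and measure the period from successive turning points. Both runs come out near 2π/√(k/m) ≈ π for k = 4, m = 1, so gravity (and hence a uniformly accelerating elevator) leaves the period unchanged.

```python
import math

def period(g, m=1.0, k=4.0, dt=1e-4, cycles=5):
    y, v, t, bottoms = 1.0, 0.0, 0.0, []
    while len(bottoms) <= cycles:
        a = -(k / m) * y - g            # spring restoring force plus gravity
        v_new = v + a * dt              # semi-implicit Euler step
        y += v_new * dt
        if v < 0 <= v_new:              # velocity turns upward: bottom of a cycle
            bottoms.append(t)
        v, t = v_new, t + dt
    return (bottoms[-1] - bottoms[0]) / cycles

print(period(9.8), period(24.5), 2 * math.pi / math.sqrt(4.0))  # all ~3.14
```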
• How would we solve a problem that is an oscillation but doesn't start where cos(wt) and sin(wt) start?
• That's a good question. The function would then be Asin(wt+k) where k is some constant.
If the graph is just shifted (horizontally) slightly, you add or subtract a constant that is equal to the amount by which it has been shifted on the x-axis.
When to add and when to subtract?
- Add when it is shifted to the left
- Subtract when it is shifted to the right
https://arxiv.org/abs/1812.02086 | math.MG
# Title: Infinitesimal Hilbertianity of locally CAT($\kappa$)-spaces
Abstract: We show that, given a metric space $(Y,d)$ of curvature bounded from above in the sense of Alexandrov, and a positive Radon measure $\mu$ on $Y$ giving finite mass to bounded sets, the resulting metric measure space $(Y,d,\mu)$ is infinitesimally Hilbertian, i.e. the Sobolev space $W^{1,2}(Y,d,\mu)$ is a Hilbert space.
The result is obtained by constructing an isometric embedding of the 'abstract and analytical' space of derivations into the 'concrete and geometrical' bundle whose fibre at $x\in Y$ is the tangent cone at $x$ of $Y$. The conclusion then follows from the fact that for every $x\in Y$ such a cone is a CAT(0)-space and, as such, has a Hilbert-like structure.
Comments: 44 pages
Subjects: Metric Geometry (math.MG)
MSC classes: 51Fxx, 49J52, 46E35
Cite as: arXiv:1812.02086 [math.MG] (or arXiv:1812.02086v1 [math.MG] for this version)
## Submission history
From: Elefterios Soultanis
[v1] Wed, 5 Dec 2018 16:21:26 UTC (54 KB)
http://www.maa.org/press/maa-reviews/the-riemann-hypothesis-and-the-roots-of-the-riemann-zeta-function | # The Riemann Hypothesis and the Roots of the Riemann Zeta Function
###### Samuel W. Gilbert
Publisher: BookSurge Publishing
Publication Date: 2009
Number of Pages: 140
Format: Paperback
Price: 49.95
ISBN: 9781439216385
Category: General
[Reviewed by Underwood Dudley, on 08/23/2009]
The Clay Mathematics Institute has offered a prize of $1,000,000 for a resolution of the Riemann Hypothesis: that all of the zeros of the Riemann zeta function that lie in the critical strip 0 < Re(s) < 1 are on the line Re(s) = 1/2. Perhaps because of the prize, many proofs (and at least one disproof) have recently been put forward by a variety of authors. Two proofs made it as far as the web site arXiv.org, but were withdrawn by their authors after flaws were pointed out.
This book purports to give a proof. Its author, a member of the American Mathematical Society, holds the Ph. D. degree in chemical engineering (1987, University of Illinois). He has worked for Eastman Kodak and Exxon Research and currently has a “wealth advisory practice” in Virginia. His book, 140 pages long, was published by BookSurge Publishing, an organization that enables on-demand publishing.
The author does not say at whom his book is aimed, but the level of mathematics is sufficiently high that I doubt that it could be read by anyone other than professional mathematicians.
As might be expected, the book contains a good deal of what could be called padding. For example, there are graphs of the first ten roots of the Riemann zeta function, six pages devoted to a chart of a sequence converging to its first imaginary root, a constant given to 1026 significant figures, and an excerpt from an encyclopedia about the Gordian knot.
For this reason, and others, I can't say if his proof is correct. He says that the series for the zeta function for s > 1, the sum of the reciprocals of the sth powers of the positive integers, diverges everywhere in the critical strip, but that it "does, in fact, converge at the roots in the critical strip—and only at the roots in the critical strip—in a special geometrical sense." What this means was not clear to me and I did not exert myself sufficiently to make it clear.
A good use for the book, I think, would be for an instructor in a course in analytic number theory to give it to a student with the assignment of seeing what, if anything, is there. If the proof is valid, I owe the author an apology.
Woody Dudley knows enough number theory not to attack the Riemann Hypothesis, and not enough chemical engineering to attack any large open problems in that discipline.
https://ai.stackexchange.com/questions/13526/is-prelu-superfluous-with-respect-to-relu?noredirect=1 | Is PReLU superfluous with respect to ReLU?
Why do people use the $$PReLU$$ activation?
$$PReLU[x] = ReLU[x] + ReLU[p*x]$$
with the parameter $$p$$ typically being a small negative number.
If a fully connected layer is followed by a ReLU layer with at least two elements, then the combined layers together are capable of emulating exactly the $$PReLU$$, so why is it necessary?
Am I missing something?
Let's assume we have 3 Dense layers, where the activations are $$x^0 \rightarrow x^1 \rightarrow x^2$$, such that $$x^2 = \psi PReLU(x^1) + \gamma$$ and $$x^1 = PReLU(Ax^0 + b)$$
Now let's see what it would take to turn the intermediary PReLU into a pure ReLU.
\begin{align*} PReLU(x^1) &= ReLU(x^1) + ReLU(p \odot x^1)\\ &= ReLU(Ax^0+b) + ReLU(p\odot(Ax^0+b))\\ &= ReLU(Ax^0+b) + ReLU((eye(p)A + eye(p)b)x^0)\\ &= ReLU(Ax^0+b) + ReLU(Qx^0+c) \quad s.t. \quad Q = eye(p)A, \ \ c = eye(p)b\\ &= [I, I]^T[ReLU(Ax^0+b), ReLU(Qx^0+c)]\\ \implies x^2 &= [\psi, \psi][ReLU(Ax^0+b), ReLU(Qx^0+c)]\\ &= V*ReLU(Sx^0 + d) \quad V=[\psi, \psi], \ \ S=[A, Q] \ \ d=[b, c] \end{align*}
So, as you said, it is possible to break the form of the intermediary $$PReLU$$ into a pure $$ReLU$$ while keeping it as a linear model, but if you take a second look at the parameters of the model, the size increases drastically. The hidden units of S doubled, meaning that to keep $$x^2$$ the same size, $$V$$ also doubles in size. So this means that if you don't want to use the $$PReLU$$, you are learning double the parameters to achieve the same capability (granted, it allows you to learn a wider span of functions as well), and if you enforce the constraints on $$V,S$$ set by the $$PReLU$$, the number of parameters is the same but you are still using more memory and more operations!
I hope this example convinces you of the difference
• ok thanks, this sounds like a matter of efficiency and probably better learning/convergence capabilities. what does eye(p) stand for? Jul 23 '19 at 15:40
• eye(p) takes the vector p and makes a diagonal matrix, where the elements of p form the diagonal (the same functionality as numpy's np.diag). Jul 23 '19 at 15:48
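To make the equivalence concrete, here is a small NumPy check (my own sketch, following the derivation and the eye(p)-as-np.diag convention above) that a Dense→PReLU→Dense path matches the doubled-width pure-ReLU network:

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(4, 3)), rng.normal(size=4)
p = -0.1 * rng.random(4)                 # small negative PReLU slopes, one per unit
psi, x0 = rng.normal(size=(2, 4)), rng.normal(size=3)

relu = lambda z: np.maximum(z, 0.0)
prelu = lambda z: relu(z) + relu(p * z)  # the question's formulation

out_prelu = psi @ prelu(A @ x0 + b)      # Dense -> PReLU -> Dense

Q, c = np.diag(p) @ A, p * b             # eye(p) A and eye(p) b from the derivation
S, d = np.vstack([A, Q]), np.concatenate([b, c])
V = np.hstack([psi, psi])                # [psi, psi]
out_relu = V @ relu(S @ x0 + d)          # doubled-width pure-ReLU network

print(np.allclose(out_prelu, out_relu))  # True
```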
Here are 3 reasons I can think of:
• Space - As @mshlis pointed out, size. To approximate a PReLU you require more than 1 ReLU. Even without formal proof one can easily see that a PReLU is 2 adjustable (parameterizable) linear functions within 2 different ranges joined together, while a ReLU is just a single adjustable (parameterizable) linear function within half that range, so you require a minimum of 2 ReLUs to approximate a PReLU. Thus space complexity increases and you require more space to store parameters.
• Time - This increase in the number of ReLUs directly affects training time; there is a question on the time complexity of training a neural network that you can check out, and you can work out the necessary mathematical details for the time increase of a 2x neural network size.
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-7-exponential-functions-7-3-logarithms-and-their-derivatives-exercises-page-343/66 | ## Calculus (3rd Edition)
$$y'= 24x+47.$$
Taking the $\ln$ of both sides of the equation, we get $$\ln y= \ln\big[(3x+5)(4x+9)\big].$$ Then, using the properties of $\ln$, we can write $$\ln y= \ln (3x+5)+\ln(4x+9).$$ Now, taking the derivative of the above equation, we have $$\frac{y'}{y}= \frac{3}{3x+5}+ \frac{4}{4x+9}.$$ Hence $y'$ is given by $$y'=y\left( \frac{3}{3x+5}+ \frac{4}{4x+9}\right)=(3x+5)(4x+9)\left( \frac{3}{3x+5}+ \frac{4}{4x+9}\right)\\ =3(4x+9)+4(3x+5)=12x+27+12x+20=24x+47.$$
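As a quick check of this result (my own addition, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = (3*x + 5) * (4*x + 9)
print(sp.expand(sp.diff(y, x)))   # 24*x + 47
```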
https://manufacturingscience.asmedigitalcollection.asme.org/DSCC/proceedings-abstract/DSCC2020/84270/V001T21A007/1096550 | Abstract
A design method is proposed for a nonlinear disturbance observer based on the notion of passivity. As an initial application, we consider here systems whose structure comprises a set of integrator cascades, though the proposed approach can be extended to a larger class of systems. We describe an explicit procedure to choose the output of the system and to design the nonlinear feedback law used by the observer, provided the system satisfies a sufficient condition for output feedback semi-passification. The output injection term in the observer scales the measurement residual with a nonlinear gain that depends on the output and a set of static design parameters. We provide guidance for parameter tuning such that the disturbance tracking performance and the transient response of the estimation error can be intuitively adjusted. Example applications to two nonlinear mechanical systems illustrate that the proposed nonlinear observer design method is quite effective, producing an observer that can estimate a wide range of disturbances without any need to know or assume the disturbance dynamics.
https://math.stackexchange.com/questions/2417195/help-with-strong-induction | # Help with Strong Induction
I am stuck on the inductive step of a proof by strong induction, in which I am proving the proposition $S(x)$: $$\sum_{i=1}^{2^x} \frac{1}{i} \geq 1 + \frac{x}{2}$$ for $x \geq 0$. I have already finished verifying the base case, $S(0)$, and writing my inductive hypothesis: $S(k)$ holds for all $0 \leq k \leq x$. What I need to prove is $S(x+1)$: $$\sum_{i=1}^{2^{x+1}} \frac{1}{i} \geq 1 + \frac{x+1}{2}$$ but I cannot figure out how to go from point A to point B on this.
What I have so far is the following (use of inductive hypothesis denoted by I.H.): \begin{eqnarray*} \sum_{i=1}^{2^{x+1}} \frac{1}{i} & = & \sum_{i=1}^{2^x} \frac{1}{i} + \sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \\ & \stackrel{I.H.}{\geq} & 1 + \frac{x}{2} + \sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \\ \end{eqnarray*} But in order to complete the proof with this approach, I need to show that $$\sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \geq \frac{1}{2}$$ and I have absolutely no idea how to do that. When I consulted with my professor, he suggested that I should leverage the inequality more than I am, but I frankly can't see how to do that either. I have been staring at this proof for over 6 hours, will someone please give me a hint? Or, more preferably, could you explain a simpler/easier way to go about this proof? Thank you.
• Notice that in your last sum, $i$ is always greater than $2^x$. That can give you an upper bound on all of the $1/i$ terms – JonathanZ Sep 5 '17 at 4:11
Note that $\frac{1}{i}\geq \frac{1}{2^{x+1}}, \forall i\in \{2^x+1,2^x+2,\dots,2^{x+1}\}$ $$\Rightarrow \sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \geq 2^x\cdot \frac{1}{2^{x+1}}=\frac{1}{2}$$
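As a sanity check of the claim (my own addition, using exact rational arithmetic):

```python
from fractions import Fraction

def harmonic(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

for x in range(8):
    assert harmonic(2**x) >= 1 + Fraction(x, 2)   # S(x) holds for x = 0..7
print("checked")
```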
https://crypto.stackexchange.com/questions/32506/common-modulus-attack-not-reproducible/32514 | # Common Modulus Attack not reproducible
I want to calculate a simple example of the RSA common modulus attack. However, the result is not correct and I do not find my mistake.
$p=29$, $q=37$, $n=p\cdot q = 1073$, $\phi(n) = 1008$, $e_1 = 5$, $e_2 = 11$
Let $m = 999$.
$c_1 = m^{e_1} \pmod n = 296$, $c_2 = m^{e_2} \pmod n = 555$
The extended Euclidean algorithm gives me $y_1$ and $y_2$: $y_1 \cdot e_1 + y_2 \cdot e_2 = 1$
$y_1 = -2, y_2 = 1$
$m = c_1^{y_1} * c_2^{y_2} = 296^{-2} \cdot 555^1 \pmod {1073}$
How do I calculate $296^{-2}$? I tried to get the inverse of $296 \pmod {1073}$ and then square it, but $296$ has no inverse. What am I doing wrong?
• You're not noticing that m is not coprime to n. (Encryption followed by decryption still gives the original input, but such an m [that's also not a multiple of n] gives a non-trivial factorization of n, and the ciphertext will have the same property.) – user991 Feb 6 '16 at 12:23
• The original RSA encryption scheme does not require m to be coprime to n. Why is this necessary when conducting the common modulus attack? – null Feb 6 '16 at 12:53
• I haven't checked this, but think it's not actually necessary. One could instead try using meadow inverses. – user991 Feb 6 '16 at 12:57
• But as we see that the attack above does not work, because "m is not coprime to n". So it seems to be a prerequisite, doesn't it? – null Feb 6 '16 at 13:05
• @fgrieu : Yes, and that can be done with gcd and CRT. – user991 Feb 6 '16 at 15:46
In real-world RSA, moduli are so large that the probability of finding a $c_1$ which is not coprime with $n$ is approximately zero.
Also, if you found such a number, then $p=\gcd(c_1,n)\neq1$, so $p$ is a factor of $n$, and in this case the attack is not necessary because $n$ is factored.
$\gcd(296,1073)=37\neq 1$, so $p=37$, $q=\frac{1073}{37}=29$ and $\phi(n)=1008$.
Now you can easily compute private key $d_2$:
$e_2\cdot d_2=1 \pmod{ \phi(n)}$ so $d_2=275$.
$$m={c_2}^{d2}\pmod n={555}^{275}\pmod{1073}=999$$
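This computation can be reproduced in a few lines (my own sketch; pow(e, -1, m) for modular inverses needs Python 3.8+):

```python
from math import gcd

n, e2, c1, c2 = 1073, 11, 296, 555

p = gcd(c1, n)             # 37: a factor of n shared with c1
q = n // p                 # 29
phi = (p - 1) * (q - 1)    # 1008
d2 = pow(e2, -1, phi)      # 275: the private exponent for e2
print(pow(c2, d2, n))      # 999: the recovered message
```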
• How does your answer help me in solving the upper exercise? – null Feb 6 '16 at 17:30
• @null, The goal of attack is breaking RSA. With factoring $n$ you can easily find private key and decrypt ciphertext. If your question is compute $d$ which $296\cdot d\pmod{1073}=1$ you cant find such $d$. – Meysam Ghahramani Feb 6 '16 at 19:27
• I understand. Well if I cannot invert d the upper attack is not possible. Why? – null Feb 6 '16 at 22:47
The problem is to reliably and efficiently find the message $m$ (with $0\le m<n$) given the RSA modulus $n$, distinct RSA public exponents $e_1$ and $e_2$ coprime to each other and to the unknown $\phi(n)$, and ciphertexts $c_1=m^{e_1}\bmod n$ and $c_2=m^{e_2}\bmod n$. WLoG, and per the corrected question, $y_1$ is negative when the extended Euclidean algorithm is applied to $e_1$ and $e_2$ in order to find $y_1$ and $y_2$ with $y_1\cdot e_1+y_2\cdot e_2=1$.
For a random choice of message $m$, the odds that $\gcd(m,n)\neq1$ are low, precisely $1-\phi(n)/n$, that is $1/p+1/q-1/n$ if $n=p\cdot q$ with $p$ and $q$ distinct primes. If $n$ is square-free (as assumed in most definitions of RSA), $\gcd(m,n)=\gcd(m^{e_1},n)$, thus the odds that $\gcd(c_1,n)\neq1$ also are $1-\phi(n)/n$. Hence, the odds that $c_1$ has no inverse for a random choice of $m$ are low (less than $2^{-510}$ for 1024-bit RSA with two 512-bit prime factors). Hence, for overwhelmingly most $m$, $c_1^{y_1}\cdot c_2^{y_2}\bmod n$ is well-defined, and is the desired $m$. But that does not quite always work.
We can make an efficient algorithm that always works, including for the definition of RSA in PKCS#1v2 where $n$ can have multiple prime factors, even though we might be unable to efficiently find any prime factor of $n$. The method goes:
• Check if $c_1=0$, in which case $m=0$.
• Compute $r=\gcd(c_1,n)$. That's a divisor of $n$, often $1$ (however it is possible that $r>1$, in which case $r$ divides $n$; and also that $r$ or/and $n/r$ are composite, thus factoring $n$ might remain uneasy).
• Compute $s=n/r$; with the assumption that $n$ is square-free, $\gcd(r,s)=1$ holds.
• Compute $i_1=((((c_1\bmod s)\cdot r)\bmod s)^{-1}\bmod s)\cdot r$, the so-called meadow inverse of $c_1$ modulo $n$, such that $i_1\cdot c_1\bmod r=0$ and $i_1\cdot c_1\bmod s=1$, with $r$ and $s$ defined as above.
• Compute $i_1^{-y_1}\cdot c_2^{y_2}\bmod n$, which is the desired $m$ (as pointed by Ricky Demer in a comment to the question).
Proof sketch: we prove $i_1^{-y_1}\cdot c_2^{y_2}-m\equiv0\pmod r$ and $i_1^{-y_1}\cdot c_2^{y_2}-m\equiv0\pmod s$.
Example: $e_1=5$, $e_2=11$, $n=837876170870196973028071$, $c_1=621961884462245272210948$, $c_2=653042419105836777869045$. We compute
• $r=932340427217$; that's a factor of $n$ (this example is crafted to make it composite)
• $s=898680510263$; that's a factor of $n$ (also composite in this example)
• $i_1=653042419105836777869045$
• $m=331563319321409011786785$.
Note: we do not need to factor $n$ (or $r$ or $s$), as required to compute a valid private exponent $d$, as would be required by the method outlined in that other answer; and we always find $m$ with polynomial effort w.r.t. the bit size of parameters, contrary to the method in that other answer.
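A compact sketch of this always-works method (my own implementation of the steps above, checked against the question's toy numbers; again, pow(x, -1, m) needs Python 3.8+):

```python
from math import gcd

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def common_modulus(n, e1, e2, c1, c2):
    g, y1, y2 = egcd(e1, e2)                # y1*e1 + y2*e2 == 1
    assert g == 1
    if y1 > 0:                              # arrange y1 < 0 <= y2 (WLoG, as above)
        e1, e2, c1, c2, y1, y2 = e2, e1, c2, c1, y2, y1
    if c1 == 0:
        return 0
    r = gcd(c1, n)                          # divisor of n, often 1
    s = n // r
    i1 = pow((c1 % s) * r % s, -1, s) * r   # meadow inverse of c1 modulo n
    return pow(i1, -y1, n) * pow(c2, y2, n) % n

print(common_modulus(1073, 5, 11, 296, 555))   # 999
```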
You can actually "invert" a value m with respect to n even if
$$gcd(m,n) \neq 1$$
You are looking for a value $m^{-1}$ that satisfies
$$m*m^{-1} = 1 \pmod{n}$$
This is a linear congruence. You need a bit more time to solve than just simply inverting a number. But by trying all the possible values as specified in the link above, you will finally discover the "inverse" you are looking for.
• Um, no you can't. By definition, if $\gcd(m,n) = k$, then any multiple of $m$ modulo $n$ is also a multiple of $k$. The closest you can get is finding a pseudoinverse $m^*$ such that $m \times m^* = k \pmod n$ (and you can do that with the same extended Euclidean algorithm used to find normal modular inverses). – Ilmari Karonen Feb 7 '16 at 20:55
• Yes, and if you try every possible pseudoinverse as you call it, one of them will be the message and it will allow you to perform the common modulus attack. – mandragore Feb 7 '16 at 21:52
http://www.conservapedia.com/Determinant | # Determinant
The determinant of a matrix (written |A|) is a single number that depends on the elements of the matrix A. Determinants exist only for square matrices (i.e., ones where the number of rows equals the number of columns).
Determinants are a basic building block of linear algebra, and are useful for finding areas and volumes of geometric figures, in Cramer's rule, and in many other areas. If the characteristic polynomial splits into linear factors, then the determinant is equal to the product of the eigenvalues of the matrix, counted with their algebraic multiplicities.
## Motivation
A matrix can be used to transform a geometric figure. For example, in the plane, if we have a triangle defined by its vertices (3,3), (5,1), and (1,4), and we wish to transform this triangle into the triangle of vertices (3,-3), (5,-9), and (1,2), we can simply do a matrix multiplication of each vertex by the matrix $\begin{pmatrix} 1 & 0 \\ -2 & 1 \\ \end{pmatrix}$.
In this transformation, no matter the shape, position, or area of the initial geometric figure, the final geometric figure will have the same area and orientation.
It can be seen that matrix transformations of geometric figures always give resulting figures whose area is proportional to that of the initial figure, and whose orientation is either always the same, or always the reverse.
This ratio is called the determinant of the matrix, and it is positive when the orientation is kept, negative when the orientation is reversed, and zero when the final figure always has zero area.
This two-dimensional concept is easily generalized to any number of dimensions. In 3D, replace area with volume, and in higher dimensions the analogous concept is called hypervolume.
The determinant of a matrix is the oriented ratio of the hypervolumes of the transformed figure to the source figure.
## How to calculate
We need to introduce two notions: the minor and the cofactor of a matrix element. Also, the determinant of a 1x1 matrix equals the sole element of that matrix.
Minor
The minor $m_{ij}$ of the element $a_{ij}$ of an NxN matrix M is the determinant of the (N-1)x(N-1) matrix formed by removing the ith row and jth column from M.
Cofactor
The cofactor $C_{ij}$ equals the minor $m_{ij}$ multiplied by $(-1)^{i+j}$
The determinant is then defined to be the sum of the products of the elements of any one row or column with their corresponding cofactors.
### 2x2 case
For the 2x2 matrix
$\begin{pmatrix} a & b \\ c & d\end{pmatrix}$
the determinant is simply ad-bc (for example, using the above rule on the first row).
### 3x3 case
For a general 3x3 matrix
$\begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \\ \end{pmatrix}$
we can expand along the first row to find
$|A|=A_{11}\begin{vmatrix}A_{22} & A_{23} \\ A_{32} & A_{33} \end{vmatrix}- A_{12}\begin{vmatrix}A_{21} & A_{23} \\ A_{31} & A_{33} \end{vmatrix}+ A_{13}\begin{vmatrix}A_{21} & A_{22} \\ A_{31} & A_{32} \end{vmatrix}$
where each of the 2x2 determinants is given above.
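As an illustration (my own addition, not part of the original article), the cofactor expansion translates directly into a short recursive routine:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]                     # 1x1 case: the sole element
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]   # remove row 1 and column j
        total += (-1) ** j * M[0][j] * det(minor)          # cofactor sign (-1)^(1+j)
    return total

print(det([[1, 0], [-2, 1]]))              # 1, the area-preserving example above
```

Cofactor expansion costs O(n!) operations, which is one reason the reduction to triangular form described below is preferred for anything beyond small matrices.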
## Properties of determinants
The following are some useful properties of determinants. Some are useful computational aids for simplifying the algebra needed to calculate a determinant. The first property is that $|M| = |M^T|$, where the superscript "T" denotes transposition. Thus, although the following rules refer to the rows of a matrix, they apply equally well to the columns.
• The determinant is unchanged by adding a multiple of one row to any other row.
• If two rows are interchanged the sign of the determinant will change
• If a common factor α is factored out from each element of a single row, the determinant is multiplied by that same factor.
• If all the elements of a single row are zero (or can be made to be zero using the above rules) then the determinant is zero.
• $|AB| = |A|\,|B|$
In practice, one of the most efficient ways of finding the determinant of a large matrix is to add multiples of rows and/or columns until the matrix is in triangular form such that all the elements above or below the diagonal are zero, for example
$\begin{pmatrix} A_{11} & A_{12} & A_{13} \\ 0 & A_{22} & A_{23} \\ 0 & 0 & A_{33} \\ \end{pmatrix}$.
The determinant of such a matrix is simply the product of the diagonal elements (use the cofactor expansion discussed above and expand down the first column).
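A minimal sketch (again my own addition) of this elimination approach, using exact fractions and applying the properties above (adding row multiples leaves the determinant unchanged; each row swap flips the sign):

```python
from fractions import Fraction

def det_by_elimination(M):
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        pivot = next((r for r in range(k, n) if A[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)            # no usable pivot: determinant is zero
        if pivot != k:
            A[k], A[pivot] = A[pivot], A[k]
            sign = -sign                  # interchanging rows changes the sign
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            A[r] = [a - f * b for a, b in zip(A[r], A[k])]
    result = Fraction(sign)
    for k in range(n):
        result *= A[k][k]                 # product of the diagonal elements
    return result

print(det_by_elimination([[2, 1, 3], [0, 1, 4], [5, 2, 0]]))   # -11
```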
https://www.physicsforums.com/threads/the-difference-between-system-equili-and-system-and-steady-state.1623/ | # The difference between a system at equilibrium and a system at steady state
• #1
hi
Can anyone explain the difference between a system at equilibrium and a system at steady-state water flow?
I know that equilibrium occurs at equal rates, so no net change is produced, but I don't understand what a steady-state system is. Please explain it to me. Thanks.
Also, what is the difference between saturated and unsaturated hydraulic conductivity? I have a hard time understanding these things. If anyone knows, can you please explain them to me? Thanks.
• #2
Tom Mattson (Staff Emeritus, Gold Member)
I'm moving this to Physics, where perhaps it will get some discussion.
• #3
Alexander
Equilibrium: dU/dx = 0 (usually this happens at extrema of potential energy).
• #4
Tom Mattson (Staff Emeritus, Gold Member)
Originally posted by hi
I know that equilibrium occurs at equal rates, so no net change is produced.
OK, at first I thought you meant the "zero force" condition, but now I am thinking that you are referring to the continuity equation. That is because when you say "equal rates", it makes me think of "equal flow rates into and out of a volume".
So, that statement of equilibrium would be:
∇⋅j + ∂ρ/∂t = 0
But I don't understand what a steady-state system is. Please explain it to me. Thanks.
I dug up the old Fluid Mechanics book (it's been about 10 years!) and looked up the mathematical definition of steady state. It is...
∂A/∂t=0
for any fluid property A. That would include the density ρ, which reduces the continuity equation to:
∇⋅j = 0
also the difference between saturated and unsaturated hydraulic conductivity? I have hard time to understand these stuffs.. If anyone know these stuffs, can u please explain it to me. Thanks.
This I don't know. Our local "fluids" guy is Enigma; try sending him a PM.
edit: fixed ∂ signs.
• #5
Alexander
I would call steady state a state at which power (the rate of change of energy, dU/dt) is constant.
http://www.physicsforums.com/showthread.php?t=541289 | ## Simple PDE....
I'm trying to solve the PDE:
$\frac{\partial^2 f(x,t)}{\partial x^2}=\frac{\partial f(x,t)}{\partial t}$ with $x \in [-1,1]$ and boundary conditions f(1,t)=f(-1,t)=0.
Thought that $e^{i(kx-\omega t)}$ would work, but that obviously does not fit with the boundary conditions. Has anyone an idea?
Quote by Aidyan I'm trying to solve the PDE: $\frac{\partial^2 f(x,t)}{\partial x^2}=\frac{\partial f(x,t)}{\partial t}$ with $x \in [-1,1]$ and boundary conditions f(1,t)=f(-1,t)=0. Thought that $e^{i(kx-\omega t)}$ would work, but that obviously does not fit with the boundary conditions. Has anyone an idea?
Your equation is the 1D heat equation, the solutions of which are very well known and understood. A google search should yield what you need.
P.S. You will also need some kind of initial condition.
Quote by Hootenanny Your equation is the 1D heat equation, the solutions of which are very well known and understood. A google search should yield what you need. P.S. You will also need some kind of initial condition.
Hmm... so it isn't just a simple solution after all. It seems I'm lacking the basics... I thought this was sufficient data to solve it uniquely. What is the difference between boundary and initial conditions?
Quote by Aidyan I thought this is sufficeint data to solve it uniquely,
Afraid not, without knowing the temperature distribution at a specific time you aren't going to obtain a (non-trivial) unique solution.
Quote by Aidyan what is the difference between boundary and initial conditions?
The former specifies the temperature on the spatial boundaries of the domain (in this case x=-1 and x=1). The latter specifies the temperature distribution at a specific point in time (usually t=0, hence the term initial condition). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8521706461906433, "perplexity": 773.4843891461609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698238192/warc/CC-MAIN-20130516095718-00081-ip-10-60-113-184.ec2.internal.warc.gz"} |
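To round the thread off (an addition, not part of the original posts): once an initial condition $f(x,0)=g(x)$ is supplied, separation of variables on $[-1,1]$ gives the standard solution

$$f(x,t)=\sum_{n=1}^{\infty}b_n\,\sin\!\left(\frac{n\pi(x+1)}{2}\right)e^{-(n\pi/2)^2 t},\qquad b_n=\int_{-1}^{1}g(x)\,\sin\!\left(\frac{n\pi(x+1)}{2}\right)dx,$$

where each mode vanishes at $x=\pm 1$, so the boundary conditions are satisfied term by term.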
http://www.rdmag.com/news/2014/03/new-algorithm-improves-efficiency-small-wind-turbines?et_cid=3830296&et_rid=623702531&location=top | New algorithm improves the efficiency of small wind turbines
Tue, 03/18/2014 - 9:47am
In recent years, small-scale wind energy has been developing rapidly. According to estimates by the WWEA (World Wind Energy Association), the small wind turbine industry still lags well behind the large-scale wind energy industry, although forecasts are optimistic. The main reason is the low efficiency of small wind turbines. To address this problem, the UPV/EHU’s research group APERT (Applied Electronics Research Team) has developed an adaptive algorithm. The improvements it brings to the control of these turbines will help make them more efficient. The study has been published in the journal Renewable Energy.
Small wind turbines tend to be located in areas where wind conditions are more unfavourable. “The control systems of current wind turbines are not adaptive; in other words, the algorithms lack the capacity to adapt to new situations,” explained Iñigo Kortabarria, one of the researchers in the UPV/EHU’s APERT research group. That is why “the aim of the research was to develop a new algorithm capable of adapting to new conditions or to the changes that may take place in the wind turbine,” added Kortabarria. In this way, the researchers have managed to increase the efficiency of wind turbines.
The speed of the wind and that of the wind turbine must be directly related if the latter is to be efficient. The same thing happens with a dancing partner. The more synchronised the rhythms of the dancers are, the more comfortable and efficient the dance is, and this can be noticed because the energy expenditure for the two partners is at a minimum level. To put it another way, the algorithm specifies the way in which the wind turbine adapts to changes. This is what the UPV/EHU researchers have focussed on: the algorithm, the set of orders that the wind turbine will receive to adapt to wind speed.
“The new algorithm adapts to the environmental conditions and, what is more, it is more stable and does not move aimlessly. The risk that algorithms run is that of not adapting to the changes and, in the worst-case scenario, that of making the wind turbine operate in very unfavourable conditions, thereby reducing its efficiency.”
Efficiency is the aim
Efficiency is one of the main concerns in the small wind turbine industry. One has to bear in mind that small wind turbines tend to be located in areas where wind conditions are more unfavourable. Large wind turbines are located in mountainous areas or on the coast; however, small ones are installed in places where the wind conditions are highly variable. What is more, the small wind turbine industry has few resources to devote to research and very often is unaware of the aerodynamic features of these wind turbines. All these aspects make it difficult to track the maximum power point (MPPT, Maximum Power Point Tracking) optimally. “There has to be a direct relation between wind speed and wind turbine speed so that the tracking of the maximum power point is appropriate. It is important for this to be done optimally. Otherwise, energy is not produced efficiently,” explained Iñigo Kortabarria.
Most of the current algorithms have not been tested under the conditions of the wind that blows in the places where small wind turbines are located. That is why the UPV/EHU researchers have designed a test bench and have tested the algorithms that are currently being used —including the new algorithm developed in this piece of research— in the most representative conditions that could exist in the life of a wind turbine with this power. “Current algorithms cannot adapt to changes, and therefore wind turbine efficiency is severely reduced, for example, when wind density changes," asserted Kortabarria.
“The experimental trials conducted clearly show that the capacity to adapt of the new algorithm improves energy efficiency when the wind conditions are variable,” explained Kortabarria. “We have seen that under variable conditions, in other words, in the actual conditions of a wind turbine, the new algorithm will be more efficient than the existing ones.”
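The article describes the control strategy only at a high level and does not give the algorithm itself. For orientation, here is a minimal sketch (Python, added for illustration) of the classic fixed-step perturb-and-observe MPPT loop that adaptive schemes of this kind aim to improve on; `read_power` and `set_speed` are hypothetical interfaces to the turbine's power sensor and speed actuator.

```python
def perturb_and_observe(read_power, set_speed, speed0, step=0.05, iters=200):
    """Classic fixed-step perturb-and-observe MPPT: nudge the operating
    speed and keep moving in whichever direction increases power."""
    speed, direction = speed0, 1.0
    set_speed(speed)
    last_power = read_power()
    for _ in range(iters):
        speed += direction * step
        set_speed(speed)
        power = read_power()
        if power < last_power:        # power fell: we stepped past the peak
            direction = -direction    # so reverse the perturbation
        last_power = power
    return speed
```

A fixed `step` is exactly what struggles in gusty, variable wind: too small and the tracker lags the wind, too large and it oscillates around the maximum. Making the tracker adaptive, as the UPV/EHU algorithm does, targets precisely this weakness.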
A novel adaptative maximum power point tracking algorithm for small wind turbines
Source: Basque Research | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8622714877128601, "perplexity": 744.1381833577454}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663637.20/warc/CC-MAIN-20140930004103-00001-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://chemicalstatistician.wordpress.com/2014/03/17/video-the-hazard-function-is-the-probability-density-function-divided-by-the-survival-function/ | # Video Tutorial – The Hazard Function is the Probability Density Function Divided by the Survival Function
In an earlier video, I introduced the definition of the hazard function and broke it down into its mathematical components. Recall that the definition of the hazard function for events defined on a continuous time scale is
$h(t) = \lim_{\Delta t \rightarrow 0} [P(t < X \leq t + \Delta t \ | \ X > t) \ \div \ \Delta t]$.
Did you know that the hazard function can be expressed as the probability density function (PDF) divided by the survival function?
$h(t) = f(t) \div S(t)$
In my new Youtube video, I prove how this relationship can be obtained from the definition of the hazard function! I am very excited to post this second video in my new Youtube channel. You can also view the video below the fold!
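For reference, here is a compact version of the standard argument (added here; the video proves the same identity from the definition). Since the event $\{t < X \leq t + \Delta t\}$ is contained in $\{X > t\}$, the conditional probability is the ratio of the two, giving

$h(t) = \lim_{\Delta t \rightarrow 0} \frac{P(t < X \leq t + \Delta t)}{P(X > t)\,\Delta t} = \frac{1}{S(t)} \lim_{\Delta t \rightarrow 0} \frac{F(t + \Delta t) - F(t)}{\Delta t} = \frac{F'(t)}{S(t)} = \frac{f(t)}{S(t)}$,

where $F(t)$ is the cumulative distribution function and $S(t) = 1 - F(t)$.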
### 2 Responses to Video Tutorial – The Hazard Function is the Probability Density Function Divided by the Survival Function
1. skaae says:
Thanks for the explanation. The first time someone explained what appears to be a fact in most text books! I suggest you break down the partial likelihood in Cox models as well :)
• I’m glad that it was useful to you! I will discuss Cox models and partial likelihoods eventually – thanks for the suggestion! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116663932800293, "perplexity": 560.8716145731784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990217.27/warc/CC-MAIN-20150728002310-00024-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://jeeneetqna.in/390/let-a-i-j-b-i-j-k-and-c-be-a-vector-such-that-a-c-and-then-is-equal-to | # Let a = i^−j^, b = i^+j^+k^ and c be a vector such that a × c + b = 0 and a . c = 4, then |c|^2 is equal to :
Let $\vec{a}=\hat{i}-\hat{j},\ \vec{b}=\hat{i}+\hat{j}+\hat{k}$ and $\vec{c}$ be a vector such that $\vec{a}\times\vec{c}+\vec{b}=0$ and $\vec{a}.\vec{c}=4$, then $|\vec{c}|^2$ is equal to :
(1) $19\over2$
(2) $8$
(3) $17\over2$
(4) $9$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9817787408828735, "perplexity": 175.840308065079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00581.warc.gz"} |
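The page shows no worked solution; here is one standard route (added here, not part of the original page), using the triple product identity $\vec{a}\times(\vec{a}\times\vec{c}) = (\vec{a}\cdot\vec{c})\,\vec{a} - (\vec{a}\cdot\vec{a})\,\vec{c}$. From $\vec{a}\times\vec{c} = -\vec{b}$,

$$-\vec{a}\times\vec{b} = \vec{a}\times(\vec{a}\times\vec{c}) = 4\vec{a} - 2\vec{c}.$$

With $\vec{a}\times\vec{b} = -\hat{i}-\hat{j}+2\hat{k}$ this gives $2\vec{c} = 4\vec{a} + \vec{a}\times\vec{b} = 3\hat{i} - 5\hat{j} + 2\hat{k}$, hence

$$|\vec{c}|^2 = \frac{9 + 25 + 4}{4} = \frac{19}{2},$$

which is option (1).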
https://math.stackexchange.com/questions/1734893/does-this-definition-of-e-even-make-sense/1735035 | # Does this definition of $e$ even make sense?
This sprang from a conversation here. In Stewart's Calculus textbook, he defined $e$ as the unique number $x$ satisfying $\lim\limits_{h\to 0}\frac{x^h-1}{h}=1$. Ahmed asked how you define $x^h$ if not by $\exp(h\ln(x))$, and I'm not sure.
Does this definition of $e$ even make sense?
Definition here:
The definition of $e$ as the unique number such that $$\lim_{h \to 0}\frac{e^{h} - 1}{h} = 1$$ makes sense, but there are a few points which must be established before this definition can be used:
1. Define the general power $a^{x}$ for all $a > 0$ and all real $x$. One approach is to define it as the limit of $a^{x_{n}}$ where $x_{n}$ is a sequence of rational numbers tending to $x$ (this is not so easy).
2. Based on the definition of $a^{x}$ above show that the limit $(a^{x} - 1)/x$ as $x \to 0$ exists for all $a > 0$ (this is hard) and hence the limit defines a functions $f(a)$ for $a > 0$.
3. The function $f(x)$ defined above is continuous, strictly increasing and maps $(0, \infty)$ to $(-\infty, \infty)$ (easy if previous points are established).
From the last point above it follows that there is a unique number $e > 1$ such that $f(e) = 1$. This is the definition of $e$ with which we started. And as can be seen this definition must be preceded by the proof of the results mentioned in three points above. All this is done in my blog post and in my opinion this is the most difficult route to a theory of exponential and logarithmic functions. Easier routes to the theory of exponential and logarithmic functions are covered in this post and next.
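Worth noting (an addition for context): once the three points above are established, the function in point 2 turns out to be the natural logarithm, $f(a) = \lim_{x \to 0}\frac{a^x - 1}{x} = \ln a$, so the defining condition $f(e) = 1$ is just the familiar statement $\ln e = 1$.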
• arguably it is the most difficult but irritatingly it is also the must intuitively and direct to what we think the definitions "mean". Intuitively $b^n$ is b multiplied by itself n times so "obviously" $b^{n/m}$ is the m-th root of b to the n and as x = lim q then $b^x$ is the limit of $b^q$. I mean "duh" and obviously d(b^x)/dx = $C_b*b^x$ so there must be some $e$ where $C_e = 1$ and obviously $e^x$ means $e^x$ and $\ln x$ is just a logarithm. That's obviously what it all "means". It's a pity this is the freaking hardest approach. – fleablood Apr 9 '16 at 19:54
• @fleablood: as I say in my blog post "the most intuitive and obvious approach". – Paramanand Singh Apr 9 '16 at 20:45
• That's a nice blog post btw. – fleablood Apr 9 '16 at 22:21
$b^x$ can be defined as $\lim_{q\in \mathbb Q \rightarrow x}b^q$. (Isn't it usually so defined?) Or alternatively one can define $e$ as the unique $x$ with $\lim_{h\in \mathbb Q\rightarrow 0}\frac {x^h - 1}{h}=1$.
I think it's legit and not circular.
• Of course, this assumes there is a limit. – fleablood Apr 9 '16 at 17:21
• It works; $b^q$ is defined for rational $q$ in the "usual" way, and you can force the limit to exist using a monotonicity argument. It is essentially the same argument that is used to prove that the only memoryless distributions, i.e. the ones with the property $P(X>t+s|X>t)=P(X>s)$, are the exponential distributions. – Ian Apr 9 '16 at 19:24
• I think defining new notions of limit is bit complicated and non-standard. The standard calculus texts define limits of functions of real variable and limit of functions of integral variable (sequences). If $x_{n}$ is a sequence of rational numbers with limit $x$ then we can define $a^{x}$ as the limit of sequence $a^{x_{n}}$. The notion of $\lim_{x \in \mathbb{Q} \to a}$ can be made precise by an appropriate definition but it is not very commonly seen. – Paramanand Singh Apr 9 '16 at 19:31
• This isn't a new definition of limits at all. For every real $x$ there is a rational sequence of {$q_n$}$\rightarrow x$ so we define $b^x$ as $\lim b^{q_n}$. That's all my notation of $\lim_{q\in \mathbb Q\rightarrow x}$ means. We do have to clear up that such a real sequence of {$b^{q_n}$} converges but that's pretty standard and mechanical as Ian points out. (Actually it's a pain in the ass, but never mind...) – fleablood Apr 9 '16 at 19:46
• The actual difficulty is establishing the uniqueness: why is $\lim_n b^{q_n}$ the same for any given $q_n \to x$? – Ian Apr 9 '16 at 19:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9534657001495361, "perplexity": 199.8614999177262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675598.53/warc/CC-MAIN-20191017172920-20191017200420-00033.warc.gz"} |
https://www.physicsforums.com/threads/entropy-change-of-melting-ice-cube-initially-at-5c.765848/ | # Entropy change of melting ice cube initially at -5°C
1. Aug 13, 2014
### Flucky
1. The problem statement, all variables and given/known data
Calculate the entropy change of an ice cube of mass 10g, at an initial temperature of -5°C, when it completely melts.
c_ice = 2.1 kJ kg⁻¹ K⁻¹
L_ice→water = 3.34×10⁵ J kg⁻¹
2. Relevant equations
dQ = mcdT
dS = $\frac{dQ}{T}$
ΔS = $\frac{Q}{T}$
Q = mL
3. The attempt at a solution
First I set the problem out in two stages:
a) the entropy change from the ice going from -5°C to 0°C (in order to melt)
b) the entropy change from the ice going to water
For a)
dQ = mcdT ---------(1)
dS = $\frac{dQ}{T}$ ---------(2)
Putting (1) into (2):
dS = $\frac{mcdT}{T}$
ΔS = mc∫$\frac{1}{T}$dT
ΔS = mcln(Tf/Ti)
∴ΔS1 = (0.01)(2100)ln($\frac{273}{268}$) = 0.388 J K⁻¹
For b)
Q = mL = (0.01)(3.34×10⁵) = 3340 J
ΔS2 = $\frac{Q}{T}$ = $\frac{3340}{273}$ = 12.23 J K⁻¹
∴ total ΔS = ΔS1 + ΔS2 = 0.388 + 12.23 = 12.62 J K⁻¹
Am I right in simply adding the two changes of entropy together? Does ΔS work like that?
Cheers.
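As a quick numerical cross-check of the arithmetic in this post (a sketch added here, not part of the original thread):

```python
import math

m = 0.010        # kg
c_ice = 2100.0   # J/(kg K)
L_fus = 3.34e5   # J/kg
T1, T2 = 268.0, 273.0   # K, i.e. -5 C and 0 C as in the problem

dS_warm = m * c_ice * math.log(T2 / T1)  # heating the ice to 0 C
dS_melt = m * L_fus / T2                 # isothermal melting at 0 C
print(dS_warm, dS_melt, dS_warm + dS_melt)
# ~0.388, ~12.234, ~12.62 J/K -- matching the hand calculation
```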
2. Aug 13, 2014
### rude man
Looks good, and the answer is yes, the entropies add. Entropy is a state function, like gravitational potential. If you went from 0 to 1m above ground you would have g x 1m change in potential. If you went from 1m to 2m there would be a further g x 1m change in potential. Giving total change in potential = g x 2m.
3. Aug 14, 2014
### Flucky
Great, thanks | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679383397102356, "perplexity": 3780.555366891645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647244.44/warc/CC-MAIN-20180319234034-20180320014034-00641.warc.gz"} |
http://math.stackexchange.com/questions/1612437/notation-of-the-second-derivative-where-does-the-d-go | # Notation of the second derivative - Where does the d go?
In school I was taught that we use $\frac{du}{dx}$ as a notation for the first derivative of a function $u(x)$. I was also told that we could use the $d$ just like any variable.
After some time we were given the notation for the second derivative and it was explained as follows:
$$\frac{d(\frac{du}{dx})}{dx} = \frac{d^2 u}{dx^2}$$
What I do not get here is, if we can use the $d$ as any variable, I would get the following result:
$$\frac{d(\frac{du}{dx})}{dx} =\frac{ddu}{dxdx} = \frac{d^2 u}{d^2 x^2}$$
Apparently it is not the same as the notation we were given. A $d$ is missing.
I have done some research on this and found some vague comments about "There are reasons for that, but you do not need to know..." or "That is mainly a notation issue, but you do not need to know further."
So what I am asking for is: Is this really just a notation thing? If so, does this mean we can actually NOT use d like a variable? If not, where does the $d$ go?
I found this related question, but it does not really answer my specific question. So I would not see it as a duplicate, but correct me if my search has not been sufficient and there indeed is a similar question out there already.
-
$d$ is not a variable; in other words $dxdx$ is not $d$ times $x$ times $d$ times $x$. At best, you can think of $dx$ as one object (with a two letter name), an infinitesimal. – Michael Burr Jan 14 at 19:06
$d$ cannot be used just like any variable. Otherwise you will have $du/dx=u/x$ for example. – AlphaGo Jan 14 at 19:08
I have a hard time to believe somebody told you this. Maybe they said it about the entity "$dx$" – quid Jan 14 at 20:24
It's because the dx is "in parentheses", so to speak. – Mehrdad Jan 14 at 21:20
I kinda assumed $dx^2$ meant $(dx)^2$; i.e. $dx$ is basically one variable. – Akiva Weinberger Jan 15 at 0:16
where does the $d$ go?
Physicist checking in. All the other answers seem to focus on whether $d$ is a variable and are neglecting the heart of your question.
Simply put, $dx$ is the name of one thing, so in your example
$$\frac{d^2u}{dx^2}=\frac{d^2u}{\left(dx\right)^2}$$
In your words, the "second $d$" is inside the implied parentheses.
-
+1. I'm surprised that so many other answers missed this aspect of the question. – mweiss Jan 15 at 16:22
I guess the same could occur with a delta, for example with the formula $U=\frac12 k \Delta x^2$ one might interpret it as $U=\frac12 k (\Delta x)^2$ (the elastic potential energy in a Hooke spring). – Jeppe Stig Nielsen Jan 15 at 22:10
Thanks for this short but good answer. Using the $dx$ as one unit and not as two separate things $d$ and $x$ clears the things up a lot. – Numenkok Balok Jan 18 at 7:50
Gotta love physicists. – Arrow Jan 18 at 11:23
Gottfried Wilhelm Leibniz, who introduced this notation in the 17th century, intended $dx$ to be an infinitely small change in $x$ and $du$ to be the corresponding infinitely small change in $u$, so that if, for example, $du/dx=3$ at a particular point that means $u$ is changing $3$ times as fast as $x$ is changing at that point.
The notation $\dfrac{d^2u}{dx^2}$ actually means $\dfrac{d\left(\dfrac{du}{dx}\right)}{dx}$, the infinitely small change in $du/dx$ divided by the corresponding infinitely small change in $x$. Thus the second derivative is the rate of change of the rate of change.
Notice that if $u$ is in meters and $x$ in seconds, then $du/dx$ is in $\dfrac{\text{m}}{\text{sec}}$, i.e. meters per second, and $d^2 u/dx^2$ is in $\dfrac{\text{m}}{\text{sec}^2}$, i.e. meters per second per second. Thus $dx^2$ means $(dx)^2$, so the units of measurement of $x$ get squared, and $d^2y$ is in the same units of measurement that $y$ is in, consistently with the fact that $y$ is not a part of what gets squared in the numerator.
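One way (added here; it is not part of the original answer) to make this counting concrete is the finite-difference picture. Writing $\Delta u(x) = u(x+h) - u(x)$ with $h = \Delta x$,

$$\frac{\Delta(\Delta u)}{(\Delta x)^2} = \frac{u(x+2h) - 2u(x+h) + u(x)}{h^2} \longrightarrow \frac{d^2u}{dx^2} \quad \text{as } h \to 0:$$

the difference operator acts twice in the numerator, while the denominator squares the single quantity $\Delta x$, which is precisely the asymmetry recorded by $d^2u/dx^2$.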
-
$d$ is not a variable, and neither is $dx$ for that matter.
It is confusing because in some case, like the chain rule, differentials act like variables which can cancel:
$$\frac{dy}{dx}\frac{dx}{dt}=\frac{dy}{dt}$$
However, it is most appropriate to think of $\frac{d}{dx}$ as an operator that does something.
Thus, $\frac{d}{dx}(\frac{d}{dx} y)=\frac{d^2}{dx^2}y$.
Somewhat similarly, you wouldn't say that $\sin^2 x=s^2i^2n^2x$
Edit: In case it isn't from the example, you cannot separate $dx$. That is, $dx$ is not $d$ times $x$. This is very much analogous to chemistry when we say things like $\Delta H$. This isn't $\Delta$ times $H$. It is $\Delta$ (change) of $H$.
-
Are you sure this is a good example? I wouldn't say $\sin^2 x = (\sin x)^2$ either, if I didn't know that's the conventional meaning. What I would rather say is $\sin x^2 = (\sin x)^2$, which is not commonly understood so. – leftaroundabout Jan 14 at 22:48
@leftaroundabout I agree. I would more likely mistake $\sin^2 x$ to mean $\sin(\sin(x))$, which would actually be similar to the reasoning for $\frac{d}{dx}(\frac{d}{dx}y) = \left(\frac{d}{dx}\right)^2(y)=\frac{d^2}{dx^2}y$. And of course the issue with $\sin x ^ 2 = (\sin x)^2$ is that it could easily be mistaken for $\sin\left(x^2\right)$ – David Etler Jan 15 at 1:42
@leftaroundabout I don't see how somebody who didn't know what $\sin^2 x$ means could reasonably come to the conclusion taht it would mean $\sin(x^2)$. The "squared" is applied to the sine, so it could only reasonably mean "take the sine of $x$ and then square it" or "take the sine of $x$ twice (i.e., $\sin(\sin x)$." – David Richerby Jan 15 at 5:31
-1 This answer doesn't seem to address the main issue of the question ("A $d$ is missing.") at all. The OP seems to think that $dx^2$ means $d(x^2)$, not $(dx)^2$, so I don't see how he could agree with the notation $\frac{d}{dx}\frac{d}{dx} = \frac{d^2}{dx^2}$ used here without explanation. – JiK Jan 15 at 10:02
+1 This answer definitely addresses the main issue. – dshapiro Jan 15 at 14:31
${\rm d}(A)$ means an infinitesimally small change in $A$. The ${\rm d}$ is an operator and you better look at it as a function and not a value.
If anything we drop the parenthesis from ${\rm d}x$ for brevity as it should be ${\rm d}(x)$ as in $$\frac{{\rm d}(y)}{{\rm d}(x)}$$ and $$\frac{{\rm d}(\frac{{\rm d}(y)}{{\rm d}(x)})}{{\rm d}(x)} = \frac{ \frac{1}{{\rm d}(x)} {\rm d}({\rm d}(y))}{{\rm d}(x)} = \frac{{\rm d}({\rm d}(y))}{({\rm d}(x))^2} = \frac{{\rm d}^2(y)}{({\rm d}x)^2} = \frac{{\rm d}^2 y}{{\rm d}x^2}$$
-
The derivative $\frac{dy}{dx}$ is not the ratio of a small change of $y$ to a small change in $x$. Even in nonstandard analysis it's not defined that simply. – Bye_World Jan 14 at 19:19
Actually it is $$\frac{{\rm d}A}{{\rm d}x} = \lim_{h\rightarrow 0} \frac{ \left(A(x+h) - A(x)\right)}{\left( (x+h)-x \right)}$$ This is the definition of a derivative (en.wikipedia.org/wiki/Derivative). – ja72 Jan 14 at 19:24
There are several contexts in which $d$ should be considered an operator -- chief among them being as the exterior derivative of a differential $k$-form -- but IMO the derivative in scalar calculus is not one of them. An "infinitesimally small number" not equal to zero doesn't exist in $\Bbb R$. This is fine as a "heuristic" but the claim that $\frac{dy}{dx}$ actually is a ratio -- whether of infinitesimals or finite differences -- is just not true. – Bye_World Jan 14 at 19:37
That's false. Such a quantity would violate the Archimedean property of the real numbers. You have to extend the reals to the hyperreals to make use of nonzero infinitesimals. This is the closest formalization of what you're talking about that exists in mathematics and it still doesn't define the derivative as a fraction of infinitesimals, but as the standard part of a fraction of infinitesimals. Note that is not a part of standard analysis. – Bye_World Jan 14 at 21:00
Arguments about axiomatisation aside, this answer is the best intuitive answer to "where does the d go?", in my opinion. If you wanted to make it precise, you could simply say that $df$ is defined as $f(x+h)-f(x)$, and then have an implicit convention that we always take the $h\to 0$ limit whenever we write down an expression involving $d$. – Nathaniel Jan 16 at 8:06
Think of the meaning of $d/dx$. The $d$ in the numerator is an operator: it says, "take the infinitesimal difference of whatever follows $d/dx$". In contrast, the $dx$ in the denominator is just a number (yes, I know; mathematicians, please don't cringe): it is the infinitesimal difference in $x$.
So $d/dx$ means "take the infinitesimal difference of whatever follows, and then divide by the number $dx$."
Similarly, $d^2/dx^2$ means "take the infinitesimal difference of the infinitesimal difference of whatever follows, and then divide by the square of the number $dx$."
In short, the $d$ in the numerator is an operator, whereas in the denominator, it is part of a symbol. A slightly less ambiguous notation, as suggested by user1717828, would be to put the $(dx)$ in the denominator in parenthesis, but it really isn't necessary in practice.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414179921150208, "perplexity": 202.741514035161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826916.34/warc/CC-MAIN-20160723071026-00205-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://math.stackexchange.com/users/53394/deepak | # Deepak
https://arxiv.org/abs/1706.07262 | Full-text links:
astro-ph.SR
# Title: MOVES I. The evolving magnetic field of the planet-hosting star HD189733
Abstract: HD189733 is an active K dwarf that is, with its transiting hot Jupiter, among the most studied exoplanetary systems. In this first paper of the Multiwavelength Observations of an eVaporating Exoplanet and its Star (MOVES) program, we present a 2-year monitoring of the large-scale magnetic field of HD189733. The magnetic maps are reconstructed for five epochs of observations, namely June-July 2013, August 2013, September 2013, September 2014, and July 2015, using Zeeman-Doppler Imaging. We show that the field evolves along the five epochs, with mean values of the total magnetic field of 36, 41, 42, 32 and 37 G, respectively. All epochs show a toroidally-dominated field. Using previously published data of Moutou et al. 2007 and Fares et al. 2010, we are able to study the evolution of the magnetic field over 9 years, one of the longest monitoring campaigns for a given star. While the field evolved during the observed epochs, no polarity switch of the poles was observed. We calculate the stellar magnetic field value at the position of the planet using the Potential Field Source Surface extrapolation technique. We show that the planetary magnetic environment is not homogeneous over the orbit, and that it varies between observing epochs, due to the evolution of the stellar magnetic field. This result underlines the importance of contemporaneous multi-wavelength observations to characterise exoplanetary systems. Our reconstructed maps are a crucial input for the interpretation and modelling of our MOVES multi-wavelength observations.
Comments: 14 pages, 6 figures, accepted for publication in MNRAS
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Earth and Planetary Astrophysics (astro-ph.EP)
DOI: 10.1093/mnras/stx1581
Cite as: arXiv:1706.07262 [astro-ph.SR] (or arXiv:1706.07262v2 [astro-ph.SR] for this version)
## Submission history
From: Rim Fares [view email]
[v1] Thu, 22 Jun 2017 11:31:42 UTC (1,817 KB)
[v2] Fri, 23 Jun 2017 13:44:19 UTC (1,817 KB) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.867119312286377, "perplexity": 2856.406803658351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315811.47/warc/CC-MAIN-20190821065413-20190821091413-00423.warc.gz"} |
https://tohoku.pure.elsevier.com/en/publications/can-the-21-cm-signal-probe-population-iii-and-ii-star-formation | # Can the 21-cm signal probe Population III and II star formation?
Research output: Contribution to journal › Article › peer-review
9 Citations (Scopus)
## Abstract
Using varying models for the star formation rate (SFR) of Population (Pop) III and II stars at z > 6 we derive the expected redshift history of the global 21-cm signal from the intergalactic medium (IGM). To recover the observed Thomson scattering optical depth of the cosmic microwave background (CMB) requires SFRs at the level of ~10⁻³ M⊙ yr⁻¹ Mpc⁻³ at z ~ 15 from Pop III stars, or ~10⁻¹ M⊙ yr⁻¹ Mpc⁻³ at z ~ 7 from Pop II stars. In the case where the SFR is dominated by Pop III stars, the IGM quickly heats above the CMB at z ≳ 12 due to heating from supernovae. In addition, Lyα photons from haloes hosting Pop III stars couple the spin temperature to that of the gas, resulting in a deep absorption signal. If the SFR is dominated by Pop II stars, the IGM slowly heats and exceeds the CMB temperature at z ~ 10. However, the larger and varying fraction of Pop III stars is able to break this degeneracy. We find that the impact of the initial mass function (IMF) of Pop III stars on the 21-cm signal results in an earlier change to a positive signal if the IMF slope is ~-1.2. Measuring the 21-cm signal at z ≳ 10 with next generation radio telescopes such as the Square Kilometre Array will be able to investigate the contribution from Pop III and Pop II stars to the global SFR.
Original language: English
Pages (from-to): 654-665
Number of pages: 12
Journal: Monthly Notices of the Royal Astronomical Society
Volume: 448
Issue number: 1
DOI: https://doi.org/10.1093/mnras/stu2687
Publication status: Published - 2015 Mar 21
## Keywords
• Dark ages
• First stars
• Galaxies: formation
• Galaxies: high-redshift
• Reionization
• Stars: Population II
## ASJC Scopus subject areas
• Astronomy and Astrophysics
• Space and Planetary Science | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612089157104492, "perplexity": 3695.216859326236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00476.warc.gz"} |
https://www.physicsforums.com/threads/stored-energy-in-a-battery.209242/ | # Stored energy in a battery
1. Jan 16, 2008
### PowerIso
1. The problem statement, all variables and given/known data
A certain lead-acid storage battery has a mass of 30 kg. Starting from a fully charged state, it can supply 5 amperes for 24 hours with a terminal voltage of 12 V before it is totally discharged. (a) If the energy stored in the fully charged battery is used to lift the battery with 100% efficiency, what height is attained? Assume that the acceleration due to gravity is 9.88 m/s² and is constant with height. (b) If the energy stored is used to accelerate the battery with 100% efficiency, what velocity is attained? (c) Gasoline contains about 4.5 × 10⁷ J/kg. Compare this with the energy content per unit mass of the fully charged battery.
2. Relevant equations
3. The attempt at a solution
I don't have an attempt at the solution, mainly because I am at a loss as to where to begin. I've read the section this question refers to over and over again and I can't seem to get any closer to solving this problem. Can anyone please give me a hint on how to solve such a problem?
2. Jan 16, 2008
### chroot
It's just an energy problem. Start by calculating how much energy is delivered in total by a 5 A current at 12 V over 24 hours. Then, figure out the equivalent altitude where the 30 kg battery has that same amount of gravitational potential energy as was released in the form of electricity.
- Warren
3. Jan 16, 2008
### PowerIso
Thanks a lot. I ended up with 17.6 km for the height and 587 m/s for the velocity question. I'm still confused about part (c). It's been about 3 years since I've taken physics I and II, so I'm trying to recall old information, but it's hard.
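As a cross-check of (a) and (b), and to settle (c), here is a quick numerical sketch (added here, not part of the original thread):

```python
g, m = 9.88, 30.0              # m/s^2 (as given) and kg
E = 12.0 * 5.0 * 24 * 3600     # V * A * s = 5.184e6 J delivered

height = E / (m * g)           # (a) E = mgh      -> ~17.5 km
velocity = (2 * E / m) ** 0.5  # (b) E = mv^2/2   -> ~588 m/s
specific = E / m               # (c) ~1.73e5 J/kg for the battery
print(height, velocity, specific, 4.5e7 / specific)
```

The ~17.5 km here versus the 17.6 km above likely comes only from the value of g used (9.88 versus 9.8 m/s²). For (c), the battery stores about 1.7×10⁵ J/kg, so gasoline at 4.5×10⁷ J/kg is roughly 260 times more energy-dense.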
https://dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=1414210 | 0
Discussions
# Discussion: “Model Reduction of Large-Scale Discrete Plants With Specified Frequency Domain Balanced Structure” (Zadegan, A., and Zilouchian, A., 2005, ASME J. Dyn. Syst. Meas., Control, 127, pp. 486–498)
## Author and Article Information
Hamid Reza Shaker
Section for Automation and Control, Department of Electronic Systems, Aalborg University, Aalborg, [email protected]
Rafael Wisniewski
Section for Automation and Control, Department of Electronic Systems, Aalborg University, Aalborg, [email protected]
J. Dyn. Sys., Meas., Control 131(6), 065501 (Nov 10, 2009) (1 page) doi:10.1115/1.4000138 History: Received November 23, 2007; Revised May 29, 2008; Published November 10, 2009; Online November 10, 2009
## Abstract
This work presents a commentary of the article published by A. Zadegan and A. Zilouchian (2005, ASME J. Dyn. Syst. Meas., Control, 127 , pp. 486–498). We show their order reduction method is not always true and may lead to inaccurate results and is therefore erroneous. A framework for solving the problem is also suggested.
## DISCUSSION
Model reduction of systems with a specified frequency domain balanced structure is a technique which attempts to increase the accuracy of approximation by treating the reduction problem within a specified frequency bound instead of over the whole frequency domain. In this method it is not required to keep the approximation good outside the specified frequency bound of operation; the accuracy of approximation can therefore be increased compared with the results of the well-known ordinary balanced reduction method. In this method the continuous-time controllability and observability Grammians over a frequency bound $[\omega_1,\omega_2]$ are defined as (1-7)

$$W_{cf}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(Ij\omega-A)^{-1}BB^*(-Ij\omega-A^*)^{-1}\,d\omega$$

$$W_{of}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(-Ij\omega-A^*)^{-1}C^*C(Ij\omega-A)^{-1}\,d\omega$$
Similarly, for the discrete-time case, the Grammians are defined as (1-7)

$$W_{cf}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(Ie^{j\omega}-A)^{-1}BB^*(Ie^{-j\omega}-A^*)^{-1}\,d\omega$$

$$W_{of}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(Ie^{-j\omega}-A^*)^{-1}C^*C(Ie^{j\omega}-A)^{-1}\,d\omega$$
This model reduction technique is based on the ordinary balanced model reduction method that was first proposed by Moore (8) and then improved and developed in different directions (10). The philosophy of the model reduction method proposed by Zadegan (1-7) is very similar to the one presented by Enns (9), but it is not always true and may lead to inaccurate results. In what follows we discuss the problem of the method in more detail.
In the first step of the aforementioned model reduction technique the original system should be transformed to the specified frequency domain balanced structure, i.e., the controllability and observability Grammians of the transformed system should be equal and diagonal. The second step of the reduction procedure consists of partitioning and applying the generalized singular perturbation approximation to the system with specified frequency domain balanced structure.
The problem which arises in the practical implementation of the reduction technique is the infeasibility of the balancing algorithms for finding an appropriate similarity transform which should transform the original system into the frequency domain balanced structure. In order to find an appropriate similarity transform, the authors of Refs. 1-2,7 have suggested using one of the well-known numerical algorithms, first proposed by Laub (7). In this algorithm we should apply the Cholesky factorization to the Grammians obtained from Eqs. (1) or (2). Because the aforementioned Grammians are not real, we cannot apply the Cholesky factorization, and the overall Laub algorithm is therefore not applicable. If we use $W_{cf}+\mathrm{Conj}(W_{cf})$ and $W_{of}+\mathrm{Conj}(W_{of})$ instead of $W_{cf}$ and $W_{of}$, respectively, as the authors of Refs. 1-2,7 have done in their works, the Laub algorithm can be applied, but the structure the original system is transformed to is no longer the frequency domain balanced structure. In the frequency domain balanced structure we should have equal and diagonal Grammians, but the similarity transform obtained from the aforementioned procedure can only transform the system into a structure in which the real part of the Grammians is equal and diagonal.
In order to overcome the problem, one can use input-output weights and make the dynamic system operate only within the frequency bound of interest. The frequency-weighted dynamic system can then be reduced successfully. In this case Plancherel's theorem guarantees the correctness of the method.
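To make the non-realness concrete, here is a naive quadrature sketch (Python, added for illustration; it is not from the discussed paper) of the frequency-limited Grammians of Eqs. (1) and (2). The returned matrices are complex Hermitian in general, which is exactly why the real Cholesky factorization in Laub's algorithm cannot be applied to them directly.

```python
import numpy as np

def freq_limited_gramians(A, B, C, w1, w2, n=2000):
    """Approximate the frequency-limited controllability/observability
    Grammians of (A, B, C) over [w1, w2] by a midpoint Riemann sum.
    The results are complex in general, illustrating the issue above."""
    nx = A.shape[0]
    I = np.eye(nx)
    Wc = np.zeros((nx, nx), dtype=complex)
    Wo = np.zeros((nx, nx), dtype=complex)
    dw = (w2 - w1) / n
    for w in np.linspace(w1, w2, n, endpoint=False) + dw / 2:
        R = np.linalg.inv(1j * w * I - A)          # (jwI - A)^(-1)
        Wc += R @ B @ B.conj().T @ R.conj().T * dw
        Wo += R.conj().T @ C.conj().T @ C @ R * dw
    return Wc / (2 * np.pi), Wo / (2 * np.pi)
```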
https://www.physicsforums.com/threads/infinite-square-well-with-attractive-potential.345823/ | # Infinite square well with attractive potential
1. Oct 14, 2009
### Brian-san
1. The problem statement, all variables and given/known data
We have an infinite square well potential of width 2L centered at the origin, with an attractive delta function potential V0δ(x) at the origin, with the properties
$$V_0<0, -V_0>\frac{\hbar^2}{mL^2}$$
Determine the conditions for a negative energy bound state.
There are a few other parts to the question, but I do not have the sheet at the moment.
2. Relevant equations
Schrodinger Equation
3. The attempt at a solution
In the absence of the delta function, we get
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}=E\psi$$
This differential equation has characteristic polynomial given by
$$x^2+\frac{2mE}{\hbar^2}=0, x=\pm\frac{i\sqrt{2mE}}{\hbar}$$
The solution is then
$$\psi=Asin\left(\frac{\sqrt{2mE}}{\hbar}x\right)+Bcos\left(\frac{\sqrt{2mE}}{\hbar}x\right)$$
Using the boundary condition that the wave function is zero at ±L and normalizing the wave function, I get
$$\psi=L^{-\frac{1}{2}}cos\left(\frac{n\pi x}{2L}\right), L^{-\frac{1}{2}}sin\left(\frac{n\pi x}{2L}\right)$$
Where the cosine solution is for odd n and the sine solution is for even n. Also, the energy spectrum is given by
$$E_n=\frac{n^2\pi^2\hbar^2}{8mL^2}$$
For any positive integer n. This is the solution for the infinite well without the delta function, the full Schrodinger equation should be
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}+V_0\delta(x)\psi=E\psi$$
I thought about integrating both sides over the width of the well to eliminate the delta function and get
$$-\frac{\hbar^2}{2m}\int_{-L}^{L}\frac{\partial^2\psi}{\partial x^2}dx+V_0\psi(0)=E\int_{-L}^{L}\psi dx$$
For even n this is trivial, but for odd n it seems to give me that V0=0, which is not true, nor very helpful. Obviously the bound state will most likely be the ground state n=1, so I thought it could be
$$E_1<-V_0, \frac{\pi^2\hbar^2}{8mL^2}<-V_0$$
However, I have a feeling that I am supposed to work in the conditions imposed on V0 somehow. Is the work so far on the right track, or have I missed something important? There are a few more parts to the question, but this is all I could remember without the sheet near me.
2. Oct 14, 2009
### gabbagabbahey
Actually, it gives you $V_0\psi(0)=0$, and since $V_0\neq 0$, $\psi(0)=$___?
You should not be surprised by this result; the wavefunction is always zero in regions where the potential is infinite. The effect of this is simply to divide the well in two.
3. Oct 15, 2009
### Brian-san
Given the solutions I got for the wavefunction, $\psi(0)=0$ for even n, and $\psi(0)=L^{-\frac{1}{2}}$, for odd n. But since V0 is non zero, in order for $V_0\psi(0)=0$, it would imply $\psi(0)=0$. However, wouldn't this make the wavefunction discontinuous for odd n at the origin?
Would this still have the effect of splitting the potential into two regions even if it is attractive? It makes sense if the delta function has a positive coefficient. Visually, this problem should look like a square well with a large dip toward $-\infty$ at the origin. This is what would produce the negative energy bound state I'm looking for.
4. Oct 15, 2009
### gabbagabbahey
No, your solutions are invalid. The fact that $\psi(0)=0$ provides an extra boundary condition which you must apply to the general solution to Schroedinger's equation inside $|x|\leq L$...make sense?
5. Oct 15, 2009
### Brian-san
With the additional condition that $\psi(0)=0$, If I apply that to my general solution from above, since cosine zero is never zero, then B=0 and the solution is just
$$\psi=Asin\left(\frac{\sqrt{2mE}}{\hbar}x\right)$$
Applying the boundary conditions at the walls of the well and normalizing tells me
$$\psi(x)=L^{-\frac{1}{2}}sin\left(\frac{n\pi}{L}x\right), E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
Is the incorrect solution simply because I did not apply the condition at x=0 to my first solution, or do I have to solve the differential equation leaving the delta potential term intact? If it's the latter, I don't think I can still solve the differential equation by finding the roots of the characteristic equation.
We derived the solution for an attractive delta potential in class, but that was without the infinite potential walls at ±L. In that case solutions involved the exponential function, but those can't satisfy the boundary conditions at the walls of the well, so I don't think that example will be of much help.
6. Oct 15, 2009
### gabbagabbahey
As I said earlier, the delta function effectively divides your well into two halves, giving you an new boundary at $x=0$. Your original solution was incorrect because it failed to take this boundary and its corresponding boundary condition into account.
7. Oct 15, 2009
### Brian-san
Then is this solution from a previous post correct?
$$\psi(x)=L^{-\frac{1}{2}}sin\left(\frac{n\pi}{L}x\right), E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
It satisfies that the wave function is zero at x=0,L,-L, is normalized and satisfies the Schrodinger equation.
Also, the last few parts ask about limits on the binding energy when $-V_0$ is large, and when $-\frac{mLV_0}{\hbar^2}=1+\delta, \delta<<1$ and if the energy is continuous at delta=0. That's nothing difficult once I find the relation between E and V.
Once I have the correct expression for the energy states, I also have the facts that $V_0<0, -V_0>\frac{\hbar^2}{mL^2}$. But in what way do I combine these to find the condition of a bound energy state? Presumably it would occur when the energy of the particle is insufficient to escape the attractive well, so E+V<0
8. Oct 16, 2009
### gabbagabbahey
Well, these are eigenstates for this potential, but they aren't really the eigenstates you were asked for now, are they?
What is the general solution for $E<0$? What do you get when you apply your boundary conditions to it? (Remember to consider the regions $-L<x<0$ and $0<x<L$ separately)
9. Oct 17, 2009
### Brian-san
I solved the equations again, separately for each region and took a few ideas from my notes when thinking about the boundary conditions at the walls of the well. Without going through that whole process, I got:
$$\psi_1(x)=A_1sin(k(L-x)), 0<x\leq L$$
$$\psi_2(x)=A_2sin(k(L+x)), -L\leq x<0$$
With the usual $k=\frac{\sqrt{2mE}}{\hbar}$. Since the wave function must be continuous at x=0, then we get that $A_2=\pm A_1$. More specifically, we know $\psi(0)=0$. (The normalization constant is still $A_1=L^{-1/2}$, but it hasn't been needed for anything yet.)
So if we consider the first case, $A_2=-A_1$, then at x=0
$$A_1sin(kL)=-A_1sin(kL)$$
This implies that $kL=n\pi$, and gives the usual result of
$$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
Also, in this case, the derivative of the wave function is continuous at x=0. For the case of $A_2=A_1$, the wave function is still continuous at x=0 for our expression of k, but there is now a discontinuity in the derivative. Integrating over a small region near x=0, this can be described by the relation
$$2A_1k\cos(kL)=\frac{2mV_0}{\hbar^2}A_1\sin(kL)$$
If we let z=kL, then we find a transcendental equation
$$\tan(z)=\frac{z\hbar^2}{mLV_0}$$
If you look at this graphically, there are an infinite number of solutions that occur where the two functions intersect. Looking at the limit $-V_0\rightarrow\infty$, we get $\tan(z)=0$, and $kL=n\pi$. This leads to the usual relation
$$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
In the other limit $-\frac{mLV_0}{\hbar^2}=1+\delta$, $\delta\ll 1$, I was thinking that
$$\tan(z)=\frac{-z}{1+\delta}$$
Then I'm kind of stuck from here. I was thinking that the intersection in this case would occur at a small enough value of z so I could use the approximation
$$\tan(z)\approx\frac{z}{1-\frac{1}{2}z^2}$$
Then I thought, even if that were the case, it would only apply to the first intersection point.
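A numerical aside (not part of the original thread): the intersections of $\tan z$ with the line $cz$ can be bracketed between consecutive poles of $\tan$ and found with a standard root finder. Here $c$ stands in for $\hbar^2/(mLV_0)$, and the value $-0.5$ is made up purely for illustration.

```python
import numpy as np
from scipy.optimize import brentq

c = -0.5  # stands in for hbar^2/(m L V_0); negative since V_0 < 0

def g(z):
    return np.tan(z) - c * z

# tan z runs from -inf to +inf between consecutive poles at
# (n + 1/2) pi, so each such branch contains exactly one root.
roots = []
for n in range(5):
    a = (n + 0.5) * np.pi + 1e-6
    b = (n + 1.5) * np.pi - 1e-6
    roots.append(brentq(g, a, b))

print(roots)  # z_n = k_n L, from which E_n = (hbar z_n / L)^2 / (2m)
```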
10. Oct 17, 2009
### gabbagabbahey
I must apologize, but I think I may have steered you in the wrong direction when I said $\psi(0)=0$. Looking again at the equation,
$$-\frac{\hbar^2}{2m}\int_{-L}^{L}\frac{d^2\psi}{d x^2}dx+V_0\psi(0)=E\int_{-L}^{L}\psi dx$$
It's true that the RHS will be zero since the wavefunction is continuous and $\psi(L)=\psi(-L)=0$, but $\frac{d\psi}{dx}$ need not be continuous, because of the delta function at the center
$$\implies \int_{-L}^{L}\frac{d^2\psi}{dx^2}dx\neq \left.\frac{d\psi}{dx}\right|_{-L}^{L}$$
which was my basis for claiming that $V_0\psi(0)=0$. Instead, for negative energy states, I think you'll want to write the general solution in the form:
$$\psi_1(x)=A_1\sinh(\kappa x)+B_1\cosh(\kappa x), \quad -L\leq x<0$$
$$\psi_2(x)=A_2\sinh(\kappa x)+B_2\cosh(\kappa x), \quad 0< x\leq L$$
where $\kappa\equiv\frac{\sqrt{-2mE}}{\hbar}$ is real and positive.
Then apply your boundary conditions at $x=\pm L$ and the fact that the wavefunction is continuous at $x=0$, but its derivative has a finite discontinuity there.
http://farside.ph.utexas.edu/teaching/plasma/lectures/node43.html
# Introduction
The cold-plasma equations describe waves, and other perturbations, which propagate through a plasma much faster than a typical thermal velocity. It is instructive to consider the relationship between the collective motions described by the cold-plasma model and the motions of individual particles that we studied in Sect. 2. The key observation is that in the cold-plasma model all particles (of a given species) at a given position effectively move with the same velocity. It follows that the fluid velocity is identical to the particle velocity, and is, therefore, governed by the same equations. However, the cold-plasma model goes beyond the single-particle description because it determines the electromagnetic fields self-consistently in terms of the charge and current densities generated by the motions of the constituent particles of the plasma.
What role, if any, does the geometry of the plasma equilibrium play in determining the properties of plasma waves? Clearly, geometry plays a key role for modes whose wave-lengths are comparable to the dimensions of the plasma. However, we shall show that modes whose wave-lengths are much smaller than the plasma dimensions have properties which are, in a local sense, independent of the geometry. Thus, the local properties of small-wave-length oscillations are universal in nature. To investigate these properties, we may, to a first approximation, represent the plasma as a homogeneous equilibrium (corresponding to the limit $kL \rightarrow \infty$, where $k$ is the magnitude of the wave-vector, and $L$ is the characteristic equilibrium length-scale).
Richard Fitzpatrick 2011-03-31
http://mathoverflow.net/questions/155284/inequality-in-the-sobolev-space-h1/155308 | # Inequality in the Sobolev space $H^1$
I've found the following inequality $$\int_{B_r}\vert u\vert^q\leq C \bigg(\int_{B_r}\vert\nabla u\vert^2\bigg)^{a}\bigg(\int_{B_r}\vert u\vert ^2\bigg)^{\frac{q}{2}-a}+\frac{c}{r^{2a}}\bigg(\int_{B_r}\vert u\vert ^2\bigg) ^{\frac{q}{2}}$$ for $u\in H^1(\mathbb{R}^3)$, $a=\frac{3}{4}(q-2)$ and $q\in [2,6]$. Any hints on how to prove it? I've started using interpolation and Sobolev's inequality $$\int_{B_r}\vert u\vert^q\leq \Vert u\Vert_{L^2}^{q(1-\theta)}\Vert u\Vert_{L^6}^{q \theta}\leq C \Vert u\Vert_{L^2}^{q(1-\theta)}\Vert u\Vert_{H^1}^{q\theta}$$ with $\theta=\frac{3}{2}\frac{q-2}{q}$. How can I go on?
Where or how did you find it? – Nate Eldredge Jan 21 '14 at 17:07
In a paper about Navier-Stokes equation. – user45822 Jan 21 '14 at 17:12
Prove it on a sphere of radius 1 and then rescale – Piero D'Ancona Jan 21 '14 at 20:45
Hint. For every $r>0$ and $q\le 6$, $$W^{1,2}(B_r)\subset L^6(B_r)\subset L^q(B_r),$$ due to the Sobolev Imbedding Theorem.
In particular, there is a $c_1>0$ such that $$\|u\|_{L^q(B_r)}^2 \le c_1 \big(\|\nabla u\|^2_{L^2(B_r)}+\|u\|^2_{L^2(B_r)}\big),$$ for all $u\in W^{1,2}(B_r)$.
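A sketch of the rescaling step suggested in the comments (an addition, not from the original page): prove the inequality on $B_1$ first, then for $u \in H^1(B_r)$ set $u_r(y) = u(ry)$ on $B_1$. In three dimensions
$$\int_{B_1}\vert u_r\vert^q = r^{-3}\int_{B_r}\vert u\vert^q,\qquad \int_{B_1}\vert\nabla u_r\vert^2 = r^{-1}\int_{B_r}\vert\nabla u\vert^2,\qquad \int_{B_1}\vert u_r\vert^2 = r^{-3}\int_{B_r}\vert u\vert^2.$$
Substituting into the unit-ball inequality, the first term on the right carries $r^{-a-3(\frac{q}{2}-a)} = r^{-3}$ (using $a=\frac{3}{4}(q-2)$), which matches the $r^{-3}$ on the left, while the second term carries $r^{-\frac{3q}{2}} = r^{-3}\cdot r^{-2a}$, which is exactly the $r^{-2a}$ factor in the stated inequality.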
https://www.physicsforums.com/threads/gradient-and-open-ball.113633/ | # Gradient and open ball
1. Mar 9, 2006
### eridanus
Let f : R^n -> R.
Suppose that grad(f(x)) = 0 for all x in some open ball B(a, r).
Show that f is constant on B(a, r).
[Hint: use part (a) to make this a problem about a function of one variable]
Part (a) is to show that for any two points x, y in B there is a straight line starting at x and ending at y that is contained in B, which I got, but I don't understand what it has to do with anything. Isn't this just a property of the gradient?
Any help would be greatly appreciated.
2. Mar 9, 2006
### Galileo
Well, try the one-variable case first. Suppose you have a differentiable function f:R->R and f'(x)=0 on some open interval (a,b). Show that f is constant on (a,b).
Then note that grad(f)(x) = 0 is really n equations (one for each component of the gradient), and combine this with the result of part (a).
3. Mar 9, 2006
### HallsofIvy
If the gradient is 0 at every point, then the derivative along any line is 0.
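(Filling in the one-variable reduction sketched above; this step is an addition to the thread.) Given x, y in B(a, r), part (a) lets you define g(t) = f(x + t(y - x)) for t in [0, 1], since the whole segment lies in the ball. By the chain rule
$$g'(t) = \nabla f(x + t(y-x)) \cdot (y - x) = 0,$$
so g is constant on [0, 1] by the mean value theorem, and therefore f(y) = g(1) = g(0) = f(x).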
http://export.arxiv.org/abs/2207.00954 | math.NA
# Title: The error bounds and perturbation bounds of the absolute value equations and some applications
Abstract: In this paper, by introducing a class of absolute value functions, we study the error bounds and perturbation bounds of two types of absolute value equations (AVEs): Ax - B|x| = b and Ax - |Bx| = b. Some useful error bounds and perturbation bounds for the above two types of absolute value equations are presented. By applying the absolute value equations, we obtain some useful error bounds and perturbation bounds for the horizontal linear complementarity problem (HLCP). Incidentally, two new error bounds for the linear complementarity problem (LCP) are given, which coincide with existing results. Without constraint conditions, a new perturbation bound for the LCP is given as well. Besides, without limiting the matrix type, some computable estimates for the above upper bounds are given, which are sharper than some existing results under certain conditions. Some numerical examples for the AVEs arising from the LCP are given to show the feasibility of the perturbation bounds.
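For concreteness, here is a minimal numerical sketch (my own, not from the paper) of solving the first type of AVE, Ax - B|x| = b, by Picard iteration; this is a standard approach that converges when $\|A^{-1}\|\,\|B\| < 1$, which holds for the matrices constructed below.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = 4 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well conditioned
B = 0.5 * rng.standard_normal((n, n))                  # small relative to A
b = rng.standard_normal(n)

# Picard iteration: x_{k+1} = A^{-1} (B |x_k| + b)
x = np.zeros(n)
for _ in range(100):
    x = np.linalg.solve(A, B @ np.abs(x) + b)

print(np.linalg.norm(A @ x - B @ np.abs(x) - b))  # residual ~ machine epsilon
```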
Subjects: Numerical Analysis (math.NA)
Cite as: arXiv:2207.00954 [math.NA] (or arXiv:2207.00954v1 [math.NA] for this version)
## Submission history
From: Shi-Liang Wu [view email]
[v1] Sun, 3 Jul 2022 04:35:14 GMT (20kb)
http://mathhelpforum.com/calculus/87042-error-bound-taylor-polynomial-print.html | # error bound taylor polynomial
• May 2nd 2009, 10:57 PM
diroga
error bound taylor polynomial
Quote:
Use Taylor's theorem to bound the error in approximating the function f(x) = e^x with the Maclaurin polynomial M_6(x) on the interval [-1,1]
The formula for this type of thing is
$|f(x) - P_n(x)| \leq \frac {K_{n + 1}} {(n + 1)!}|x - x_0|^{n + 1}$
the max bound is $K_{n + 1} = e^1$, with $x_0 = 0$, $n = 6$
$\frac{e^1}{7!} |-1|^7 \approx 5.39 \times 10^{-4}$
is this correct?
• May 3rd 2009, 12:35 AM
CaptainBlack
Quote:
Originally Posted by diroga
The formula for this type of thing is
$|f(x) - P_n(x)| \leq \frac {K_{n + 1}} {(n + 1)!}|x - x_0|^{n + 1}$
the max bound is $K_{n + 1} = e^1$ $x_0 =0, n = 6$
$\frac{e^1}{7!} |-1|^7 \approx 5.39 \times 10^{-4}$
is this correct?
More or less, but you need to improve your notation and explain what things are.
CB
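A quick numerical check of the bound (an addition, not part of the thread):

```python
import math

def M6(x):
    """Maclaurin polynomial of e^x up to degree 6."""
    return sum(x**k / math.factorial(k) for k in range(7))

bound = math.e / math.factorial(7)            # ~5.39e-4
worst = max(abs(math.exp(x) - M6(x))
            for x in [i / 1000 - 1 for i in range(2001)])

print(worst, bound, worst <= bound)           # ~2.26e-4, ~5.39e-4, True
```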
https://www.semanticscholar.org/author/D.-George/152997755
GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral.
On August 17, 2017 at 12:41:04 UTC the Advanced LIGO and Advanced Virgo gravitational-wave detectors made their first observation of a binary neutron star inspiral. The signal, GW170817, was detected…
Gravitational Waves and Gamma-Rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A
On 2017 August 17, the gravitational-wave event GW170817 was observed by the Advanced LIGO and Virgo detectors, and the gamma-ray burst (GRB) GRB 170817A was observed independently by the Fermi…
GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2.
We describe the observation of GW170104, a gravitational-wave signal produced by the coalescence of a pair of stellar-mass black holes. The signal was measured on January 4, 2017 at 10:11:58.6 UTC by…
GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs
We present the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1 $\mathrm{M}_\odot$ during the first and second observing runs of the…
GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence.
• B. Abbott, +497 authors S. Kaufer
• Physics, Medicine
• Physical review letters
• 27 September 2017
On August 14, 2017 at 10:30:43 UTC, the Advanced Virgo detector and the two Advanced LIGO detectors coherently observed a transient gravitational-wave signal produced by the coalescence of two…
GW170608: Observation of a 19 solar-mass binary black hole coalescence
On 2017 June 8 at 02:01:16.49 UTC, a gravitational-wave (GW) signal from the merger of two stellar-mass black holes was observed by the two Advanced Laser Interferometer Gravitational-Wave…
GW170817: Measurements of Neutron Star Radii and Equation of State.
On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this…
GW190425: Observation of a Compact Binary Coalescence with Total Mass ∼3.4 M⊙
On 2019 April 25, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9. The Virgo detector was also taking data that did not contribute to detection due…
GW190814: Gravitational Waves from the Coalescence of a 23 Solar Mass Black Hole with a 2.6 Solar Mass Compact Object
We report the observation of a compact binary coalescence involving a 22.2–24.3 M⊙ black hole and a compact object with a mass of 2.50–2.67 M⊙ (all measurements quoted at the 90% credible level).
Search for Post-merger Gravitational Waves from the Remnant of the Binary Neutron Star Merger GW170817
The first observation of a binary neutron star (NS) coalescence by the Advanced LIGO and Advanced Virgo gravitational-wave (GW) detectors offers an unprecedented opportunity to study matter under the…
https://www.physicsforums.com/threads/continuously-uniform-function-proof.307814/ | # Uniformly continuous function proof
1. Apr 16, 2009
### aeronautical
1. The problem statement, all variables and given/known data
Let f : (0,1) → (0,1) be a strictly increasing continuous function. Does f have to be uniformly continuous? Please note that it's from (0,1) → (0,1) and NOT [0,1] → [0,1]. Please help me with the steps... I have no clue where to start... thanks...
2. Apr 16, 2009
### dx
Consider the function $$f(x) = -1 / (x - 1)$$. Prove that this is a counterexample.
Last edited: Apr 16, 2009
3. Apr 16, 2009
### aeronautical
Sorry but could you elaborate further? What theorem should I use?
4. Apr 16, 2009
### dx
Sorry, the function in my previous post should have been $$f(x) = -1 / (x - 1)$$
You don't need to use any special theorem. Just try drawing a graph of this function, and see if you can prove that it is not uniformly continuous on (0,1).
If you find that difficult, think about the following easier example. Prove that 1/x is not uniformly continuous on (0,1).
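(A sketch of that easier example, added here for clarity; it is not part of the original exchange.) Take $\varepsilon = 1$ and the pairs $x_n = \frac{1}{n}$, $y_n = \frac{1}{2n}$. Then
$$|x_n - y_n| = \frac{1}{2n} \rightarrow 0 \quad \text{while} \quad \left|\frac{1}{x_n} - \frac{1}{y_n}\right| = n \rightarrow \infty,$$
so no single $\delta > 0$ can work for every pair of points, and $1/x$ is not uniformly continuous on (0,1).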
5. Apr 16, 2009
### aeronautical
I find this rather difficult, so I started pursuing the second option you suggested, and found a link showing how to prove that 1/x is not uniformly continuous on (0,1).
I can't figure out how you linked my question to the function you were kind enough to write down for me.
6. Apr 16, 2009
### dx
The function I wrote down is the same function except reflected in the y axis, so that it is strictly increasing (a condition in the question), and moved to the right so that the infinity is at 1 instead of at 0.
7. Apr 16, 2009
### aeronautical
I drew the function. It goes to infinity as x approaches 1. So what does this tell me? That it is not uniformly continuous?
8. Apr 16, 2009
### aeronautical
Here is my plot (attached: PIC3492.jpg).
9. Apr 16, 2009
### dx
You will notice that it is the same as the function 1/x except moved and reflected. You posted a link to a website showing how to prove that 1/x is not uniformly continuous on (0,1), so you can use the same method with slight modifications to prove that this one is not uniformly continuous.
10. Apr 16, 2009
### aeronautical
However, I do not understand why they specify that f : (0,1) → (0,1) is a strictly increasing continuous function. So is this a proof by contradiction?
http://www.chegg.com/homework-help/questions-and-answers/straight-vertical-wire-carries-current-130-downward-region-poles-alarge-superconducting-el-q123252 | A straight, vertical wire carries a current of 1.30 A downward in a region between the poles of a large superconducting electromagnet, where the magnetic field has a magnitude of 0.559 T and is horizontal.
Part A) What is the magnitude of the magnetic force on a section of the wire with a length of 1.00 that is in this uniform magnetic field, if the magnetic field direction is east?
Part B) What is the direction of the magnetic force on a section of the wire with a length of 1.00 that is in this uniform magnetic field, if the magnetic field direction is east?
Part C) What is the magnitude of the magnetic force on a section of the wire with a length of 1.00 that is in this uniform magnetic field, if the magnetic field direction is south?
Part D) What is the direction of the magnetic force on a section of the wire with a length of 1.00 that is in this uniform magnetic field, if the magnetic field direction is south?
Part E) What is the magnitude of the magnetic force on a section of the wire with a length of 1.00 that is in this uniform magnetic field, if the magnetic field direction is 28.0° south of west?
Part F) What angle will the magnetic force on this segment of wire make relative to the north, if the magnetic field direction is 28.0° south of west?
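A sketch of the cross-product bookkeeping behind all six parts (an addition; the conventions east = +x, north = +y, up = +z are my choice, and the wire length is taken as 1.0 m since the unit was dropped in the original):

```python
import numpy as np

I = 1.30                                   # current, A
B_mag = 0.559                              # field magnitude, T
L_vec = I * np.array([0.0, 0.0, -1.0])     # I * L, current downward, length 1.0 assumed

# F = I L x B for the two axis-aligned field directions
for name, B_hat in [("east", [1, 0, 0]), ("south", [0, -1, 0])]:
    F = np.cross(L_vec, B_mag * np.array(B_hat, dtype=float))
    print(name, F)
# B east  -> F = (0, -0.727, 0): 0.727 N, pointing south
# B south -> F = (-0.727, 0, 0): 0.727 N, pointing west

# Parts E/F: B at 28.0 degrees south of west
th = np.radians(28.0)
B = B_mag * np.array([-np.cos(th), -np.sin(th), 0.0])
F = np.cross(L_vec, B)
print(F, np.linalg.norm(F))  # 0.727 N, directed 28.0 deg west of north
```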
https://openreview.net/forum?id=B1QgVti6Z | ## Empirical Risk Landscape Analysis for Understanding Deep Neural Networks
15 Feb 2018, 21:29 (modified: 23 Feb 2018, 00:27) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Keywords: Deep Learning Analysis, Deep Learning Theory, Empirical Risk, Landscape Analysis, Nonconvex Optimization
Abstract: This work aims to provide comprehensive landscape analysis of empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the convergence performance. In particular, for an $l$-layer linear neural network consisting of $d_i$ neurons in the $i$-th layer, we prove the gradient of its empirical risk uniformly converges to the one of its population risk, at the rate of $\mathcal{O}(r^{2l} \sqrt{l\sqrt{\max_i d_i}\, s\log(d/l)/n})$. Here $d$ is the total weight dimension, $s$ is the number of nonzero entries of all the weights and the magnitude of weights per layer is upper bounded by $r$. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide convergence guarantee for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive the stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep \emph{nonlinear} neural networks with sigmoid activation functions. We prove similar results for convergence behavior of their empirical risk gradients, non-degenerate stationary points as well as the empirical risk itself. To our best knowledge, this work is the first one theoretically characterizing the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding on how the neural network depth $l$, the layer width $d_i$, the network size $d$, the sparsity in weight and the parameter magnitude $r$ determine the neural network landscape.
https://blog.ugnus.uk.eu.org/ | ## Implicit coordinate transforms are weird
There’s a wide class of coordinate transforms that are typically given backwards. Witness spherical polar coordinates:
$x = r \cos \phi \sin \theta \\ y = r \sin \phi \sin \theta \\ z = r \cos \theta \\$
Typically we already know what our cartesian coordinates $(x,y,z)$ are, and we want to express them in this fancy new coordinate system $(r,\phi,\theta)$. That is, we want a map
$\Phi : (x, y, z) \mapsto (r, \phi, \theta),$
but it looks like we’ve only been given the inverse map
$\Phi^{-1} : (r, \phi, \theta) \mapsto (x, y, z) = (r \cos \phi \sin \theta, r \sin \phi \sin \theta, r \cos \theta).$
Now, really we know how to invert these expressions. But doing calculus with inverse functions like $\tan^{-1}(y/x)$ is no fun at all, and besides we can imagine situations where no inverse exists.
What we’re interested in is what becomes of the basis vectors $(\partial_x, \partial_y, \partial_z)$ and covectors $(dx, dy, dz)$ when we change to spherical polar coordinates.
Let’s imagine that the manifold $\mathcal{M}$ is “$\mathbb{R}^3$ with cartesian lines drawn on”, and the manifold $\mathcal{N}$ is “$\mathbb{R}^3$ with spherical lines drawn on”. Obviously these are both $\mathbb{R}^3$, but our reasoning will be completely general.
Recall that a map $\Phi : \mathcal{M} \longrightarrow \mathcal{N}$ induces a ‘pullback’ $\Phi^*$ that takes functions/covectors on $\mathcal{N}$ to functions/covectors on $\mathcal{M}$; and a ‘pushforward’ $\Phi_*$ that takes curves/vectors on $\mathcal{M}$ to curves/vectors on $\mathcal{N}$. That is, the pullback $\Phi^*$ operates ‘backwards’ to the direction of the original map $\Phi$.
But this is exactly the same as saying that the pullback induced by the inverse map $\Phi^{-1}$ will operate in the expected ‘forwards’ direction. So, $\Phi^{-1*}$ takes functions/covectors on $\mathcal{M}$ to functions/covectors on $\mathcal{N}$. So, given that we only have access to $\Phi^{-1}$ right now, it looks like we can successfully work out what our covectors will look like in spherical coordinates.
Another way of phrasing this is that the exterior derivative commutes with pullbacks. Let $f$ be a function on $\mathcal{M}$ and $v$ a vector field on $\mathcal{N}$. Then
$(\Phi^{-1*} df)(v) = df(\Phi^{-1}_*v) = (\Phi^{-1}_*v)(f) \\ = v(\Phi^{-1*}f) = d(\Phi^{-1*}f)(v) \\ \Rightarrow \Phi^{-1*}df = d(\Phi^{-1*}f).$
## A correct method for covectors
But now let $f$ be the coordinate function for the coordinate $x$, i.e. $f(x,y,z) = x$. Then
$\Phi^{-1*}dx = d(\Phi^{-1*}x) = d(r \cos \phi \sin \theta) \\ = \cos \phi \sin \theta dr - r \sin \phi \sin \theta d\phi + r \cos \phi \cos \theta d\theta,$
using the fact that we know $(\Phi^{-1*}f)(r,\phi,\theta) = f(\Phi^{-1}(r,\phi,\theta))$ from above, and standard facts about the exterior derivative $d$ (Leibniz rule over multiplication etc.).
Rinse and repeat for the other basis covectors:
$\Phi^{-1*}dy = \sin\phi \sin\theta dr + r \cos\phi \sin\theta d\phi + r \sin\phi \cos\theta d\theta \\ \Phi^{-1*}dz = \cos\theta dr - r \sin\theta d\theta.$
So given a covector $\eta$ in cartesian coordinates $\eta = \eta_x dx + \eta_y dy + \eta_z dz$ we now know how to substitute for $(dx,dy,dz)$, writing $\eta$ in spherical coordinates.
## An incorrect method for vectors
Let’s try and naively apply the calculus we already know, so try the following (for the $z$ basis vector):
$\partial_z = \frac{\partial}{\partial z} = \frac{\partial r}{\partial z} \frac{\partial}{\partial r} + \frac{\partial \theta}{\partial z} \frac{\partial}{\partial \theta} \\ = \left( \frac{\partial (r \cos \theta)}{\partial r} \right)^{-1} \frac{\partial}{\partial r} + \left( \frac{\partial (r \cos \theta)}{\partial \theta} \right)^{-1} \frac{\partial}{\partial \theta} \\ = \frac{1}{\cos \theta} \partial_r - \frac{1}{r \sin \theta} \partial_\theta.$
Now when we contract this with our earlier expression for $dz$, we should get
$dz(\partial_z) = 1.$
But instead we find
$(\Phi^{-1*}dz)(\Phi_* \partial_z) = (\cos\theta dr - r\sin\theta d\theta) \left(\frac{1}{\cos \theta} \partial_r - \frac{1}{r \sin \theta} \partial_\theta\right) \\ = 2\;(!)$
What went wrong? We neglected to consider contributions to $\partial_z$ that might arise from other coordinate vectors being rotated into the $z$ direction due to the coordinate change (this sentence doesn’t really make sense, but then again, we’re trying to ‘explain’ a contradiction).
## A correct method for vectors
Write out completely general expressions for $(\partial_x, \partial_y, \partial_z)$:
$\Phi_* \partial_x = A \partial_r + B \partial_\phi + C \partial_\theta \\ \Phi_* \partial_y = D \partial_r + E \partial_\phi + F \partial_\theta \\ \Phi_* \partial_z = G \partial_r + H \partial_\phi + I \partial_\theta$
All we know about these basis vectors is that, when contracted with the basis covectors, we should obtain the identity matrix, even when they’ve been written out in spherical coordinates:
$(\Phi^{-1*}dx^i)(\Phi_* \partial_{x^j}) = dx^i(\Phi^{-1}_* \Phi_* \partial_{x^j}) \\ = dx^i(\mathrm{id}_* \partial_{x^j}) = dx^i(\partial_{x^j}) = \delta^i_j.$
($\mathrm{id}$ is just the identity map)
So we repeatedly apply this property to the expression above, essentially inverting the 3-by-3 matrix that has components $A, B, \ldots$.
For example, for $\partial_z$ we get
$\Phi_*(\partial_z) = \cos \theta \partial_r - \frac{\sin \theta}{r} \partial_\theta,$
which gives the correct result when contracted with $dz$.
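As a sanity check (an addition to the post), sympy can invert the Jacobian of the map $(r,\phi,\theta) \mapsto (x,y,z)$; the columns of $J^{-1}$ are the cartesian coordinate vectors expanded in the spherical basis, and the $z$-column reproduces the expression above.

```python
import sympy as sp

r, phi, theta = sp.symbols('r phi theta', positive=True)
X = sp.Matrix([r*sp.cos(phi)*sp.sin(theta),
               r*sp.sin(phi)*sp.sin(theta),
               r*sp.cos(theta)])

J = X.jacobian([r, phi, theta])   # J[i, j] = d x^i / d u^j
Jinv = sp.simplify(J.inv())

# Third column: components (dr/dz, dphi/dz, dtheta/dz) of the
# vector field d/dz in the spherical basis.
print(Jinv[:, 2])   # (cos(theta), 0, -sin(theta)/r)
```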
## Conclusion
The essential difference between vectors and covectors is that, under maps, one of them moves one way and the other one moves the other way. Hopefully the little parable in this blogpost has illustrated this fact.
When you have a metric you can talk about them having indices in different places, but that allows you to forget about the difference between them altogether! The interesting differences between vectors and covectors come into play when:
• You don’t necessarily know what the metric is.
• You’re using maps between manifolds/coordinate systems whose inverses don’t necessarily exist (for example, the projection onto a submanifold has no inverse).
The fact that the exterior derivative commutes with pullbacks also explains why it’s covectors that show up in integrals, thanks to the ‘change of variables’ formula
$\int_V \eta = \int_{\Phi(V)} \Phi^{*-1}(\eta).$
It also explains why it’s so easy to find the form of the metric in new coordinates, because the metric is a rank (0,2)-tensor, i.e. a sum of pairs of covectors, tensor-producted together:
$g = g_{ij} dx^i \otimes dx^j,$
and we can just substitute for $dx^i$ in the new coordinates and we’re done!
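For instance (a small sympy sketch, again my addition), substituting for $dx^i$ amounts to computing $J^T J$, which returns the familiar spherical metric straight away:

```python
import sympy as sp

r, phi, theta = sp.symbols('r phi theta', positive=True)
X = sp.Matrix([r*sp.cos(phi)*sp.sin(theta),
               r*sp.sin(phi)*sp.sin(theta),
               r*sp.cos(theta)])
J = X.jacobian([r, phi, theta])

# Pullback of the Euclidean metric delta_ij dx^i dx^j
g = sp.simplify(J.T * J)
print(g)   # diag(1, r**2*sin(theta)**2, r**2)
```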
## What’s the deal with tautological 1-forms?
Epistemic status: All pretty standard derivations, except the last section on mechanics which is a bit hand-wavy.
When formulating mechanics on cotangent bundles, one comes across an object called the ‘tautological 1-form’ (often denoted $\theta$) which is supposedly key to the whole process. Here I will attempt to describe what this 1-form is, why it is useful, and the role it plays in the fundamentals of classical mechanics.
## Pullbacks and Pushforwards
First a word about smooth maps between manifolds, and the operations derived from them. Let $\mathcal{M}$ and $\mathcal{N}$ be smooth manifolds, and $\Phi : \mathcal{M} \longrightarrow \mathcal{N}$ be a smooth map, not necessarily invertible. Furthermore, let $f : \mathcal{N} \longrightarrow \mathbb{R}$ be a smooth function, let $X : \mathcal{M} \longrightarrow \mathcal{TM}$ be a vector field on $\mathcal{M}$, and let $\eta : \mathcal{N} \longrightarrow \mathcal{T^*N}$ be a covector field on $\mathcal{N}$.
We can use $\Phi$ to ‘pullback’ functions on $\mathcal{N}$ into functions on $\mathcal{M}$, like so:
$\Phi f : \mathcal{M} \longrightarrow \mathbb{R}, \:\:\:\:\:\:\:\: (\Phi f)(x) = f(\Phi(x)),$
and taking advantage of that, we now have a way to ‘pushforward’ vector fields on $\mathcal{M}$ into vector fields on $\mathcal{N}$:
$\Phi_* X : \mathcal{N} \longrightarrow \mathcal{TN}, \:\:\:\:\:\:\:\: \left.(\Phi_* X)(f)\right|_{\Phi(p)} = \left.X(\Phi f)\right|_{p}$
which then also gives a way to ‘pullback’ covector fields on $\mathcal{N}$ into covector fields on $\mathcal{M}$:
$\Phi^*\eta : \mathcal{M} \longrightarrow \mathcal{T^*M}, \:\:\:\:\:\:\:\: \left.(\Phi^*\eta)(X)\right|_{p} = \left.\eta(\Phi_* X)\right|_{\Phi(p)}.$
I have written a bunch of vertical “evaluate-here” bars for clarification. It is common to be rather casual about the difference between a vector (lives in $\mathcal{TM}$) and a vector field (a function $\mathcal{M} \longrightarrow \mathcal{TM}$), and similarly for covectors. Typically, the various kinds of product are evaluated pointwise, e.g. if $f, g, h$ are functions then $\left.fgh\right|_x = f(x)g(x)h(x)$.
## The tautological 1-form itself
Now let $\mathcal{Q}$ be a smooth manifold, and specialise the above discussion to the case $\mathcal{N} = \mathcal{Q}$, $\mathcal{M} = \mathcal{T^*Q}$. Let $q$ be coordinates on $\mathcal{Q}$, and $(p,q)$ be coordinates on $\mathcal{T^*Q}$; that is, points on $\mathcal{T^*Q}$ are 1-forms associated to a particular point in $\mathcal{Q}$: $(p, q) \equiv \left.p_i dq^i\right|_q$. Having these two equivalent ways to look at points on a cotangent bundle is an important point which we shall return to later.
For the current purpose we will study the map
$\pi : \mathcal{T^*Q} \longrightarrow \mathcal{Q}, \:\:\:\:\:\:\:\: \pi(p, q) = q,$
that is, simply the projection map from $\mathcal{T^*Q}$ ‘down’ to $\mathcal{Q}$ – it just tells us the point on $\mathcal{Q}$ that the covector was living at.
Now for a mystical statement: the tautological 1-form is both the pullback $\pi^*$ interpreted as a 1-form, and also has the coordinate expression $\theta = p_i dq^i$. How on earth can both these things be true, and besides, how can one ‘interpret a pullback as a 1-form’?!
That last claim is actually not too bad: a map $\mathcal{T^*Q} \longrightarrow \mathcal{Q}$ induces a pullback $\mathcal{T^*Q} \longrightarrow \mathcal{T^*T^*Q}$, but this map has exactly the domain and range of a covector field on $\mathcal{T^*Q}$! Of course, this requires swapping between the perspectives of $p$ as a coordinate on $\mathcal{T^*Q}$ and $p$ as a 1-form in its own right.
We can use $(\eta,p,q)$ for coordinates on $\mathcal{T^*T^*Q}$, equivalently writing $\eta = \eta^i dp_i + \eta_i dq^i$ where in a slight abuse of notation we’ve written $\eta^i$ for the first $n$ coordinates (the $dp$ components) and $\eta_i$ for the second $n$ coordinates (the $dq$ components).
Now to investigate $\pi$ and its induced pullbacks and pushforwards. Let $f : \mathcal{Q} \longrightarrow \mathbb{R}$ be a function on $\mathcal{Q}$, and let $X : \mathcal{T^*Q} \longrightarrow \mathcal{TT^*Q}$ be a vector field on $\mathcal{T^*Q}$, which in coordinates (using the same abuse of notation as before) we will write $X = X_i \partial_{p_i} + X^i \partial_{q^i}$.
Recalling $\pi(p,q) = q$, the pushforward of $X$ under $\pi$ is then
$\left.(\pi_* X)(f)\right|_{\pi(p,q)} = \left.X(f(\pi(p,q)))\right|_{(p,q)} \\ = \underbrace{\left.X_i \partial_{p_i}(f(q))\right|_{(p,q)}}_{=0} + \left.X^i \partial_{q_i}(f(q))\right|_{(p,q)},$
so the coordinates of our pushed-forward vector field on $\mathcal{TQ}$ are $(\pi_* X)^i = X^i$, i.e. just the $q$ components of $X$.
Now we can look at how $\pi^*$ acts on covector fields $\eta : \mathcal{Q} \longrightarrow \mathcal{T^*Q}$ (in coordinates $\eta = \eta_i dq^i$):
$\left.(\pi^* \eta)(X)\right|_{(p,q)} = \left.\eta(\pi_* X)\right|_{\pi(p,q)} \\ = \left.\eta_i dq^i\right|_q \left(\left.X^j \partial_{q^j}\right|_{(p,q)}\right) = \left.\eta_i\right|_q\left.X^i\right|_{(p,q)}.$
This means that the action of $\pi^*$ is basically to place $\eta$ straight into $\mathcal{T^*T^*Q}$ unchanged, with all $dp$ components set to zero:
$\pi^* : \mathcal{T^*Q} \longrightarrow \mathcal{T^*T^*Q}, \\ \pi^* : (p,q) \longmapsto (p, p, q), \\ \left.\pi^*\eta\right|_{(p,q)} = \left.\eta_i\right|_q \left.dq^i\right|_{(p,q)}.$
And this is the source of the coordinate expression for $\pi^*$ that I quoted above – it means that if we take $\pi^*$ to be a covector field on $\mathcal{T^*Q}$, denoted $\theta$, then
$\left.\theta\right|_{(p,q)} = \left.p_i\right|_q \left.dq^i\right|_{(p,q)},$
or $\theta = p_i dq^i$ for short. If you like you can think of the action of $\pi^*$ as stripping off the $dq$ components that belong to $\mathcal{T^*Q}$ and replacing them with $dq$ components that belong to $\mathcal{T^*T^*Q}$.
## How does it ‘cancel’ pullbacks?
Now look at a general covector field $\alpha : \mathcal{Q} \longrightarrow \mathcal{T^*Q}$. Treating $\alpha$ as a map (a similar trick to above) means it induces a pushforward $\alpha_* : \mathcal{TQ} \longrightarrow \mathcal{TT^*Q}$ and a pullback $\alpha^* : \mathcal{T^*T^*Q} \longrightarrow \mathcal{T^*Q}$.
Let $F : \mathcal{T^*Q} \longrightarrow \mathbb{R}$ be a function, and $Y : \mathcal{Q} \longrightarrow \mathcal{TQ}$ a vector field with coordinate expression $Y = Y^i \partial_{q^i}$, which is pushed-forward like so:
$(\alpha_* Y)(F) = Y(\alpha F) = Y^i \partial_{q^i} (F(\alpha, q)) \\ = \underbrace{Y^i \frac{\partial F}{\partial p_j} \left. \frac{\partial p_j}{\partial q^i} \right|_{p = \alpha}}_{=0} + Y^i \frac{\partial F}{\partial q_j} \left. \frac{\partial q_j}{\partial q^i} \right|_{p = \alpha} = \left. Y^i \right|_q \left. \partial_{q^i} F(p,q) \right|_{(\alpha, q)}.$
We can use this to find how $\alpha^*$ acts on covector fields $\beta : \mathcal{T^*Q} \longrightarrow \mathcal{T^*T^*Q}$:
$\left. (\alpha^* \beta)(Y) \right|_q = \left. \beta(\alpha_* Y) \right|_{(\alpha,q)} = \left. \beta_i \right|_{(\alpha,q)} \left. Y^i \right|_q.$
But what if we specialise to $\beta = \theta = p_i dq^i$, the interesting 1-form we were looking at above? We get
$\left. \alpha^* \theta \right|_q = \left. p_i dq^i \right|_{(\alpha,q)} = \left. \alpha_i dq^i \right|_q = \left. \alpha \right|_q,$
which is exactly the 1-form $\alpha$ that we started off with! This is the reason that $\theta$ is said to ‘cancel’ a pullback, as it gives us back the 1-form that we were using to create the pullback in the first place.
## The basics of mechanics
How does this all link into physics? For mechanics you need a symplectic manifold $\mathcal{M}$ along with a 2-form called $\omega$ that satisfies various properties; notably that $d\omega = 0$, so that at least locally we can find an $\alpha$ such that $d\alpha = \omega$. Abstractly, we want to find paths $\Gamma : \mathbb{R} \longrightarrow \mathcal{M}$ such that the action integral $I$ is minimised:
$I(\Gamma) \equiv \int_\Gamma \alpha.$
Now, there are various ways to come up with symplectic manifolds, but the relevant one for physicists is ‘phase space’, i.e. $\mathcal{M} = \mathcal{T^*Q}$, the cotangent bundle of some manifold $\mathcal{Q}$ (where we think of $\mathcal{Q}$ as being the ‘real’ physical space that we see around us, perhaps 4D space-time or similar). And it turns out that the logical choice of $\omega$ is to take $\omega \equiv -d\theta$ (the minus sign being a mere convention) where $\theta = p_i dq^i$. Thanks to the discussion above we now know exactly what this object is (spoiler: it’s the tautological 1-form!).
Traditionally a physicist would have something called an action functional $S$ that takes curves $\gamma : \mathbb{R} \longrightarrow \mathcal{Q}$ and gives a real number, and they would then find the curve $\gamma$ that minimises that number ($S$ is called a functional because it also depends on the first derivatives of $\gamma$). By parametrising $\gamma$ by the time coordinate the normal Euler-Lagrange equations are derived. However, we wish to stay agnostic about which coordinate represents time! So we will keep our paths parametrised by arc length, i.e. $\gamma : [0,1] \longrightarrow \mathcal{Q}$.
Let $\gamma_q$ be a class of curves in $\mathcal{Q}$ parametrised by arc length with fixed starting point $\gamma_q(0) = q_0$ and ending point $\gamma_q(1) = q$. Now define $W(q) \equiv S[\gamma_q] + W_0$ for some constant $W_0 \equiv W(q_0)$, where $\gamma_q$ is defined to be the curve with endpoints $(q_0,q)$ that minimises $S[\gamma_q]$. This function $W : \mathcal{Q} \longrightarrow \mathbb{R}$ is called Hamilton’s principal function, and note that it depends only on positions, and not the momenta! We now calculate
$S[\gamma_q] = W(q) - W(q_0) \\ \:\:\:\: = \int_{\partial \gamma_q} W \\ \:\:\:\: = \int_{\gamma_q} dW \:\:\:\:\:\:\:\: (1) \\ \:\:\:\: = \int_{\gamma_q} (dW)^* \theta \:\:\:\:\:\:\:\: (2) \\ \:\:\:\: = \int_{dW(\gamma_q)} \theta \:\:\:\:\:\:\:\: (3) \\ \:\:\:\: = \int_\Gamma \theta.$
The numbering refers to the following results:
1. Generalised Stokes’ theorem. Here the ‘boundary’ of $\gamma_q$ is just its two endpoints.
2. The ‘cancelling’ property described above: $\eta^*(\theta) = \eta$
3. A standard property of integrals of pullbacks: $\int_U \Phi^*(\eta) = \int_{\Phi(U)} \eta$
So we see that the process of minimising $S$ is just a special case of the general theory of minimisation problems on symplectic manifolds. To lift the path $\gamma$ from $\mathcal{Q}$ to the symplectic phase space $\mathcal{T^*Q}$, we used the 1-form $dW$ as a pullback, similarly to the trick we pulled above with $\pi^*$. That means that the momenta along $\gamma$ are
$p_i = \frac{\partial W}{\partial q^i}.$
Ignoring any details of the expression for the action $S$, how do we derive a more familiar set of differential equations that determine $\gamma$? We can pick out one of the coordinates, say $q^0$, and call it time $t$, and similarly one of the momenta, say $p_0$, and call it energy $-H$ (again, minus sign by convention), so that
$H = -\frac{\partial W}{\partial t}.$
Our aim is to eliminate $W$ in favour of the other coordinates (we will still write $q^i$ and $p_i$ for the other (n-1) coordinates).
Recall that $d^2W = 0$ by the definition of the exterior derivative, so we can immediately write down the first of Hamilton’s equations:
$\frac{\partial}{\partial t}\left(\frac{\partial W}{\partial q^i}\right) = \frac{\partial}{\partial q^i}\left(\frac{\partial W}{\partial t}\right) \\ \frac{\partial p_i}{\partial t} = -\frac{\partial H}{\partial q^i}.$
And since subtracting a total differential from $dW$ retains the $d^2 = 0$ property, we can define a quantity that gives us the second of Hamilton’s equations:
$dA \equiv dW - d(p_iq^i) \\ \frac{\partial}{\partial t}\left(\frac{\partial A}{\partial p_i}\right) = \frac{\partial}{\partial p_i}\left(\frac{\partial A}{\partial t}\right) \\ \frac{\partial q_i}{\partial t} = \frac{\partial H}{\partial p_i}.$
Note that we now have explicitly
$\theta = p_i dq^i - Hdt \\ \omega = dq^i \wedge dp_i - dt \wedge dH$
for our symplectic structure, and we ended up with the familiar Hamilton’s equations
$\frac{\partial p_i}{\partial t} = -\frac{\partial H}{\partial q^i} \\ \frac{\partial q_i}{\partial t} = \frac{\partial H}{\partial p_i}.$
And so, as if by magic, we’ve recovered the traditional formalism of Hamiltonian mechanics as a special case of minimisation procedures on symplectic manifolds.
Of course, we didn’t have to use the 0th coordinate to represent time/energy. Really, time and position are distinguished from each other by the form of the Lorentzian metric, which has not yet entered into our method. It’s true that non-relativistic mechanics will inevitably privilege a time variable; but, the action for a free relativistic point particle is nicely Lorentz-invariant:
$S[\gamma] = \int_\gamma \left.g(X,X)\right|_{\gamma(s)}ds$
where $s$ is arc length, $g$ the metric, and $X$ the 4-vector tangent to $\gamma$.
This blog post was inspired by
• John Baez’s two posts on parallels between thermodynamics and mechanics
• The fact that the Wikipedia page on the tautological 1-form is so abstruse
You may also be interested in
• A 1-page summary of ‘Abstract Hamiltonian mehanics’, which describes an approach which is agnostic about time coordinates (it does not discuss minimisation procedures though)
• An article on how to do geometric Hamilton-Jacobi mechanics properly. It’s possible to derive a single nonlinear differential equation for $W$ (called the ‘Hamilton-Jacobi equation’) which depends explicitly on the time coordinate and the Hamiltonian $H$. The associated time-agnostic (‘non-autonomous’) method is fairly difficult, and this article discusses all the details.
## Volume forms on the mass-shell
The setting for dynamics is the cotangent bundle $\mathcal{T}^*\mathcal{M}$ of a manifold $\mathcal{M}$ with pseudo-Riemannian metric $g_{\mu\nu}$; relevant observables can be functions of both position and momentum. For example, the distribution function $f(x,p)$, which is the number density of particles in phase space ($x^\mu$ and $p^\mu$ are coordinates on $\mathcal{M}$ and $\mathcal{T}^*_x\mathcal{M}$ respectively).
The spatial volume form (integral measure) is $\mathrm{vol}(\mathcal{M}) \equiv \sqrt{g(x)} dx^1 \wedge \ldots \wedge dx^n$, where $g(x)$ is the determinant of the metric evaluated at $x \in M$; and the volume form for the full phase space is $dx^1 \wedge \ldots \wedge dx^n \wedge dp_1 \wedge \ldots \wedge dp_n$. If we want to integrate out the momentum-dependence of some observable, we need just the momentum-part of the volume form. From the two expressions above we can see that this is $\mathrm{vol}(\mathcal{T}^*_x\mathcal{M}) = \frac{1}{\sqrt{g(x)}} dp^1 \wedge \ldots \wedge dp^n$.
However, paths that obey the classical equations of motion are constrained to lie on the mass-shell: in flat Lorentzian spacetime this is a hyperboloid in the momentum cotangent space $\left(\mathbb{R}^{1,3}\right)^*$, given by $-p_0^2 + p_i^2 = -m^2$, where $p_0$ is the energy, $p_i$ the spatial 3-momentum and m the rest mass of a particular particle. However, in fully general curved spacetime the corresponding condition gives a hypersurface $\mathcal{N} \equiv \{(x,p) \in \mathcal{T}^*\mathcal{M} \: | \: p^\mu p_\mu = -m^2 \}$.
Because the permitted region of phase space has been restricted, we can eliminate one momentum component from the integral, treating it as a function of the other coordinates. Conventionally we pick $p_0$ to be this unwanted component, writing $p_0 = p_0(p_i)$, and note that the equation for $\mathcal{N}$ can be written $p_\mu p^\mu = p_0 p^0 + p_i p^i = -m^2$, which allows us to solve for $p_0$.
We then find the volume form induced on $\mathcal{N}$. Let $n$ be the unit vector field normal to $\mathcal{N}$:
$n = \left.\frac{d(p_\mu p^\mu)}{\left\|d(p_\mu p^\mu)\right\|}\right|_\mathcal{N}$
The squared norm in the denominator works out to be $-4m^2$; since this is a constant, we may normalize by it directly (any constant rescaling of $n$ only rescales the induced volume form). Noting that the metric is a function of $x$ alone, we perform the derivative on the numerator and so find that
$n = \frac{2p_\mu dp^\mu}{-4m^2} = \frac{-p_\mu dp^\mu}{2m^2}$
(the normal index convention for vectors and covectors is swapped round, as we are working on $\mathcal{T}^*\mathcal{M}$, so the coordinates $p_\mu$ are covariant to begin with).
In general, the volume form induced from a manifold $\mathcal{M}$ onto a submanifold $\mathcal{N}$ with normal VF $n$ is
$\mathrm{vol}(\mathcal{N}) = \left. n \:\lrcorner\: \mathrm{vol}(\mathcal{M}) \right|_\mathcal{N}.$
For the present purposes we therefore have
$\sqrt{g(x)} \mathrm{vol}(\mathcal{N}) = n \:\lrcorner\: \left( dp_0 \wedge dp_1 \wedge dp_2 \wedge dp_3 \right) \\ \:\: = (n \:\lrcorner\: dp_0) dp_1 \wedge dp_2 \wedge dp_3 - (n \:\lrcorner\: dp_1) dp_0 \wedge dp_2 \wedge dp_3 \\ \:\:\:\:\:\: + (n \:\lrcorner\: dp_2) dp_0 \wedge dp_1 \wedge dp_3 - (n \:\lrcorner\: dp_3) dp_0 \wedge dp_1 \wedge dp_2$
We differentiate the condition $p^\mu p_\mu = -m^2$ with respect to a spatial component $p_i$ and rearrange, giving (note the positions of the indices):
$\frac{\partial p_0}{\partial p_i} = \frac{-p^i}{p^0} \\ dp_0 = \frac{\partial p_0}{\partial p_i}dp_i = \frac{-p^i}{p^0}dp_i.$
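A quick flat-space sanity check of this relation (an addition; signature $(-,+,+,+)$, so on the mass-shell $p^0 = \sqrt{m^2 + \delta^{ij}p_ip_j}$ and $p_0 = -p^0$):

```python
import sympy as sp

p1, p2, p3, m = sp.symbols('p1 p2 p3 m', positive=True)
p0_up = sp.sqrt(m**2 + p1**2 + p2**2 + p3**2)   # p^0 on the mass shell

# p_0 = -p^0 in this signature; check dp_0/dp_1 = -p^1/p^0
lhs = sp.diff(-p0_up, p1)
print(sp.simplify(lhs + p1/p0_up))   # 0
```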
Expressions of the form $dp_0 \wedge dp_i \wedge dp_j$ can be simplified to involve only the $k$-component of $dp_0$ ( $i \neq k \neq j$ ), as repetition of a form in a wedge product sets the entire expression to zero. We also have $n \:\lrcorner\: dp_\mu = n_\mu = \frac{-p_\mu}{2m^2}$.
So, putting it all together,
$\sqrt{g(x)} \mathrm{vol}(\mathcal{N}) = \\ \:\: \left( \frac{-p_0}{2m^2} \right) dp_1 \wedge dp_2 \wedge dp_3 - \left( \frac{-p_1}{2m^2} \frac{-p^1}{p^0} \right) dp_1 \wedge dp_2 \wedge dp_3 \\ \:\:\:\:\:\: + \left( \frac{-p_2}{2m^2} \frac{-p^2}{p^0} \right) dp_2 \wedge dp_1 \wedge dp_3 - \left( \frac{-p_3}{2m^2} \frac{-p^3}{p^0} \right) dp_3 \wedge dp_1 \wedge dp_2 \\ \:\: = \frac{-1}{2 m^2 p^0}\left(p_0p^0 + p_ip^i\right) dp_1 \wedge dp_2 \wedge dp_3 \\ \:\: = \frac{-(-m^2)}{2 m^2 p^0} dp_1 \wedge dp_2 \wedge dp_3$
And so the final result is
$\mathrm{vol}(\mathcal{N}) = \frac{dp_1 \wedge dp_2 \wedge dp_3}{2 p^0 \sqrt{g(x)}} \\$
which has the expected form. Integrals over momentum space therefore look like
$\int_\mathcal{N} f(p_i) \frac{dp_1 \wedge dp_2 \wedge dp_3}{2 E(p_i) \sqrt{g(x)}}.$
Everything we wrote down was manifestly covariant, so this volume form transforms in the correct way under general coordinate transformations. The rest mass $m$ does not appear in the final volume form, so we are free to set $m = 0$ if we choose, as is the case with photons.
Let’s say you have an array $P$, containing $N$ (~millions) points $P_i \in \mathbb{R}^3$. Perhaps it’s the output of an n-body simulation or something more complicated. Anyway, suppose you also have several other arrays of size $N$, each listing some quantity that is associated with each point $P_i$ – for example $M_i$, the mass at each point.
The task: find some small contiguous region $\Omega \subset \mathbb{R}^3$, and calculate some function at every point within it. For example, approximate the integral of the density by summing over the mass at each point:
$\displaystyle \int_{\Omega} \rho(x) d^3x \approx \sum_{P_i \in \Omega} M_i$
Let us introduce an operation $A \star B = C$, where $|A| = |B|$, $B$ contains only zeroes or ones, and $C$ contains $A_i$ iff $B_i = 1$; also $|C| \leq |A|$, but $C$ maintains the order of elements from $A$. Assume that this operation can be calculated efficiently (in parallel), as opposed to sequentially traversing the elements of an array, which is much slower (this is the case in the IDL programming language, for example).
The solution to the problem at hand is to produce a mask $S$ containing $N$ elements, with values given by the characteristic function $\chi_\Omega$ at each point – that is, $S_i = \chi_\Omega(P_i) = 1$ if $P_i \in \Omega$ and $0$ otherwise. Then the final calculation is
$\displaystyle \sum_{P_i \in \Omega} M_i = \sum_i \left(M \star S\right)_i$
and the advantage is that we can reuse the mask $S$ for additional calculations.
Now suppose that $\Omega$ is itself a very large set, and that we have another, smaller region $\Omega' \subset \Omega$ on which we would also like to perform calculations. Let $P' = P \star S$ and $S' = \chi_{\Omega'}(P')$. Then we can compose our masks to find the $M$ values that lie within $\Omega'$.
$\displaystyle \sum_{P_i \in \Omega'} M_i = \sum_i \left((M \star S) \star S'\right)_i.$
What are the $P$-indices of points that are in the region $\Omega'$? To find the answer we do:
$\displaystyle A = [0,1,2,\ldots,|S|-1] \\ B = (A \star S) \star S'$
and $B$ now contains the desired $P$-indices.
In the IDL programming language, $A = B \star C$ is written as
A = B[where(C)]
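A rough numpy translation of the same idea (a sketch of mine, with boolean masks playing the role of the 0/1 arrays):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(10_000, 3))   # the points P_i
M = rng.uniform(0, 1, size=10_000)         # mass at each point

# Mask S for Omega: points inside a ball of radius 0.5 about the origin
S = np.linalg.norm(P, axis=1) < 0.5
mass_in_Omega = M[S].sum()                 # the sum over M * S

# Nested region Omega' (ball about (0.25, 0.25, 0.25)): compose masks
Pp = P[S]
Sp = np.linalg.norm(Pp - 0.25, axis=1) < 0.1
mass_in_Omega_prime = M[S][Sp].sum()

# P-indices of the points lying in Omega'
A = np.arange(len(P))
B = A[S][Sp]
```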
A more general (and also efficient) way of dealing with large, unstructured lists of coordinates is to use an oct-tree.
## From notated music to audible sounds
This is the second post in a series devoted to music from a mathematical point of view. The first post dealt with written intervals and notes; the moral of that post was that there is some structure (a vector space) hidden inside the way we talk about intervals and notes, which we can (and should) take advantage of.
In this post, I will make the transition from notated music to audible noises, still in a way that is aimed at my hypothetical musically-ignorant mathematician.
## Revision of previous ideas
Notated intervals form a two-dimensional vector space. Pitches form a two-dimensional affine space, with intervals as the ‘difference’ vectors. See the previous post for details.
I take audible sounds to be the space of frequencies as measured in units of hertz (cycles per second). However, what we’re really interested in are the ratios between these frequencies. The absolute values only come into play when we choose an arbitrary reference point off which to base all our absolute pitches. (Choosing a reference point is different from using a non-standard tuning system – you can have equal temperament, but at Baroque pitch (A = 415), for example).
Pitch ratios are, of course, combined by multiplication, but we can still write the operation as addition provided we understand that they are being 'added' in log-space:
$f_1 = f_2f_3 \:\:\: \Longleftrightarrow \:\:\: \log{f_1} = \log{f_2} + \log{f_3}.$
In practice, these interval ratios will always be formed by taking rational numbers to rational powers.
## Constraints on a tuning system
Different musical instruments are suited to different methods of tuning notes. For example, the human voice can trivially produce pitches at any frequency in a certain range – and the same for string instruments. Wind instruments have a fixed number of ‘holes’, plus some standardised ways of shifting the basic pitches around. Brass instruments are even more restricted, and the notes they can play are closely related to the harmonic series.
Keyboard instruments are a somewhat different beast – in theory you could associate a button/key with any note imaginable, but due to practical limitations a one-dimensional array of keys is used. This obviously causes issues when we try to match up a notation system based on a two-dimensional system of intervals to the keys available. Therefore we’ll need to come up with some way of reducing (“projecting”) our two dimensions down to a single dimension. This is the Fundamental Keyboard Problem.
## Intervals with rational coefficients
When defining a tuning system, what is typically given are particular ratios for certain intervals. Suppose we have a tuning system $t : \mathcal{I} \longrightarrow \mathbb{R}$, i.e. a map that takes intervals to pitch ratios. We fix two intervals, $t(i_1) = f_1$ and $t(i_2) = f_2$. Assuming it is not the case that $i_1 \propto i_2$, these two intervals span $\mathcal{I}$, so $t(i)$ is now fixed for all $i \in \mathcal{I}$. This is because any $i \in \mathcal{I}$ can be written in the $i_1, i_2$ basis,
$i = \alpha\cdot i_1 + \beta\cdot i_2$
and hence
$t(i) = \alpha\cdot t(i_1) + \beta\cdot t(i_2) \equiv f_1^\alpha f_2^\beta$
Many well-known tuning systems can be specified this way. They are called syntonic tuning systems, or rank-2 tuning systems. However, in practice there is only one interval ratio that is free to be specified arbitrarily, because the other fixed interval is always $t(\mathsf{P8}) = 2$, otherwise octaves aren’t pure!
This gives rise to the main problem: two independent non-octave intervals can't in general be simultaneously pure. This is distinct from the problem of designing keyboard instruments. The diatonic scale of Ptolemy specifies pure intervals for all eight steps of the major scale:
| degree | ratio |
|--------|-------|
| P1 | 1 |
| M2 | 9/8 |
| M3 | 5/4 |
| P4 | 4/3 |
| P5 | 3/2 |
| M6 | 5/3 |
| M7 | 15/8 |
| P8 | 2 |
(There exist numerous slight variations of the Ptolemaic scale, as well as the minor scale etc.)
With a syntonic temperament, we can only get a few of these ‘correct’, unless we happen to get lucky with our ratios. P1 and P8 are correct by definition; then $i$ (e.g. P5) can be specified freely; then, perhaps $\mathsf{P8} - i$ (e.g. P4) will come out correct too. After that you’re out of luck.
## Syntonic tuning systems
In Pythagorean tuning, the given intervals are $3/2$ for the perfect fifth, and $2$ for the octave. As indicated above, this completely specifies the tuning. The procedure for general intervals is then as follows:
• Define a map $t : \mathcal{I} \longrightarrow \mathbb{R}$ that takes intervals to pitch ratios, and define it for the two chosen basis intervals, e.g.
$t(\mathsf{P5}) = \frac{3}{2}\\ t(\mathsf{P8}) = 2$
• Write your chosen interval in terms of the new basis and calculate the appropriate ratio, e.g.
$\mathsf{M6} = 3\cdot \mathsf{P5} - 1\cdot\mathsf{P8} \\ t(\mathsf{M6}) = \left(\frac{3}{2}\right)^3 \left(2\right)^{-1} = \frac{27}{16}$
• Then, for notes, define a new map $T : \mathcal{P} \longrightarrow \mathbb{R}$
• Fix the origin under $T$, i.e. $T(p_0) = f_0$ for some note $p_0$ and pitch $f_0$; the common choice is $p_0 = \mathsf{A}$, and $f_0 = 440\:\mathrm{Hz}$
• Extend $T$ to all notes by
$T(p) = t(p - p_0)\times T(p_0)$
For example,
$T(\mathsf{F\sharp}) = t(\mathsf{M6}) \times T(\mathsf{A}) = \frac{27}{16} \times 440\:\mathrm{Hz} = 742.5\:\mathrm{Hz}$
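Here is a small sketch of the maps $t$ and $T$ in code (my own illustration in Python, not part of the original recipe), using exact rationals; the basis decompositions are the ones worked out in the interval-algebra post below:

```python
from fractions import Fraction

# Pythagorean tuning: t(P5) = 3/2, t(P8) = 2.
T_BASIS = {"P5": Fraction(3, 2), "P8": Fraction(2)}

# A few intervals written in the (P5, P8) basis: i = a*P5 + b*P8.
COORDS = {"P1": (0, 0), "M2": (2, -1), "M3": (4, -2), "P4": (-1, 1),
          "P5": (1, 0), "M6": (3, -1), "P8": (0, 1)}

def t(interval):
    """Interval -> pitch ratio: t(i) = t(P5)^a * t(P8)^b."""
    a, b = COORDS[interval]
    return T_BASIS["P5"] ** a * T_BASIS["P8"] ** b

def T(interval_above_A, f0=Fraction(440)):
    """Note -> frequency, fixing the origin T(A) = 440 Hz."""
    return f0 * t(interval_above_A)

print(t("M6"), float(T("M6")))   # 27/16 742.5 (the F-sharp above A = 440)
```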
Here is a table of some common syntonic tuning systems, in each case assuming that the second constrained interval is $\mathsf{P8} \longrightarrow 2$:
| Tuning system | Fixed interval |
|---------------|----------------|
| Pythagorean | $\mathsf{P5} \longrightarrow \frac{3}{2}$ |
| Quarter-comma meantone | $\mathsf{M3} \longrightarrow \frac{5}{4}$ |
| Sixth-comma meantone | $\mathsf{A4} \longrightarrow \frac{45}{32}$ |
| Third-comma meantone | $\mathsf{m3} \longrightarrow \frac{6}{5}$ |
| Schismatic | $8\cdot\mathsf{P4} \longrightarrow 10$ |
Note that we quickly enter the realm of irrational numbers: for example, under quarter-comma meantone, $\mathsf{P5} \longrightarrow \left(\frac{5}{4}\right)^\frac{1}{4}\left(2\right)^\frac{1}{2} \approx 1.495$.
You can immediately see that different tuning systems make different trade-offs: quarter-comma meantone provides you with sweet-sounding (and narrow) major thirds, while abandoning the pure fifths of Pythagorean tuning.
There is a link here between theory and practice: in Medieval music, for which Pythagorean tuning was used, phrase-endings rarely feature major thirds – normally open fifths and octaves are the only intervals considered ‘pure’ enough to end a phrase. In Renaissance and Baroque music, major thirds are used much more often, and this coincides with the use of quarter-comma meantone tuning.
## Keyboard instruments with syntonic temperaments
Let us design a keyboard that will use notes from a syntonic temperament $t : \mathcal{I} \longrightarrow \mathbb{R}$ (with fixed interval $i$, origin note $b$, and note-mapping $T : \mathcal{P} \longrightarrow \mathbb{R}$ ); we know that octaves will be pure, so we make our one-dimensional keyboard periodic at the octave, and then place $n$ keys in each octave. Each key (attached to a physical string or pipe) will be tuned to some definite frequency $f \in \{T(p) \: | \: p \in \mathcal{P} \}$.
Now we'll attempt to distribute notes from our temperament to the physical keys on the keyboard. Starting at note $b$ (with frequency $T(b)$ ), assign the notes $(b\pm k\cdot i) \: \mathrm{mod} \: \mathsf{P8}$ to their keys (with frequencies $T\left((b\pm k\cdot i) \: \mathrm{mod} \: \mathsf{P8}\right)$ ), stopping once all $n$ keys have been filled, so that $k$ runs to roughly $n/2$ in each direction ($\pm 1$ depending on whether $n$ is odd or even). Unfortunately in general the cycle is not closed, as $\left(n\cdot i \: \mathrm{mod} \: \mathsf{P8}\right) \neq \mathsf{P1}$. This is called a wolf interval, and its existence limits the usefulness of syntonic tuning systems for keyboards.
To minimise disruption, the wolf interval is normally chosen to be one that is little used if playing in keys with a low number of sharps and flats; for example $\mathsf{G\sharp} - \mathsf{E\flat} = \mathsf{A3}$: under Pythagorean tuning, the A3 is about $1.35$, to contrast with the pure P4 which is exactly $\frac{4}{3}$.
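A quick check of those numbers (my own): in the (P5, P8) basis, $\mathsf{A3} = 11\cdot \mathsf{P5} - 6\cdot \mathsf{P8}$, so under Pythagorean tuning:

```python
from fractions import Fraction

P5, P8 = Fraction(3, 2), Fraction(2)
wolf = P5 ** 11 / P8 ** 6
print(wolf, float(wolf))   # 177147/131072, about 1.3515 (the pure P4 = 4/3 is about 1.3333)
```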
## Keyboard instruments with equal temperaments
Returning to the Fundamental Keyboard Problem, we see that the solution is to project the two dimensions of notated intervals down to a one-dimensional subspace. This necessarily involves one interval being set to zero (or to one, multiplicatively speaking). Our search therefore is effectively for syntonic tuning systems where the fixed ratios are $\mathsf{P8} \longrightarrow 2$ and $i \longrightarrow 1$ for some interval $i$.
Before we know what $i$ is, can we say what such a tuning system would look like? Well, if we pick an interval $j \in \mathcal{I}, j \neq i$, and use $i, j$ as our new basis, then because $i \longrightarrow 1$, we can generate all intervals with $\alpha\cdot j$ for some rational $\alpha$. Furthermore, we can pick $j$ carefully so that all intervals can actually be represented by $\alpha\cdot j$ for integral $\alpha$. Then we can use $j$ as a convenient “unit” with which to construct our notation system or keyboard. If $\mathsf{P8} = n\cdot j$, then the tuning system is called $n$-equal temperament.
A bit of experimentation (or suitably clever calculation) results in some promising-looking candidates for $i$:
| $i$ | $j$ | $n$ |
|-----|-----|-----|
| $\mathsf{A1}$ | $\mathsf{M2}$ | 7 |
| $\mathsf{d2}$ | $\mathsf{A1}, \mathsf{m2}$ | 12 |
| $\mathsf{dd2}$ | $\mathsf{d2}$ | 19 |
| $\mathsf{d^4 3}$ | $\mathsf{d2}$ | 31 |
| $\mathsf{d^7 6}$ | $\mathsf{d2}$ | 53 |
(The $j$ interval is non-unique, as various intervals become identified under equal temperaments.)
As you may have guessed already, the favourite choice here is $n = 12$ and $i = \mathsf{d2} \longrightarrow 1$. This means that $\mathsf{A1} \longrightarrow 2^\frac{1}{12}$, and $\mathsf{m2} \longrightarrow 2^\frac{1}{12}$. So A1 and m2 are identified, and are used as the generator $j$. They are referred to interchangeably as a “semitone”. The other useful property of 12-equal temperament is that $\mathsf{P5} \longrightarrow 2^\frac{7}{12} \approx 1.498$, which is extremely close to the Pythagorean value!
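Numerically (my own check), the two sides of this trade-off:

```python
semitone = 2 ** (1 / 12)

print(semitone ** 7, 3 / 2)   # P5 in 12-equal: 1.4983..., vs pure 1.5 (very close)
print(semitone ** 4, 5 / 4)   # M3 in 12-equal: 1.2599..., vs pure 1.25 (noticeably wide)
```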
Thus the use of 12-equal temperament to resolve the Fundamental Keyboard Problem leads directly to keyboards with 12 keys per octave; seven "white" notes $\{\mathsf{A},\mathsf{B},\mathsf{C},\mathsf{D},\mathsf{E},\mathsf{F},\mathsf{G}\}$, and five "black" notes $\{\mathsf{A\sharp},\mathsf{C\sharp},\mathsf{D\sharp},\mathsf{F\sharp},\mathsf{G\sharp}\}$. There are no more notes to account for, because the equivalence of A1 and m2 means that notes that differ by these intervals are identified, e.g. $\mathsf{B\sharp} \equiv \mathsf{C}$ and $\mathsf{F\sharp} \equiv \mathsf{G\flat}$.
Twelve notes per octave is also fairly convenient given the size of human hands, and keeps the resulting instrument manageable to play.
## Other instruments
Consider an ensemble of dynamically-tunable instruments (string instruments, human voices, etc.). If this ensemble plays a major chord, there’s no reason why the players can’t all agree to tune it totally purely – with ratios of $1, \frac{5}{4}, \frac{3}{2}$.
As a general strategy, the ensemble could choose to fix just a few notes overall, and then tweak any chord slightly to maximise harmonicity. Or, locally fix any note that is constant between successive chords, and change all the other notes around it.
These systems of constant readjustment have one big advantage – much nicer-sounding intervals – and several major annoyances, which are:
• There’s no longer an unambiguous mapping between written notes and sounding frequencies. This may or may not offend you greatly, depending on how you axiomatise musical notation (you can probably guess my position…)
• A tendency for the pitch of the entire ensemble to drift over time (particularly with the second system).
• Cannot include certain instruments in the ensemble (any keyboard instruments, certain wind instruments).
Nevertheless, it is hypothesised that certain ensembles (string quartets, unaccompanied choirs) do in fact adjust their intonation in this way.
## Cheap & Easy differential forms
There’s a way of motivating the notions of tangent vectors and covectors that’s hinted at but generally glossed over – at least in the physics courses that I take. This post is a quick overview, serving mostly as a reminder to myself of the topic. Please excuse the lack of rigour.
I will use the Einstein summation convention throughout,
$x^iy_iz^j \equiv \sum\limits_i \: x^iy_iz^j,$
and hopefully by the end I’ll even have explained why it makes sense.
## Tangent vectors
We have an $n$-dimensional manifold $M$, which contains points, but not vectors. You cannot subtract two points on a manifold and expect to get something useful; imagine a line drawn between two points on the Earth’s surface. It would go awkwardly underground, and wouldn’t measure any sort of quantity that’s appreciable by inhabitants on the surface.
Let $\gamma \: : \: \mathbb{R} \longrightarrow M$ be a curve on $M$. It takes some real parameter (let's call it $t$ ) and spits out points in $M$ along a line, as you evolve $t$. Let's call the coordinates of these points $p^i(t)$ in some coordinate system, and $p'^i(t)$ in some other coordinate system. Then we can find a 'velocity' vector $\dot{\gamma}$, tangent to the curve, whose coordinates are $\left(\frac{dp^1}{dt}, \frac{dp^2}{dt}, \ldots, \frac{dp^n}{dt}\right)$. The coordinates of $\dot{\gamma}$ in the primed coordinate system are then given by the chain rule,
$\frac{dp'^i}{dt} = \frac{dp'^i}{dp^j}\frac{dp^j}{dt}.$
This motivates the study of all objects that transform this way, and they are called contravariant vectors, or contravectors, or just vectors.
Now, so far the vectors are just $n$-tuples of numbers, with no particular geometric significance. I will however write down a vector $\mathbf{v}$ with a basis by pairing up its components $v^i$ with a basis $\mathbf{e}_i$, as well as the same in the primed coordinate system:
$\mathbf{v} = v^i\mathbf{e}_i = v'^i\mathbf{e'}_i,$
and for now these basis vectors $\mathbf{e}_i$ are formal placeholders. All we can say is, whatever the choice of $\mathbf{e}_i$, they will have to transform using the inverse of the transformation matrix used by $v^i$, in order that the expression above remains true in any coordinate system.
A vector lives at a single point $p$ of $M$, in a space called ‘the tangent space to $M$ at $p$ ‘, or $T_pM$ for short – imagine a flat plane ( $T_pM$ ) balancing on top of a lumpy surface ( $M$ ), touching it at a point ( $p$ ). If $\mathbf{v}$ varies from point to point, it is strictly a vector field, or in other words a function $\mathbf{v} \: : \: M \longrightarrow T_pM$; in this case we can just say that it lives in $TM$, and we have to be careful not to forget about its position-dependence even if we suppress it occasionally for notational convenience.
## Differential operators
Let $f \: : \: M \longrightarrow \mathbb{R}$ be a scalar field (just a real-valued function) on our manifold $M$. We can write differentiation of $f$ along a vector (the directional derivative along $\mathbf{v}$ ) in three ways, all defined to be equivalent
$\left( \nabla_\mathbf{v} f \right) (p) \equiv \frac{df(p + t\mathbf{v})}{dt}\Big|_{t = 0} \equiv v^i\frac{\partial f}{\partial x^i},$
and note that we’re not really adding the vector $\mathbf{v}$ to the point $p$, because we’re evaluating the expression at $t = 0$. The $p$-dependence is casually suppressed in the last expression.
We might worry that this is coordinate system-dependent, so let's try to write the same quantity down in the primed coordinate system, using the transformation properties of $\mathbf{v}$ that we already know, and the chain rule:
$v'^i\frac{\partial f}{\partial x'^i} = \frac{\partial x'^i}{\partial x^j} v^j \frac{\partial f}{\partial x^k}\frac{\partial x^k}{\partial x'^i} = v^j \frac{\partial f}{\partial x^k} \delta^k_j = v^k \frac{\partial f}{\partial x^k},$
so our directional derivative is coordinate-invariant after all! Note that multiplying the coordinates of a matrix with those of its inverse (and summing according to the Einstein convention) gives the Kronecker delta, which is why we can swap out $j$ for $k$ in the last expression.
Coordinate-invariance shouldn’t surprise us too much, because the first two ways of writing the directional derivative made no mention of any coordinate system for $\mathbf{v}$.
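If you want to watch the cancellation happen mechanically, here is a small symbolic check (my own illustration, using SymPy; the scalar field and the choice of polar coordinates are arbitrary), with unprimed Cartesian and primed polar coordinates on the plane:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
v1, v2 = sp.symbols('v1 v2')
f = x**3 - 2*x*y + y**2                          # a sample scalar field

unprimed = [x, y]
primed = [sp.sqrt(x**2 + y**2), sp.atan2(y, x)]  # (r, phi) in terms of (x, y)

# Jacobian J[i, j] = dx'^i/dx^j; its inverse gives K[k, i] = dx^k/dx'^i.
J = sp.Matrix([[sp.diff(xp, xu) for xu in unprimed] for xp in primed])
K = J.inv()

v_primed = J * sp.Matrix([v1, v2])               # v'^i = (dx'^i/dx^j) v^j
df_primed = [sum(sp.diff(f, unprimed[k]) * K[k, i] for k in range(2))
             for i in range(2)]                  # df/dx'^i by the chain rule

lhs = sum(v_primed[i] * df_primed[i] for i in range(2))
rhs = v1 * sp.diff(f, x) + v2 * sp.diff(f, y)
print(sp.simplify(lhs - rhs))                    # 0: same answer in both systems
```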
Now, recall that first-order differential operators on real functions of $n$ variables all take the form
$D f = a^i\frac{\partial f}{\partial x^i},$
and so if we just interpret the values $a^i$ as components of a vector, we’ve found a one-to-one correspondence between vectors and first-order differential operators (strictly speaking it’s between vector components and operators, but all the transformation matrices between coordinate systems are one-to-one too so it doesn’t matter).
This correspondence with differential operators hints strongly at what quantities to use as our basis vectors – the individual derivative operators $\frac{\partial}{\partial x^i}$ certainly transform in the correct way. We now make the formal identification
$\frac{\partial}{\partial x^i} \equiv \mathbf{e}_i.$
I say formal because we will not treat these basis vectors like ‘proper’ derivative symbols, as their ‘true’ meaning will only come into play in certain carefully-defined situations.
Let’s make the following abbreviations: $\frac{\partial}{\partial x^i} \equiv \partial_i$ and $\frac{\partial}{\partial x'^i} \equiv \partial'_i$ when talking about operators; and $\frac{\mathbf{\partial}}{\mathbf{\partial} x^i} \equiv \partial_i \equiv \mathbf{e}_i$ and $\frac{\mathbf{\partial}}{\mathbf{\partial} x'^i} \equiv \partial'_i \equiv \mathbf{e'}_i$ when talking about basis vectors.
## Linear functionals
A linear functional is a function $\alpha \: : \: TM \longrightarrow \mathbb{R}$ that satisfies linearity, i.e.
$\alpha(\mathbf{u}) = x \in \mathbb{R} \\ \alpha(c\,\mathbf{u}) = c\,\alpha(\mathbf{u}) \:\:\: \forall c \in \mathbb{R} \\ (\alpha + \beta)(\mathbf{u}) = \alpha(\mathbf{u}) + \beta(\mathbf{u}) \\ \alpha(\mathbf{u} + \mathbf{v}) = \alpha(\mathbf{u}) + \alpha(\mathbf{v}).$
Linear functionals are ‘vector-like’, but live in a space called $T^*M$, rather than the $TM$ that contains vectors. They are totally determined by their action on the basis vectors of $TM$, so can be written down in components:
$\alpha_i = \alpha(\mathbf{e}_i) \\ \alpha(\mathbf{v}) = \alpha_i v^i \\ \alpha = \alpha_i\eta^i \\ \eta^i(\mathbf{v}) = v^i,$
where the $\eta^i$ are some as-yet-mysterious basis for our linear functionals. Note the position of the indices on each quantity: the Einstein summation convention is working correctly, even if we don’t necessarily know yet what sort of quantities we’re dealing with.
The expression $\alpha(\mathbf{v}) = \alpha_i v^i$ must be coordinate independent, as the left hand side makes no reference to any coordinate system; and we already know how to transform $v^i$. Therefore the components $\alpha_i$ must use the opposite transformation, $\alpha'_i = \alpha_j \frac{\partial x^j}{\partial x'^i}$. So we have
$\alpha'_i v'^i = \alpha_j \frac{\partial x^j}{\partial x'^i} \frac{\partial x'^i}{\partial x^k} v^k = \alpha_j \delta^j_k v^k = \alpha_j v^j = \alpha(\mathbf{v}).$
These linear functionals are also called covariant vectors, covectors, differential one-forms, or one-forms. Remember that both $\alpha$ and $\mathbf{v}$ can have $p$-dependence in general, making them covector fields and vector fields respectively.
## Total differential of a function
The following formula for the ‘total’ differential of a function should be familiar:
$df = \partial_i f dx^i,$
where $p$-dependence has been suppressed on both sides. However, we don’t currently have a way to make geometric sense of the individual coordinate differentials $dx^i$. This expression must be coordinate-independent (no mention of coordinates is made on the left side), so the coordinate differentials must transform as
$dx'^i = \frac{\partial x'^i}{\partial x^j} dx^j.$
This is exactly how the basis for our covectors $\eta^i$ transforms! So we can make the formal identification $\eta^i \equiv dx^i$, much like how we earlier decided that $\mathbf{e}_i \equiv \partial_i$.
The full component-wise expression for the action of our covectors on our vectors is
$\alpha(\mathbf{v}) = \alpha_i v^j dx^i(\partial_j) = \alpha_i v^j \delta^i_j = \alpha_i v^i.$
The only trick here is $dx^i(\partial_j) = \delta^i_j$, which we have defined to be true.
Our expression for $df$ now comes in handy as a way to generate new covectors. In fact, covectors generated in this way have the following useful property:
$df(\mathbf{v}) = \partial_i f v^i = \nabla_\mathbf{v} f.$
You may have spotted by now the value of the Einstein summation convention – as long as you keep your up-indices on vector components (and covector bases), and down-indices on covector components (and vector bases), any scalar you end up with will be coordinate-independent. This is a useful ‘type-check’ on any expression; if the indices don’t match, something must be wrong (or you’ve violated relativity by finding a preferred coordinate system).
I finish with three warnings:
• Covectors generated from functions (like $df$ above) are not the only kind! Any linear combination of the basis covectors $dx^i$ is a covector, and in general the arbitrary covector $a_i dx^i$ will not be the differential of any function at all.
• The components of vectors transform in the opposite way to the components of covectors. The basis vectors transform oppositely to the vector components, and to the basis covectors. This is confusing! Hence physicists like to pretend that basis vectors don’t exist, and only work with components. This is a convenient way to work for many computations, but you can end up getting confused when your basis vectors change from point-to-point (as they do on most non-trivial manifolds and coordinate systems):
$\frac{d\mathbf{v}}{dt} = \frac{d}{dt}\left(v^i \mathbf{e}_i\right) = \dot{v}^i\mathbf{e}_i + v^i\mathbf{\dot{e}}_i.$
Mathematicians never write down any coordinates, say they are working in a 'coordinate-free' way, and act all clever about it.
• There is one more way to write the directional derivative, which is
$\mathbf{v}\left(df\right) = v^i \frac{\partial f}{\partial x^j} \partial_i(dx^j) = v^i \frac{\partial f}{\partial x^j} \delta^j_i = v^i \frac{\partial f}{\partial x^i} = \nabla_\mathbf{v} f,$
treating $\mathbf{v}$ as a function $T^*M \longrightarrow \mathbb{R}$. Unfortunately you also see people write the above as
$\mathbf{v}\left(f\right) = v^i \partial_i f = \nabla_\mathbf{v} f,$
which is very confusing, as it conflicts with our careful definitions of what the basis vectors and covectors mean – such is life.
## Algebraic structure of musical intervals and pitches
Here’s the first in what will hopefully be a series of related posts about one particular (limited) aspect of the interaction between music and mathematics. In my mind, I’ll be explaining things to a hypothetical musically uneducated mathematician, who should nevertheless end up with an understanding as good as any bona-fide musician’s – in the tradition of certain physics books of which I am a fan.
I begin by revising, in unnecessary rigorous detail, what you already knew about musical pitches and intervals.
Musical intervals are the signed distance between musical notes, as written on a traditional (Western) five-lined musical stave. For completeness, I will first summarise traditional pitch and interval notation.
## Pitch syntax
Pitches are a pair, consisting of a letter and an accidental: $P = (N, a)$, $P \in \mathcal{P}$, where
$N \in \{A,B,C,\ldots,A',\ldots\}, \\ a \in \{\natural,\sharp,\flat,\sharp\sharp,\flat\flat,\ldots,\sharp^n,\flat^n,\ldots\}$
leading to constructions such as C♮, F♯, B♭ etc. These pitches correspond in a slightly irregular way to the horizontal lines (and gaps between them) on the stave, but I will not go into the details here. All pairs $(N, a)$ correspond to valid pitches. The set of pitches is actually extended upwards beyond that written above with super-prime symbols ($A', B'$ ), and downwards with sub-primes ($C_{'},D_{'}$ ).
The accidentals are pronounced as follows: ♮ is a natural, ♭ is a flat, ♯ is a sharp.
Pitches form an affine space, with intervals as the difference type (subtraction of two pitches). I will not define this subtraction until we have a clearer idea of the algebra of intervals.
## Interval syntax
Intervals are also a pair, consisting of a quality and a number: $I = (q, n)$, $I \in \mathcal{I}$, where
$q \in \{\mathsf{P,M,m,A,d,AA,dd},\ldots,\mathsf{A}^n,\mathsf{d}^n,\ldots\}, \\ n \in \{\ldots,\mathsf{-3,-2,1,2,3},\ldots\}$
leading to interval names such as P5, M3, m6 etc. Note that the set which $n$ belongs to is not ℤ – the $n$ are simply an arbitrary label (nominal numbers), and their arithmetic is tied up with the overall algebra of musical intervals in a complex way that does not correspond to the conventional notion of integers. Intervals form an associative abelian algebra (generally written as addition), together with the four additional operations of augmentation, diminution, inversion and negation.
The interval qualities listed above are pronounced, respectively, perfect, major, minor, augmented, diminished, doubly augmented etc. The interval numbers are pronounced as ordinal numbers, with the special case that $8$ is an octave, and $1$ a unison.
Note also that not all combinations $(q, n)$ are permitted; in particular, there are certain rules that allow one to construct valid intervals. Start with one of the eleven valid base intervals:
$\mathcal{I}' = \{\mathsf{P1}, \mathsf{m2}, \mathsf{M2}, \mathsf{m3}, \mathsf{M3}, \mathsf{P4}, \mathsf{P5}, \mathsf{m6}, \mathsf{M6}, \mathsf{m7}, \mathsf{M7}\}.$
The total set $\mathcal{I}$ of valid intervals is actually periodic: take $\mathcal{I}'$; in the positive $n$ direction it repeats with period 7, giving a set $\mathcal{I}^{\prime\prime}$, and then the entire inverted set (the result of the map $n \longrightarrow -n$ ) is also in $\mathcal{I}$, i.e.
$(q,n) \in \mathcal{I}' \implies (q,n + 7m) \in \mathcal{I}^{\prime\prime} \:\:\: \forall m \geq 0,$
and then also
$(q,n) \in \mathcal{I}^{\prime\prime} \implies (q, -n) \in \mathcal{I} \:\: \mathrm{and} \:\: (q, n) \in \mathcal{I}.$
## Operations on intervals
I will freely interchange two forms of notation for the same interval: $\mathsf{P5}$ and $(P,5)$ for convenience. I will use $+$ for interval vector addition and addition of intervals to pitches, and $-$ for interval vector subtraction and subtraction of pitches. I will use a dot $\cdot$ for scalar multiplication of interval vectors by integers.
Before I get to the complete decision procedure for intervallic addition below, a limited form of interval addition can be defined on $\mathcal{I}'$,
$\mathsf{P1} + I = I\\ (\mathsf{M},n) - (\mathsf{m},n) = \mathsf{A1}\\ (\mathsf{m},n+1) - (\mathsf{M},n) = \mathsf{m2}\\ (\mathsf{m},n+1) - (\mathsf{P},n) = \mathsf{m2},\\$
which can be extended in the obvious way to $\mathcal{I}^{\prime\prime}$ (of course, each case is defined only for $(q,n)$ that actually exist in $\mathcal{I}^{\prime\prime}$ ).
Then, you can augment intervals:
$aug(q,n) = \begin{cases} (\mathsf{A},n) & \mbox{if } q = \mathsf{P} \\ (\mathsf{M},n) & \mbox{if } q = \mathsf{m} \\ (\mathsf{A},n) & \mbox{if } q = \mathsf{M} \\ (\mathsf{m},n) & \mbox{if } q = \mathsf{d} \mbox{ and } (\mathsf{m},n) \in \mathcal{I} \\ (\mathsf{P},n) & \mbox{if } q = \mathsf{d} \mbox{ and } (\mathsf{P},n) \in \mathcal{I} \\ (\mathsf{A}^{i+1},n) & \mbox{if } q = \mathsf{A}^i \\ (\mathsf{d}^{i-1},n) & \mbox{if } q = \mathsf{d}^i,\\ \end{cases}$
and diminish them:
$dim(q,n) = \begin{cases} (\mathsf{d},n) & \mbox{if } q = \mathsf{P} \\ (\mathsf{m},n) & \mbox{if } q = \mathsf{M} \\ (\mathsf{d},n) & \mbox{if } q = \mathsf{m} \\ (\mathsf{M},n) & \mbox{if } q = \mathsf{A} \mbox{ and } (\mathsf{M},n) \in \mathcal{I} \\ (\mathsf{P},n) & \mbox{if } q = \mathsf{A} \mbox{ and } (\mathsf{P},n) \in \mathcal{I} \\ (\mathsf{A}^{i-1},n) & \mbox{if } q = \mathsf{A}^i \\ (\mathsf{d}^{i+1},n) & \mbox{if } q = \mathsf{d}^i,\\ \end{cases}$
Note that $aug(dim(I)) = dim(aug(I)) = I$, for all $I$.
We can now define addition on intervals. Let $I_1 = (q_1,n_1)$ and $I_2 = (q_2,n_2)$. Then,
$I_1 + I_2 = I_3 = (q_3, n_3),$
finding $q_3$ and $n_3$ according to the following procedure: augment or diminish $I_1$ until you have a new interval $I'_1 \in \mathcal{I}^{\prime\prime}$, along the way calculating an augmentation index $j_1$, where you increment $j_1$ once for each diminution, and decrement once for each augmentation. Repeat for $I'_2$. Then perform the following interval addition, which is possible because addition is already defined on $\mathcal{I}^{\prime\prime}$:
$I'_1 + I'_2 = I'_3.$
Then, perform the appropriate number ($j_1 + j_2$ ) of augmentations on $I'_3$, to give $I_3$:
$aug^{j_1+j_2}(I'_3) = (q_3,n_3).$
Incidentally, it is always true that $n_3 = n_1 + n_2 - 1$.
## Free abelian groups
Quick revision of free abelian groups. A free abelian group $G$ of rank $n$ satisfies the following: there exists at least one set $B \subset G$, with $|B| = n$, such that every element of $G$ can be written as a linear combination of the elements of $B$, with integer coefficients, i.e. $g = b_1^{i_1}b_2^{i_2}\ldots b_n^{i_n}$.
Free abelian groups can be thought of as vector spaces over the integers (ℤ playing the role of the field of scalars – strictly speaking this makes them free ℤ-modules – with $G$ as the group of vectors).
A rank-$n$ free abelian group is isomorphic to the group of $n$-tuples of integers, with the group operation pairwise addition:
$(a_1,a_2,\ldots) + (b_1,b_2,\ldots) = (a_1 + b_1, a_2 + b_2, \ldots),$
or in other words, any rank-$n$ free abelian group is isomorphic to the direct sum of $n$ copies of ℤ:
$G \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \ldots \oplus \mathbb{Z}.$
There may be many different choices of basis set $B$, much like a vector space with no preferred basis.
## Intervals as a free abelian group
As you can probably tell, our previous method of interval addition is a horrible mess. Luckily we can prove that intervals form a rank-2 free abelian group (said proof consists of a pile of tedious case analysis). Hence we can find a two-element basis; after which interval addition proceeds easily, being reduced to element-wise addition of pairs of integers.
We need to find any pair of linearly independent intervals to use as a basis – to decompose any arbitrary interval into a linear combination of the two basis intervals. Once we have one basis, we can use it to easily generate all the other bases. However, it's not immediately obvious how to find our first pair of linearly independent intervals. Luckily, I have such a pair up my sleeve already: (A1, d2). They must be linearly independent, because
$n\cdot \mathsf{A1} = (\mathsf{A}^n, n - n + 1) = (\mathsf{A}^n,1) \neq \mathsf{d2} \:\:\: \forall n.$
We will now take the opportunity to simplify our augmentation and diminution operations,
$aug(I) = I + \mathsf{A1} \\ dim(I) = I - \mathsf{A1}.$
Decomposing arbitrary intervals into a basis requires some more tedious case analysis, so here are just a few examples:
$\mathsf{m2} = \mathsf{A1} + \mathsf{d2}\\ \mathsf{P5} = 7\cdot \mathsf{A1} + 4\cdot \mathsf{d2}\\ \mathsf{M6} = 9\cdot \mathsf{A1} + 5\cdot \mathsf{d2}\\ \mathsf{P8} = 12\cdot \mathsf{A1} + 7\cdot \mathsf{d2}.$
If we define the map $\phi : \mathcal{I} \longrightarrow \mathbb{Z}\times\mathbb{Z}$ to be the map that gives the decomposition into the (A1, d2) basis, then the above could be written as
$\phi(\mathsf{m2}) = (1,1)\\ \phi(\mathsf{P5}) = (7,4)\\ \phi(\mathsf{M6}) = (9,5)\\ \phi(\mathsf{P8}) = (12,7).$
The basis (A1, d2) is convenient insofar as A1 matches up with what we think of as semitones, and d2 simply counts an interval’s number.
Now that we can add and subtract intervals easily, I can concisely define the two remaining special operations on intervals, inversion and negation:
$inv(I) = \mathsf{P8} - I\\ neg(I) = \mathsf{P1} - I.$
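All of this fits comfortably in a few lines of code; here is a sketch (my own, far smaller than the software projects mentioned at the end) that stores an interval as its pair of $\phi$-coordinates in the (A1, d2) basis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    a1: int  # coefficient of A1
    d2: int  # coefficient of d2 (the interval number is d2 + 1)

    def __add__(self, other):
        return Interval(self.a1 + other.a1, self.d2 + other.d2)

    def __sub__(self, other):
        return Interval(self.a1 - other.a1, self.d2 - other.d2)

    def __rmul__(self, k):  # scalar multiplication k . i
        return Interval(k * self.a1, k * self.d2)

A1, d2, P1 = Interval(1, 0), Interval(0, 1), Interval(0, 0)
m2, M3, m6 = A1 + d2, 4 * A1 + 2 * d2, 8 * A1 + 5 * d2
P5, P8 = 7 * A1 + 4 * d2, 12 * A1 + 7 * d2

aug = lambda i: i + A1
dim = lambda i: i - A1
inv = lambda i: P8 - i
neg = lambda i: P1 - i

assert P5 + (P8 - P5) == P8   # P5 + P4 = P8
assert inv(M3) == m6          # the inversion of a major third is a minor sixth
```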
## Pitch space
Of course, merely being able to add and subtract intervals is pretty useless on its own. What we really want to do is use intervals to hop around the space of pitches. The rules for pitch arithmetic are only barely less irregular than for intervals.
Note: adding an interval to a pitch is called transposition, and it is technically a separate operation from interval addition (it has a different type signature), but we shall use the $+$ symbol for it anyway.
In our notation from the first section, adding an octave (P8) adds a prime symbol to a letter name
$(N,a) + \mathsf{P8} = (N',a)\\ (N_{'},a) + \mathsf{P8} = (N,a)$
with the obvious extension to multiple octave subtraction, and multiple primes and sub-primes.
Adding and subtracting A1 corresponds to adding and subtracting sharps and flats:
$(N,\natural) + \mathsf{A1} = (N,\sharp)\\ (N,\flat) + \mathsf{A1} = (N,\natural)$
with the obvious extension to double sharps (♯♯) and double flats (♭♭) and arbitrary numbers of accidentals.
All that remains is to give the intervals between the natural (♮) pitches:
$\mathsf{A'\natural} - \mathsf{G\natural} = \mathsf{M2}\\ \mathsf{G\natural} - \mathsf{F\natural} = \mathsf{M2}\\ \mathsf{F\natural} - \mathsf{E\natural} = \mathsf{m2}\\ \mathsf{E\natural} - \mathsf{D\natural} = \mathsf{M2}\\ \mathsf{D\natural} - \mathsf{C\natural} = \mathsf{M2}\\ \mathsf{C\natural} - \mathsf{B\natural} = \mathsf{m2}\\ \mathsf{B\natural} - \mathsf{A\natural} = \mathsf{M2}.$
The equivalent of choosing a basis for our intervals is finding a coordinate system for our pitches. To do this we must convert the pitch affine space $\mathcal{P}$ into a vector space (Wikipedia suggests the name Pointed space) by choosing an origin. At this point, any choice of origin is arbitrary and meaningless, but it becomes important when we get to tuning systems, which will be the subject of the next post in this series.
For example, let us define a map $\psi : \mathcal{P} \longrightarrow \mathbb{Z}\times\mathbb{Z}$ with origin
$\psi(\mathsf{A\natural}) = O.$
Then, to find the coordinates of an arbitrary pitch $P$, we compute
$\psi(P) = O + \phi(P - O).$
Of course, it is easiest to simply define $O = (0,0)$.
We now no longer need to worry about how to represent pitches, and will focus on intervals for the purposes of basis changes.
## Change of interval basis
Given that we now have one valid basis, the problem of further changes of basis reduces to linear algebra in two dimensions.
Let $\phi(I) = (m,n)$ be our interval in the (A1,d2) representation, and let $\phi(I_1) = (a,b)$ and $\phi(I_2) = (c,d)$ be a different pair of linearly independent basis intervals. Then
$x\cdot(a,b) + y\cdot(c,d) = (m,n),$
which is simply a system of two linear equations, to be solved by determinants in the usual way:
$x = \frac{dm - cn}{ad - bc}, \:\:\:\: y = \frac{an - bm}{ad - bc}.$
Clearly the solution will not always be in the integers, so we may sometimes choose to extend our scalar field to the rationals (particularly when we come to tuning systems). Here are the examples from the previous-but-one section, but demonstrating the (P5,P8) basis:
$\mathsf{m2} = -5\cdot\mathsf{P5} + 3\cdot\mathsf{P8}\\ \mathsf{P5} = 1\cdot\mathsf{P5} + 0\cdot \mathsf{P8}\\ \mathsf{M6} = 3\cdot \mathsf{P5} - 1\cdot\mathsf{P8}\\ \mathsf{P8} = 0\cdot \mathsf{P5} + 1\cdot\mathsf{P8},$
and again with the (M2,m2) basis:
$\mathsf{m2} = 0\cdot\mathsf{M2} + 1\cdot\mathsf{m2}\\ \mathsf{P5} = 3\cdot\mathsf{M2} + 1\cdot\mathsf{m2}\\ \mathsf{M6} = 4\cdot \mathsf{M2} + 1\cdot\mathsf{m2}\\ \mathsf{P8} = 5\cdot \mathsf{M2} + 2\cdot \mathsf{m2}.$
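A sketch of the change-of-basis computation in code (mine), reproducing two of the rows above with exact rational scalars:

```python
from fractions import Fraction

def change_basis(mn, ab, cd):
    """Coordinates (x, y) of mn in the basis {ab, cd}; all pairs are (A1, d2) coordinates."""
    (m, n), (a, b), (c, d) = mn, ab, cd
    det = a * d - b * c
    return Fraction(d * m - c * n, det), Fraction(a * n - b * m, det)

M6, m2 = (9, 5), (1, 1)
P5, P8 = (7, 4), (12, 7)

print(change_basis(M6, P5, P8))   # (3, -1):   M6 =  3*P5 - 1*P8
print(change_basis(m2, P5, P8))   # (-5, 3):   m2 = -5*P5 + 3*P8
```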
Finally, here is a diagram showing pitch space, with arrows representing 3 choices of interval basis.
The ideas in this post are implemented concretely in two software projects: the Haskell Music Suite (also available on Hackage), and in AbstractMusic (the latter being my own personal research project).
Posted in maths, music | 1 Comment
https://www.physicsforums.com/threads/define-angle-of-projection-with-the-given-parameters.572541/ | # Homework Help: Define Angle of Projection with The Given Parameters
1. Jan 30, 2012
### velouria131
1. The problem statement, all variables and given/known data
A projectile is launched with a speed v at an angle theta above the horizontal. Ignore air resistance. Derive an expression for the angle theta in terms of the parameters of the problem such that the horizontal distance from the launch of the object is N times greater than the maximum height achieved during its flight. Assume that the object lands at the same vertical position from which it was launched and that the acceleration due to gravity is taken as g.
2. Relevant equations
v = vo + at
vav = ½(vo + v)
x = xo + ½(vo + v)t
x = xo + vot + ½at²
v² = vo² + 2a(x - xo)
x = xo + vt - ½at²
3. The attempt at a solution
First of all, thanks for any help and hello. I have sat here for quite a while deliberating, and I seem to just convolute any potential solutions. The first thing I did was draw vector R at angle theta from the horizontal. I then made two tables; table X and table Y. I defined the initial velocity in the X direction as vcosθ and the initial velocity in the Y direction as vsinθ. I defined the acc. in the Y direction as -9.8 m/s^2. I assumed from this point that I would have to construct an equation - inverse tangent of vsinθ / vcosθ, in terms of vsinθ and vcosθ - that would end up being numerical. I cannot for the life of me figure this part out. The key seems to be that the horizontal distance traveled is N times the maximum vertical distance, which would be N times the distance traveled by the object whose final velocity is zero and initial velocity vsinθ. Where do I go from here? Or am I even on the correct track?
Thanks again.
Last edited: Jan 30, 2012
2. Jan 30, 2012
### Simon Bridge
When you get lost - draw the v-t diagrams.
For this problem, there are two. One for vertical and one for horizontal motion.
Horizontal is just flat for some time T (leave it as T for now).
The horizontal distance traveled is the area under this graph - horizontal speed times T.
note: if θ is the angle between the initial velocity vector and the ground, then vx=v.cos(θ) ... that's correct :)
The vertical one is a line starting at vy and dropping to -vy in time T. The total area will be zero, which is because you start and finish at the same height. The area under the first half of the graph is the maximum vertical height.
So now your relationship between max height and horizontal distance is actually a relationship between two areas... the area of a triangle and the area of a rectangle - you can do those!
You can get the expression of T from the slope of the vy-t graph (hint: it's -g)
3. Jan 31, 2012
### velouria131
I appreciate the help, that clears up quite a bit. I have just woken up, and upon reading this recognize how expressions may be constructed representative of the angle theta using these two vt graphs. However, will this enable me to solve for an actual angle? Or by expression in terms of the parameter, is the problem simply asking for an equation?
4. Jan 31, 2012
### Simon Bridge
When you write the expressions down in a list, you'll see what needs to be done.
You'll end up with 4 equations and 4 unknowns - hint: sin/cos=tan; leave everything as letters.
5. Jan 31, 2012
### velouria131
Alright, let's see. You say there are four expressions? T = sqrt of (-g)^2 -v^2(sinx)^2. T * vcosx is the horizontal distance and 1/2*T*vsinx is the maximum vertical distance. Am I missing something? Now, I guess the problem I am having (at this point) is conceptualizing as to what I am supposed to do with these expressions (if they are correct that is)! Do I derive expressions for vsinx and vcosx and then take the arctan of this expression? I guess I am lost as to what my ends are here. Thanks.
6. Jan 31, 2012
### Simon Bridge
I don't think that first one is right somehow. You have T=some stuff, then say that is a horizontal distance? The "some stuff" does not have the right units for distance or time.
Put the max vertical distance as "h", the max horizontal distance as "R", and the initial speed as "v"; the initial angle is θ (click "go advanced", or "quote", and look on the right); it will be easier.
From vy-t graph you have an expression for the area of one triangle (there are two) and an expression for the slope of the line (2 equations). From the vx-t graph you have an expression for the area of the whole graph (1 equation). From the description of the problem you have an expression relating max height to range. This one, with the three from the graphs makes four.
The three expressions from the graphs describe every possible motion that has that sort-of shape.
The last expression selects one of the possible motions suggested by the graphs.
So all four of them, together, describe the motion you want.
I'll start you off:
from vx-t graph, R is the area, which is "base times height", so: R=vTcosθ ...eq.(1)
from vy-t graph, h is the area under one triangle, which is "half-base-times-height".
The base is (T/2) and the height is v.sinθ, and half-base-times-height must equal h, so: 4h=vT.sinθ ...eq.(2)
The slope of the vy-t graph is "rise-over-run" ... you do it. That's eq.(3)
Eq.(4) comes from the relationship between h and R. You do it.
Once you have them, you have completely described the words in the problem as maths.
At that point, you must extract only the parts of the math that is of interest. You need to use algebra to put what you want to know only in terms of things you already know.
So you are not told the max height or the range, but you are told how they relate to each other - so N can be treated as a known quantity. Acceleration of gravity is known. Anything else?
When you have eliminated all your unknowns except for θ, you solve for θ and you have your answer.
Last edited: Jan 31, 2012
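For anyone checking their algebra afterwards: eqs. (1)-(4) can be handed straight to a computer algebra system. The SymPy sketch below (my own addition, not part of the thread, and note that it gives away the final step Simon leaves to the OP) reduces them to a closed form:

```python
import sympy as sp

v, g, N, theta = sp.symbols('v g N theta', positive=True)

T = 2 * v * sp.sin(theta) / g      # eq.(3): the vy-t line has slope -g
R = v * T * sp.cos(theta)          # eq.(1): area of the vx-t rectangle
h = v * T * sp.sin(theta) / 4      # eq.(2): area of one vy-t triangle

ratio = sp.simplify(R / h)         # eq.(4) says this ratio equals N
print(sp.solve(sp.Eq(ratio, N), theta))   # [atan(4/N)]
```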
https://math.answers.com/other-math/If_the_radius_of_a_basketball_is_5_inches_what_is_the_volume
# If the radius of a basketball is 5 inches what is the volume?
Wiki User
2010-02-06 13:47:34
If the basketball is regarded as a perfect sphere then the formula for the volume (V) is V = (4/3)πr³, where r is the radius.
If the radius is 5 inches then V = (4/3)π × 5³ = (500/3)π ≈ 523.60 cubic inches (to 2 d.p.)
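A one-line check in Python:

```python
import math
print(4 / 3 * math.pi * 5 ** 3)   # 523.5987755982989
```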
https://www.math.gatech.edu/seminars-and-colloquia-by-series?page=368 | ## Seminars and Colloquia by Series
Thursday, September 11, 2008 - 15:00 , Location: Skiles 269 , Robert Foley , ISyE, Georgia Tech , Organizer: Heinrich Matzinger
Under certain conditions, we obtain exact asymptotic expressions for the stationary distribution \pi of a Markov chain. In this talk, we will consider Markov chains on {0,1,...}^2. We are particularly interested in deriving asymptotic expressions when the fluid limit of the most probable paths from the origin to the rare event is nonlinear. For example, we will derive asymptotic expressions for a large deviation along the x-axis (e.g., \pi(\ell, y) for fixed y) when the most probable paths to (\ell,y) initially climb the y-axis before turning southwest and drifting towards (\ell,y).
Wednesday, September 10, 2008 - 13:00 , Location: ISyE Executive Classroom , Joel Sokol , ISyE, Georgia Tech , Organizer: Annette Rohrs
In order to estimate the spread of potential pandemic diseases and the efficiency of various containment policies, it is helpful to have an accurate model of the structure of human contact networks. The literature contains several explicit and implicit models, but none behave like actual network data with respect to the spread of disease. We discuss the difficulty of modeling real human networks, motivate the study of some open practical questions about network structure, and suggest some possible avenues of attack based on some related research.
Wednesday, September 10, 2008 - 12:00 , Location: Skiles 255 , Zhiwu Lin , School of Mathematics, Georgia Tech , Organizer:
A plasma is a gas of ionized particles. For a dilute plasma of very high temperature, the collisions can be ignored. Such situations occur, for example, in nuclear fusion devices and space plasmas. The Vlasov-Poisson and Vlasov-Maxwell equations are kinetic models for such collisionless plasmas. The Vlasov-Poisson equation is also used for galaxy evolution. I will describe some mathematical results on these models, including well-posedness and stability issues.
Wednesday, September 10, 2008 - 11:00 , Location: Skiles 255 , Michael Goodisman , School of Biology, Georgia Tech , Organizer: Christine Heitsch
The evolution of sociality represented one of the major transition points in biological history. Highly social animals such as social insects dominate ecological communities because of their complex cooperative and helping behaviors. We are interested in understanding how evolutionary processes affect social systems and how sociality, in turn, affects the course of evolution. Our research focuses on understanding the social structure and mating biology of social insects. In addition, we are interested in the process of development in the context of sociality. We have found that some social insect females mate with multiple males, and that this behavior affects the structure of colonies. We have also found that colonies adjust their reproductive output in a coordinated and adaptive manner. Finally, we are investigating the molecular basis underlying the striking differences between queens and workers in highly social insects. Overall, our research provides insight into the function and evolutionary success of highly social organisms.
Series: PDE Seminar
Tuesday, September 9, 2008 - 15:15 , Location: Skiles 255 , Marta Lewicka , School of Mathematics, University of Minnesota , Organizer:
A longstanding problem in the mathematical theory of elasticity is to predict theories of lower-dimensional objects (such as rods, plates or shells), subject to mechanical deformations, starting from the 3d nonlinear theory. For plates, a recent effort (in particular work by Friesecke, James and Muller) has led to rigorous justification of a hierarchy of such theories (membrane, Kirchhoff, von Karman). For shells, despite extensive use of their ad-hoc generalizations present in the engineering applications, much less is known from the mathematical point of view. In this talk, I will discuss the limiting behaviour (using the notion of Gamma-limit) of the 3d nonlinear elasticity for thin shells around an arbitrary smooth 2d mid-surface S. We prove that the minimizers of the 3d elastic energy converge, after suitable rescaling, to minimizers of a hierarchy of shell models. The limiting functionals (which for plates yield respectively the von Karman, linear, or linearized Kirchhoff theories) are intrinsically linked with the geometry of S. They are defined on the space of infinitesimal isometries of S (which replaces the 'out-of-plane-displacements' of plates), and the space of finite strains (which replaces strains of the 'in-plane-displacements'), thus clarifying the effects of rigidity of S on the derived theories. The different limiting theories correspond to different magnitudes of the applied forces, in terms of the shell thickness. This is joint work with M. G. Mora and R. Pakzad.
Monday, September 8, 2008 - 16:30 , Location: Skiles 269 , Vadim Yu Kaloshin , Mathematics Department, Penn State , Organizer: Yingfei Yi
Consider the classical Newtonian three-body problem. Call motions oscillatory if, as time tends to infinity, the limsup of the maximal distance among the bodies is infinite, while the liminf is finite. In the '50s Sitnikov gave the first rigorous example of oscillatory motions for the so-called restricted three-body problem. Later in the '60s Alexeev extended this example to the full three-body problem. A long-standing conjecture, probably going back to Kolmogorov, is that oscillatory motions have measure zero. We show that for the Sitnikov example and for the so-called restricted planar circular three-body problem these motions have maximal Hausdorff dimension. This is joint work with Anton Gorodetski.
Monday, September 8, 2008 - 14:00 , Location: Skiles 269 , Roland van der Veen , University of Amsterdam , Organizer: Stavros Garoufalidis
The hyperbolic volume and the colored Jones polynomial are two of the most powerful invariants in knot theory. In this talk we aim to extend these invariants to arbitrary graphs embedded in 3-space. This provides new tools for studying questions about graph embedding and it also sheds some new light on the volume conjecture. According to this conjecture, the Jones polynomial and the volume of a knot are intimately related. In some special cases we will prove that this still holds true in the case of graphs.
Friday, September 5, 2008 - 15:00 , Location: Skiles 255 , Ernie Croot , School of Mathematics, Georgia Tech , Organizer: Prasad Tetali
Let A be a set of n real numbers. A central problem in additive combinatorics, due to Erdos and Szemeredi, is that of showing that either the sumset A+A or the product set A.A must have close to n^2 elements. G. Elekes, in a short and brilliant paper, showed that one can give quite good bounds for this problem by invoking the Szemeredi-Trotter incidence theorem (applied to the grid (A+A) x (A.A)). Perhaps motivated by this result, J. Solymosi posed the following problem (actually, Solymosi's original problem is slightly different from the formulation I am about to give). Show that for every real c > 0, there exists 0 < d < 1, such that the following holds for all grids A x B with |A| = |B| = n sufficiently large: If one has a family of n^c lines in general position (no three meet at a point, no two parallel), at least one of them must fail to be n^(1-d)-rich -- i.e. at least one of them meets the grid in fewer than n^(1-d) points. In this talk I will discuss a closely related result that I and Evan Borenstein have proved, and will perhaps discuss how we think we can use it to polish off this conjecture of Solymosi.
Thursday, September 4, 2008 - 15:00 , Location: Skiles 269 , Heinrich Matzinger , School of Mathematics, Georgia Tech , Organizer: Heinrich Matzinger
A common subsequence of two sequences X and Y is a sequence which is a subsequence of X as well as a subsequence of Y. A Longest Common Subsequence (LCS) of X and Y is a common subsequence with maximal length. Longest common subsequences can be represented as alignments with gaps, where the aligned letter pairs correspond to the letters in the LCS. We consider two independent i.i.d. random texts X and Y of length n. We show that the behavior of the alignment corresponding to the LCS is very different depending on the number of colors. With two colors, long blocks tend to be aligned with no gaps, whilst for four or more colors the opposite is true. Let Ln denote the length of the LCS of X and Y. In general the order of the variance of Ln is not known. We explain how a biased effect of a finite pattern can influence the order of the fluctuation of Ln.
Wednesday, September 3, 2008 - 12:00 , Location: Skiles 255 , Robin Thomas , School of Mathematics, Georgia Tech , Organizer:
I will explain and prove a beautiful and useful theorem of Alon and Tarsi that uses multivariate polynomials to guarantee, under suitable hypotheses, the existence of a coloring of a graph. The proof method, sometimes called a Combinatorial Nullstellensatz, has other applications in graph theory, combinatorics and number theory.
http://mathhelpforum.com/advanced-statistics/132216-correlation-samples-print.html | # Correlation of samples
• March 5th 2010, 01:49 PM
Anonymous1
Correlation of samples
Let $X_i, i = 1,\ldots,n,$ be i.i.d. samples from $N(\mu, \sigma^2).$ Let $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i.$
Prove that $\bar{X}$ and $X_i - \bar{X}$ are uncorrelated for any $i.$
• March 5th 2010, 10:13 PM
matheagle
for simplicity let i=1....
Let's obtain the covariance between $\bar X$ and $X_1-\bar X$
$Cov(\bar X,X_1-\bar X) ={1\over n} Cov\left(X_1+\cdots +X_n,\left(1-{1\over n}\right)X_1-{1\over n}[X_2+\cdots +X_n]\right)$
Now find the covariance between each pair, taking one term from each set.
ALL we need is for the $Cov(X_i,X_j)=0$ for each $i\ne j$ , we don't need NORMALITY...
$\left({1\over n}\right)\left[\left(1-{1\over n}\right)\sigma_1^2-{1\over n}\left[\sigma_2^2+\cdots +\sigma_n^2\right]\right]$
Next use the fact that all the variances are equal: the expression becomes zero.
Changing 1 to i is easy: just sum over all the terms that are not i in the second sum.
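A quick numerical sanity check of this (mine, not part of the thread), deliberately using a non-normal distribution since normality isn't needed:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=(200_000, 10))  # i.i.d. samples, not normal

xbar = X.mean(axis=1)
d1 = X[:, 0] - xbar                                 # X_1 - Xbar per replication

print(np.cov(xbar, d1)[0, 1])   # ~0, up to Monte Carlo noise
```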
https://en.formulasearchengine.com/wiki/Disjunct_matrix | # Disjunct matrix
Disjunct and separable matrices play a pivotal role in the mathematical area of non-adaptive group testing. This area investigates efficient designs and procedures to identify 'needles in haystacks' by conducting tests on groups of items instead of on each item alone. The main idea is that if there are very few special items (needles) and the groups are constructed according to certain combinatorial guidelines, then one can test the groups and find all the needles. This can reduce the cost and labor associated with large-scale experiments.
The grouping pattern can be represented by a ${\displaystyle t\times n}$ binary matrix, where each column represents an item and each row represents a pool. The symbol '1' denotes participation in the pool and '0' absence from a pool. The d-disjunctness and the d-separability of the matrix describe sufficient conditions for identifying d special items.
In a matrix that is d-separable, the Boolean sum of every d columns is unique. In a matrix that is d-disjunct, the Boolean sum of every d columns does not contain any other column in the matrix. Theoretically, for the same number of columns (items), one can construct d-separable matrices with fewer rows (tests) than d-disjunct ones. However, designs based on d-separable matrices are less practical, since the decoding time needed to identify the special items is exponential. In contrast, the decoding time for d-disjunct matrices is polynomial.
## d-separable
### Decoding algorithm
First we will describe another way to look at the problem of group testing and how to decode it with a different notation. Group testing can be reinterpreted as follows: identify each column ${\displaystyle M_{j}}$ of the matrix with the set ${\displaystyle S_{M_{j}}=\{i:M_{i,j}=1\}}$ of pools in which item ${\displaystyle j}$ participates, and identify the outcome vector ${\displaystyle \mathbf {r} }$ with the set ${\displaystyle S_{\mathbf {r} }=\{i:\mathbf {r} _{i}=1\}}$ of positive pools; then ${\displaystyle S_{\mathbf {r} }}$ is the union of the sets ${\displaystyle S_{M_{j}}}$ over the defective items ${\displaystyle j}$ (those with ${\displaystyle \mathbf {x} _{j}=1}$).

This formalizes the relation between ${\displaystyle \mathbf {x} }$ and the columns of ${\displaystyle M}$ and ${\displaystyle \mathbf {r} }$ in a way more suitable to the thinking of ${\displaystyle d}$-separable and ${\displaystyle d}$-disjunct matrices. The algorithm to decode a ${\displaystyle d}$-separable matrix is as follows:
1. For each ${\displaystyle T\subseteq [n]}$ such that ${\displaystyle |T|\leq d}$ check if ${\displaystyle S_{\mathbf {r} }=\bigcup _{j\in T}S_{M_{j}}}$
This algorithm runs in time ${\displaystyle n^{{\mathcal {O}}(d)}}$.
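A minimal Python sketch of this brute-force decoder (the function and variable names are ours; ${\displaystyle M}$ is given as a list of ${\displaystyle t}$ rows):

```python
from itertools import combinations

def decode_separable(M, r, d):
    # M: t x n 0/1 matrix as a list of rows; r: length-t outcome vector.
    # Tries every candidate defective set T with |T| <= d and checks
    # whether the union of its columns' supports equals the positive pools.
    t, n = len(M), len(M[0])
    S_r = {i for i in range(t) if r[i] == 1}
    for k in range(d + 1):
        for T in combinations(range(n), k):
            union = {i for i in range(t) if any(M[i][j] for j in T)}
            if union == S_r:
                return set(T)
    return None  # no set of size <= d explains the outcomes
```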
## d-disjunct
In literature disjunct matrices are also called super-imposed codes and d-cover-free families.
### Decoding algorithm
The decoding algorithm for ${\displaystyle d}$-separable matrices still ran in time polynomial in ${\displaystyle n}$ with ${\displaystyle d}$ in the exponent. The following gives a faster algorithm for ${\displaystyle d}$-disjunct matrices: given our bounds for ${\displaystyle t}$, the dependence on ${\displaystyle d}$ enters only as a multiplicative factor rather than as an exponent. The algorithm is contained in the proof of the following lemma:
Lemma 1: There exists an ${\displaystyle {\mathcal {O}}(nt)}$ time decoding algorithm for any ${\displaystyle d}$-disjunct ${\displaystyle t\times n}$ matrix.
Proof of Lemma 1: Given as input ${\displaystyle \mathbf {r} \in \{0,1\}^{t},M}$, use the following algorithm:

1. Initialize ${\displaystyle \mathbf {x} _{j}=1}$ for every ${\displaystyle j\in [n]}$.
2. For every row ${\displaystyle i}$ with ${\displaystyle \mathbf {r} _{i}=0}$, set ${\displaystyle \mathbf {x} _{j}=0}$ for every ${\displaystyle j}$ with ${\displaystyle M_{i,j}=1}$.
3. Output ${\displaystyle \mathbf {x} }$.
By Observation 1, any item ${\displaystyle j}$ appearing in a pool with ${\displaystyle \mathbf {r} _{i}=0}$ is not defective, so step 2 only zeroes ${\displaystyle \mathbf {x} _{j}}$'s that should be 0. By Observation 2, which uses ${\displaystyle d}$-disjunctness, for every ${\displaystyle j}$ with ${\displaystyle \mathbf {x} _{j}}$ supposed to be 0 there is at least one row ${\displaystyle i}$ with ${\displaystyle M_{i,j}=1}$ and ${\displaystyle \mathbf {r} _{i}=0}$, so step 2 zeroes it; whereas if ${\displaystyle \mathbf {x} _{j}}$ is supposed to be 1, then every row ${\displaystyle i}$ with ${\displaystyle M_{i,j}=1}$ has ${\displaystyle \mathbf {r} _{i}=1}$, so step 2 never assigns ${\displaystyle \mathbf {x} _{j}}$ the value 0, leaving it as a 1 and solving for ${\displaystyle \mathbf {x} }$. This takes time ${\displaystyle {\mathcal {O}}(nt)}$ overall. ${\displaystyle \Box }$
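A Python sketch of the decoder from this proof (names are ours):

```python
def decode_disjunct(M, r):
    # Start by assuming every item is defective, then clear any item
    # that appears in a negative pool; O(nt) time as in Lemma 1.
    t, n = len(M), len(M[0])
    x = [1] * n
    for i in range(t):
        if r[i] == 0:            # negative pool: contains no defectives
            for j in range(n):
                if M[i][j] == 1:
                    x[j] = 0
    return x
```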
## Upper bounds for non-adaptive group testing
The results for these upper bounds rely mostly on the properties of ${\displaystyle d}$-disjunct matrices. Not only are the upper bounds nice, but from Lemma 1 we know that there is also a nice decoding algorithm for these bounds. First the following lemma will be proved since it is relied upon for both constructions:
Lemma 2: Let ${\displaystyle M}$ be a ${\displaystyle t\times n}$ binary matrix in which every column contains at least ${\displaystyle w_{\min }}$ 1's and every pair of distinct columns shares at most ${\displaystyle a_{\max }}$ 1's. Then ${\displaystyle M}$ is ${\displaystyle d}$-disjunct for every ${\displaystyle d\leq {\frac {w_{\min }-1}{a_{\max }}}}$.

Note: these conditions are stronger than needed for a single subset of size ${\displaystyle d}$: they apply to every column and every pair of columns in the matrix. Therefore, no matter which column ${\displaystyle i}$ is chosen in the matrix, that column contains at least ${\displaystyle w_{\min }}$ 1's, and the total number of 1's shared by any two columns is at most ${\displaystyle a_{\max }}$.
Proof of Lemma 2: Fix an arbitrary ${\displaystyle S\subseteq [n],|S|\leq d,j\notin S}$ and a matrix ${\displaystyle M}$. There is a match between ${\displaystyle i\in S}$ and ${\displaystyle j\notin S}$ if column ${\displaystyle i}$ has a 1 in the same row position as column ${\displaystyle j}$. Then the total number of matches is ${\displaystyle \leq a_{\max }\cdot d\leq a_{\max }\cdot {\frac {w_{\min }-1}{a_{\max }}}=w_{\min }-1<w_{\min }}$, i.e., column ${\displaystyle j}$ has fewer matches than the number of ones in it. Therefore there must be a row with a 1 in column ${\displaystyle j}$ and 0s in all columns of ${\displaystyle S}$. ${\displaystyle \Box }$
We will now generate constructions for the bounds.
### Randomized construction
This first construction uses a probabilistic argument, in particular the Chernoff bound, to show the desired property. The randomized construction gives ${\displaystyle t(d,n)\leq {\mathcal {O}}(d^{2}\log n)}$. The following theorem states the result.
Theorem 1: There exists a random ${\displaystyle d}$-disjunct matrix with ${\displaystyle {\mathcal {O}}(d^{2}\log n)}$ rows.
Note that in this proof ${\displaystyle t=d^{2}\log n}$ thus giving the upper bound of ${\displaystyle t(d,n)\leq {\mathcal {O}}(d^{2}\log n)}$. ${\displaystyle \Box }$
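A small Python illustration of the randomized construction, assuming the standard choice of entry probability roughly ${\displaystyle 1/(d+1)}$; the constant in ${\displaystyle t}$ below is ad hoc, and any particular random draw may fail the (brute-force) disjunctness check.

```python
import math
import random
from itertools import combinations

def random_matrix(t, n, d, seed=0):
    # Each entry is 1 independently with probability ~1/(d+1).
    rnd = random.Random(seed)
    p = 1.0 / (d + 1)
    return [[1 if rnd.random() < p else 0 for _ in range(n)] for _ in range(t)]

def is_disjunct(M, d):
    # Brute force: no column may be covered by the union of d others.
    t, n = len(M), len(M[0])
    for j in range(n):
        for S in combinations([c for c in range(n) if c != j], d):
            if all(any(M[i][c] for c in S) for i in range(t) if M[i][j]):
                return False
    return True

d, n = 2, 8
t = math.ceil(3 * d * d * math.log(n))  # t on the order of d^2 log n
M = random_matrix(t, n, d)
print(t, is_disjunct(M, d))
```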
### Strongly explicit construction
It is possible to prove a bound of ${\displaystyle t(d,n)\leq {\mathcal {O}}(d^{2}\log ^{2}{n})}$ using a strongly explicit code. Although this bound is worse by a ${\displaystyle \log n}$ factor, it is preferable because this produces a strongly explicit construction instead of a randomized one.
Theorem 2: There exists a strongly explicit ${\displaystyle d}$-disjunct matrix with ${\displaystyle {\mathcal {O}}(d^{2}\log ^{2}{n})}$ rows.
This proof will use the properties of concatenated codes along with the properties of disjunct matrices to construct a code that will satisfy the bound we are after.
If every column of ${\displaystyle M_{C^{*}}}$ contains at least ${\displaystyle w_{\min }}$ 1's and every two columns share at most ${\displaystyle a_{\max }}$ 1's, then by Lemma 2 ${\displaystyle M_{C^{*}}}$ is ${\displaystyle \lfloor {\frac {w_{\min }-1}{a_{\max }}}\rfloor }$-disjunct. To complete the proof another concept must be introduced: code concatenation is used to obtain the result we want.
Kautz–Singleton '64: take the outer code ${\displaystyle C_{out}}$ to be a Reed–Solomon code with alphabet size ${\displaystyle q}$, and take the inner code ${\displaystyle C_{in}}$ to map each symbol ${\displaystyle i\in \{0,\ldots ,q-1\}}$ to the ${\displaystyle i}$-th unit vector in ${\displaystyle \{0,1\}^{q}}$.

---
Example: Let ${\displaystyle k=1,q=3,C_{out}=\{(0,0,0),(1,1,1),(2,2,2)\}}$. Below, ${\displaystyle M_{C}}$ denotes the matrix of codewords for ${\displaystyle C_{out}}$ and ${\displaystyle M_{C^{*}}}$ denotes the matrix of codewords for ${\displaystyle C^{*}=C_{out}\circ C_{in}}$, where each column is a codeword. The transition from the outer code to the concatenated code is illustrated in the sketch after this example.
---
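A small Python sketch reproducing this example, assuming the standard Kautz–Singleton inner code that maps symbol ${\displaystyle i}$ to the ${\displaystyle i}$-th unit vector of length ${\displaystyle q}$:

```python
def concatenate(outer_codewords, q):
    # Concatenate an outer code over alphabet {0, ..., q-1} with the
    # identity inner code (symbol i -> i-th unit vector in {0,1}^q).
    # Returns the binary matrix with one codeword per column.
    cols = []
    for cw in outer_codewords:
        col = []
        for symbol in cw:
            col.extend(1 if k == symbol else 0 for k in range(q))
        cols.append(col)
    return [list(row) for row in zip(*cols)]  # rows = tests, cols = items

C_out = [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
for row in concatenate(C_out, q=3):
    print(row)  # the 9 x 3 matrix M_{C*}
```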
Thus we have a strongly explicit construction for a code that can be used to form a group testing matrix and so ${\displaystyle t(d,n)\leq (d\log n)^{2}}$.
For non-adaptive testing we have shown that ${\displaystyle \Omega (d\log n)\leq t(d,n)}$ and that (i) ${\displaystyle t(d,n)\leq {\mathcal {O}}(d^{2}\log ^{2}{n})}$ (strongly explicit) and (ii) ${\displaystyle t(d,n)\leq {\mathcal {O}}(d^{2}\log n)}$ (randomized). In recent work, Porat and Rothschild presented an explicit construction (i.e., deterministic, but not strongly explicit) achieving ${\displaystyle t(d,n)\leq {\mathcal {O}}(d^{2}\log n)}$,[1] though it is not shown here. There is also a lower bound for disjunct matrices of ${\displaystyle t(d,n)\geq \Omega ({\frac {d^{2}}{\log d}}\log n)}$,[2][3][4] which is not shown here either.
## Examples
Here is the 2-disjunct matrix ${\displaystyle M_{9\times 12}}$: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 270, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352563619613647, "perplexity": 343.10476054387556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00220.warc.gz"} |
http://thermodynamicsystem0.blogspot.com/2011_03_01_archive.html | ## Saturday, 12 March 2011
### Thermodynamic system
A thermodynamic system is a precisely defined macroscopic region of the universe, often called a physical system, that is studied using the principles of thermodynamics.
All space in the universe outside the thermodynamic system is known as the surroundings, the environment, or a reservoir. A system is separated from its surroundings by a boundary which may be notional or real, but which by convention delimits a finite volume. Exchanges of work, heat, or matter between the system and the surroundings may take place across this boundary. Thermodynamic systems are often classified by specifying the nature of the exchanges that are allowed to occur across its boundary.
A thermodynamic system is characterized and defined by a set of thermodynamic parameters associated with the system. The parameters are experimentally measurable macroscopic properties, such as volume, pressure, temperature, electric field, and others.
The set of thermodynamic parameters necessary to uniquely define a system is called the thermodynamic state of a system. The state of a system is expressed as a functional relationship, the equation of state, between its parameters. A system is in thermodynamic equilibrium when the state of the system does not change with time.
Originally, in 1824, Sadi Carnot described a thermodynamic system as the working substance under study.
### Overview
Thermodynamics describes the physics of matter using the concept of the thermodynamic system, a region of the universe that is under study. All quantities, such as pressure or mechanical work, in an equation refer to the system unless labeled otherwise. As thermodynamics is fundamentally concerned with the flow and balance of energy and matter, systems are distinguished depending on the kinds of interaction they undergo and the types of energy they exchange with the surrounding environment.
Interactions of thermodynamic systems

| Type of system | Mass flow | Work | Heat |
| --- | --- | --- | --- |
| Open | Yes | Yes | Yes |
| Closed | No | Yes | Yes |
| Isolated | No | No | No |
Isolated systems are completely isolated from their environment. They do not exchange heat, work or matter with their environment. An example of an isolated system is a completely insulated rigid container, such as a completely insulated gas cylinder. Closed systems are able to exchange energy (heat and work) but not matter with their environment. A greenhouse is an example of a closed system exchanging heat but not work with its environment. Whether a system exchanges heat, work or both is usually thought of as a property of its boundary. Open systems may exchange any form of energy as well as matter with their environment. A boundary allowing matter exchange is called permeable. The ocean would be an example of an open system.
In practice, a system can never be absolutely isolated from its environment, because there is always at least some slight coupling, such as gravitational attraction. In analyzing a system in steady-state, the energy into the system is equal to the energy leaving the system [1].
An example system is the system of hot liquid water and solid table salt in a sealed, insulated test tube held in a vacuum (the surroundings). The test tube constantly loses heat in the form of black-body radiation, but the heat loss progresses very slowly. If there is another process going on in the test tube, for example the dissolution of the salt crystals, it will probably occur so quickly that any heat lost to the test tube during that time can be neglected. Thermodynamics in general does not measure time, but it does sometimes accept limitations on the time frame of a process.
### History
The first to develop the concept of a thermodynamic system was the French physicist Sadi Carnot, whose 1824 Reflections on the Motive Power of Fire studied what he called the working substance, typically a body of water vapor in steam engines, in regard to the system's ability to do work when heat is applied to it. The working substance could be put in contact with either a heat reservoir (a boiler), a cold reservoir (a stream of cold water), or a piston (on which the working body could do work by pushing it). In 1850, the German physicist Rudolf Clausius generalized this picture to include the concept of the surroundings, and began referring to the system as a "working body." In his 1850 manuscript On the Moving Force of Heat, Clausius wrote:
“ "With every change of volume (to the working body) a certain amount work must be done by the gas or upon it, since by its expansion it overcomes an external pressure, and since its compression can be brought about only by an exertion of external pressure. To this excess of work done by the gas or upon it there must correspond, by our principle, a proportional excess of heat consumed or produced, and the gas cannot give up to the "surrounding medium" the same amount of heat as it receives." ”
The article Carnot heat engine shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine; below, we see the Carnot engine as is typically modeled in current use:
Carnot engine diagram (modern) - where heat flows from a high temperature TH furnace through the fluid of the "working body" (working substance) and into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions.
In the diagram shown, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted through to produce work. In 1824, Sadi Carnot, in his famous paper Reflections on the Motive Power of Fire, had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, or air, etc. Although, in these early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water was boiled over a furnace; QC was typically a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work W here is the movement of the piston as it is used to turn a crank-arm, which was then typically used to turn a pulley so to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height."
### Boundary
A system boundary is a real or imaginary volumetric demarcation region drawn around a thermodynamic system across which quantities such as heat, mass, or work can flow.[1] In short, a thermodynamic boundary is a division between a system and its surroundings.
Boundaries can be fixed (e.g. a constant-volume reactor) or moveable (e.g. a piston). For example, in an engine, a fixed boundary means the piston is locked at its position; as such, a constant-volume process occurs. In that same engine, a moveable boundary allows the piston to move in and out. Boundaries may be real or imaginary: for closed systems, boundaries are real, while for open systems boundaries are often imaginary. A boundary may be adiabatic, isothermal, diathermal, insulating, permeable, or semipermeable.
In practice, the boundary is simply an imaginary dotted line drawn around a volume when there is going to be a change in the internal energy of that volume. Anything that passes across the boundary that effects a change in the internal energy needs to be accounted for in the energy balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824; it can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.
### Surroundings
The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment, and the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in analysis of the system, except in regards to these interactions.
### Open system
In open systems, matter may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: the increase in the internal energy of a system is equal to the amount of energy added to the system by matter flowing in and by heating, minus the amount lost by matter flowing out and in the form of work done by the system. The first law for open systems is given by:
dU = dU_in + δQ − dU_out − δW

where U_in is the average internal energy entering the system and U_out is the average internal energy leaving the system.
The region of space enclosed by open system boundaries is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of matter into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of matter out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above which is performed on the fluid (this is also often called PV work) and shaft work which may be performed on some mechanical device. These two types of work are expressed in the equation:
δW = d(P_out V_out) − d(P_in V_in) + δW_shaft
Substitution into the equation above for the control volume (cv) yields:

dU_cv = dU_in + d(P_in V_in) − dU_out − d(P_out V_out) + δQ − δW_shaft
The definition of enthalpy, H, permits us to use this thermodynamic potential to account for both internal energy and PV work in fluids for open systems:
dU_cv = dH_in − dH_out + δQ − δW_shaft
During steady-state operation of a device (see turbine, pump, and engine), any system property within the control volume is independent of time. Therefore, the internal energy of the system enclosed by the control volume remains constant, which implies that dU_cv in the expression above may be set equal to zero. This yields a useful expression for the power generation or requirement for these devices in the absence of chemical reactions:
δW_shaft/dt = dH_in/dt − dH_out/dt + δQ/dt
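As a small illustration of the steady-state balance above (the device and all numbers are invented for the example): for a single-inlet, single-outlet device the enthalpy rates are the mass flow rate times the specific enthalpies, so the shaft power follows directly.

```python
def shaft_power(m_dot, h_in, h_out, q_dot):
    # Steady state: W_shaft rate = H_in rate - H_out rate + Q rate.
    # m_dot [kg/s], h_in/h_out [J/kg], q_dot [W] (negative = heat loss).
    return m_dot * (h_in - h_out) + q_dot

# Hypothetical steam turbine: 5 kg/s, enthalpy drop 3230 -> 2675 kJ/kg,
# losing 50 kW of heat to the surroundings.
print(shaft_power(5.0, 3.230e6, 2.675e6, -50e3) / 1e6, "MW")  # ~2.73 MW
```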
### Closed system
In a closed system, no mass may be transferred in or out of the system boundaries. The system will always contain the same amount of matter, but heat and work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary.
* Adiabatic boundary – not allowing any heat exchange
* Rigid boundary – not allowing exchange of work
One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way.
Beginning with the first law of thermodynamics for an open system, this is expressed as:
dU = Q − W + m_i (h + v²/2 + gz)_i − m_e (h + v²/2 + gz)_e
where U is internal energy, Q is heat transfer, W is work, and, since no mass is transferred in or out of the system, both mass-flow terms vanish (m_i = m_e = 0), which yields the first law of thermodynamics for a closed system. The first law of thermodynamics for a closed system states that the change in internal energy of the system equals the difference between the heat added to the system and the work done by the system. The first law for closed systems is stated by:
dU = δQ − δW
where U is the average internal energy within the system, Q is the heat added to or extracted from the system and W is the work done by or to the system.
Substituting the amount of work needed to accomplish a reversible process, which is stated by:
δW = PdV
where P is the measured pressure and V is the volume, and the heat required to accomplish a reversible process stated by the second law of thermodynamics, the universal principle of entropy, stated by:
δQ = TdS
where T is the absolute temperature and S is the entropy of the system, derives the fundamental thermodynamic relationship used to compute changes in internal energy, which is expressed as:
dU = TdS − PdV
For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems which are undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:
∑_{j=1}^{m} a_ij N_j = b_i^0
where N_j is the number of j-type molecules, a_ij is the number of atoms of element i in molecule j and b_i^0 is the total number of atoms of element i in the system, which remains constant, since the system is closed. There will be one such equation for each different element in the system.
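A tiny Python sketch of this bookkeeping; the reaction 2 H2 + O2 -> 2 H2O and the mole numbers are our own example, not from the text.

```python
# Columns are molecule types (H2, O2, H2O); a[element][j] is the number
# of atoms of that element in molecule j.
a = {
    "H": [2, 0, 2],
    "O": [0, 2, 1],
}

def element_totals(a, N):
    # N[j] is the amount of molecule j; returns sum_j a_ij * N_j per element.
    return {el: sum(aij * Nj for aij, Nj in zip(row, N)) for el, row in a.items()}

before = [2, 1, 0]  # moles of H2, O2, H2O before reacting
after = [0, 0, 2]   # after complete reaction
print(element_totals(a, before))  # {'H': 4, 'O': 2}
print(element_totals(a, after))   # {'H': 4, 'O': 2} -- conserved
```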
### Isolated system
An isolated system is more restrictive than a closed system, as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out; pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is considered to be in a state of thermodynamic equilibrium.
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.
In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed a system (for example, a gas) was isolated: that is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.
The second law of thermodynamics holds only for isolated systems. It states that the entropy of an isolated system not in equilibrium will tend to increase over time, approaching its maximum value at equilibrium. Overall, in an isolated system, the available energy can never increase, and its complement, entropy, can never decrease. A closed system's entropy, by contrast, can decrease.
It is important to note that isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe).
It is worth noting that 'closed system' is often used in thermodynamics discussions when 'isolated system' would be correct, i.e., when it is assumed that energy does not enter or leave the system.
### Systems in equilibrium
At thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analyzing a thermodynamic process, it can be assumed that each intermediate state in the process is at equilibrium. This will also considerably simplify the analysis.
In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered to be in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states.
In thermodynamic processes, large departures from equilibrium during intermediate steps are associated with increases in entropy and increases in the production of heat rather than useful work. It can be shown that for a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862369954586029, "perplexity": 523.6107107942976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424889.43/warc/CC-MAIN-20170724162257-20170724182257-00474.warc.gz"} |
https://link.springer.com/article/10.1007%2Fs10107-014-0814-9 | Mathematical Programming
Volume 153, Issue 2, pp 495–534
# Decomposition algorithms for submodular optimization with applications to parallel machine scheduling with controllable processing times
• Akiyoshi Shioura
• Natalia V. Shakhlevich
• Vitaly A. Strusevich
Open Access
Full Length Paper Series A
## Abstract
In this paper we present a decomposition algorithm for maximizing a linear function over a submodular polyhedron intersected with a box. Apart from this contribution to submodular optimization, our results extend the toolkit available in deterministic machine scheduling with controllable processing times. We demonstrate how this method can be applied to developing fast algorithms for minimizing total compression cost for preemptive schedules on parallel machines with respect to given release dates and a common deadline. Obtained scheduling algorithms are faster and easier to justify than those previously known in the scheduling literature.
### Keywords
Submodular optimization · Parallel machine scheduling · Controllable processing times · Decomposition
### Mathematics Subject Classification
90C27 · 90B35 · 90C05
## 1 Introduction
In scheduling with controllable processing times, the actual durations of the jobs are not fixed in advance, but have to be chosen from a given interval. This area of scheduling has been active since the 1980s, see surveys [16] and [22].
Normally, for a scheduling model with controllable processing times two types of decisions are required: (1) each job has to be assigned its actual processing time, and (2) a schedule has to be found that provides a required level of quality. There is a penalty for assigning shorter actual processing times, since the reduction in processing time is usually associated with an additional effort, e.g., allocation of additional resources or improving processing conditions. The quality of the resulting schedule is measured with respect to the cost of assigning the actual processing times that guarantee a certain scheduling performance.
As established in [23, 24], there is a close link between scheduling with controllable processing times and linear programming problems with submodular constraints. This allows us to use the achievements of submodular optimization [4, 21] for design and justification of scheduling algorithms. On the other hand, formulation of scheduling problems in terms of submodular optimization leads to the necessity of studying novel models with submodular constraints. Our papers [25, 27] can be viewed as convincing examples of such a positive mutual influence of scheduling and submodular optimization.
This paper, which builds up on [26], makes another contribution towards the development of solution procedures for problems of submodular optimization and their applications to scheduling models. We present a decomposition algorithm for maximizing a linear function over a submodular polyhedron intersected with a box. Apart from this contribution to submodular optimization, our results extend the toolkit available in deterministic machine scheduling. We demonstrate how this method can be applied to several scheduling problems, in which it is required to minimize the total penalty for choosing actual processing times, also known as total compression cost. The jobs have to be processed with preemption on several parallel machines, so that no job is processed after a common deadline. The jobs may have different release dates.
The paper is organized as follows. Section 2 gives a survey of the relevant results on scheduling with controllable processing times. In Sect. 3 we reformulate three scheduling problems in terms of linear programming problems over a submodular polyhedron intersected with a box. Section 4 outlines a recursive decomposition algorithm for solving maximization linear programming problems with submodular constraints. The applications of the developed decomposition algorithm to scheduling with controllable processing times are presented in Sect. 5. The concluding remarks are contained in Sect. 6.
## 2 Scheduling with controllable processing times: a review
In this section, we give a brief overview of the known results on the preemptive scheduling problems with controllable processing times to minimize the total compression cost for schedules that are feasible with respect to given release dates and a common deadline.
Formally, in the model under consideration the jobs of set $$N=\{1,2,\ldots ,n\}$$ have to be processed on parallel machines $$M_{1},M_{2},\ldots ,M_{m}$$, where $$m\ge 2$$. For each job $$j\in N$$, its processing time $$p(j)$$ is not given in advance but has to be chosen by the decision-maker from a given interval $$\left[ \underline{p}(j),\overline{p}(j)\right]$$. That selection process can be seen as either compressing (also known as crashing) the longest processing time $$\overline{p}(j)$$ down to $$p(j)$$, or decompressing the shortest processing time $$\underline{p}(j)$$ up to $$p(j)$$. In the former case, the value $$x(j)=\overline{p}(j)-p(j)$$ is called the compression amount of job $$j$$, while in the latter case $$z(j)=p(j)-\underline{p}(j)$$ is called the decompression amount of job $$j$$. Compression may decrease the completion time of each job $$j$$ but incurs additional cost $$w(j)x(j)$$, where $$w(j)$$ is a given non-negative unit compression cost. The total cost associated with a choice of the actual processing times is represented by the linear function $$W=\sum _{j\in N}w(j)x(j)$$.
Each job $$j\in N$$ is given a release date $$r(j)$$, before which it is not available, and a common deadline $$d$$, by which its processing must be completed. In the processing of any job, preemption is allowed, so that the processing can be interrupted on any machine at any time and resumed later, possibly on another machine. It is not allowed to process a job on more than one machine at a time, and a machine processes at most one job at a time.
Given a schedule, let $$C(j)$$ denote the completion time of job $$j$$, i.e., the time at which the last portion of job $$j$$ is finished on the corresponding machine. A schedule is called feasible if the processing of a job $$j\in N$$ takes place in the time interval $$\left[ r(j),d \right]$$.
We distinguish between the identical parallel machines and the uniform parallel machines. In the former case, the machines have the same speed, so that for a job $$j$$ with an actual processing time $$p(j)$$ the total length of the time intervals in which this job is processed in a feasible schedule is equal to $$p(j)$$. If the machines are uniform, then it is assumed that machine $$M_{h}$$ has speed $$s_{h},\,1\le h\le m$$. Without loss of generality, throughout this paper we assume that the machines are numbered in non-increasing order of their speeds, i.e.,
\begin{aligned} s_{1}\ge s_{2}\ge \cdots \ge s_{m}. \end{aligned}
(1)
For some schedule, denote the total time during which a job $$j\in N$$ is processed on machine $$M_{h},\,1\le h\le m$$, by $$q^{h}(j)$$. Taking into account the speed of the machine, we call the quantity $$s_{h}q^{h}(j)$$ the processing amount of job $$j$$ on machine $$M_{h}$$. It follows that
\begin{aligned} p(j)=\sum _{h=1}^{m}s_{h}q^{h}(j). \end{aligned}
In all scheduling problems studied in this paper, we need to determine the values of actual processing times and find the corresponding feasible preemptive schedule so that all jobs complete before the deadline and total compression cost is minimized. Adapting standard notation for scheduling problems by Lawler et al. [11], we denote problems of this type by $$\alpha |r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$. Here, in the first field $$\alpha$$ we write “$$P$$” in the case of $$m\ge 2$$ identical machines and “$$Q$$ ” in the case of $$m\ge 2$$ uniform machines. In the middle field, the item “$$r(j)$$” implies that the jobs have individual release dates; this parameter is omitted if the release dates are equal. We write “$$p(j)=\overline{p}(j)-x(j)$$” to indicate that the processing times are controllable and $$x(j)$$ is the compression amount to be found. The condition “$$C(j)\le d$$” reflects the fact that in a feasible schedule the common deadline should be respected. The abbreviation “$$pmtn$$” is used to point out that preemption is allowed. Finally, in the third field we write the objective function to be minimized, which is the total compression cost $$W=\sum _{j \in N} w(j)x(j)$$. Scheduling problems with controllable processing times have received considerable attention since the 1980s, see, e.g., surveys by Nowicki and Zdrzałka [16] and by Shabtay and Steiner [22].
If the processing times $$p(j),\,j\in N$$, are fixed then the corresponding counterpart of problem $$\alpha |r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ is denoted by $$\alpha |r(j),pmtn|C_{\max }$$. In the latter problem it is required to find a preemptive schedule that for the corresponding settings minimizes the makespan $$C_{\max }=\max \left\{ C(j)|j\in N\right\}$$.
In the scheduling literature, there are several interpretations and formulations of scheduling models that are related to those with controllable processing times. Below we give a short overview of them, indicating the points of distinction and similarity with our definition of the model.
Janiak and Kovalyov [8] argue that the processing times are resource-dependent, so that the more units of a single additional resource is given to a job, the more it can be compressed. In their model, a job $$j\in N$$ has a ‘normal’ processing time $$b(j)$$ (no resource given), and its actual processing time becomes $$p(j)=b(j)-a(j)u(j)$$, provided that $$u(j)$$ units of the resource are allocated to the job, where $$a(j)$$ is interpreted as a compression rate. The amount of the resource to be allocated to a job is limited by $$0\le u(j)\le \tau (j)$$, where $$\tau (j)$$ is a known job-dependent upper bound. The cost of using one unit of the resource for compressing job $$j$$ is denoted by $$v(j)$$, and it is required to minimize the total cost of resource consumption. This interpretation of the controllable processing times is essentially equivalent to that adopted in this paper, which can be seen by setting
\begin{aligned}&\overline{p}(j)=b(j),\quad \underline{p}(j)=b(j)-a(j)\tau (j),\quad x(j)=a(j)u(j),\\&\quad w(j)=v(j)/a(j),\quad j\in N. \end{aligned}
A very similar model for scheduling with controllable processing times is due to Chen [2], later studied by McCormick [13]. For example, McCormick [13] gives algorithms for finding a preemptive schedule for parallel machines that is feasible with respect to arbitrary release dates and deadlines. The actual processing time of a job is determined by $$p(j)=\max \left\{ b(j)-a(j)\lambda (j),0\right\}$$ and the objective is to minimize the function $$\sum _{j\in N}\lambda (j)$$. This is also similar to our interpretation due to
\begin{aligned} \overline{p}(j)=b(j),\quad \underline{p}(j)=0,\quad x(j)=a(j)\lambda (j),\quad w(j)=1/a(j),\quad j\in N. \end{aligned}
Another range of scheduling models relevant to our study belongs to the area of imprecise computation; see [12] for a recent review. In computing systems that support imprecise computation, some computations (image processing programs, implementations of heuristic algorithms) can be run partially, producing less precise results. In our notation, a task with processing requirement $$\overline{p}(j)$$ can be split into a mandatory part which takes $$\underline{p}(j)$$ time, and an optional part that may take up to $$\overline{p}(j)-\underline{p}(j)$$ additional time units. To produce a result of reasonable quality, the mandatory part must be completed in full, while an optional part improves the accuracy of the solution. If instead of an ideal computation time $$\overline{p}(j)$$ a task is executed for $$p(j)=\overline{p}(j)-x(j)$$ time units, then computation is imprecise and $$x(j)$$ corresponds to the error of computation. Typically, the problems of imprecise computation are those of finding a deadline-feasible preemptive schedule either on a single machine or on parallel machines. A popular objective function is $$\sum w(j)x(j)$$, which is interpreted here as the total weighted error. It is surprising that until very recently, the similarity between the models with controllable processing times and those of imprecise computation has not been noticed. Even the most recent survey by Shabtay and Steiner [22] makes no mention of the imprecise computation research.
Scheduling problems with controllable processing times can serve as mathematical models in make-or-buy decision-making; see, e.g., Shakhlevich et al. [25]. In manufacturing, it is often the case that either the existing production capabilities are insufficient to fulfill all orders internally in time or the cost of work-in-process of an order exceeds a desirable amount. Such an order can be partly subcontracted. Subcontracting incurs additional cost but that can be either compensated by quoting realistic deadlines for all jobs or balanced by a reduction in internal production expenses. The make-or-buy decisions should be taken to determine which part of each order is manufactured internally and which is subcontracted. Under this interpretation, the orders are the jobs and for each order $$j\in N$$, the value of $$\overline{p}(j)$$ is interpreted as the processing requirement, provided that the order is manufactured internally in full, while $$\underline{p}(j)$$ is a given mandatory limit on the internal production. Further, $$p(j)=\overline{p}(j)-x(j)$$ is the chosen actual time for internal manufacturing, where $$x(j)$$ shows how much of the order is subcontracted and $$w(j)x(j)$$ is the cost of this subcontracting. Thus, the problem is to minimize the total subcontracting cost and find a deadline-feasible schedule for internally manufactured orders.
It is obvious that for scheduling problems with controllable processing times, minimizing the total compression cost $$W$$ is equivalent to maximizing either the total decompression cost $$\sum w(j)z(j)$$ or the total weighted processing time $$\sum w(j)p(j)$$. Most of the problems relevant to this study have been solved using a greedy approach. One way of implementing this approach is to start with a (possibly infeasible) schedule in which all jobs are fully decompressed to their longest processing times $$\overline{p}(j)$$, scan the jobs in non-decreasing order of their weights $$w(j)$$ and compress each job by the smallest possible amount that guarantees feasible processing of the job. Another approach, which is in some sense dual to the one described above, is to start with a feasible schedule in which all jobs are fully compressed to their smallest processing times $$\underline{p}(j)$$, scan the jobs in non-increasing order of their weights $$w(j)$$ and decompress each job by the largest possible amount.
Despite the similarity of these approaches, in early papers on this topic each problem is considered separately and a justification of the greedy approach is often lengthy and developed from the first principles. However, as established by later studies, the greedy nature of the solution approaches is due to the fact that many scheduling problems with controllable processing times can be reformulated in terms of linear programming problems over special regions such as submodular polyhedra, (generalized) polymatroids, base polyhedra, etc. See Sect. 3 for definitions and main concepts of submodular optimization.
Nemhauser and Wolsey [15] were among the first to notice that scheduling with controllable processing times could be handled by methods of submodular optimization; see, e.g., Example 6 (Sect. 6 of Chapter III.3) of the book [15]. A systematic development of a general framework for solving scheduling problems with controllable processing times via submodular methods was initiated by Shakhlevich and Strusevich [23, 24] and further advanced by Shakhlevich et al. [25]. This paper makes another contribution in this direction.
Below we review the known results on the problems to be considered in this paper. Two aspects of the resulting algorithms are important: (1) finding the actual processing times and therefore the optimal value of the function, and (2) finding the corresponding optimal schedule. The second aspect is related to traditional scheduling to minimize the makespan with fixed processing times.
**Zero release dates, common deadline.** The results for the models under these conditions are summarized in the second and third columns of Table 1. If the machines are identical, then solving problem $$P|pmtn|C_{\max }$$ with fixed processing times can be done by a linear-time algorithm that is due to McNaughton [14]. As shown by Jansen and Mastrolilli [9], problem $$P|p(j)=\overline{p}(j)-x(j),pmtn,C(j)\le d|W$$ reduces to a continuous generalized knapsack problem and can be solved in $$O(n)$$ time. Shakhlevich and Strusevich [23] consider the bicriteria problem $$P|p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right)$$, in which the makespan $$C_{\max }$$ and the total compression cost $$W=\sum w(j)x(j)$$ have to be minimized simultaneously, in the Pareto sense; the running time of their algorithm is $$O(n\log n)$$.
Table 1 Summary of the results

| Problem | $$\alpha =P$$, $$r(j)=0$$ | $$\alpha =Q$$, $$r(j)=0$$ | $$\alpha =P$$, arbitrary $$r(j)$$ | $$\alpha =Q$$, arbitrary $$r(j)$$ |
| --- | --- | --- | --- | --- |
| $$\alpha \vert r(j),pmtn\vert C_{\max }$$ | $$O(n)$$ [14] | $$O(m\log m+n)$$ [5] | $$O(n\log n)$$ [18] | $$O(nm+n\log n)$$ [19] |
| $$\alpha \vert r(j),p(j)=\overline{p}(j)-x(j),pmtn,C(j)\le d\vert W$$, previously known | $$O(n)$$ [9] | $$O(nm+n\log n)$$ [17, 24] | $$O(n^{2}\log m)$$ [27] | $$O(n^{2}m)$$ [27] |
| $$\alpha \vert r(j),p(j)=\overline{p}(j)-x(j),pmtn,C(j)\le d\vert W$$, this paper |  | $$O(\min \{n\log n,\ n+m\log m\log n\})$$ (Sect. 5.1) | $$O(n\log n\log m)$$ (Sect. 5.2) | $$O(nm\log n)$$ (Sect. 5.3) |
| $$\alpha \vert r(j),p(j)=\overline{p}(j)-x(j),pmtn\vert (C_{\max },W)$$ | $$O(n\log n)$$ [23] | $$O(nm\log m)$$ [27] | $$O(n^{2}\log m)$$ [27] | $$O(n^{2}m)$$ [27] |
In the case of uniform machines, the best known algorithm for solving problem $$Q|pmtn|C_{\max }$$ with fixed processing times is due to Gonzalez and Sahni [5]. For problem $$Q|p(j)=\overline{p}(j)-x(j),pmtn,C(j) \le d|W$$ Nowicki and Zdrzałka [17] show how to find the actual processing times in $$O(nm+n\log n)$$ time. Shakhlevich and Strusevich [24] reduce the problem to maximizing a linear function over a generalized polymatroid; they give an algorithm that requires the same running time as that by Nowicki and Zdrzałka [17], but can be extended to solving a bicriteria problem $$Q|p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right)$$. The best running time for the bicriteria problem is $$O(nm\log m)$$, which is achieved in [27] by submodular optimization techniques.
**Arbitrary release dates, common deadline.** The results for the models under these conditions are summarized in the fourth and fifth columns of Table 1. These models are symmetric to those with a common zero release date and arbitrary deadlines. Problem $$P|r(j),pmtn|C_{\max }$$ with fixed processing times on $$m$$ identical parallel machines can be solved in $$O(n\log n)$$ time (or in $$O(n\log m)$$ time if the jobs are pre-sorted), as proved by Sahni [18]. For the uniform machines, Sahni and Cho [19] give an algorithm for problem $$Q|r(j),pmtn|C_{\max }$$ that requires $$O(mn+n\log n)$$ time (or $$O(mn)$$ time if the jobs are pre-sorted).
Prior to our work on the links between submodular optimization and scheduling with controllable processing times [27], no purpose-built algorithms have been known for problems $$\alpha |r(j),p(j)= \overline{p}(j)-x(j),pmtn,C(j)\le d|W$$ with $$\alpha \in \left\{ P,Q\right\}$$. It is shown in [27] that the bicriteria problems $$\alpha m|r(j),p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right)$$ can be solved in $$O\left( n^{2}\log m\right)$$ time and in $$O(n^{2}m)$$ time for $$\alpha =P$$ and $$\alpha =Q$$, respectively. Since a solution to a single criterion problem $$\alpha m|r(j),p(j)=\overline{p}(j)-x(j),pmtn,C(j)\le d|W$$ is contained among the Pareto optimal solutions for the corresponding bicriteria problem $$\alpha m|r(j),p(j)=\overline{p}(j)-x(j),pmtn|\left( C_{\max },W\right)$$, the algorithms from [27] are quoted in Table 1 as the best previously known for the single criterion problems with controllable processing times.
The main purpose of this paper is to demonstrate that the single criterion scheduling problems with controllable processing times to minimize the total compression cost can be solved by faster algorithms that are based on reformulation of these problems in terms of a linear programming problem over a submodular polyhedron intersected with a box. For the latter generic problem, we develop a recursive decomposition algorithm and show that for the scheduling applications it can be implemented in a very efficient way.
## 3 Scheduling with controllable processing times: submodular reformulations
For completeness, we start this section with definitions related to submodular optimization. Unless stated otherwise, we follow a comprehensive monograph on this topic by Fujishige [4], see also [10, 21]. In Sect. 3.1, we introduce a linear programming problem for which the set of constraints is a submodular polyhedron intersected with a box. Being quite general, the problem represents a range of scheduling models with controllable processing times. In Sect. 3.2 we give the details of the corresponding reformulations.
### 3.1 Preliminaries on submodular polyhedra
For a positive integer $$n$$, let $$N=\{1,2,\ldots ,n\}$$ be a ground set, and let $$2^{N}$$ denote the family of all subsets of $$N$$. For a subset $$X\subseteq N$$, let $$\mathbb {R}^{X}$$ denote the set of all vectors $${\mathbf {p}}$$ with real components $$p(j)$$, where $$j\in X$$. For two vectors $${\mathbf {p}}=(p(1),p(2),\ldots ,p(n))\in \mathbb {R}^{N}$$ and $${\mathbf {q}} =(q(1),q(2),\ldots ,q(n))\in \mathbb {R}^{N}$$, we write $${\mathbf {p}}\le {\mathbf {q}}$$ if $$p(j)\le q(j)$$ for each $$j\in N$$. Given a set $$X\subseteq \mathbb {R}^{N}$$, a vector $${\mathbf {p}}\in X$$ is called maximal in $$X$$ if there exists no vector $${\mathbf {q}}\in X$$ such that $${\mathbf {p}}\le {\mathbf {q}}$$ and $${\mathbf {p}}\ne {\mathbf {q}}$$. For a vector $${\mathbf {p}} \in \mathbb {R}^{N}$$, define $$p(X)=\sum _{j\in X}p(j)$$ for every set $$X\in 2^{N}$$.
A set function $$\varphi :2^N \rightarrow \mathbb {R}$$ is called submodular if the inequality
\begin{aligned} \varphi (X)+\varphi (Y) \ge \varphi (X\cup Y)+\varphi (X\cap Y) \end{aligned}
holds for all sets $$X,Y \in 2^N$$. For a submodular function $$\varphi$$ defined on $$2^{N}$$ such that $$\varphi (\emptyset )=0$$, the pair $$(2^N, \varphi )$$ is called a submodular system on $$N$$, while $$\varphi$$ is referred to as the rank function of that system.
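A brute-force check of the submodular inequality makes the definition concrete; the sketch below (with an example rank function of our choosing) is exponential in $$|N|$$ and is meant for illustration only.

```python
from itertools import chain, combinations

def subsets(N):
    s = list(N)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def is_submodular(N, phi):
    # Checks phi(X) + phi(Y) >= phi(X | Y) + phi(X & Y) for all X, Y.
    sets = [frozenset(t) for t in subsets(N)]
    return all(phi(X) + phi(Y) >= phi(X | Y) + phi(X & Y) - 1e-9
               for X in sets for Y in sets)

N = {1, 2, 3, 4}
# phi(X) = min(|X|, 2): a matroid rank function, hence submodular,
# with phi(empty set) = 0.
print(is_submodular(N, lambda X: min(len(X), 2)))  # True
```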
For a submodular system $$(2^{N},\varphi )$$, define two polyhedra
\begin{aligned} P(\varphi )&= \left\{ {\mathbf {p}}\in \mathbb {R}^{N}\mid p(X)\le \varphi (X),\quad X\in 2^{N}\right\} \!, \end{aligned}
(2)
\begin{aligned} B(\varphi )&= \left\{ {\mathbf {p}}\in \mathbb {R}^{N}\mid {\mathbf {p}}\in P(\varphi ),\quad p(N)=\varphi (N)\right\} \!, \end{aligned}
(3)
called the submodular polyhedron and the base polyhedron, respectively, associated with the submodular system. Notice that $$B(\varphi )$$ represents the set of all maximal vectors in $$P(\varphi )$$.
The main problem that we consider in this section is as follows:
\begin{aligned} \begin{aligned} \hbox {(LP):}&\hbox {Maximize}&\displaystyle \sum \limits _{j\in N}w(j)p(j)&\\&\hbox {subject to}&&\displaystyle p(X)\le \varphi (X),&X\in 2^{N}, \\&&\displaystyle \underline{p}(j)\le p(j)\le \overline{p}(j),&j\in N, \end{aligned} \end{aligned}
(4)
where $$\varphi :2^{N}\rightarrow \mathbb {R}$$ is a submodular function with $$\varphi (\emptyset )=0$$, $$\mathbf {w}\in \mathbb {R}_{+}^{N}$$ is a nonnegative weight vector, and $$\overline{\mathbf {p}},\underline{\mathbf {p}}\in \mathbb {R}^{N}$$ are upper and lower bound vectors, respectively. This problem serves as a mathematical model for many scheduling problems with controllable processing times. Problem (LP) can be classified as a problem of maximizing a linear function over a submodular polyhedron intersected with a box.
In our previous work [25], we have demonstrated that Problem (LP) can be reduced to optimization over a simpler structure, namely, over a base polyhedron. In fact, we have shown that a problem of maximizing a linear function over the intersection of a submodular polyhedron and a box is equivalent to maximizing the same objective function over a base polyhedron associated with another rank function.
### Theorem 1
(cf. [25])
1. (i)
Problem (LP) has a feasible solution if and only if $$\underline{\mathbf {p}}\in P(\varphi )$$ and $$\underline{\mathbf {p}}\le \overline{\mathbf {p}}$$.
2. (ii)
If Problem (LP) has a feasible solution, then the set of maximal feasible solutions of Problem (LP) is a base polyhedron $$B(\tilde{\varphi })$$ associated with the submodular system $$(2^{N},\tilde{ \varphi })$$, where the rank function $$\tilde{\varphi }:2^{N}\rightarrow \mathbb {R}$$ is given by
\begin{aligned} \tilde{\varphi }(X)= \min _{Y\in 2^{N}}\left\{ \varphi (Y)+\overline{p}(X{\setminus } Y)-\underline{p}(Y{\setminus } X)\right\} . \end{aligned}
(5)
Notice that the computation of the value $$\tilde{\varphi }(X)$$ for a given $$X\in 2^{N}$$ reduces to minimization of a submodular function, which can be computed in polynomial time by using any of the available algorithms for minimizing a submodular function [7, 20]. However, the running time of known algorithms is fairly large. In many special cases of Problem (LP), including its applications to scheduling problems with controllable processing times, the value $$\tilde{\varphi }(X)$$ can be computed more efficiently without using the submodular function minimization, as shown later.
Throughout this paper, we assume that Problem (LP) has a feasible solution, which, due to claim (i) of Theorem 1, is equivalent to the conditions $$\underline{\mathbf {p}}\in P(\varphi )$$ and $$\underline{\mathbf {p}}\le \overline{\mathbf {p}}$$. Claim (ii) of Theorem 1 implies that Problem (LP) reduces to the following problem:
\begin{aligned} \begin{aligned}&\hbox {Maximize}\quad \sum \limits _{j\in N}w(j)p(j)\\&\hbox {subject to} \quad {\mathbf {p}}\in B(\tilde{\varphi }), \end{aligned} \end{aligned}
(6)
where the rank function $$\tilde{\varphi }:2^{N}\rightarrow \mathbb {R}$$ is given by (5).
An advantage of the reduction of Problem (LP) to a problem of the form (6) is that the solution vector can be obtained essentially in a closed form, as stated in the theorem below.
### Theorem 2
(cf. [4]) Let $$j_{1},j_{2},\ldots ,j_{n}$$ be an ordering of elements in $$N$$ that satisfies
\begin{aligned} w(j_{1})\ge w(j_{2})\ge \cdots \ge w(j_{n}). \end{aligned}
(7)
Then, vector $$\mathbf {p^{*}}\in \mathbb {R}^{N}$$ given by
\begin{aligned} p^{*}(j_{h})=\tilde{\varphi }(\{j_{1},\ldots ,j_{h-1},j_{h}\})-\tilde{ \varphi }(\{j_{1},\ldots ,j_{h-1}\}),\quad h=1,2,\ldots ,n, \end{aligned}
(8)
is an optimal solution to the problem (6) [and also to the problem (4)].
This theorem immediately implies a simple algorithm for Problem (LP), which computes an optimal solution $$\mathbf {p^{*}}$$ by determining the value $$\tilde{\varphi }(\{j_{1},j_{2},\ldots , j_{h}\})$$ for each $$h = 1, 2, \ldots , n$$. In this paper, instead, we use a different algorithm based on a decomposition approach, which achieves better running times for the special cases of Problem (LP), as explained in Sect. 4.
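For illustration, the greedy computation of Theorem 2 admits the following direct implementation (a sketch in Python; `phi_tilde` is an assumed oracle returning $$\tilde{\varphi }(X)$$ for a given set $$X$$, for instance via (5)):

```python
def greedy_optimal(N, w, phi_tilde):
    """Compute the vector p* of (8) for an ordering satisfying (7)."""
    order = sorted(N, key=lambda j: -w[j])   # w(j1) >= w(j2) >= ... >= w(jn)
    p, prefix, prev = {}, set(), 0.0
    for j in order:
        prefix = prefix | {j}
        cur = phi_tilde(prefix)              # tilde-phi({j1, ..., jh})
        p[j] = cur - prev                    # marginal increment (8)
        prev = cur
    return p
```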
### 3.2 Rank functions for scheduling applications
In this subsection, we follow [27] and present reformulations of three scheduling problems on parallel machines with controllable processing times in terms of LP problems defined over a submodular polyhedron intersected with a box of the form (4). We assume that if the jobs have different release dates, they are numbered to satisfy
\begin{aligned} r(1)\le r(2)\le \cdots \le r(n). \end{aligned}
(9)
If the machines are uniform they are numbered in accordance with (1). We denote
\begin{aligned} S_{0}=0,\qquad S_{k}=s_{1}+s_{2}+\cdots +s_{k},\quad 1\le k\le m. \end{aligned}
(10)
$$S_{k}$$ represents the total speed of $$k$$ fastest machines; if the machines are identical, $$S_{k}=k$$ holds.
In each of the problems $$Q|p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W,\, P|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ and $$Q|r(j),p(j)= \overline{p}(j)-x(j),C(j)\le d,pmtn|W$$, we need to find the actual processing times $$p(j)=\overline{p}(j)-x(j),\,j\in N$$, such that all jobs complete by a common due date $$d$$ and the total compression cost $$W=\sum _{j\in N}w(j)x(j)$$ is minimized. In what follows, we present LP formulations of these problems in which $$p(j),\,j\in N$$, are the decision variables and the objective function to be maximized is $$\sum _{j\in N}w(j)p(j)=\sum _{j\in N}w(j)\left( \overline{p}(j)-x(j)\right)$$; since $$\sum _{j\in N}w(j)\overline{p}(j)$$ is a constant, this is equivalent to minimizing $$W$$. Since each decision variable $$p(j)$$ has a lower bound $$\underline{p}(j)$$ and an upper bound $$\overline{p}(j)$$, each LP formulation includes the box constraints $$\underline{p}(j)\le p(j)\le \overline{p}(j),\,j\in N$$.
The derivations of the rank functions for the models under consideration can be justified by the conditions for the existence of a feasible schedule for a given common deadline $$d$$ formulated, e.g., in [1]. Informally, these conditions state that for a given deadline $$d$$ a feasible schedule exists if and only if
1. (i)
for each $$k,1\le k\le m-1,\,k$$ longest jobs can be processed on $$k$$ fastest machines by time $$d$$, and
2. (ii)
all $$n$$ jobs can be completed on all $$m$$ machines by time $$d$$.
We refer to [27], where the rank functions for the relevant problems are presented and discussed in more detail. Below we present their definitions. In all scheduling applications, a meaningful interpretation of $$\varphi (X)$$ is the largest processing capacity available for the jobs of set $$X$$.
For example, problem $$Q|p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ reduces to Problem (LP) of the form (4) with the rank function
\begin{aligned} \varphi (X)=dS_{\min \{|X|,m\}}=\left\{ \begin{array}{l@{\quad }l} dS_{\left| X \right| }, &{}\hbox {if }\,|X|\le m-1, \\ dS_{m}, &{}\hbox {if }\,|X|\ge m. \end{array}\right. \end{aligned}
(11)
It is clear that the conditions $$p(X)\le \varphi (X),\,X\in 2^{N}$$, for the function $$\varphi (X)$$ defined by (11) correspond to the conditions (i) and (ii) above, provided that $$\left| X\right| \le m-1$$ and $$\left| X\right| \ge m$$, respectively. As proved in [24], function $$\varphi$$ is submodular.
We then consider problem $$Q|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$. For a set of jobs $$X\subseteq N$$, we define $$r_{i}(X)$$ to be the $$i$$-th smallest release date in set $$X\in 2^{N},\,1\le i\le \left| X\right|$$. Then, for a non-empty set $$X$$ of jobs, the largest processing capacity available on the fastest machine $$M_{1}$$ is $$s_{1}\left( d-r_{1}(X)\right)$$, the total largest processing capacity on two fastest machines $$M_{1}$$ and $$M_{2}$$ is $$s_{1}\left( d-r_{1}(X)\right) +s_{2}\left( d-r_{2}(X)\right)$$, etc. We deduce that
\begin{aligned} \varphi (X)=\left\{ \begin{array}{l@{\quad }l} dS_{\left| X\right| }-\sum \nolimits _{i=1}^{\left| X\right| }s_{i}r_{i}(X), &{}\hbox {if }\,\left| X\right| \le m-1, \\ dS_{m}-\sum \nolimits _{i=1}^{m}s_{i}r_{i}(X), &{}\hbox {if }\,\left| X\right| \ge m. \end{array}\right. \end{aligned}
(12)
It can be verified that this function is submodular.
Problem $$P|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ is a special case of problem $$Q|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$, where $$s_{1}=s_{2}=\cdots =s_{m}=1$$. Hence, the corresponding rank function $$\varphi$$ can be simplified as
\begin{aligned} \varphi (X)=\left\{ \begin{array}{l@{\quad }l} d{|X|}-\sum \nolimits _{i=1}^{|X|}r_{i}(X), &{} \hbox {if }\,\left| X\right| \le m-1, \\ dm-\sum \nolimits _{i=1}^{m}r_{i}(X), &{}\hbox {if }\,\left| X\right| \ge m. \end{array}\right. \end{aligned}
(13)
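To make the rank functions concrete, the sketch below evaluates (11)–(13) directly (Python, with illustrative names; `S` is the list of partial speed sums (10) with `S[0] = 0`, and `r` maps each job to its release date):

```python
def phi_Q_equal_release(X, d, S, m):
    """(11): capacity of the min{|X|, m} fastest uniform machines by time d."""
    return d * S[min(len(X), m)]

def phi_Q_release(X, d, S, m, r):
    """(12): as (11), less the capacity lost before each release date."""
    rel = sorted(r[j] for j in X)            # r_1(X) <= r_2(X) <= ...
    q = min(len(X), m)
    return d * S[q] - sum((S[i] - S[i - 1]) * rel[i - 1] for i in range(1, q + 1))

def phi_P_release(X, d, m, r):
    """(13): the special case of (12) with unit speeds."""
    rel = sorted(r[j] for j in X)
    q = min(len(X), m)
    return d * q - sum(rel[:q])
```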
## 4 Decomposition of LP problems with submodular constraints
In this section, we describe a decomposition algorithm for solving LP problems defined over a submodular polyhedron intersected with a box. In Sect. 4.1, we demonstrate that the linear programming problem under study can be recursively decomposed into subproblems of a smaller dimension, with some components of a solution vector fixed to one of their bounds. We provide an outline of an efficient recursive decomposition procedure in Sect. 4.2 and analyze its time complexity in Sect. 4.3. In Sect. 5 we present implementation details of the recursive decomposition procedure for the relevant scheduling models with controllable processing times.
### 4.1 Fundamental idea for decomposition
In this section, we show an important property, which makes the foundation of our decomposition algorithm for Problem (LP) of the form (4).
The lemma below demonstrates that some components of an optimal solution can be fixed either at their upper or lower bounds, while for some components their sum is fixed. Given a subset $$\hat{N}$$ of $$N$$, we say that $$\hat{N}$$ is a heavy-element subset of $$N$$ with respect to the weight vector $$\mathbf {w}$$ if it satisfies the condition
\begin{aligned} \min _{j\in \hat{N}}w(j)\ge \max _{j\in N{\setminus } \hat{N}}w(j). \end{aligned}
For completeness, we also regard the empty set as a heavy-element subset of $$N$$.
Given Problem (LP), in accordance with (5) define a set $$Y_{*}\subseteq N$$ such that the equality
\begin{aligned} \tilde{\varphi }(X)=\varphi (Y_{*})+\overline{p}(X{\setminus } Y_{*})- \underline{p}(Y_{*}{\setminus } X) \end{aligned}
(14)
holds for a set $$X\subseteq N$$. Because of its special role, in the remainder of this paper we call $$Y_{*}$$ an instrumental set for set $$X$$.
### Lemma 1
Let $$\hat{N}\subseteq N$$ be a heavy-element subset of $$N$$ with respect to $$\mathbf {w}$$, and $$Y_{*}\subseteq N$$ be an instrumental set for set $$\hat{N}$$. Then, there exists an optimal solution $${\mathbf {p}}^{*}$$ of Problem (LP) such that
\begin{aligned} \text{(a) } p^{*}(Y_{*})={\varphi }(Y_{*}), \text{(b) } p^{*}(j)=\overline{p}(j),\ j\in \hat{N}{\setminus } Y_{*}, \text{(c) } p^{*}(j)=\underline{p}(j),\ j\in Y_{*}{\setminus } \hat{N }. \end{aligned}
### Proof
Since $$\hat{N}$$ is a heavy-element subset, there exists an ordering $$j_{1},j_{2},\ldots ,j_{n}$$ of elements in $$N$$ that satisfies (7) and $$\hat{N}=\{j_{1},j_{2},\ldots ,j_{k}\}$$, where $$k=|\hat{N}|$$. Theorems 1 and 2 guarantee that the solution $${\mathbf {p}}^{*}$$ given by (8) is optimal. In particular, this implies
\begin{aligned} p^{*}(\hat{N})&= \tilde{\varphi }(\{j_{1}\})+\sum _{i=2}^{k}\left( \tilde{\varphi }(\left\{ j_{1},j_{2},\ldots ,j_{i}\right\} )-\tilde{\varphi }(\left\{ j_{1},j_{2},\ldots ,j_{i-1}\right\} )\right) \\&= \tilde{\varphi }(\left\{ j_{1},j_{2},\ldots ,j_{k}\right\} )=\tilde{\varphi }(\hat{N}). \end{aligned}
Since $${\mathbf {p}}^{*}$$ is a feasible solution of Problem (LP), the following conditions simultaneously hold:
\begin{aligned} p^{*}(Y_{*})\le \varphi (Y_{*}), \quad p^{*}(j)\le \overline{p}(j),\ j\in \hat{N}{\setminus } Y_{*}, \quad -p^{*}(j)\le -\underline{p}(j),\ j\in Y_{*}{\setminus } \hat{N}. \end{aligned}
(15)
On the other hand, due to the choice of set $$Y_{*}$$ we have
\begin{aligned} p^{*}(\hat{N})=\tilde{\varphi }(\hat{N})=\varphi (Y_{*})+\overline{p}( \hat{N}{\setminus } Y_{*})-\underline{p}\left( Y_{*}{\setminus } \hat{N}\right) , \end{aligned}
which implies that each inequality of (15) must hold as equality, and that is equivalent to the properties (a), (b), and (c) in the lemma. $$\square$$
In what follows, we use two fundamental operations on a submodular system $$\left( 2^{N},\varphi \right)$$, as defined in [4, Section 3.1]. For a set $$A\in 2^{N}$$, define a set function $$\varphi ^{A}:2^{A}\rightarrow \mathbb {R}$$ by
\begin{aligned} \varphi ^{A}(X)=\varphi (X),\quad X\in 2^{A}. \end{aligned}
Then, $$(2^{A},\varphi ^{A})$$ is a submodular system on $$A$$ and it is called a restriction of $$(2^{N},\varphi )$$ to $$A$$. On the other hand, for a set $$A\in 2^{N}$$ define a set function $$\varphi _{A}:2^{N{\setminus } A}\rightarrow \mathbb {R}$$ by
\begin{aligned} \varphi _{A}(X)=\varphi (X\cup A)-\varphi (A),\quad X\in 2^{N{\setminus } A}. \end{aligned}
Then, $$(2^{N{\setminus } A},\varphi _{A})$$ is a submodular system on $$N{\setminus } A$$ and it is called a contraction of $$(2^{N},\varphi )$$ by $$A$$.
For an arbitrary set $$A\in 2^{N}$$, Problem (LP) can be decomposed into two subproblems of a similar structure by performing restriction of $$\left( 2^{N},\varphi \right)$$ to $$A$$ and contraction of $$\left( 2^{N},\varphi \right)$$ by $$A$$, respectively. These problems can be written as follows: for restriction as
\begin{aligned} \begin{aligned} \hbox {(LP1):}&\hbox {Maximize}&\displaystyle \sum \limits _{j\in A}w(j)p(j)&\\&\hbox {subject to}&&\displaystyle p(X)\le \varphi (X),&X\in 2^{A}, \\&&\displaystyle \underline{p}(j)\le p(j)\le \overline{p}(j),&j\in A, \end{aligned} \end{aligned}
and for contraction as
\begin{aligned} \begin{aligned} \hbox {(LP2):}&\hbox {Maximize}&\displaystyle \sum \limits _{j\in N{\setminus } A}w(j)p(j)&\\&\hbox {subject to}&&\displaystyle p(X)\le \varphi (X\cup A)-\varphi (A),&X\in 2^{N{\setminus } A}, \\&&\displaystyle \underline{p}(j)\le p(j)\le \overline{p}(j),&j\in N{\setminus } A. \end{aligned} \end{aligned}
We show that an optimal solution of the original Problem (LP) can be easily restored from the optimal solutions of these two subproblems. For every subset $$A\subseteq N$$ and vectors $$\mathbf {p}_\mathbf{1}\in \mathbb {R}^{A}$$ and $$\mathbf {p}_\mathbf{2}\in \mathbb {R}^{N{\setminus } A}$$, the direct sum $$\mathbf {p}_\mathbf{1}\oplus \mathbf {p}_\mathbf{2}\in \mathbb {R}^{N}$$ of $$\mathbf {p}_\mathbf{1}$$ and $$\mathbf {p}_\mathbf{2}$$ is defined by
\begin{aligned} (p_{1}\oplus p_{2})(j)=\left\{ \begin{array}{l@{\quad }l} p_{1}(j), &{}\hbox {if }\,j\in {A}, \\ p_{2}(j), &{}\hbox {if }\,j\in N{\setminus } {A}. \end{array}\right. \end{aligned}
### Lemma 2
Let $$A\in 2^{N}$$, and suppose that $$q(A)=\varphi (A)$$ holds for some optimal solution $$\mathbf {q}\in \mathbb {R}^{N}$$ of Problem (LP). Then,
1. (i)
Each of problems (LP1) and (LP2) has a feasible solution.
2. (ii)
If a vector $$\mathbf {p}_\mathbf{1}\in \mathbb {R}^{A}$$ is an optimal solution of Problem (LP1) and a vector $$\mathbf {p}_\mathbf{2}\in \mathbb {R} ^{N{\setminus } A}$$ is an optimal solution of Problem (LP2), then the direct sum $${\mathbf {p}}^{*}=\mathbf {p}_\mathbf{1}\oplus \mathbf {p}_\mathbf{2}\in \mathbb {R} ^{N}$$ of $$\mathbf {p}_\mathbf{1}$$ and $$\mathbf {p}_\mathbf{2}$$ is an optimal solution of Problem (LP).
### Proof
The proof below is similar to that for Lemma 3.1 in [4]. We define vectors $$\mathbf {q}_\mathbf{1}\in \mathbb {R}^{A}$$ and $$\mathbf {q}_\mathbf{2}\in \mathbb {R}^{N{\setminus } A}$$ by
\begin{aligned} q_{1}(j)=q(j),j\in A,\qquad q_{2}(j)=q(j),j\in N{\setminus } A. \end{aligned}
To prove (i), it suffices to show that $$\mathbf {q}_\mathbf{1}$$ and $$\mathbf {q}_\mathbf{2}$$ are feasible solutions of Problems (LP1) and (LP2), respectively. Since $$\mathbf {q}$$ is a feasible solution of Problem (LP), we have
\begin{aligned}&q(X)\le \varphi (X),\quad X\in 2^{N}, \end{aligned}
(16)
\begin{aligned}&\underline{p}(j)\le q(j)\le \overline{p}(j),\quad j\in N. \end{aligned}
(17)
Then, (16) and (17) imply that $$\mathbf {q}_\mathbf{1}\in \mathbb {R}^{A}$$ is a feasible solution of Problem (LP1). It follows from (16) and the equality $$q(A)=\varphi (A)$$ that
\begin{aligned} q(X)=q(X\cup A)-q(A)\le \varphi (X\cup A)-\varphi (A),\mathrm{\quad }X\in 2^{N{\setminus } A}, \end{aligned}
which, together with (17), implies that $$\mathbf {q}_\mathbf{2} \in \mathbb {R}^{N{\setminus } A}$$ is a feasible solution of Problem (LP2). This concludes the proof of (i).
To prove (ii), we first show that $$\mathbf {p}^{*}$$ is a feasible solution of Problem (LP). Since $$\mathbf {p}_\mathbf{1}$$ and $$\mathbf {p}_\mathbf{2}$$ are feasible solutions of Problem (LP1) and Problem (LP2), respectively, we have
\begin{aligned}&p^{*}(X)\le \varphi (X),\qquad X\in 2^{A}, \end{aligned}
(18)
\begin{aligned}&p^{*}(X)\le \varphi (X\cup A)-\varphi (A),\qquad X\in 2^{N{\setminus } A}, \end{aligned}
(19)
\begin{aligned}&\underline{p}(j)\le p^{*}(j)\le \overline{p}(j),\qquad j\in N. \end{aligned}
(20)
For any $$X\in 2^{N}$$, we derive
\begin{aligned} p^{*}(X)&= p^{*}(X\cap A)+p^{*}(X{\setminus } A) \\&\le \varphi (X\cap A)+\varphi ((X{\setminus } A)\cup A)-\varphi (A) \\&= \varphi (X\cap A)+\varphi (X\cup A)-\varphi (A) \\&\le \varphi (X), \end{aligned}
where the first inequality is by (18) and (19), and the second by the submodularity of $$\varphi$$. This inequality and (20) show that the vector $$\mathbf {p}^{*}$$ is a feasible solution of (LP).
To show optimality of $$\mathbf {p}^{*}$$, notice that by optimality of $$\mathbf {p}_\mathbf{1}$$ and $$\mathbf {p}_\mathbf{2}$$ we have
\begin{aligned} \sum _{j\in A}w(j)p_{1}(j)\ge \sum _{j\in A}w(j)q_{1}(j),\quad \sum _{j\in N{\setminus } A}w(j)p_{2}(j)\ge \sum _{j\in N{\setminus } A}w(j)q_{2}(j), \end{aligned}
and due to the definition of $$\mathbf {p}^{*}$$ we obtain
\begin{aligned} \sum _{j\in N}w(j)p^{*}(j)&= \sum _{j\in A}w(j)p_{1}(j)+\sum _{j\in N{\setminus } A}w(j)p_{2}(j) \\&\ge \sum _{j\in A}w(j)q_{1}(j)+\sum _{j\in N{\setminus } A}w(j)q_{2}(j)\ =\sum _{j\in N}w(j)q(j), \end{aligned}
so that $$\mathbf {p}^{*}$$ is an optimal solution of (LP). $$\square$$
From Lemmas 1 and 2, we obtain the following property, which is used recursively in our decomposition algorithm.
### Theorem 3
Let $$\hat{N}\subseteq N$$ be a heavy-element subset of $$N$$ with respect to $$\mathbf {w}$$, and $$Y_{*}$$ be an instrumental set for set $$\hat{N}$$. Let $$\mathbf {p}_\mathbf{1}\in \mathbb {R}^{Y_{*}}$$ and $$\mathbf {p}_\mathbf{2}\in \mathbb {R}^{N{\setminus } Y_{*}}$$ be optimal solutions of the linear programs (LPR) and (LPC), respectively, where (LPR) and (LPC) are given as
\begin{aligned}&\begin{aligned}&\mathrm{(LPR):}&\mathrm{Maximize}&\displaystyle \sum _{j\in Y_{*}}w(j)p(j)&\\&\text{ subject } \text{ to }&\displaystyle p(X)\le \varphi (X),&X\in 2^{Y_{*}}, \\&&\displaystyle \underline{p}(j)\le p(j)\le \overline{p}(j),&j\in Y_{*}\cap \hat{N}, \\&&\displaystyle p(j)=\underline{p}(j),&j\in Y_{*}{\setminus } \hat{N} \end{aligned}\\&\begin{aligned}&\mathrm{(LPC):}&\mathrm{Maximize}&\displaystyle \sum _{j\in N{\setminus } Y_{*}}w(j)p(j)&\\&\text{ subject } \text{ to }&\displaystyle p(X)\le \varphi (X\cup Y_{*})-\varphi (Y_{*}),&X\in 2^{N{\setminus } Y_{*}}, \\&&\displaystyle \underline{p}(j)\le p(j)\le \overline{p}(j),&j\in \left( N{\setminus } Y_{*}\right) {\setminus } \left( \hat{N}{\setminus } Y_{*}\right) , \\&&\displaystyle p(j)=\overline{p}(j),&j\in \hat{N}{\setminus } Y_{*}. \end{aligned} \end{aligned}
Then, the vector $$\mathbf {p}^{*}\in \mathbb {R}^{N}$$ given by the direct sum $$\mathbf {p}^{*}=\mathbf {p}_\mathbf{1}\oplus \mathbf {p}_\mathbf{2}$$ is an optimal solution of (LP).
Notice that Problem (LPR) is obtained from Problem (LP) as a result of restriction to $$Y_{*}$$ and the values of components $$p(j), j\in Y_{*}{\setminus } \hat{N}$$, are fixed to their lower bounds in accordance with Property (c) of Lemma 1. Similarly, Problem (LPC) is obtained from Problem (LP) as a result of contraction by $$Y_{*}$$ and the values of components $$p(j), j\in \hat{N}{\setminus } Y_{*}$$, are fixed to their upper bounds in accordance with Property (b) of Lemma 1.
### 4.2 Recursive decomposition procedure
In this subsection, we describe how the original Problem (LP) can be decomposed recursively based on Theorem 3, until we obtain a collection of trivially solvable problems with no non-fixed variables. In each stage of this process, the current LP problem is decomposed into two subproblems, each with a reduced set of variables, while some of the original variables receive fixed values and stay fixed until the end.
### Remark 1
The definition of a heavy-element set can be revised to take into account the fact that some variables may become fixed during the solution process. The fixed variables make a fixed contribution into the objective function, so that the values of their weights become irrelevant for further consideration and can therefore be made, e.g., zero. This means that a heavy-element set can be selected not among all variables of set $$N$$ but only among the non-fixed variables. Formally, if the set $$N$$ of jobs is known to be partitioned as $$N=Q\cup F$$, where the variables of set $$Q$$ are non-fixed and those of set $$F$$ are fixed, then $$\hat{Q}\subseteq Q$$ is a heavy-element subset with respect to the weight vector $$\mathbf {w}$$ if it satisfies the condition
\begin{aligned} \min _{j\in \hat{Q}}w(j)\ge \max _{j\in Q{\setminus } \hat{Q}}w(j). \end{aligned}
Notice that for this refined definition of a heavy-element subset, Lemma 1 and Theorem 3 can be appropriately adjusted.
In each stage of the recursive procedure, we need to solve a subproblem that can be written in the following generic form:
\begin{aligned} \begin{aligned}&\text{ LP }(H,F,K,\mathbf {l},\mathbf {u}\mathrm{)}&\mathrm{Maximize}&\displaystyle \sum _{j\in H}w(j)p(j)&\\&\text{ subject } \text{ to }&\displaystyle p(X)\le \varphi _{K}^H (X)=\varphi (X\cup K)-\varphi (K),&X\in 2^{H}, \\&&\displaystyle l(j)\le p(j)\le u(j),&j\in H{\setminus } F, \\&&\displaystyle p(j)=u(j)=l(j),&j\in F, \end{aligned} \end{aligned}
(21)
where
• $$H\subseteq N$$ is the index set of components of vector $${\mathbf {p}}$$;
• $$F\subseteq H$$ is the index set of fixed components, i.e., $$l(j)=u(j)$$ holds for each $$j\in F$$;
• $$K\subseteq N{\setminus } H$$ is the set that defines the rank function $$\varphi _{K}^H :2^{H}\rightarrow \mathbb {R}$$ such that
\begin{aligned} \varphi _{K}^H (X)=\varphi (X\cup K)-\varphi (K), \qquad X\in 2^{H}; \end{aligned}
• $$\mathbf {l}=(l(j)\mid j\in H)$$ and $$\mathbf {u}=(u(j)\mid j\in H)$$ are respectively the vectors of the lower and upper bounds on variables $$p(j), j\in H$$. For $$j\in N$$, each of $$l(j)$$ and $$u(j)$$ either takes the value of $$\underline{p}(j)$$ or that of $$\overline{p}(j)$$ from the original Problem (LP). Notice that $$l(j)=u\left( j\right)$$ for each $$j\in F$$.
Throughout this paper, we assume that each Problem LP$$(H,F,K,\mathbf {l}, \mathbf {u})$$ is feasible. This is guaranteed by Lemma 2 if the initial Problem (LP) is feasible.
The original Problem (LP) is represented as Problem LP$$(N,\emptyset ,\emptyset ,\underline{{\mathbf {p}}},\overline{{\mathbf {p}}})$$. For $$j\in H$$, we say that the variable $$p(j)$$ is a non-fixed variable if $$l(j)<u(j)$$ holds, and a fixed variable if $$l(j)=u(j)$$ holds. If all the variables in Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$ are fixed, i.e., $$l(j)=u(j)$$ holds for all $$j\in H$$, then an optimal solution is uniquely determined by the vector $$\mathbf {u}\in \mathbb {R}^{H}$$.
Consider a general case that Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$ of the form (21) contains at least one non-fixed variable, i.e., $$|H{\setminus } F|>0$$. We define a function $$\widetilde{\varphi } _{K}^{H}:2^{H}\rightarrow \mathbb {R}$$ by
\begin{aligned} \widetilde{\varphi }_{K}^{H}(X)=\min _{Y\in 2^{H}}\{\varphi _{K}^{H}(Y)+u(X{\setminus } Y)-l(Y{\setminus } X)\}. \end{aligned}
(22)
By Theorem 1 (ii), the set of maximal feasible solutions of Problem LP$$(H,F,K,\,\mathbf {l},\mathbf {u})$$ is given as a base polyhedron $$B( \widetilde{\varphi }_{K}^{H})$$ associated with the function $$\widetilde{ \varphi }_{K}^{H}$$. Therefore, if $$|H{\setminus } F|=1$$ and $$H{\setminus } F=\{j^{\prime }\}$$, then an optimal solution $$\mathbf {p^{*}}\in \mathbb {R }^{H}$$ is given by
\begin{aligned} p^{*}(j)=\left\{ \begin{array}{l@{\quad }l} \tilde{\varphi }_{K}^{H}(\{j^{\prime }\}), &{} j=j^{\prime }, \\ u(j), &{} j\in F. \end{array}\right. \end{aligned}
(23)
Suppose that $$|H{\setminus } F|\ge 2$$. Then, we call Procedure Decomp$$(H,F,K,\mathbf {l},\mathbf {u})$$, described below. Let $$\hat{H} \subseteq H {\setminus } F$$ be a heavy-element subset of $$H$$ with respect to the vector $$(w(j)\mid j\in H)$$, and $$Y_{*}\subseteq H$$ be an instrumental set for set $$\hat{H}$$, i.e.,
\begin{aligned} \widetilde{\varphi }_{K}^{H}(\hat{H})=\varphi _{K}^{H}(Y_{*})+u\left( \hat{H} {\setminus } Y_{*}\right) -l(Y_{*}{\setminus } \hat{H}). \end{aligned}
(24)
Theorem 3, when applied to Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$, implies that the problem is decomposed into the two subproblems
\begin{aligned} \begin{aligned}&\mathrm{Maximize}&\displaystyle \sum _{j\in Y_{*}}w(j)p(j)&\\&\text{ subject } \text{ to }&\displaystyle p(X)\le \varphi _{K}^{Y_*}(X)=\varphi (X\cup K)-\varphi (K),&X\in 2^{Y_{*}}, \\&l(j)\le p(j)\le l(j),&j\in Y_{*}{\setminus } \hat{H}, \\&l(j)\le p(j)\le u(j),&j\in Y_{*}\cap \hat{H}, \end{aligned} \end{aligned}
and
\begin{aligned} \begin{aligned}&\mathrm{Maximize}&\displaystyle \sum _{j\in H{\setminus } Y_{*}}w(j)p(j)&\\&\text{ subject } \text{ to }&\displaystyle p(X)\le \varphi _{K\cup Y_*}^{H {\setminus } Y_*}(X) =\varphi (X\cup K \cup Y_{*})-\varphi (K \cup Y_{*}),&X\in 2^{H{\setminus } Y_{*}}, \\&u(j)\le p(j)\le u(j),&j\in \hat{H}{\setminus } Y_{*}, \\&l(j)\le p(j)\le u(j),&j\in \left( H{\setminus } Y_{*}\right) {\setminus } (\hat{H}{\setminus } Y_{*}). \end{aligned} \end{aligned}
The first of these subproblems corresponds to Problem (LPR), and in that problem the values of components $$p(j), j\in Y_{*}{\setminus } \hat{H}$$, are fixed to their lower bounds. The second subproblem corresponds to Problem (LPC), and in that problem the values of components $$p(j), j\in \hat{ H}{\setminus } Y_{*}$$, are fixed to their upper bounds.
We denote these subproblems by Problem LP$$(Y_{*},F_{1},K,\mathbf {l}_\mathbf{1}, \mathbf {u}_\mathbf{1})$$ and Problem LP$$(H{\setminus } Y_{*},F_{2},K\cup Y_{*},\mathbf {l}_\mathbf{2},\mathbf {u}_\mathbf{2})$$, respectively, where the vectors $$\mathbf {l}_\mathbf{1},\mathbf {u}_\mathbf{1}\in \mathbb {R}^{Y_{*}}$$ and $$\mathbf {l}_\mathbf{2},\mathbf {u}_\mathbf{2}\in \mathbb {R}^{H{\setminus } Y_{*}}$$, and the updated sets of fixed variables $$F_{1}$$ and $$F_{2}$$ are given by
\begin{aligned}&\begin{aligned} l_{1}(j)&= l(j),j\in Y_{*},\\ u_{1}(j)&= \left\{ \begin{array}{l@{\quad }l} l(j), &{} j\in Y_{*}{\setminus } \hat{H}, \\ u(j), &{} j\in Y_{*}\cap \hat{H}, \end{array}\right. \\ F_{1}&= Y_{*}{\setminus } \hat{H}, \end{aligned}\end{aligned}
(25)
\begin{aligned}&\begin{aligned} l_{2}(j)&= \left\{ \begin{array}{l@{\quad }l} u(j), &{} j\in \hat{H}{\setminus } Y_{*}, \\ l(j), &{} j\in H{\setminus } (Y_{*}\cup \hat{H}), \end{array}\right. \\ u_{2}(j)&= u(j),j\in H{\setminus } Y_{*},\\ F_{2}&= (\hat{H}\cup (H\cap F)){\setminus } Y_{*}. \end{aligned} \end{aligned}
(26)
Notice that Problem LP$$(Y_{*},F_{1},K,\mathbf {l}_\mathbf{1},\mathbf {u}_\mathbf{1})$$ inherits the set of fixed variables $$Y_{*}\cap F$$ from the problem of a higher level, and additionally the variables of set $$Y_{*}{\setminus } \hat{ H}$$ become fixed. However, since $$\hat{H}$$ contains only non-fixed variables, we deduce that $$Y_{*}{\setminus } \hat{H}\supseteq Y_{*}\cap F$$, so that the complete description of the set $$F_{1}$$ of fixed variables in Problem LP$$(Y_{*},F_{1},K,\mathbf {l}_\mathbf{1},\mathbf {u}_\mathbf{1})$$ is given by $$Y_{*}{\setminus } \hat{H}$$.
Problem LP$$(H{\setminus } Y_{*},F_{2},K\cup Y_{*},\mathbf {l}_\mathbf{2}, \mathbf {u}_\mathbf{2})$$ inherits the set of fixed variables $$\left( H{\setminus } Y_{*}\right) \cap F$$ from the problem of a higher level, and additionally the variables of set $$\hat{H}{\setminus } Y_{*}$$ become fixed. These two sets are disjoint. Thus, the complete description of the set $$F_{2}$$ of fixed variables in Problem LP$$(H{\setminus } Y_{*},F_{2},K\cup Y_{*},\mathbf {l}_\mathbf{2},\mathbf {u}_\mathbf{2})$$ is given by $$(\hat{H}\cup (H\cap F)){\setminus } Y_{*}$$.
Without going into implementation details, we now give a formal description of the recursive procedure, which takes Remark 1 into account. For the current Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$, we compute optimal solutions $$\mathbf {p}_\mathbf{1}\in \mathbb {R}^{Y_{*}}$$ and $$\mathbf {p}_\mathbf{2}\in \mathbb {R}^{H{\setminus } Y_{*}}$$ of the two subproblems by calling procedures Decomp$$(Y_{*},F_{1},K,\mathbf {l}_\mathbf{1},\mathbf {u}_\mathbf{1})$$ and Decomp$$(H{\setminus } Y_{*},F_{2},K\cup Y_{*}, \mathbf {l}_\mathbf{2},\mathbf {u}_\mathbf{2})$$. By Theorem 3, the direct sum $${\mathbf {p}}^{*}={\mathbf {p}}_{\mathbf {1}}\oplus {\mathbf {p}}_{{\mathbf {2}}}$$ is an optimal solution of Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$, which is the output of the procedure Decomp$$(H,F,K,\mathbf {l},\mathbf {u})$$.
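A compact sketch of this recursion may look as follows (Python; all identifiers are ours, and `find_instrumental` and `phi_tilde_single` stand for problem-specific oracles, specified for the scheduling models in Sect. 5):

```python
def decomp(H, F, K, l, u, w, find_instrumental, phi_tilde_single):
    """Sketch of Procedure Decomp for Problem LP(H, F, K, l, u).

    H, F, K are sets of job indices; l, u, w are dicts. The oracles are
    problem-specific: find_instrumental returns an instrumental set (24),
    phi_tilde_single evaluates tilde-phi_K^H({j'}) for (23).
    """
    nonfixed = [j for j in H if l[j] < u[j]]
    if not nonfixed:                                 # all variables fixed
        return {j: u[j] for j in H}
    if len(nonfixed) == 1:                           # solve directly by (23)
        p = {j: u[j] for j in H}
        p[nonfixed[0]] = phi_tilde_single(H, K, l, u, nonfixed[0])
        return p
    # heavy-element subset with ceil(g/2) non-fixed variables (Remark 1);
    # a linear-time median-finding algorithm would replace this sort
    nonfixed.sort(key=lambda j: -w[j])
    H_hat = set(nonfixed[:(len(nonfixed) + 1) // 2])
    Y_star = find_instrumental(H, K, l, u, H_hat)
    # restriction to Y_star: fix p(j) = l(j) on Y_star \ H_hat, cf. (25)
    u1 = {j: (u[j] if j in H_hat else l[j]) for j in Y_star}
    p1 = decomp(Y_star, Y_star - H_hat, K,
                {j: l[j] for j in Y_star}, u1, w,
                find_instrumental, phi_tilde_single)
    # contraction by Y_star: fix p(j) = u(j) on H_hat \ Y_star, cf. (26)
    rest = H - Y_star
    l2 = {j: (u[j] if j in H_hat else l[j]) for j in rest}
    p2 = decomp(rest, (H_hat | F) - Y_star, K | Y_star,
                l2, {j: u[j] for j in rest}, w,
                find_instrumental, phi_tilde_single)
    p1.update(p2)                                    # direct sum (Theorem 3)
    return p1
```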
Recall that the original Problem (LP) is solved by calling Procedure Decomp$$(N,\emptyset ,\emptyset ,\underline{{\mathbf {p}}},\overline{{\mathbf {p}}})$$. Its actual running time depends on the choice of a heavy-element subset $$\hat{H}$$ in Step 2 and on the time complexity of finding an instrumental set $$Y_{*}$$.
### 4.3 Analysis of time complexity
We analyze the time complexity of Procedure Decomp. To reduce the depth of recursion of the procedure, it makes sense to perform decomposition in such a way that the number of non-fixed variables in each of the two emerging subproblems is roughly a half of the number of non-fixed variables in the current Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$.
### Lemma 3
If at each level of recursion of Procedure Decomp for Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$ with $$|H{\setminus } F|>1$$ a heavy-element subset $$\hat{H}\subseteq H{\setminus } F$$ in Step 2 is chosen to contain $$\lceil |H{\setminus } F|/2\rceil$$ non-fixed variables, then the numbers of non-fixed variables in the two subproblems that emerge as a result of decomposition are at most $$\left\lceil |H{\setminus } F|/2\right\rceil$$ and at most $$\lfloor |H{\setminus } F|/2\rfloor$$, respectively.
### Proof
For Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$, let $$g=|H{\setminus } F|$$ denote the number of non-fixed variables. In Step 2, Procedure Decomp$$(H,F,K,\mathbf {l},\mathbf {u})$$ selects a heavy-element subset $$\hat{H}\subseteq H{\setminus } F$$ that contains $$\lceil g/2\rceil$$ non-fixed variables, i.e., $$|\hat{H}|=\left\lceil {g}/{2} \right\rceil$$. Then, the number of non-fixed variables in Problem LP$$(Y_{*},F_{1},K,\mathbf {l}_\mathbf{1},\mathbf {u}_\mathbf{1})$$ considered in Step 3 satisfies $$|Y_{*}\cap \hat{H}|\le \left\lceil {g}/{2}\right\rceil$$.
Due to (26), the number of non-fixed variables in Problem LP$$(H{\setminus } Y_{*},F_{2},K\cup Y_{*},\mathbf {l}_\mathbf{2},\mathbf {u}_\mathbf{2})$$ considered in Step 4 satisfies
\begin{aligned} |H{\setminus } (\hat{H}\cup F\cup Y_{*})|\le |(H{\setminus } F){\setminus } \hat{H}|=\left\lfloor \frac{g}{2}\right\rfloor . \end{aligned}
$$\square$$
This lemma implies that the overall depth of recursion of Procedure Decomp applied to Problem LP$$(N,\emptyset ,\emptyset ,\underline{{\mathbf {p}}} ,\overline{{\mathbf {p}}})$$ is $$O(\log n)$$.
Let us analyze the running time of Procedure Decomp applied to Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$. We denote by $$T_{\mathrm{LP}}(h,g)$$ the time complexity of Procedure Decomp$$(H,F,K,{\mathbf {l}},\mathbf {u})$$, where $$h=|H|$$ and $$g=|H{\setminus } F|$$. Let $$T_{Y_{*}}(h)$$ denote the running time for computing the value $$\widetilde{\varphi } _{K}^{H}(\hat{H})$$ for a given set $$\hat{H}\subseteq H$$ and finding an instrumental set $$Y_{*}$$ that minimizes the right-hand side of Eq. (22). In Steps 3 and 4, Procedure Decomp splits Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$ into two subproblems: one with $$h_{1}$$ variables, among which there are $$g_{1}\le \min \{h_{1},\lceil g/2\rceil \}$$ non-fixed variables, and the other one with $$h_{2}=h-h_{1}$$ variables, among which there are $$g_{2}\le \min \{h_{2},\lfloor g/2\rfloor \}$$ non-fixed variables. Let $$T_{\mathrm{Split}}\left( h\right)$$ denote the time complexity of such a decomposition, i.e., of setting up the instances of the two subproblems. A required heavy-element set can be found in $$O(h)$$ time by using a linear-time median-finding algorithm. Then, we obtain a recursive equation:
\begin{aligned} T_{\mathrm{LP}}(h,g)=\left\{ \begin{array}{l@{\quad }l} O(1), &{} \text{ if } g=0, \\ T_{Y_{*}}(h), &{} \text{ if } g=1, \\ T_{Y_{*}}(h)+T_{\mathrm{Split}}(h)+T_{\mathrm{LP}}(h_{1},g_{1})+T_{ \mathrm{LP}}(h_{2},g_{2}), &{} \text{ if } g>1. \end{array}\right. \end{aligned}
By Lemma 3, the depth of recursion is $$O(\log n)$$, while the sets $$H$$ of the subproblems at each level of recursion are pairwise disjoint. Hence, under an assumption that both functions $$T_{Y_{*}}(h)$$ and $$T_{\mathrm{Split}}\left( h\right)$$ are non-decreasing and convex, the total cost at each level of recursion is $$O(T_{Y_{*}}(n)+T_{\mathrm{Split}}(n))$$, and by solving the recursive equation we obtain
\begin{aligned} T_{\mathrm{LP}}(n,n)=O(\left( T_{Y_{*}}(n)+T_{\mathrm{Split}}\left( n\right) \right) \log n). \end{aligned}
Thus, the findings of this section can be summarized as the following statement.
### Theorem 4
Problem (LP) can be solved by Procedure Decomp in $$O((T_{Y_{*}}(n)+T_{\mathrm{Split}}(n))\log n)$$ time.
In the forthcoming discussion of three scheduling applications of the results of this section, we pay special attention to designing fast algorithms that could find the required set $$Y_{*}$$ in all levels of the recursive Procedure Decomp. We develop fast algorithms that compute the value $$\widetilde{\varphi }(\hat{H})$$ and find a set $$Y_{*}$$ in accordance with its definition; see Sect. 5.
### 4.4 Comparison with decomposition algorithm for maximizing a concave separable function
In this subsection, we refer to our decomposition algorithm for Problem (LP) defined over a submodular polyhedron intersected with a box as Algorithm SSS-Decomp. Below, we compare that algorithm with a known decomposition algorithm that is applicable for maximizing a separable concave function over a submodular polyhedron; see [3], [4, Sect. 8.2] and [6].
Consider the problem of maximizing a separable concave function over a submodular polyhedron:
\begin{aligned} \begin{aligned}&\mathrm{(SCFM)}&\mathrm{Maximize}&\displaystyle \sum _{j\in N}f_{j}(p(j))&\\&\mathrm{subject~to}&\displaystyle p(X)\le \varphi (X),&X\in 2^{N}, \end{aligned} \end{aligned}
where $$f_{j}:\mathbb {R}\rightarrow \mathbb {R}$$ is a univariate concave function for $$j\in N$$ and $$\varphi :2^{N}\rightarrow \mathbb {R}$$ is a submodular function with $$\varphi (\emptyset )=0$$.
The decomposition algorithm for Problem (SCFM) was first proposed by Fujishige [3] for the special case where each $$f_{j}$$ is quadratic and $$\varphi$$ is a polymatroid rank function. Groenevelt [6] then generalized the decomposition algorithm for the case where each $$f_{j}$$ is a general concave function and $$\varphi$$ is a polymatroid rank function. Later, it was pointed out by Fujishige [4, Sect. 8.2] that the decomposition algorithm in [6] can be further generalized to the case where $$\varphi$$ is a general submodular function. We refer to that algorithm as Algorithm FG-Decomp.
For simplicity of presentation, in the description of Algorithm FG-Decomp we assume that each $$f_{j}$$ is monotone increasing; the general case with non-monotone $$f_{j}$$ can be dealt with by an appropriate modification of the algorithm; see [6].
Notice that for the set $$Y_{*}$$ chosen in Step 3, there exists some optimal solution $$\mathbf {p}^{*}$$ of Problem (SCFM) such that $$\varphi (Y_{*})=p^{*}(Y_{*})$$; see [4, Sect. 8.2], [6].
It is easy to see that Problem (LP) can be reduced to Problem (SCFM) by setting the functions $$f_{j}$$ as
\begin{aligned} f_{j}(\alpha )=\left\{ \begin{array}{l@{\quad }l} w(j)\underline{p}(j)+M(\alpha -\underline{p}(j)), &{}\text{ if } \,\alpha < \underline{p}(j); \\ w(j)\alpha , &{}\text{ if } \,\underline{p}(j)\le \alpha \le \overline{p}(j); \\ w(j)\overline{p}(j)-M(\alpha -\overline{p}(j)), &{}\text{ if } \,\alpha > \overline{p}(j) \end{array}\right. \end{aligned}
(27)
with a sufficiently large positive number $$M$$. Thus, Algorithm FG-Decomp (appropriately adjusted to deal with non-monotone functions $$f_{j}$$) can be applied to solving Problem (LP).
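For illustration, the functions (27) can be coded directly as follows (a minimal sketch; `p_lo`, `p_hi` and `M` are our names for $$\underline{\mathbf {p}},\overline{\mathbf {p}}$$ and the large constant):

```python
def f(j, alpha, w, p_lo, p_hi, M):
    """Piecewise-linear concave function (27) for job j.

    Concavity requires M >= w[j]; to make violations of the box
    constraints unprofitable, M must be chosen sufficiently large.
    """
    if alpha < p_lo[j]:
        return w[j] * p_lo[j] + M * (alpha - p_lo[j])
    if alpha > p_hi[j]:
        return w[j] * p_hi[j] - M * (alpha - p_hi[j])
    return w[j] * alpha
```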
For Problem (LP), Algorithm FG-Decomp is quite similar to Algorithm SSS-Decomp. Indeed, both algorithms recursively find a set $$Y_{*}$$ and decompose a problem into two subproblems by using restriction to $$Y_{*}$$ and contraction by $$Y_{*}$$.
The difference of the two decomposition algorithms is in the selection rule of a set $$Y_{*}$$. In fact, a numerical example can be provided that demonstrates that for the same instance of Problem (LP) the two decomposition algorithms may find different sets $$Y_{*}$$ in the same iteration.
In addition, Algorithm SSS-Decomp fixes some variables in the subproblems so that the number of non-fixed variables in each subproblem is at most the half of the non-fixed variables in the original problem; this is an important feature of our algorithm which is not enjoyed by Algorithm FG-Decomp. This difference affects the efficiency of the two decomposition algorithms; indeed, for Problem (LP) the height of the decomposition tree can be $$\varTheta (n)$$ if Algorithm FG-Decomp is used, while it is $$O(\log n)$$ in our Algorithm SSS-Decomp.
Thus, despite certain similarity between the two decomposition algorithms, our algorithm cannot be seen as a straightforward adaptation of Algorithm FG-Decomp designed for solving problems of non-linear optimization with submodular constraints to a less general problem of linear programming.
On the other hand, assume that the feasible region for Problem (SCFM) is additionally restricted by imposing the box constraints, similar to those used in Problem (LP). Theorem 1 can be used to reduce the resulting problem to Problem (SCFM) with a feasible region being the base polyhedron with a modified rank function. Although the obtained problem can be solved by Algorithm FG-Decomp, this approach is computationally inefficient, since it requires multiple calls to a procedure for minimizing a submodular function. It is more efficient not to rely on Theorem 1, but to handle the additional box constraints by adapting the objective function, similarly to (27), and then to use Algorithm FG-Decomp.
## 5 Application to parallel machine scheduling problems
In this section, we show how the decomposition algorithm based on Procedure Decomp can be adapted for solving problems with parallel machines efficiently. Before considering implementation details that are individual for each scheduling problem under consideration, we start this section with a discussion that addresses the matters that are common to all three problems.
Recall that each scheduling problem we study in this paper can be formulated as Problem (LP) of the form (4) with an appropriate rank function. Thus, each of these problems can be solved by the decomposition algorithm described in Sect. 4.2 applied to Problem LP$$(N,\emptyset ,\emptyset ,\mathbf {l},\mathbf {u)}$$, where $$\mathbf {l}=\underline{{\mathbf {p}}}$$ and $$\mathbf {u}=\overline{{\mathbf {p}}}$$.
For an initial Problem LP$$(N,\emptyset ,\emptyset ,\mathbf {l},\mathbf {u)}$$, we assume that the following preprocessing is done before calling Procedure Decomp$$(N,\emptyset ,\emptyset ,\mathbf {l},\mathbf {u})$$:
1. 1.
If required, the jobs are numbered in non-decreasing order of their release dates in accordance with (9).
2. 2.
If required, the machines are numbered in non-increasing order of their speeds in accordance with (1), and the partial sums $$S_{v}$$ are computed for all $$v,\,0\le v\le m$$, by (10).
3. 3.
The lists $$\left( l(j)\mid j\in N\right)$$ and $$\left( u(j)\mid j\in N\right)$$ are formed and their elements are sorted in non-decreasing order.
The required preprocessing takes $$O(n\log n)$$ time.
To adapt the generic Procedure Decomp to solving a particular scheduling problem, we only need to provide the implementation details for Procedure Decomp$$(H,F,K,{\mathbf {l}},\mathbf {u})$$ that emerges at a certain level of recursion. To be precise, we need to explain how to compute for each particular problem the function $$\widetilde{\varphi }_{K}^{H}(X)$$ for a chosen set $$X\in 2^{H}$$ and how to find for a current heavy-element set an instrumental set $$Y_{*}$$ defined by (22), which determines the pair of problems into which the current problem is decomposed.
Given Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$ of the form (21) define $$h=|H|$$ and $$k=|K|$$. Recall that $$K,H\subseteq N$$ are sets with $$K\cap H=\emptyset$$. For $$v=0,1,\ldots ,h$$, define
\begin{aligned} \mathcal {H}_{v}=\left\{ Y\subseteq H\mid |Y|=v\right\} \end{aligned}
(28)
Introduce
\begin{aligned} \hat{h}=\min \left\{ h,m-k-1\right\} . \end{aligned}
(29)
Since $$\varphi _{K}^{H}(Y)=\varphi (Y\cup K)-\varphi (K)$$ for $$Y\in 2^{H}$$, it follows that for a given set $$X\subseteq H$$ the function $$\widetilde{ \varphi }_{K}^{H}:2^{H}\rightarrow \mathbb {R}$$ can be computed as follows:
\begin{aligned} \widetilde{\varphi }_{K}^{H}(X)&= \min _{Y\in 2^{H}}\left\{ \varphi _{K}^{H}(Y)+u(X{\setminus } Y)-l(Y{\setminus } X)\right\} \nonumber \\&= u(X)-\varphi (K)+\min _{Y\in 2^{H}}\left\{ \varphi (Y\cup K)-u(Y\cap X)-l(Y{\setminus } X)\right\} \nonumber \\&= u(X)-\varphi (K)+\min _{Y\in 2^{H}}\left\{ \varphi (Y\cup K)-\lambda (Y)\right\} \!, \end{aligned}
(30)
where $$\varphi$$ is the initial rank function associated with the scheduling problem under consideration, and
\begin{aligned} \lambda (j)=\left\{ \begin{array}{l@{\quad }l} u(j), &{}\mathrm{if} \ \,j\in X, \\ l(j), &{}\mathrm{if}\ \,j\in H{\setminus } X. \end{array}\right. \end{aligned}
(31)
Notice that if the minimum in the right-hand side of (30) is achieved for $$Y=Y_{*}$$, then $$Y_{*}$$ is an instrumental set for set $$X$$.
### 5.1 Uniform machines, equal release dates
In this subsection, we show that problem $$Q|p(j)=\overline{p} (j)-x(j),C(j)\le d,pmtn|W$$ can be solved in $$O(n\log n)$$ time by the decomposition algorithm. To achieve this, we consider Problem LP$$(H,F,K, \mathbf {l},\mathbf {u})$$ that arises at some level of recursion of Procedure Decomp and present a procedure for computing the function $$\widetilde{\varphi }_{K}^{H}:2^{H}\rightarrow \mathbb {R}$$ given by (22). We show that for an arbitrary set $$X\subseteq H$$ the value $$\widetilde{\varphi }_{K}^{H}(X)$$ can be computed in $$O(h)$$ time. For a heavy-element set $$\hat{H}\subseteq H{\setminus } F$$, finding a set $$Y_{*}$$ that is instrumental for set $$\hat{H}$$ also requires $$O(h)$$ time.
Recall that for problem $$Q|p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ the rank function $$\varphi :2^{N}\rightarrow \mathbb {R}$$ is defined by (11), i.e.,
\begin{aligned} \varphi (X)=dS_{\min \{m,|X|\}},\qquad X\in 2^{N}. \end{aligned}
This, together with (30), implies
\begin{aligned} \widetilde{\varphi }_{K}^{H}(X)=u(X)-dS_{\min \{m,k\}}+\min _{Y\in 2^{H}}\left\{ dS_{\min \left\{ m,|Y|+k\right\} }-\lambda (Y)\right\} . \end{aligned}
(32)
The computation of the minimum in the last term in (32) is done differently for the sets $$Y\subseteq H$$ with $$\left| Y\right| \le \hat{h}$$ and with $$\left| Y\right| >\hat{h}$$, where $$\hat{h}$$ is defined by (29), provided that the corresponding sets exist. With $$\mathcal {H}_{v},\,0\le v\le h,$$ defined by (28), introduce
\begin{aligned} \varPhi ^{\prime }=\left\{ \begin{array}{l@{\quad }l} \displaystyle \min \limits _{0\le v\le \hat{h}}\big \{dS_{v+k}-\max _{Y\in \mathcal {H} _{v}}\lambda (Y)\big \}, &{}\mathrm{if}\ \,m>k, \\ +\infty , &{}\mathrm{if}\ \,m\le k, \end{array}\right. \end{aligned}
(33)
and
\begin{aligned} \varPhi ^{\prime \prime }=\left\{ \begin{array}{l@{\quad }l} dS_{m}-\max \{\lambda (Y)\mid Y\in 2^{H},\ |Y|>\hat{h}\}, &{}\mathrm{if}\ \,h>m-k-1, \\ +\infty , &{}\mathrm{if}\ \,h\le m-k-1. \end{array}\right. \end{aligned}
(34)
Then, we can rewrite the last term in (32) as
\begin{aligned} \min _{Y\in 2^{H}}\{dS_{\min \{m,|Y|+k\}}-\lambda (Y)\}=\min \left\{ \varPhi ^{\prime },\varPhi ^{\prime \prime }\right\} . \end{aligned}
Notice that $$\varPhi ^{\prime }=+\infty$$ corresponds to the case that the set $$Y\in \mathcal {H}_{v}$$ does not exist for $$0\le v\le \hat{h}$$ (this happens if $$m\le k$$ or equivalently $$\hat{h}<0$$); $$\varPhi ^{\prime \prime }=+\infty$$ corresponds to the case that the set $$Y\in \mathcal {H}_{v}$$ does not exist for $$v>\hat{h}$$ (this happens if $$h\le m-k-1$$ or equivalently $$\hat{h}=h$$).
Assume $$m>k$$, and let $$\lambda _{v}$$ be the $$v$$-th largest value in the list $$\left( \lambda (j)\mid j\in H\right)$$ for $$v=1,2,\ldots ,\hat{h}$$. It follows that
\begin{aligned} \varPhi ^{\prime }=\min _{0\le v\le \hat{h}}\left\{ dS_{v+k}-\sum _{i=1}^{v}\lambda _{i}\right\} \!. \end{aligned}
(35)
We then assume $$h>m-k-1$$. Since $$\lambda (j)\ge 0$$ for $$j\in H$$, the maximum in the right-hand side of the top line of (34) is achieved for $$Y=H$$, i.e.,
\begin{aligned} \varPhi ^{\prime \prime }=dS_{m}-\lambda (H). \end{aligned}
(36)
Below we describe the procedure that uses Eqs. (35) and (36) for computing the values $$\varPhi ^{\prime }$$ and $$\varPhi ^{\prime \prime }$$. Since the procedure will be used as a subroutine within the recursive Procedure Decomp, here we present it for computing $$\widetilde{\varphi }_{K}^{H}(X)$$ with $$X$$ being a heavy-element set $$\hat{H}$$. In addition, its output contains the set $$Y_{*}$$, an instrumental set for set $$\hat{H}$$.
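A sketch of this computation, written directly from (31), (32), (35) and (36), may look as follows (Python, with our identifiers rather than the formal listing of Procedure CompQr0; the sketch sorts the $$\lambda$$-values for brevity, whereas the $$O(h)$$ bound established below relies on the sorted lists maintained during preprocessing):

```python
def comp_qr0(H, K, X, l, u, d, S, m):
    """Compute tilde-phi_K^H(X) by (32) and an instrumental set Y* for X."""
    h, k = len(H), len(K)
    lam = {j: (u[j] if j in X else l[j]) for j in H}       # lambda of (31)
    h_hat = min(h, m - k - 1)
    phi1, Y1 = float('inf'), set()                         # Phi' of (33)
    if m > k:
        desc = sorted(H, key=lambda j: -lam[j])            # lam_1 >= lam_2 >= ...
        phi1, acc, best_v = d * S[k], 0.0, 0               # v = 0 term of (35)
        for v in range(1, h_hat + 1):
            acc += lam[desc[v - 1]]
            if d * S[v + k] - acc < phi1:
                phi1, best_v = d * S[v + k] - acc, v
        Y1 = set(desc[:best_v])
    phi2, Y2 = float('inf'), set(H)                        # Phi'' of (34)
    if h > m - k - 1:
        phi2 = d * S[m] - sum(lam.values())                # (36)
    phi_min, Y_star = min((phi1, Y1), (phi2, Y2), key=lambda t: t[0])
    return sum(u[j] for j in X) - d * S[min(m, k)] + phi_min, Y_star
```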
Let us analyze the time complexity of Procedure CompQr0. In Step 2, the values $$\lambda _{1},\lambda _{2},\ldots ,\lambda _{\hat{h}}$$ can be found in $$O(h)$$ time by using the list $$(\lambda (j)\mid j\in H)$$, so that the value $$\varPhi ^{\prime }$$ and set $$Y^{\prime }$$ can be computed in $$O(h)$$ time. It is easy to see that $$\varPhi ^{\prime \prime }$$ and $$Y^{\prime \prime }$$ can be obtained in $$O(h)$$ time as well. Hence, the value $$\widetilde{ \varphi }_{K}^{H}(X)$$ and set $$Y_{*}$$ can be found in $$O(h)$$ time.
### Theorem 5
Problem $$Q|p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ can be solved either in $$O(n\log n)$$ time or in $$O(n+m\log m\log n)$$ time.
### Proof
Here, we only present the proof of the running time $$O(n\log n)$$, that is derived if in each level of recursion of Procedure Decomp we use Procedure CompQr0; the proof of the running time $$O(n+m\log m\log n)$$ is given in “Appendix”.
As proved above, Procedure CompQr0 applied to Problem LP$$(H,F,K,\mathbf {l}, \mathbf {u)}$$ takes $$O(h)$$ time. In terms of Theorem 4 on the running time of Procedure Decomp, this implies that $$T_{Y_{*}}(h)=O(h)$$.
In the analysis of the time complexity of Procedure CompQr0, we assume that certain information is given as part of the input. This assumption can be satisfied by an appropriate preprocessing. In particular, when we decompose a problem with a set of job $$H$$ at a certain level of recursion into two subproblems, we may create the sorted lists $$(u(j)\mid j\in H)$$ and $$(l(j)\mid j\in H)$$. This can be done in $$O(h)$$ time, since the sorted lists $$(u(j)\mid j\in N)$$ and $$(l(j)\mid j\in N)$$ are available as a result of the initial preprocessing. Thus, we have that $$T_{\mathrm{Split}}(h)=O(h)$$. Hence, the theorem follows from Theorem 4. $$\square$$
### 5.2 Identical machines, different release dates
In this subsection, we show that problem $$P|r(j),p(j)=\overline{p} (j)-x(j),C(j)\le d,pmtn|W$$ can be solved in $$O(n\log m\log n)$$ time by the decomposition algorithm. To achieve this, we consider Problem LP$$(H,F,K, \mathbf {l},\mathbf {u})$$ that arises at some level of recursion of Procedure Decomp and present a procedure for computing the function $$\widetilde{\varphi }_{K}^{H}:2^{H}\rightarrow \mathbb {R}$$ given by (22). We show that for an arbitrary set $$X\subseteq H$$ the value $$\widetilde{\varphi }_{K}^{H}(X)$$ can be computed in $$O(h\log m)$$ time. For a heavy-element set $$\hat{H}\subseteq H{\setminus } F$$, finding a set $$Y_{*}$$ that is instrumental for set $$\hat{H}$$ also requires $$O(h\log m)$$ time.
Recall that for problem $$P|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ the rank function $$\varphi :2^{N}\rightarrow \mathbb {R}$$ is defined by (13), i.e.,
\begin{aligned} \varphi (X)=d\cdot \min \{m,|X|\}-\sum _{i=1}^{\min \{m,|X|\}}r_{i}(X),\qquad X\in 2^{N}, \end{aligned}
where $$r_{i}(X)$$ denotes the $$i$$-th smallest release date among the jobs of set $$X$$.
This, together with (30), implies that
\begin{aligned} \widetilde{\varphi }_{K}^{H}(X)&= u(X)-\left( d\cdot \min \{m,k\}-\sum _{i=1}^{\min \{m,k\}}r_{i}(K)\right) \nonumber \\&\quad +\,\min _{Y\in 2^{H}}\bigg \{d\cdot {\min \{m,|Y|+k\}} \nonumber \\&\quad -\,\sum _{i=1}^{\min \{m,|Y|+k\}}r_{i}(Y\cup K)-\lambda (Y)\bigg \}, \end{aligned}
(37)
where $$\lambda (j),\,j\in H,$$ are given by (31).
Let $$\hat{h}$$ be defined by (29). Computation of the minimum in the last term in (37) is done differently for sets $$Y\subseteq H$$ with $$\left| Y\right| \le \hat{h}$$ and $$\left| Y\right| >\hat{h}$$. With $$\mathcal {H}_{v},\,0\le v\le h,$$ defined by (28), introduce
\begin{aligned} \varPhi ^{\prime }=\left\{ \begin{array}{l@{\quad }l} \displaystyle \min _{0\le v\le \hat{h}}\left\{ d\cdot (v+k)-\max _{Y\in \mathcal {H}_{v}}\left\{ \sum _{i=1}^{v+k}r_{i}(Y\cup K)+\lambda (Y)\right\} \right\} , &{}\mathrm{if}\ \,m>k, \\ +\infty , &{}\mathrm{if}\ \,m\le k, \end{array}\right. \end{aligned}
(38)
and
\begin{aligned} \varPhi ^{\prime \prime } =\left\{ \begin{array}{l} \displaystyle dm-\max \left\{ \sum _{i=1}^{m}r_{i}(Y\cup K)+\lambda (Y)\ \Big |\ Y\in 2^{H},\ |Y|>\hat{h}\right\} , \\ \qquad \qquad \qquad \qquad \quad \mathrm{if~}h>m-k-1, \\ +\infty , \qquad \qquad \qquad \mathrm{if~}h\le m-k-1. \\ \end{array}\right. \end{aligned}
(39)
Similarly to Sect. 5.1, the values $$\varPhi ^{\prime }$$ and $$\varPhi ^{\prime \prime }$$ are responsible for computing the minimum in the last term in (37) over the sets $$Y\subseteq H$$ with $$\left| Y\right| \le \hat{h}$$ and with $$\left| Y\right| > \hat{h}$$, respectively, provided that the corresponding sets exist. Thus, (37) can be rewritten as
\begin{aligned} \widetilde{\varphi }_{K}^{H}(X)=u(X)-\left( d\cdot \min \{m,k\}-\sum _{i=1}^{\min \{m,k\}}r_{i}(K)\right) +\min \left\{ \varPhi ^{\prime },\varPhi ^{\prime \prime }\right\} . \end{aligned}
(40)
We now explain how to compute the values $$\varPhi ^{\prime }$$ and $$\varPhi ^{\prime \prime }$$. Both computations use the list $$(\widetilde{\lambda }(j)\mid j\in H)$$, where
\begin{aligned} \widetilde{\lambda }(j)=r(j)+\lambda (j),\qquad j\in H. \end{aligned}
(41)
Suppose that $$m>k$$. Computing of $$\varPhi ^{\prime }$$ can be done in a similar manner as in Sect. 5.1. The top line of the formula (38) can be rewritten as
\begin{aligned} \varPhi ^{\prime }&= \min _{0\le v\le \hat{h}}\bigg \{d\cdot (v+k)-\max _{Y\in \mathcal {H}_{v}}\big \{r(Y)+r(K)+\lambda (Y)\big \}\bigg \} \\&= -r(K)+\min _{0\le v\le \hat{h}}\bigg \{d\cdot (v+k)-\max _{Y\in \mathcal {H} _{v}}\widetilde{\lambda }(Y)\bigg \}. \end{aligned}
For $$v,~1\le v\le \hat{h}$$, let $$\widetilde{\lambda }_{v}$$ be the $$v$$-th largest value among the numbers $$\widetilde{\lambda }(j),\,j\in H$$. Then, we have
\begin{aligned} \varPhi ^{\prime }=-r(K)+\min _{0\le v\le \hat{h}}\bigg \{d\cdot (v+k)-\sum _{i=1}^{v}\widetilde{\lambda }_{i}\bigg \}. \end{aligned}
(42)
We now turn to computing the value $$\varPhi ^{\prime \prime }$$. We may assume $$\hat{h}<h$$, i.e., $$h>m-k-1$$, since otherwise $$\varPhi ^{\prime \prime }=+\infty$$. For simplicity of the description, we assume, without loss of generality, that the jobs of set $$H\cup K$$ are renumbered in such a way that
\begin{aligned} H\cup K=\left\{ 1,2,\ldots ,h+k\right\} ,\qquad r(1)\le r(2)\le \cdots \le r(h+k). \end{aligned}
(43)
For $$t=m,m+1,\ldots ,h+k$$, introduce
\begin{aligned} K[t]&= \{j\in K\mid j\le t\}, \nonumber \\ \mathcal {H}^{z}[t]&= \left\{ Y\in 2^{H}\mid Y\subseteq \{1,2,\ldots ,t\},\ |Y|+|K[t]|=z\right\} \!,\quad \left| K[t]\right| \le z\le m. \end{aligned}
(44)
We define $$\bar{t}$$ to be the minimum $$t$$ with $$|K[t]|=m$$ if $$k\ge m$$; otherwise, let $$\bar{t}=h+k$$. Note that $$\bar{t}\ge m,$$ and $$\mathcal {H} ^{m}[t]\ne \emptyset$$ if $$m\le t\le \bar{t}$$.
The following lemma is useful for computing the value $$\varPhi ^{\prime \prime }$$ efficiently.
### Lemma 4
Let $$Y^{\prime \prime }\in 2^{H}$$ be a set satisfying $$|Y^{\prime \prime }|>\hat{h}$$ and
\begin{aligned} \sum _{i=1}^{m}r_{i}(Y^{\prime \prime }\cup K)+\lambda (Y^{\prime \prime })=\max \left\{ \sum _{i=1}^{m}r_{i}(Y\cup K)+\lambda (Y)\ \bigg |\ Y\in 2^{H},\ |Y|>\hat{h}\right\} . \end{aligned}
(45)
Let $$t_{*}\in H\cup K$$ be a job such that $$m\le t_{*}\le \bar{t}$$ and the set $$\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}\}$$ contains exactly $$m$$ elements. Define the sets $$Y_{1}^{\prime \prime }=\{j\in Y^{\prime \prime }\mid j\le t_{*}\}$$ and $$Y_{2}^{\prime \prime }=\{j\in Y^{\prime \prime }\mid j>t_{*}\}$$. Then the following properties hold:
\begin{aligned} \begin{aligned} \mathrm{(i)}&\quad \sum \limits _{i=1}^{m}r_{i}(Y^{\prime \prime }\cup K)+\lambda (Y^{\prime \prime })=\widetilde{\lambda }(Y_{1}^{\prime \prime })+r(K[t_{*}])+\lambda (Y_{2}^{\prime \prime }), \\ \mathrm{(ii)}&\quad Y_{1}^{\prime \prime }\in \mathcal {H}^{m}[t_{*}]\, \text{ and } \,\widetilde{\lambda }(Y_{1}^{\prime \prime })=\max \{\widetilde{\lambda } (Y)\mid Y\in \mathcal {H}^{m}[t_{*}]\}, \\ \mathrm{(iii)}&\quad Y_{2}^{\prime \prime }=\left\{ j\in H\mid j>t_{*}\right\} . \end{aligned}\qquad \end{aligned}
### Proof
First, notice that set $$Y^{\prime \prime }\cup K$$ contains at least $$\hat{h}+1+k\ge m$$ jobs, so that job $$t_{*}$$ exists and $$m\le t_{*}\le h+k$$. Notice that job $$t_{*}$$ might belong to set $$H{\setminus } Y^{\prime \prime }$$, and that job $$t_{*}$$ is not necessarily unique. Indeed, if, e.g., job $$t_{*}+1\in H{\setminus } Y^{\prime \prime }$$, then $$\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}\}=\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}+1\}$$.
We need to show that there exists a $$t_{*}$$ that satisfies $$t_{*}\le \bar{t}$$. To prove this, we only need to consider the case that $$k\ge m$$, since otherwise by definition $$\bar{t}=h+k$$. For $$k\ge m$$, let $$t_{*}$$ be the smallest value of $$t$$ such that the equality $$|\{j\in Y^{\prime \prime }\cup K\mid j\le t\}|=m$$ holds. Since $$|\{j\in K\mid j\le t_{*}\}|\le m$$, we have $$t_{*}\le \bar{t}$$ by the definition of $$\bar{t}$$.
Take a $$t_{*}$$ that satisfies the lemma conditions. For an arbitrarily chosen set $$Z_{1}\in \mathcal {H}^{m}[t_{*}]$$, define set $$Z\in 2^{H}$$ as $$Z=Z_{1}\cup Y_{2}^{\prime \prime }$$. Notice that $$\{j\in Z\cup K\mid j\le t_{*}\}=Z_{1}\cup K[t_{*}]$$. This implies
\begin{aligned} \sum _{i=1}^{m}r_{i}(Z\cup K)+\lambda (Z)&= r(Z_{1})+r(K[t_{*}])+\lambda (Z_{1})+\lambda (Y_{2}^{\prime \prime })\nonumber \\&= \widetilde{\lambda } (Z_{1})+r(K[t_{*}])+\lambda (Y_{2}^{\prime \prime }). \end{aligned}
(46)
Since $$\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}\}=Y_{1}^{\prime \prime }\cup K[t_{*}]$$ and $$|\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}\}|=m$$, we have $$Y_{1}^{\prime \prime }\in \mathcal {H}^{m}[t_{*}]$$. Applying (46) with $$Z_{1}=Y_{1}^{\prime \prime }$$, we obtain
\begin{aligned} \sum _{i=1}^{m}r_{i}(Y^{\prime \prime }\cup K)+\lambda (Y^{\prime \prime })= \widetilde{\lambda }(Y_{1}^{\prime \prime })+r(K[t_{*}])+\lambda (Y_{2}^{\prime \prime }), \end{aligned}
i.e., property (i) holds.
Since the maximum in (45) is achieved for $$Y=Y^{\prime \prime }$$, the inequality
\begin{aligned} \sum _{i=1}^{m}r_{i}(Y^{\prime \prime }\cup K)+\lambda (Y^{\prime \prime })\ge \sum _{i=1}^{m}r_{i}(Z\cup K)+\lambda (Z ) \end{aligned}
holds for any set $$Z=Z_{1}\cup Y_{2}^{\prime \prime }$$ with $$Z_{1}\in \mathcal {H}^{m}[t_{*}]$$. Then (46) and property (i) imply that $$\widetilde{\lambda }(Y_{1}^{\prime \prime })\ge \widetilde{\lambda } (Z_{1})$$. Hence, property (ii) holds.
Since $$\lambda (j)\ge 0$$ for $$j\in H$$, we should include all jobs $$j\in H$$ with $$j>t_{*}$$ into set $$Y_{2}^{\prime \prime }$$ to achieve the maximum in (45), i.e., property (iii) holds. $$\square$$
For each $$t,\,m\le t\le \bar{t}$$, define
\begin{aligned} \eta _{1}[t]=\max _{Y\in \mathcal {H}^{m}[t]}\widetilde{\lambda }(Y),\qquad \rho [t]=r(K[t]),\qquad \eta _{2}[t]=\sum _{j\in H,\,j>t}\lambda (j). \end{aligned}
We see from Lemma 4 that
\begin{aligned} \varPhi ^{\prime \prime }=dm-\max _{m\le t\le \bar{t}}\{\eta _{1}[t]+\rho [t]+\eta _{2}[t]\} \end{aligned}
(47)
holds. We now show how to compute the values $$\eta _{1}[t],\rho [t]$$, and $$\eta _{2}[t]$$ efficiently.
For $$t=m$$, define
\begin{aligned} Q_{m}=\left\{ j\in H\mid j\le m\right\} . \end{aligned}
(48)
Notice that
\begin{aligned} \max \{\widetilde{\lambda }(Y)\mid Y\in \mathcal {H}^{m}[m]\}=\max \left\{ \widetilde{\lambda }(Y)\mid Y\subseteq Q_{m},\ |Y|+\left| K[m]\right| =m\right\} =\widetilde{\lambda }(Q_{m}). \end{aligned}
Thus, we have
\begin{aligned} \eta _{1}[m]=\widetilde{\lambda }(Q_{m}),\qquad \rho [m]=r(K[m]),\qquad \eta _{2}[m]=\sum _{j\in H,\,j>m}\lambda (j). \end{aligned}
(49)
### Lemma 5
Let $$t$$ be an integer with $$m<t\le \bar{t}$$.
1. (i)
Given the values $$\rho [t-1]$$ and $$\eta _{2}[t-1],\, \rho [t]$$ and $$\eta _{2}[t]$$ can be obtained as
\begin{aligned} \rho [t]=\left\{ \begin{array}{l@{\quad }l} \rho [t-1], &{}\mathrm{if}\ \,t\in H, \\ \rho [t-1]+r(t), &{}\mathrm{if}\ \,t\in K, \end{array}\right. \quad \eta _{2}[t]=\left\{ \begin{array}{l@{\quad }l} \eta _{2}[t-1]-\lambda (t), &{}\mathrm{if}\ \,t\in H, \\ \eta _{2}[t-1], &{}\mathrm{if}\ \,t\in K. \end{array}\right. \end{aligned}
(50)
2. (ii)
Given a set $$Q\in \mathcal {H}^{m}[t-1]$$ with $$\eta _{1}[t-1]= \widetilde{\lambda }(Q)$$, the value $$\eta _{1}[t-1]$$ and job $$z\in Q$$ such that $$\widetilde{\lambda }(z)=\min _{j\in Q}\widetilde{\lambda }(j)$$, the value $$\eta _{1}[t]$$ can be obtained as
\begin{aligned} \eta _{1}[t]=\left\{ \begin{array}{l@{\quad }l} \eta _{1}[t-1], &{}\displaystyle \mathrm{if}\ \,t\in H,\ \widetilde{\lambda } (z)\ge \widetilde{\lambda }(t), \\ \displaystyle \eta _{1}[t-1]-\widetilde{\lambda }(z)+\widetilde{\lambda }(t), &{}\displaystyle \mathrm{if}\ \,t\in H,\ \widetilde{\lambda }(z)<\widetilde{ \lambda }(t), \\ \displaystyle \eta _{1}[t-1]-\widetilde{\lambda }(z), &{}\mathrm{if}\ \,t\in K. \end{array}\right. \end{aligned}
(51)
### Proof
We have $$K[t]=K[t-1]$$ if $$t\in H$$ and $$K[t]=K[t-1]\cup \{t\}$$ if $$t\in K$$. Hence, the first equation in (50) follows. The second equation in (50) is immediate from the definition of $$\eta _{2}$$. The Eq. (51) follows from the observation that $$\eta _{1}[t]$$ is equal to the sum of $$m-|K[t]|$$ largest numbers in the list $$\left( \widetilde{\lambda }(j)\mid j\in H,\ j\le t\right)$$. $$\square$$
Below we describe the procedure that uses Eqs. (42) and (47) for computing the values $$\varPhi ^{\prime }$$ and $$\varPhi ^{\prime \prime }$$. As in Sect. 5.1, the procedure outputs $$\widetilde{\varphi }_{K}^{H}(X)$$ for $$X=\hat{H}$$ and an instrumental set $$Y_{*}$$ for set $$\hat{H}$$.
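The core of the procedure is the left-to-right scan that evaluates (47) via the updates (50) and (51); a sketch of this scan is given below (Python, our identifiers; the computation of $$\varPhi ^{\prime }$$ by (42) and the recovery of the maximizing set in Step 4-5 are omitted):

```python
import heapq

def phi_double_prime(H, K, r, lam, m, d):
    """Compute Phi'' of (47); assumes h > m - k - 1, so Phi'' is finite.

    Jobs of H u K are renumbered 1, ..., h + k in non-decreasing order
    of release dates, as in (43); H and K are disjoint sets of indices.
    """
    lam_t = {j: r[j] + lam[j] for j in H}           # tilde-lambda of (41)
    n_all = len(H) + len(K)
    t_bar, cnt = n_all, 0                           # t_bar defined before Lemma 4
    if len(K) >= m:
        for t in range(1, n_all + 1):
            if t in K:
                cnt += 1
                if cnt == m:
                    t_bar = t
                    break
    Q = [lam_t[j] for j in H if j <= m]             # Q_m of (48), as a min-heap
    heapq.heapify(Q)
    eta1 = sum(Q)                                   # initialization (49)
    rho = sum(r[j] for j in K if j <= m)
    eta2 = sum(lam[j] for j in H if j > m)
    best = eta1 + rho + eta2
    for t in range(m + 1, t_bar + 1):               # updates (50) and (51)
        if t in H:
            eta2 -= lam[t]
            if Q and Q[0] < lam_t[t]:               # replace the smallest tilde-lambda
                eta1 += lam_t[t] - heapq.heapreplace(Q, lam_t[t])
        else:                                       # t in K: |K[t]| grows by one
            rho += r[t]
            eta1 -= heapq.heappop(Q)
        best = max(best, eta1 + rho + eta2)
    return d * m - best                             # (47)
```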
Now we analyze the running time of this procedure. In Steps 1 and 2 we compute the value $$\varPhi ^{\prime }$$ and find set $$Y^{\prime }$$. Step 1 can be done in constant time. Step 2-1 can be done by selecting the $$\hat{h}$$ largest numbers in the list $$(\widetilde{\lambda }(j)\mid j\in H)$$ in $$O(h)$$ time and then sorting them in $$O(\hat{h}\log \hat{h})$$ time. Since Step 2-2 can be done in $$O(k+\hat{h})$$ time, Step 2 requires $$O(k+h+\hat{h}\log \hat{h})=O(k+h\log \hat{h})=O(k+h\log m)$$ time in total.
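For illustration, Step 2-1 can be mimicked in a few lines (a sketch only; heapq.nlargest runs in $$O(h\log \hat{h})$$ rather than the linear-time selection assumed in the analysis, which is immaterial here):

```python
import heapq

lam_tilde_H = [5.0, 1.5, 7.25, 0.0, 3.5]      # hypothetical lambda~(j), j in H
h_hat = 3
largest = heapq.nlargest(h_hat, lam_tilde_H)  # returned in non-increasing order
# prefix[v] = max of lambda~(Y) over Y in H_v, for v = 0, 1, ..., h_hat
prefix = [0.0]
for x in largest:
    prefix.append(prefix[-1] + x)
```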
In Steps 3 and 4 we compute the value $$\varPhi ^{\prime \prime }$$ and find set $$Y^{\prime \prime }$$. Step 3 can also be done in constant time. We assume that both $$\left( r(j)\mid j\in H\right)$$ and $$\left( r(j)\mid j\in K\right)$$ are given as sorted lists; this can be easily satisfied by appropriate preprocessing. Then, Step 4-1 can be done in $$O(h+k)$$ time by using merge sort. Step 4-2 can be done in $$O(h+k)$$ time. In Step 4-3, we implement $$Q$$ as a heap for computational efficiency. Initially $$Q=Q_{m}$$ consists of at most $$m$$ elements, and to initialize the heap $$Q$$ takes $$O(h+m\log m)$$ time. The number of elements in the heap does not increase, so that each iteration in Step 4-3 can be done in $$O(\log m)$$ time, which implies that Step 4-3 requires $$O((h+k)\log m)$$ time. Step 4-4 can be done in $$O(h+k)$$ time. Step 4-5 is needed for finding the set $$Y^{\prime \prime }$$ and is implemented as a partial rerun of Step 4-3 in $$O((h+k)\log m)$$ time.
Finally, we compute the value $$\widetilde{\varphi }_{K}^{H}(X)$$ in Step 5. We may assume that the value $$u(X)$$ in Step 5 is given in advance. The value $$\sum _{i=1}^{\min \{m,k\}}r_{i}(K)$$ can be computed in $$O(k)$$ time, since a sorted list $$\left( r(j)\mid j\in K\right)$$ is available. Hence, Step 5 can be done in $$O(k)$$ time. In total, Procedure CompPrj requires $$O((h+k)\log m)$$ time. In particular, the procedure runs in $$O(h\log m)$$ time if $$h\ge k$$.
In the rest of this subsection, we show that a slightly modified version of Procedure CompPrj can also be run in $$O(h\log m)$$ time for $$h<k$$.
First, consider the case that $$h\ge m$$. Then, we have $$k>h\ge m$$. Let $$K_{m}$$ be a set of $$m$$ jobs in $$K$$ with $$m$$ smallest release dates. It is easy to see that the jobs in $$K{\setminus } K_{m}$$ do not affect the values $$r_{i}(K)$$ and $$r_{i}(Y\cup K)$$, i.e., it holds that
\begin{aligned} r_{i}(K)=r_{i}(K_{m}),\quad r_{i}(Y\cup K)=r_{i}(Y\cup K_{m}),\qquad i=1,2,\ldots ,m,\quad Y\in 2^{H}. \end{aligned}
It follows that in the formula (37) for $$\widetilde{ \varphi }_{K}^{H}(X)$$, the value in the right-hand side remains the same even if we replace $$K$$ and $$k$$ with $$K_{m}$$ and $$m$$, respectively. Making the same replacement in Procedure CompPrj, we deduce that it will run in $$O((h+m)\log m)=O(h\log m)$$ time, provided that set $$K_{m}$$ is given in advance.
We finally consider the case that $$h<m$$. From the discussion above, we may assume that $$k\le m$$. For any $$Y\in 2^{H}$$, the contribution of the release dates into the right-hand side of (37) is equal to $$\sum _{i=1}^{k}r_{i}(K)-\sum _{i=1}^{\min \{m,|Y|+k\}}r_{i}(Y\cup K)$$. Let $$k^{\prime }=m-h$$ and $$K^{\prime }$$ be the set of jobs in $$K$$ with $$k^{\prime }$$ smallest release dates among $$r(j),\,j\in K$$. Since $$|Y|\le h<m$$, each of the values $$r(j),\,j\in K^{\prime }$$, contributes to the sum $$\sum _{i=1}^{\min \{m,|Y|+k\}}r_{i}(Y\cup K)$$. Hence, it follows that
\begin{aligned} \sum _{i=1}^{k}r_{i}(K)-\sum _{i=1}^{\min \{m,|Y|+k\}}r_{i}(Y\cup K)&= \sum _{i=1}^{k-k^{\prime }}r_{i}\left( K{\setminus } K^{\prime }\right) \\&\quad -\,\sum _{i=1}^{\min \{m,|Y|+(k-k^{\prime })\}}r_{i}\left( Y\cup (K{\setminus } K^{\prime })\right) . \end{aligned}
Thus, in formula (37), the value in the right-hand side remains the same if we replace $$K$$ and $$k$$ with $$K{\setminus } K^{\prime }$$ and $$k-k^{\prime }$$, respectively. Making the same replacement in Procedure CompPrj, we deduce that it will run in $$O((h+k-k^{\prime })\log m)$$ time, provided that the set $$K{\setminus } K^{\prime }$$ is given in advance. Since $$k-k^{\prime }=k-(m-h)\le h$$ holds for $$k\le m$$, the running time of Procedure CompPrj is $$O(h\log m)$$.
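Both reductions amount to trimming the sorted list of release dates of $$K$$ before Procedure CompPrj is run; a minimal sketch under the stated assumptions (illustrative names, input sorted in non-decreasing order):

```python
def trim_K(rK_sorted, h, m):
    """Return the release dates of the set that replaces K in CompPrj."""
    rK = rK_sorted
    if len(rK) > m:        # only the m smallest release dates of K can matter,
        rK = rK[:m]        # so K is replaced by K_m; now k <= m
    if h < m:              # drop K', the k' = m - h smallest dates; the sums
        rK = rK[m - h:]    # change exactly as in the identity above
    return rK              # what remains suffices to evaluate (37) unchanged
```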
We are now ready to prove the main statement regarding problem $$P|r(j),\,p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$.
### Theorem 6
Problem $$P|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ can be solved in $$O(n\log m\log n)$$ time.
### Proof
As proved above, Procedure CompPrj applied to Problem LP$$(H,F,K,\mathbf {l},\mathbf {u})$$ takes $$O(h\log m)$$ time. In terms of Theorem 4 on the running time of Procedure Decomp, we have proved that $$T_{Y_{*}}(h)=O(h\log m)$$.
In the analysis of the time complexity of Procedure CompPrj, we assume that certain information is given as part of the input. This assumption can be satisfied by an appropriate preprocessing, when we decompose a problem at a certain level of recursion into two subproblems, based on the found set $$Y_{*}$$. It is not hard to see that this can be done in $$O(h\log m)$$ time, i.e., we have $$T_{\mathrm{Split}}(h)=O(h\log m)$$. Hence, the theorem follows from Theorem 4. $$\square$$
### 5.3 Uniform machines, different release dates
In this subsection, we show that problem $$Q|r(j),p(j)=\overline{p} (j)-x(j),C(j)\le d,pmtn|W$$ can be solved in $$O(nm\log n)$$ time by the decomposition algorithm. To achieve this, we consider Problem LP$$(H,F,K, \mathbf {l},\mathbf {u})$$ that arises at some level of recursion of Procedure Decomp and present a procedure for computing the function $$\widetilde{\varphi }_{K}^{H}:2^{H}\rightarrow \mathbb {R}$$ given by (22). We show that for an arbitrary set $$X\subseteq H$$ the value $$\widetilde{\varphi }_{K}^{H}(X)$$ can be computed in $$O(hm)$$ time. For a heavy-element set $$\hat{H}\subseteq H{\setminus } F$$, finding a set $$Y_{*}$$ that is instrumental for set $$\hat{H}$$ also requires $$O(hm)$$ time.
Recall that for problem $$Q|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ the rank function $$\varphi :2^{N}\rightarrow \mathbb {R}$$ is defined by (12), i.e.,
\begin{aligned} \varphi (X)=dS_{\min \left\{ m,\left| X\right| \right\} }-\sum _{i=1}^{\min \left\{ m,\left| X\right| \right\} }s_{i}r_{i}(X), \end{aligned}
where $$r_{i}\left( X\right)$$ denotes the $$i$$-th smallest release dates among the jobs of set $$X$$. This, together with (30), implies that
\begin{aligned} \widetilde{\varphi }_{K}^{H}(X)&= u(X)-\bigg (dS_{\min \{m,k\}}-\sum _{i=1}^{\min \{m,k\}}s_{i}r_{i}(K)\bigg ) \nonumber \\&\quad +\,\min _{Y\in 2^{H}}\Bigg \{ \bigg (dS_{\min \{m,|Y|+k\}}-\sum _{i=1}^{\min \{m,|Y|+k\}}s_{i}r_{i}(Y\cup K)\bigg )-\,\lambda (Y)\Bigg \},\qquad \end{aligned}
(52)
where $$\lambda (j),\,j\in H,$$ are given by (31).
Let $$\hat{h}$$ be defined by (29). Computation of the minimum in the last term in (52) is done differently for sets $$Y\subseteq H$$ with $$\left| Y\right| \le \hat{h}$$ and $$\left| Y\right| >\hat{h}$$. With $$\mathcal {H}_{v},\,0\le v\le h,$$ defined by (28), introduce
\begin{aligned} \varPhi ^{\prime }=\left\{ \begin{array}{l@{\quad }l} \displaystyle \min \limits _{0\le v\le \hat{h}}\left\{ dS_{v+k}-\max \limits _{Y\in \mathcal {H}_{v}}\left\{ \sum _{i=1}^{v+k}s_{i}r_{i}(Y\cup K)+\lambda (Y)\right\} \right\} , &{} \mathrm{if}\ \,m>k, \\ +\infty , &{}\mathrm{if}\ \,m\le k, \end{array}\right. \end{aligned}
(53)
and
\begin{aligned} \varPhi ^{\prime \prime }=\left\{ \begin{array}{l} \displaystyle dS_{m}-\max \bigg \{\sum _{i=1}^{m}s_{i}r_{i}(Y\cup K)+\lambda (Y)\ \bigg |\ Y\in 2^{H},\ |Y|>\hat{h}\bigg \}, \\ \qquad \qquad \mathrm{if}\ \,h>m-k-1, \\ \\ +\infty , \qquad \mathrm{if}\ \,h\le m-k-1. \\ \end{array}\right. \end{aligned}
(54)
Thus, (52) can be rewritten as
\begin{aligned} \widetilde{\varphi }_{K}^{H}(X)=u(X)-\left( dS_{\min \{m,k\}}-\sum _{i=1}^{\min \{m,k\}}s_{i}r_{i}(K)\right) +\min \left\{ \varPhi ^{\prime },\varPhi ^{\prime \prime }\right\} \!. \end{aligned}
(55)
We explain how to compute the values $$\varPhi ^{\prime }$$ and $$\varPhi ^{\prime \prime }$$. As in Sect. 5.2, for simplicity of the description, we assume, without loss of generality, that the jobs are renumbered so that (43) holds.
In order to compute $$\varPhi ^{\prime }$$, for $$v$$ and $$t$$ such that $$0\le v\le \hat{h}$$ and $$1\le t\le h+k$$, define
\begin{aligned} \begin{aligned} \mathcal {H}_{v}[t]&=\left\{ Y\in \mathcal {H}_{v}\mid Y\subseteq \left\{ 1,2,\ldots ,t\right\} \right\} , \\ \xi _{v}[t]&=\max _{Y\in \mathcal {H}_{v}[t]}\left\{ \sum _{i=1}^{v+k}s_{i}r_{i}(Y\cup K)+\lambda (Y)\right\} , \end{aligned} \end{aligned}
(56)
where $$\xi _{v}[t]$$ is set to $$-\infty$$ if $$\mathcal {H}_{v}[t]=\emptyset$$. Then, we have
\begin{aligned} \varPhi ^{\prime }=\min _{0\le v\le \hat{h}}\left\{ dS_{v+k}-\xi _{v}[h+k]\right\} . \end{aligned}
(57)
Notice that all $$k$$ jobs of set $$K$$ and $$v$$ jobs of set $$Y\in \mathcal {H}_{v}[t]$$ contribute into $$\sum _{i=1}^{v+k}s_{i}r_{i}(Y\cup K)$$. The required values $$\xi _{v}[t]$$ can be computed by a dynamic programming algorithm. Assume that for the current numbering of the jobs in $$H\cup K$$, the jobs in set $$K$$ get the numbers $$j_{1},j_{2},\ldots ,j_{k}$$, so that $$r\left( j_{1}\right) \le \cdots \le r\left( j_{k}\right)$$.
For $$v=0$$, notice that $$\mathcal {H}_{0}[t]=\left\{ \emptyset \right\}$$, so that in accordance with (56) we compute
\begin{aligned} \xi _{0}\left[ t\right] =\sum _{i=1}^{k}s_{i}r(j_{i}),\qquad t=1,\ldots ,h+k. \end{aligned}
(58)
If job $$1$$ belongs to set $$H$$, then $$\mathcal {H}_{1}[1]=\left\{ \left\{ 1\right\} \right\} ;$$ otherwise $$\mathcal {H}_{1}[1]=\emptyset$$. Besides, $$\mathcal {H}_{v}[1]=\emptyset$$ for each $$v\ge 2$$. Suppose that for some value of $$t,\,1\le t\le h+k$$, the sets $$\mathcal {H}_{v}[\tau ]$$ have been identified for all $$v$$ and $$\tau ,~0\le v\le \hat{h},\,1\le \tau \le t-1$$. Then
\begin{aligned} \mathcal {H}_{v}[t]=\left\{ \begin{array}{l@{\quad }l} \mathcal {H}_{v}[t-1]\cup \left\{ Y\cup \left\{ t\right\} \mid Y\in \mathcal {H }_{v-1}[t-1]\right\} , &{}\mathrm{if}~t\in H, \\ \mathcal {H}_{v}[t-1] &{} \mathrm{if}~t\in K. \end{array}\right. \end{aligned}
(59)
Given a job $$t\in H$$, let us determine the position of job $$t$$ relative to the jobs of set $$K$$. If $$r(t)>r\left( j_{k}\right)$$, then define $$\ell _{t}=k+1;$$ otherwise, set $$\ell _{t}$$ to be equal to $$\ell$$ such that for job $$j_{\ell }\in K$$ we have that $$j_{\ell -1}<t<j_{\ell }$$. The values of $$\ell _{t}$$ can be found for all $$t\in H$$ in $$O(h+k)$$ time by scanning the sorted sequence of jobs of set $$H\cup K$$.
For some $$t\in H$$ and $$v,\,1\le v\le \hat{h}$$, assume that we have found the value
\begin{aligned} \xi _{v-1}[t-1]=\sum _{i=1}^{v+k-1}s_{i}r_{i}\left( \bar{Y}\cup K\right) +\lambda (\bar{Y}), \end{aligned}
where $$\bar{Y}\in \mathcal {H}_{v-1}[t-1]$$. Take $$\ell =\ell _{t}$$.
If $$\ell =k+1$$, then job $$t$$ has the largest release date among the jobs of set $$\bar{Y}\cup K\cup \left\{ t\right\}$$, so that
\begin{aligned} \xi _{v}[t]&= \max \left\{ \xi _{v}[t-1],\xi _{v-1}[t-1]+s_{k+v}r(t)+\lambda (t)\right\} \\&= \max \left\{ \xi _{v}[t-1],\xi _{v-1}[t-1]+s_{\ell +v-1}r(t)+\lambda (t)\right\} . \end{aligned}
If $$\ell \le k$$, then among jobs $$j\in \bar{Y}\cup K$$ such that $$j\le j_{\ell }$$, there are $$v-1$$ jobs of set $$H$$ and $$\ell$$ jobs of set $$K$$, i.e., job $$j_{\ell }$$ has the $$(\ell +v-1)$$-th smallest release date in $$\bar{Y}\cup K$$. We deduce that the total contribution of the jobs $$j_{\ell },j_{\ell +1},\ldots ,j_{k}$$ into $$\sum _{i=1}^{v+k-1}s_{i}r_{i} \left( \bar{Y}\cup K\right)$$ is equal to
\begin{aligned} \beta \left( \ell ,v-1\right) =\sum _{i=\ell }^{k}s_{v+i-1}r(j_{i}). \end{aligned}
For computing $$\xi _{v}[t]$$, we need to find a set $$\bar{Y}_{+}\in \mathcal {H }_{v}[t]$$ such that
\begin{aligned} \xi _{v}[t]=\sum _{i=1}^{v+k}s_{i}r_{i}\left( \bar{Y}_{+}\cup K\right) +\lambda (\bar{Y}_{+}). \end{aligned}
According to (59), if $$\bar{Y}_{+}$$ is sought in set $$\mathcal {H}_{v}[t-1]$$, then $$\xi _{v}[t]=\xi _{v}[t-1]$$. Otherwise, it is sought in the sets obtained from sets of $$\mathcal {H}_{v-1}[t-1]$$ by including job $$t$$. In the latter case, set $$\bar{Y}_{+}$$ can be found based on set $$\bar{Y}$$ and on those changes that are caused by the insertion of job $$t$$. As a result of this insertion, job $$t$$ has the $$(\ell +v-1)$$-th smallest release date in $$\bar{Y}\cup K\cup \left\{ t\right\}$$, so that it will contribute $$s_{\ell +v-1}r(t)+\lambda (t)$$ into $$\xi _{v}[t]$$. Notice that all jobs of set $$K$$ continue making contributions, since $$v<m-k$$. The new joint contribution of jobs $$j_{\ell },j_{\ell +1},\ldots ,j_{k}$$ becomes
\begin{aligned} \beta \left( \ell ,v\right) =\sum _{i=\ell }^{k}s_{v+i}r(j_{i}). \end{aligned}
Therefore, we deduce:
\begin{aligned} \xi _{v}[t]=\max \left\{ \xi _{v}[t-1],\xi _{v-1}[t-1]+\beta \left( \ell ,v\right) -\beta \left( \ell ,v-1\right) +s_{\ell +v-1}r(t)+\lambda (t)\right\} . \end{aligned}
(60)
All required partial sums $$\beta \left( \ell ,v\right)$$ can be found at the preprocessing stage by computing
\begin{aligned} \beta \left( k+1,v\right) =0,\qquad v=0,\ldots ,\hat{h}, \end{aligned}
(61)
followed by computing all $$\beta \left( \ell ,v\right)$$ for $$v,\,0\le v\le \hat{h}$$ and $$\ell ,~\ell =k-1,k-2,\ldots ,1$$ by
\begin{aligned} \beta \left( \ell ,v\right) =\beta \left( \ell +1,v\right) +s_{v+\ell }r(j_{\ell }). \end{aligned}
(62)
Notice that for $$\ell =k+1$$ both $$\beta \left( \ell ,v\right) =\beta \left( \ell ,v-1\right) =0$$, so that the recursive formula (60) is valid for $$\ell =k+1$$ as well.
Applying (60) for $$t,\,1\le t\le h+k$$, and $$v,\,1\le v\le m-k$$ with the initial condition (58), we may find all values $$\xi _{v}[t]$$ needed for computing $$\varPhi ^{\prime }$$ by (57).
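Putting (58) and (60)–(62) together, the whole table $$\xi _{v}[t]$$ is filled in $$O(\hat{h}(h+k))$$ time. The following Python sketch is one possible transcription (not from the paper; names and the data layout are illustrative, and the speed list is assumed 1-indexed, non-increasing, and long enough, i.e., with at least $$\hat{h}+k$$ usable entries):

```python
NEG = float("-inf")

def compute_xi(jobs, s, lam, rK, h_hat):
    """Fill xi[v][t] by (58) and (60)-(62).

    jobs: [(release_date, is_H), ...] for positions 1..h+k, sorted by release;
    s:    speeds with s[1] >= s[2] >= ... (s[0] unused);
    lam:  dict mapping a position t in H to lambda(t);
    rK:   release dates r(j_1) <= ... <= r(j_k) of the jobs of K.
    """
    n, k = len(jobs), len(rK)
    # ell[t] for t in H: position of job t relative to the jobs of K.
    ell, seen_K = {}, 0
    for t in range(1, n + 1):
        if jobs[t - 1][1]:
            ell[t] = seen_K + 1
        else:
            seen_K += 1
    # Partial sums beta(l, v), computed backwards by (61)-(62).
    beta = [[0.0] * (h_hat + 1) for _ in range(k + 2)]
    for l in range(k, 0, -1):
        for v in range(h_hat + 1):
            beta[l][v] = beta[l + 1][v] + s[v + l] * rK[l - 1]
    # Initial condition (58) for v = 0; -inf elsewhere at t = 0.
    xi0 = sum(s[i] * rK[i - 1] for i in range(1, k + 1))
    xi = [[xi0 if v == 0 else NEG for _ in range(n + 1)] for v in range(h_hat + 1)]
    for t in range(1, n + 1):
        r_t, in_H = jobs[t - 1]
        for v in range(1, h_hat + 1):
            if in_H:
                l = ell[t]                      # (60), valid for l = k + 1 as well
                cand = (xi[v - 1][t - 1] + beta[l][v] - beta[l][v - 1]
                        + s[l + v - 1] * r_t + lam[t])
                xi[v][t] = max(xi[v][t - 1], cand)
            else:
                xi[v][t] = xi[v][t - 1]         # (59): H_v[t] = H_v[t-1] for t in K
    # Phi' = min over 0 <= v <= h_hat of ( d * S_{v+k} - xi[v][n] ), cf. (53).
    return xi
```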
We now consider the value $$\varPhi ^{\prime \prime }$$. It is assumed that $$\hat{h}<h$$, i.e., $$h+k\ge m$$. Suppose that we know the set $$Y^{\prime \prime }\in 2^{H}$$ such that $$|Y^{\prime \prime }|>\hat{h}$$ and
\begin{aligned} \sum _{i=1}^{m}s_{i}r_{i}(Y^{\prime \prime }\cup K)+\lambda (Y^{\prime \prime })=\max \left\{ \sum _{i=1}^{m}s_{i}r_{i}(Y\cup K)+\lambda (Y)\ \bigg |\ Y\in 2^{H},\ |Y|>\hat{h}\right\} . \end{aligned}
(63)
Similarly to Sect. 5.2, for $$t,\,1\le t\le h+k$$, introduce sets $$K\left[ t\right]$$ and $$\mathcal {H}^{z}[t]$$ of the form (44). Let $$t_{*}\in H\cup K$$ be the job such that the set $$\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}\}$$ contains exactly $$m$$ elements. Since the jobs are numbered in non-decreasing order of the release dates, the set $$\{j\in Y^{\prime \prime }\cup K\mid j\le t_{*}\}$$ contains the jobs in $$Y^{\prime \prime }\cup K$$ with $$m$$ smallest release dates.
Putting $$Y_{1}^{\prime \prime }=\{j\in Y^{\prime \prime }\mid j\le t_{*}\}\in \mathcal {H}^{m}[t_{*}]$$, we have
\begin{aligned} \sum _{i=1}^{m}s_{i}r_{i}\left( Y^{\prime \prime }\cup K\right) =\sum _{i=1}^{m}s_{i}r_{i}\left( Y_{1}^{\prime \prime }\cup K\left[ t_{*}\right] \right) . \end{aligned}
Putting $$Y_{2}^{\prime \prime }=Y^{\prime \prime }{\setminus } Y_{1}^{\prime \prime }=\{j\in Y^{\prime \prime }\mid j>t_{*}\}$$, we have
\begin{aligned} \sum _{i=1}^{m}s_{i}r_{i}(Y^{\prime \prime }\cup K)+\lambda (Y^{\prime \prime })=\sum _{i=1}^{m}s_{i}r_{i}(Y_{1}^{\prime \prime }\cup K[t_{*}])+\lambda (Y_{1}^{\prime \prime })+\lambda \left( Y_{2}^{\prime \prime }\right) . \end{aligned}
Thus, we should include all jobs $$j\in H$$ with $$j>t_{*}$$ into set $$Y_{2}^{\prime \prime }$$ to achieve the maximum in (63), i.e., we may assume $$Y_{2}^{\prime \prime }=\{j\in H\mid j>t_{*}\}$$. We also have
\begin{aligned} \sum _{i=1}^{m}s_{i}r_{i}(Y_{1}^{\prime \prime }\cup K)+\lambda (Y_{1}^{\prime \prime })=\max _{Y\in \mathcal {H}^{m}[t_{*}]}\left\{ \sum _{i=1}^{m}s_{i}r_{i}(Y\cup K[t_{*}])+\lambda (Y)\right\} . \end{aligned}
For $$z$$ and $$t,\,1\le z\le m,\,1\le t\le h+k$$, define
\begin{aligned} \zeta _{z}[t]=\left\{ \begin{array}{l@{\quad }l} \max \limits _{Y\in \mathcal {H}^{z}[t]}\left\{ \displaystyle \sum \limits _{i=1}^{z}s_{i}r_{i}(Y\cup K\left[ t\right] )+\lambda (Y)\right\} , &{} \mathrm{if~}z\ge \left| K\left[ t\right] \right| , \\ -\infty , &{} \mathrm{otherwise.} \end{array}\right. \end{aligned}
(64)
Provided that these values are known, we can compute $$\varPhi ^{\prime \prime }$$ by
\begin{aligned} \varPhi ^{\prime \prime }=dS_{m}-\max _{m\le t\le h+k}\left\{ \zeta _{m}[t]+\sum _{j\in H,~j>t}\lambda (j)\right\} . \end{aligned}
(65)
Notice that for a given $$t,\,t\ge m$$, the term $$\sum _{j\in H,~j>t}\lambda (j)$$ is identical to $$\eta _{2}[t]$$ used in Sect. 5.2 and for its computation we can use the formulae (50) with the initial condition (49).
For convenience, define $$\lambda (j)=0$$ for $$j\in K$$. The required values of $$\zeta _{z}[t]$$ can be found recursively by
\begin{aligned} \zeta _{z}[t]=\max \left\{ \zeta _{z}[t-1],~\zeta _{z-1}[t-1]+s_{z}r(t)+\lambda (t)\right\} ,\qquad 1\le z\le m,~1\le t\le h+k \end{aligned}
(66)
with the initial conditions
\begin{aligned} \zeta _{0}[t]=0,\,0\le t\le h+k;\quad \zeta _{z}[0]=-\infty ,\,1\le z\le m. \end{aligned}
(67)
To see why the recursion (66) works, notice that if in the expression for $$\zeta _{z}[t]$$ job $$t\in H$$ does not belong to set $$Y$$ that delivers the maximum in (64), then $$\zeta _{z}[t]=\zeta _{z}[t-1]$$. Otherwise, job $$t\in H$$, as the job with the largest release date, will be matched with the smallest multiplier $$s_{z}$$ and will make an additional contribution of $$\lambda (t)$$, so that $$\zeta _{z}[t]=\zeta _{z-1}[t-1]+s_{z}r(t)+\lambda (t)$$. The latter situation also occurs if $$t\in K$$, since in this case $$t\in K[t]$$.
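The recursion (66)–(67), combined with (65), again admits a direct transcription into a table computation; a minimal sketch with illustrative names (recall that $$\lambda (j)=0$$ for $$j\in K$$):

```python
NEG = float("-inf")

def phi_double_prime_uniform(jobs, s, d, m):
    """Evaluate (65) via the recursion (66) with initial conditions (67).

    jobs: [(release_date, lam, is_H), ...] for positions 1..h+k, sorted by
          release date, with lam = 0 for the jobs of K;
    s:    speeds with s[1] >= ... >= s[m] (s[0] unused).
    """
    n = len(jobs)
    S_m = sum(s[1:m + 1])
    zeta = [[NEG] * (n + 1) for _ in range(m + 1)]
    for t in range(n + 1):
        zeta[0][t] = 0.0                         # (67)
    for t in range(1, n + 1):
        r_t, lam_t, _ = jobs[t - 1]
        for z in range(1, m + 1):                # (66)
            zeta[z][t] = max(zeta[z][t - 1],
                             zeta[z - 1][t - 1] + s[z] * r_t + lam_t)
    # eta2[t] = sum of lambda(j) over j in H with j > t, as in Sect. 5.2.
    eta2 = [0.0] * (n + 1)
    for t in range(n - 1, -1, -1):
        _, lam_next, in_H = jobs[t]              # this is position t + 1
        eta2[t] = eta2[t + 1] + (lam_next if in_H else 0.0)
    best = max(zeta[m][t] + eta2[t] for t in range(m, n + 1))
    return d * S_m - best                        # (65)
```

The double loop over $$z$$ and $$t$$ is the $$O(m(h+k))$$ bottleneck mentioned in the complexity analysis below.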
Now we are ready to present the procedure that outputs $$\widetilde{\varphi } _{K}^{H}(X)$$ for $$X=\hat{H}$$ and an instrumental set $$Y_{*}$$ for set $$\hat{H}$$.
The most time-consuming parts of the procedure are the double loops in Steps 6 and 10, which require $$O\left( \hat{h}\left( h+k\right) \right)$$ time and $$O(m(h+k))$$ time, respectively. Thus, the overall time complexity of Procedure CompQrj is $$O(m(h+k))$$.
For $$h\ge k$$, the time complexity becomes $$O(mh)$$. We can show that the bound $$O(mh)$$ also applies to the case that $$h<k$$; this can be done by an approach similar to that used in Sect. 5.2. Hence, the next theorem follows from Theorem 4.
### Theorem 7
Problem $$Q|r(j),p(j)=\overline{p}(j)-x(j),C(j)\le d,pmtn|W$$ can be solved in $$O(nm\log n)$$ time.
## 6 Conclusions
In this paper, we develop a recursive decomposition algorithm for maximizing a linear function over a submodular polyhedron intersected with a box. We illustrate the power of our approach by adapting the algorithm to solving three scheduling problems with controllable processing times. In these problems, it is required to find a preemptive schedule that is feasible with respect to a given deadline and minimizes total compression cost. The resulting algorithms run faster than those previously known.
We intend to extend this approach to other scheduling models with controllable processing times, e.g., to a single machine with distinct release dates and deadlines. It will be interesting to identify problems, including those outside the area of scheduling, for which an adaptation of our approach is beneficial.
Although throughout the paper we assume that the processing times are real numbers from intervals $$\left[ \underline{p}(j),\overline{p}(j)\right]$$, the formulated approach is applicable to the case where the processing times may only take integer values in the interval. Indeed, if all the input numbers, except for costs $$w(j)$$, are given by integers, then the submodular rank function takes integer values, and the optimal solution $$p(j),\,j\in N$$, found by Procedure Decomp is integral.
## Notes
### Acknowledgments
This research was supported by the EPSRC funded project EP/J019755/1 “Submodular Optimisation Techniques for Scheduling with Controllable Parameters”. The first author was partially supported by the Humboldt Research Fellowship of the Alexander von Humboldt Foundation and by Grant-in-Aid of the Ministry of Education, Culture, Sports, Science and Technology of Japan, grants 24500002 and 25106503.
## Authors and Affiliations

1. Akiyoshi Shioura, Graduate School of Information Sciences, Tohoku University, Sendai, Japan
2. Natalia V. Shakhlevich, School of Computing, University of Leeds, Leeds, UK
3. Vitaly A. Strusevich, Department of Mathematical Sciences, Old Royal Naval College, University of Greenwich, London, UK
http://mathoverflow.net/questions/840/the-core-question-of-topology?sort=votes | # The core question of topology
As I see it, the core question of topology is to figure out whether a homeomorphism exists between two topological spaces.
To answer this question, one defines various properties of a space such as connectedness, compactness, the fundamental group, Betti numbers, etc.
However, it seems that these properties can at best be used to distinguish two spaces - i.e. if X is a space with property Q, and Y is a space without property Q, then we can say with certainty that X and Y are not homeomorphic.
My question is this: given two arbitrary spaces, how does one show that they are homeomorphic, without explicitly showing a homeomorphism?
I'm not sure the premise of this question is valid any more than the core question of group theory is to figure out whether an isomorphism exists between two groups. – Qiaochu Yuan Oct 17 '09 at 6:37
You might rephrase this question, removing the claim that this is the core question of topology, especially in the light of negative answers below. – Scott Morrison Oct 17 '09 at 14:02
Thanks so much guys, very helpful! – Tejus Oct 18 '09 at 8:39
I agree, the question is too general and not properly phrased. But thanks for the responses, very helpful to see the big picture of what topology is all about. – Tejus Oct 18 '09 at 8:43
What about homotopy? – Harry Gindi Jan 6 '10 at 7:41
As others have noted, it's hopeless to try to answer this question for general topological spaces. However, there are a few positive results if you assume, say, that X and Y are both simply connected closed manifolds of a given dimension. For example, Freedman showed that if X and Y are oriented and have dimension four, then to check whether they're homeomorphic you just need to compute (i) the bilinear "intersection" forms on H^2(X;Z) and H^2(Y;Z) induced by the cup product; and (ii) a Z/2-valued invariant called the Kirby-Siebenmann invariant. The invariant in (ii) obstructs the existence of a smooth structure, so if you happened to know that both X and Y were smooth manifolds (hence that their Kirby-Siebenmann invariants vanished) you'd just have to look at their intersection forms to determine whether they're homeomorphic (however a great many examples show that this wouldn't suffice to show that they're diffeomorphic).
In higher dimensions, Smale's h-cobordism theorem shows that two simply connected smooth manifolds are diffeomorphic as soon as there is a cobordism between them for which the inclusion of both manifolds is a homotopy equivalence. Checking this criterion can still be subtle, but work of Wall and Barden shows that in the simply-connected 5-dimensional case it suffices to check that there's an isomorphism on second homology H2 which preserves both (i) the second Stiefel-Whitney classes, and (ii) a certain "linking form" on the torsion subgroup of H2.
If you drop the simply-connected assumption, things get rather harder--indeed if n>3 then any finitely presented group is the fundamental group of a closed n-manifold (which can be constructed in a canonical way given a presentation), and Markov (son of the probabilist) showed that the impossibility of algorithmically distinguishing whether two presentations yield the same group translates to the impossibility of algorithmically classifying manifolds. Even assuming you already knew the fundamental groups were isomorphic, there are still complications beyond what happens in the simply-connected case, but these can sometimes be overcome with the s-cobordism theorem.
In a somewhat different direction, in dimension 3 one can represent manifolds by link diagrams, and Kirby showed that two such manifolds are diffeomorphic (which in dimension 3 is equivalent to homeomorphic) iff you can get from one diagram to the other by a sequence of moves of a certain kind. (see Kirby calculus in Wikipedia; similar statements exist in dimension 4). I suppose that one could argue that this isn't an example of what you were looking for, since if one felt like it one could extract diffeomorphisms from the moves in a fairly explicit way, and one can't (AFAIK) just directly extract some invariants from the diagrams which completely determine whether the moves exist.
Thank you! That does help a lot actually. – Tejus Oct 18 '09 at 8:42
Could you provide a reference that homotopy equivalence which pulls back tangent bundles is a diffeomorphism? Does it somehow follow from the H-cobordism theorem? – Jason DeVito Nov 16 '09 at 19:05
Smale did not show that "two smooth manifolds are diffeomorphic as soon as there is a homotopy equivalence between them which pulls back the tangent bundle on one to the tangent bundle of the other", because this is not true. I think, exotic 7-spheres should give a counterexample. – Igor Belegradek Feb 28 '10 at 3:42
Sorry--in a hasty effort to find a clean statement in the literature without using cobordism language I overlooked some obviously-rather-important parts of the hypothesis of Theorem 7.1 of Smale's "On the structure of manifolds"...namely that the manifolds need to have vanishing cohomology in degrees above around half the dimension (so obviously they can't be closed, among other serious restrictions). I've edited the error – Mike Usher Mar 1 '10 at 17:39
In your comment you sound like the h-cobordism theorem does NOT apply to manifolds that are closed. In fact it applies to closed simply-connected manifolds of dimension >4: two such manifolds are h-cobordant iff they are diffeomorphic. – Igor Belegradek Mar 1 '10 at 22:41
Among topological spaces simplicial complexes are very nice. But even then we run into problems answering your question. Determining whether two finite simplicial complexes are homeomorphic is an undecidable problem. That means there is no algorithm that can tell you if two finite simplicial complexes are homeomorphic, in finite time. Note that these are particularly nice topological spaces and in general topological spaces can be horrendous. I'm not an expert, but considering this I would say that in general the answer to your question would be: we can't.
I would guess that there is a semi-algorithm that will tell you reliably if two simplicial complexes are homeomorphic - which is what the question asks for. (Though it probably does this by exhibiting an explicit homeomorphism, which the question rules out!) – HJRW Jan 6 '10 at 17:43
Like the other responders, I find your question a bit too general to address sensibly. However, I'll give one example of a way to prove two spaces are homeomorphic without providing a homeomorphism: Let M be a connected manifold and f: M --> B a submersion to another manifold. Then any two fibers of f are homeomorphic, but it can be very hard to extract an explicit homeomorphism from this data. Rather than requiring that f be a submersion it is enough to require that the critical points of f have codimension 2 in B. For example, this is the easiest way to show that any two smooth hypersurfaces in CP^n of the same degree are homeomorphic.
Sometimes we can uniquely categorise a space $X$. We can then find a (preferably) finite list of topological properties, such that any space $Y$ that satisfies these properties must be homeomorphic to $X$. Such characterisations exist for many classical spaces like the Cantor set $C$, the rationals $Q$, the irrationals $P$, the Cantor set minus a point, the real line, the plane $R^2$, the Hilbert cube $I^N$, etc. In that case, in the proof of such theorems, we do show a homeomorphism exists, but once we have this theorem, other mathematicians need not find explicit homeomorphisms any more. I have found such theorems to be quite useful. Of course, only sufficiently nice and/or simple spaces can be characterised in this way, and the reach of such a method is quite limited, as there are far many more spaces than there are such nice lists of properties. But using such theorems, topologists could show that all completely metrisable separable topological linear spaces are homeomorphic, e.g.
Aren't there some neat results from Hilbert manifold theory (due to Chapman perhaps) related to your point. I vaguely remember that for Hilbert cube manifolds proper homotopy equivalence implies homeomorphism (perhaps faulty memory)?? – Tim Porter Feb 27 '10 at 8:00
Maybe this is my algebraic topology bias but I'm not sure there's anything one can say about this question in general--there are just too many topological spaces to try to classify them in any sense.
If you only want to know whether two spaces are homotopy equivalent, you can do a lot better. For example, if X and Y are simply connected CW complexes, you can (in principle) show they are homotopy equivalent without writing down any map from X to Y, by computing k-invariants.
What are k-invariants? – Kevin H. Lin Feb 27 '10 at 8:38
I'll describe these in the simplest case (where $\pi_1$ acts trivially on all homotopy groups). In that case for a (nice) space X we have a Postnikov tower $$X \to \cdots P_2 \to P_1 \to P_0$$ Each of the maps $P_{i+1} \to P_i$ is a fibration with fiber $K( \pi_{i+1}X, i+1)$. Under the hypothesis, one can show that each of these is (up to homotopy) a principal bundle and classified by a k-invariant $P_i \to K( \pi_{i+1}X, i+2)$, i.e. a certain cohomology class. This is described in Hatcher's Alg. Top. book. The general case is more complicated. One needs twisted cohomology. – Chris Schommer-Pries Mar 1 '10 at 13:25
Thanks, Chris. In brief, the k-invariants are algebraic data which tell you how to solve the extension problems as you go up the Postnikov tower. You can read a bit about them near the end of section 4.3 in Hatcher's book (p. 412 in the online copy). – Reid Barton Mar 1 '10 at 18:02
You often need to put some assumptions on your spaces to have a sensible answer to this question; there are just too many terrible spaces out there, and you can write down a host of topological invariants that detect differences between weird spaces but never fully answer the question.
One of the most studied classification attempts is the study of the classification of smooth closed manifolds. This leads to a lot of topics like surgery theory and Morse theory that allows you to give constructive procedures to build any manifold by elementary moves, and so the main question becomes one of extracting invariants. Homotopy type is one invariant, and homology groups can be extracted from it.
And then you're led into questions like the Poincare conjecture, or topological quantum field theories, or the classification of simply-connected 4-manifolds, et cetera, et cetera.
The only way I can think of giving a meaningful answer is by listing examples, interpreting this as a "big list" question. I'll give an example of proving homeomorphism to S3 non-constructively.
• Surgery along a framed link in S3 gives rise to a 3-manifold M, and a presentation for the fundamental group π of M. If π turns out to be the trivial group (you might prove this by the Todd-Coxeter process or something), the Poincare conjecture tells us that M is homeomorphic to S3. To exhibit that homeomorphism might be painful, because the proofs of the Kirby theorem are non-constructive and gives no algorithm to simplify surgery presentations of 3-manifolds.
I'm using the fact that there is a unique 3-manifold with trivial fundamental group (whose proof is non-constructive) and then finding an arbitrarily complicated construction to give you a manifold with those properties (and there are many variations on that theme). These constructions, based on surgery or some other violent operation on the manifold, give no hint of how a homeomorphism might look or how one might try to find one.
For 3-manifolds there is an effective algorithm to do what you're talking about. There are standard procedures to construct a triangulation of a 3-manifold obtained by surgery on a link in $S^3$. Then you apply the Rubinstein 3-sphere recognition algorithm to that triangulation, and you're done. It has exponential run-time in the number of tetrahedra in the 3-manifold triangulation and that in turn looks something like the number of crossings in your diagram times a function that measures the size of the surgery coefficients. – Ryan Budney Jan 6 '10 at 9:23
I would respectfully suggest that there are other important problems in topology. One thing that the wikipedia article linked doesn't spend much time on is the "placement problem". Instead of attempting a definition, here are the first two examples that spring to my mind: classify curves in a fixed Riemann surface or classify curves in the three-sphere, each time up to isotopy. The first leads to the study of the mapping class group and perhaps Teichmuller spaces. The second leads one towards knot theory.
Notice that if $\alpha$ and $\beta$ are curves in a surface $S$ then deciding if the pairs $(S, \alpha)$ and $(S, \beta)$ are homeomorphic reduces to the classification of surfaces and so is "easy". The mapping class group still manages to be important, however!
http://commens.org/dictionary/entry/quote-minute-logic-chapter-iii-simplest-mathematics-4 | # The Commens DictionaryQuote from ‘Minute Logic: Chapter III. The Simplest Mathematics’
Quote:
The most ordinary fact of perception, such as “it is light,” involves precisive abstraction, or prescission. But hypostatic abstraction, the abstraction which transforms “it is light” into “there is light here,” which is the sense which I shall commonly attach to the word abstraction (since prescission will do for precisive abstraction) is a very special mode of thought. It consists in taking a feature of a percept or percepts (after it has already been prescinded from the other elements of the percept), so as to take propositional form in a judgment (indeed, it may operate upon any judgment whatsoever), and in conceiving this fact to consist in the relation between the subject of that judgment and another subject, which has a mode of being that merely consists in the truth of propositions of which the corresponding concrete term is the predicate. Thus, we transform the proposition, “honey is sweet,” into “honey possesses sweetness.”
Date:
1902
References:
CP 4.235
Citation:
‘Hypostatic Abstraction’ (pub. 18.07.15-19:06). Quote in M. Bergman & S. Paavola (Eds.), The Commens Dictionary: Peirce's Terms in His Own Words. New Edition. Retrieved from http://www.commens.org/dictionary/entry/quote-minute-logic-chapter-iii-simplest-mathematics-4.
Posted:
Jul 18, 2015, 19:06 by Mats Bergman | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8957962393760681, "perplexity": 4782.316315874458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00836.warc.gz"} |
https://arcojaver.firebaseapp.com/897.html | # Command prompt for windows phone 8
# Command Prompt for Windows Phone 8
Its important to know that the commands in windows 10, 8, 7, vista, and xp are called cmd commands or command prompt commands, and the commands in windows 9895 and msdos are called dos commands. Developer command prompt for visual studio microsoft docs. This answer shows how to switch the character encoding in the windows console to utf8 code page 65001, so that shells such as cmd. How to open an elevated command prompt in windows 8. Syntax of this command is explained below with some examples. Open command prompt to access folders of a usb connected. Follow this article to check out the command prompt tricks and hacks. Type cmd and then click ok to open a regular command prompt. Copy the apk file or android app now install in the adb folder and then type adb install to install apk file.
Due to significant differences between the user interface in windows 8 and the user interface in previous versions of windows, new windows 8 users may have difficulty locating tools including the command prompt. Our guide to the new msdos for mobile app for windows phone. Similar to linux command line, the command prompt in windows nt windows x, 7, 8, 8. Camera type camera to launch the ms dos camera app and take photos in ascii, cga, and blackand.
A to z list of windows cmd commands command line reference. Im tring to test my windows phone 8 app on an actual device, but i need the ip address of my computer in order to do this. I want to access my windows phone device from my pcs command prompt windows 8. In android it works fine but in windows phone 8 it. Weve included all of them in this list to help show changes in commands from operating system to operating system. By typing commands at the command prompt, you can perform tasks on your computer without using the. Choose developer command prompt for vs 2019 or the command prompt you want to use. This program is located in the start menu, and can be opened with the command run command. Hit start, type command, and youll see command prompt listed as the main result. The command prompt has long had a fixed spot in the windows start menu as well. Disable command prompt in windows 7, windows 8, and. You can also open an administrative command prompt using just the start menu or start screen in windows 8.
When we launch command prompt, the default directory it opens with is. How to resetforgot lost windows passwords using command prompt cmd duration. A computer is a combination of hardware and software. Currently in this article we are providing you some of the best command prompt tricks, hacks, codes and secrets for windows users of versions xp, 7, 8, and 10. How to make a phone call from command prompt on window. Today, microsoft released an windows store app called msdos mobile. How to get to an msdos prompt or windows command line. Theres a basic shell program which supports cmdlike commands, though it isnt actually. This application completelly native offline application, so you dont have to get. In order to assign a drive letter to a removable device, that device must support ums usb mass storage protocol. For some commands and options to work in the windows vista and 7 command line, you must run the command line as administrator.
To run a dos application in windows 8 requires a little bit of trick. In windows 8 you can open administrative command prompt from your task manager, quick access menu and through windows 8 apps search. The command prompt is very useful for us developers to run quick applications and commands. How to open command prompt windows 10, 8, 7, vista, xp. Powershell or other legit console on windows 10 mobile windows. Access the command prompt from windows 8 recovery drive. Use this solution also for windows server 2008 and. The windows phone commandline tools support creating, building, and running new projects. How to install android apps on windows phone windows 10. The cordova commandline utility is a highlevel tool that allows you to build applications across several platforms at once. By typing commands at the command prompt, you can perform tasks on your computer without using the windows graphical interface. It will automatically go in to search mode and will search for what you typed cmd.
Nowadays microsoft promotes own more advanced powershell console, but we are sure that command prompt will save its own important role in nearest future. But if you need to launch command prompt with administrative privilege, then there are other methods to do that. You can open event viewer either via a command line. Make sure you have a windows installation disk on hand. Command prompt in windows 8 where do i find command prompt for windows 8. Under windows 7 and windows 10, the program is found in the system tools folder. Im implementing an app using javascript and i have a problem with the command prompt. This tutorial will show you different ways to open a command prompt in windows 10.
Windows device logs can be retrieved from windows pc and phone using tools like event viewer and. The windows command line beginners guide gives users new to the windows command line an overview of the command prompt, from simple tasks to network configuration. Accessing windows phone device from command prompt. How to open the command prompt as administrator in windows. Using various commands, you can ask your windows os to perform desired tasks. However, the older and very similar msdos prompt does. Using windows explorer, navigate to the directory in which you need the command prompt. Its not as simple as to just doubleclick on the program and use it. With windows 8, microsoft replaced the start menu with a start screen. In versions of windows released before windows xp, like windows 98 and windows 95, command prompt does not exist.
You can follow the question or vote as helpful, but you cannot reply to this thread. Microsoft launches msdos mobile for lumia smartphones. This will bring up the standard contextual menu but behold. Tips for changing directories in windows command line. How to get windows device logs from a windows machine hexnode. Open administrative command prompt in windows 8 three. Now, msdos command prompt is available on windows phone mobile 8 and above. How to open a command prompt on an android phone quora. Access the command prompt from windows 8s recovery drive and use it to recover data.
Alternatively, you can start typing the name of the command prompt in the search box on the taskbar, and choose the result you want as the result list starts to display the search matches. I figured out couple of ways to launch the command prompt in windows 8. When you open an elevated command prompt, your working directory with be c. Open command prompt here in windows 8 informaticstech. In windows command prompt, we can change the directory using the command cd. If, by contrast, your concern is about the separate aspect of the limitations of unicode. How can i open a command prompt and change directory to this device. Now, while holding shift, right click anywhere in the white area by that, i mean not on any of the files. Launch the command prompt with or without administrative rights directly at any folder from windows explorer.
https://phys.libretexts.org/Bookshelves/Electricity_and_Magnetism/Book%3A_Electromagnetics_I_(Ellingson)/08%3A_Time-Varying_Fields/8.06%3A_Transformers_as_Two-Port_Devices | # 8.6: Transformers as Two-Port Devices
Section 8.5 explains the principle of operation of the ideal transformer. The relationship governing the terminal voltages $$V_1$$ and $$V_2$$ was found to be $\frac{V_1}{V_2} = p\frac{N_1}{N_2} \nonumber$ where $$N_1$$ and $$N_2$$ are the number of turns in the associated coils and $$p$$ is either $$+1$$ or $$-1$$ depending on the relative orientation of the windings; i.e., whether the reference direction of the associated fluxes is the same or opposite, respectively.
Figure $$\PageIndex{1}$$: The transformer as a two-port circuit device.
We shall now consider ratios of current and impedance in ideal transformers, using the two-port model shown in Figure $$\PageIndex{1}$$. By virtue of the reference current directions and polarities chosen in this figure, the power delivered by the source $$V_1$$ is $$V_1 I_1$$, and the power absorbed by the load $$Z_2$$ is $$-V_2 I_2$$. Assuming the transformer windings have no resistance, and assuming the magnetic flux is perfectly contained within the core, the power absorbed by the load must equal the power provided by the source; i.e., $$V_1 I_1 = -V_2 I_2$$. Thus, we have¹
$\boxed{ \frac{I_1}{I_2} = -\frac{V_2}{V_1} = -p\frac{N_2}{N_1} } \nonumber$
We can develop an impedance relationship for ideal transformers as follows. Let $$Z_1$$ be the input impedance of the transformer; that is, the impedance looking into port 1 from the source. Thus, we have
$\begin{aligned} Z_{1} & \triangleq \frac{V_{1}}{I_{1}} \\ &=\frac{+p\left(N_{1} / N_{2}\right) V_{2}}{-p\left(N_{2} / N_{1}\right) I_{2}} \\ &=-\left(\frac{N_{1}}{N_{2}}\right)^{2}\left(\frac{V_{2}}{I_{2}}\right) \end{aligned} \nonumber$
In Figure $$\PageIndex{1}$$, $$Z_2$$ is the the output impedance of port 2; that is, the impedance looking out port 2 into the load. Therefore, $$Z_2 = -V_2/I_2$$ (once again the minus sign is a result of our choice of sign conventions in Figure $$\PageIndex{1}$$). Substitution of this result into the above equation yields
$Z_1 = \left(\frac{N_1}{N_2}\right)^2 Z_2 \nonumber$
and therefore $\boxed{ \frac{Z_1}{Z_2} = \left(\frac{N_1}{N_2}\right)^2 } \nonumber$ Thus, we have demonstrated that a transformer scales impedance in proportion to the square of the turns ratio $$N_1/N_2$$. Remarkably, the impedance transformation depends only on the turns ratio, and is independent of the relative direction of the windings ($$p$$).
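As a quick numerical illustration of the three relations above (all numbers below are made up for the example):

```python
N1, N2, p = 200, 20, +1   # hypothetical turns counts and winding orientation
Z2 = 8.0                  # hypothetical load impedance at port 2, in ohms

Z1 = (N1 / N2) ** 2 * Z2            # input impedance: 10**2 * 8 = 800 ohms
V2_over_V1 = 1.0 / (p * (N1 / N2))  # = +0.1, a 10:1 step-down in voltage
I1_over_I2 = -p * (N2 / N1)         # = -0.1; the sign follows Figure 1's
print(Z1, V2_over_V1, I1_over_I2)   # reference directions and polarities
```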
The relationships developed above should be viewed as AC expressions, and are not normally valid at DC. This is because transformers exhibit a fundamental limitation in their low-frequency performance. To see this, first recall Faraday’s Law:
$V = - N \frac{\partial}{\partial t} \Phi \nonumber$
If the magnetic flux $$\Phi$$ is not time-varying, then there is no induced electric potential, and subsequently no linking of the signals associated with the coils. At very low but non-zero frequencies, we encounter another problem that gets in the way – magnetic saturation. To see this, note we can obtain an expression for $$\Phi$$ from Faraday’s Law by integrating with respect to time, yielding
$\Phi(t) = -\frac{1}{N}\int_{t_0}^{t}V(\tau)d\tau + \Phi(t_0) \nonumber$
where $$t_0$$ is some earlier time at which we know the value of $$\Phi(t_0)$$. Let us assume that $$V(t)$$ is sinusoidally-varying. Then the peak value of $$\Phi$$ after $$t=t_0$$ depends on the frequency of $$V(t)$$. If the frequency of $$V(t)$$ is very low, then $$\Phi$$ can become very large. Since the associated cross-sectional areas of the coils are presumably constant, this means that the magnetic field becomes very large. The problem with that is that most practical high-permeability materials suitable for use as transformer cores exhibit magnetic saturation; that is, the rate at which the magnetic field is able to increase decreases with increasing magnetic field magnitude (see Section 7.16). The result of all this is that a transformer may work fine at (say) 1 MHz, but at (say) 1 Hz the transformer may exhibit an apparent loss associated with this saturation. Thus, practical transformers exhibit highpass frequency response.
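Integrating a sinusoidal coil voltage of amplitude $$V_p$$ at frequency $$f$$ gives a peak flux of $$V_p/(2\pi f N)$$, so lowering the frequency proportionally increases the flux the core must support. A minimal sketch (the turn count and voltage are arbitrary illustrative values):

```python
import numpy as np

def peak_flux(V_peak, N, f):
    """Peak core flux (Wb) implied by Faraday's law for a sinusoidal
    voltage of amplitude V_peak across an N-turn winding at frequency f."""
    return V_peak / (2 * np.pi * f * N)

N, V_peak = 100, 10.0
for f in (1e6, 1.0):  # 1 MHz vs. 1 Hz, as in the text
    print(f"{f:>9.0f} Hz -> peak flux {peak_flux(V_peak, N, f):.3e} Wb")
```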
It should be noted that the highpass behavior of practical transformers can be useful. For example, a transformer can be used to isolate an undesired DC offset and/or low-frequency noise in the circuit attached to one coil from the circuit attached to the other coil.
The DC-isolating behavior of a transformer also allows the transformer to be used as a balun, as illustrated in Figure $$\PageIndex{2}$$. A balun is a two-port device that transforms a single-ended (“unbalanced”) signal – that is, one having an explicit connection to a datum (e.g., ground) – into a differential (“balanced”) signal, for which there is no explicit connection to a datum. Differential signals have many benefits in circuit design, whereas inputs and outputs to devices must often be in single-ended form. Thus, a common use of transformers is to effect the conversion between single-ended and differential circuits. Although a transformer is certainly not the only device that can be used as a balun, it has one very big advantage, namely bandwidth.
Figure $$\PageIndex{2}$$: Transformers used to convert a single-ended (“unbalanced”) signal to a differential (“balanced”) signal, and back. (© CC BY SA 3.0 (modified); SpinningSpark)
1. The minus signs in this equation are a result of the reference polarity and directions indicated in Figure $$\PageIndex{1}$$. These are more-or-less standard in electrical two-port theory (see “Additional Reading” at the end of this section), but are certainly not the only reasonable choice. If you see these expressions without the minus signs, it probably means that a different combination of reference directions/polarities is in effect.↩ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9117037653923035, "perplexity": 485.76794736532247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00256.warc.gz"} |
http://math.stackexchange.com/questions/165904/lebesgue-measure-of-x-y-z-in-mathbb-r-3-x-in-mathbb-r-0-leq-y-le | # lebesgue measure of $\{ (x,y,z) \in \mathbb R ^3 : x \in \mathbb R, 0 \leq y \leq 10, z \in \mathbb Z \}$
Find the lebesgue measure of the set: $$\Bigl\{ (x,y,z) \in \mathbb R ^3 : x \in \mathbb R, \quad 0 \leq y \leq 10, \quad z \in \mathbb Z \Bigr\}$$
I think it is a null set, but for some reason I am stuck and can't write down a complete solution.
Any help?
Well, technically the Lebesgue measure assigns a number (not a set) to your set, so you must be thinking of "measure zero". What do you know about product measures / the Lebesgue measure of a countable set in $\mathbb{R}$? – user17794 Jul 2 '12 at 23:03
@TimDuff: The Lebesgue measure of a countable set in $\mathbb R$ is zero. What do you mean by product measures? – passenger Jul 2 '12 at 23:05
$$M:=\{(x,y,z) \in \mathbb{R}^3:\ x \in \mathbb{R}, \ 0 \le y \le 10, \ z \in \mathbb{Z}\}= \bigcup_{i,j \in \mathbb{Z}}M_{i,j},$$ where each $M_{ij}:=[i,i+1]\times[0,10]\times\{j\}$ has measure $0$ since for every $\epsilon>0$ one has $M_{ij} \subset M_{ij}^\epsilon:=[i,i+1]\times[0,10]\times[j-\epsilon/20,j+\epsilon/20]$ and $|M_{ij}^\epsilon|=\epsilon$. Therefore $M$ is a countable union of null sets, and thus $M$ is a set of measure $0$.
Thank you very much for your solution! – passenger Jul 2 '12 at 23:10
You can use countable additivity of Lebesgue measure along the $x$-axis and $z$-axis. Specifically your set is the disjoint union $\cup_{n,z\in\mathbb Z}\{(x,y,z)| x\in (n,n+1], y\in[0,10]\},$ each of these having measure $0.$
Thank you for your answer! – passenger Jul 2 '12 at 23:10
You're welcome! – Andrew Jul 2 '12 at 23:31
Let's call the set $X$. The Lebesgue measure $m$ is inner and outer-regular, so in order to show $m(X)=0$ we need only show that any compact subset $C$ of $X$ has measure $0$, which in turn we can do by showing that for any $\epsilon>0$ there is an open set $U\supset C$ with $m(U)<\epsilon$.
Since $C$ is compact, it must be bounded, thus we have upper bounds for $|x|$ and $|z|$. Let's denote these $M_x$ and $M_z$. Note that $$C\subseteq C'=\bigcup\limits_{z=-M_z}^{M_z} [-M_x,M_x]\times [0,10]\times \{z\}$$ and since $m$ is monotonic $m(C)\leq m(C')$. To see that $m(C')=0$, let $$U_n=\bigcup\limits_{z=-M_z}^{M_z} (-M_x-1,M_x+1)\times (-1,11)\times (z-1/n,z+1/n)$$ which is open and a finite union of rectangles, so its measure is easy to calculate. Specifically, $m(U_n)\leq (2M_z+1)\times(2M_x+2)\times12\times 2/n$, which goes to $0$ as $n\to \infty$. Thus $m(C')=0$ so $m(C)=0$ for any compact set $C\subseteq X$, hence $m(X)=0$.
https://lexique.netmath.ca/en/invariant/ | # Invariant
## Invariant
Term that refers to a property or a set that is preserved under the effect of a relation.
A set E is generally invariant by a relation if (E) = E.
### Examples
Invariant set:

• A circle is invariant under any rotation about its centre.
Invariant property:
• The parallelism of lines in a plane is preserved by a translation in the plane. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9035158157348633, "perplexity": 641.6379447709198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00100.warc.gz"} |
https://hal-upec-upem.archives-ouvertes.fr/hal-01136333 | # Free-path distribution and Knudsen-layer modeling for gaseous flows in the transition regime
Abstract: In this paper, we use molecular dynamics (MD) simulations to study the mean free path distribution of nonequilibrium gases in micro/nanochannels and to model the Knudsen (Kn) layer effect. It is found that the mean free path is significantly reduced near the wall and rather insensitive to flow types (Poiseuille or Couette). The Cercignani relation between the mean free path and the viscosity is adopted to capture the velocity behavior of the special zone in the framework of the extended Navier-Stokes (NS) equations. MD simulations of flows are carried out at different Kn numbers. Results are then compared with the theoretical model.
Document type: Journal articles
https://hal-upec-upem.archives-ouvertes.fr/hal-01136333
### Citation
Quy-Dong To, Céline Léonard, Guy Lauriat. Free-path distribution and Knudsen-layer modeling for gaseous flows in the transition regime. Physical Review E, American Physical Society, 2015, 91 (2), pp.023015. ⟨10.1103/PhysRevE.91.023015⟩. ⟨hal-01136333⟩
https://en.wikipedia.org/wiki/Sun_kink | # Buckling
Buckled skin panels on a B-52 aircraft
In engineering, buckling is the sudden change in shape of a structural component under load, such as the bowing of a column under compression or the wrinkling of a plate under shear. If a structure is subjected to a gradually increasing load, when the load reaches a critical level, a member may suddenly change shape, and the structure and component are said to have buckled.
Buckling may occur even though the stresses that develop in the structure are well below those needed to cause failure in the material of which the structure is composed. Further loading may cause significant and somewhat unpredictable deformations, possibly leading to complete loss of the member's load-carrying capacity. However, if the deformations that occur after buckling do not cause the complete collapse of that member, the member will continue to support the load that caused it to buckle. If the buckled member is part of a larger assemblage of components such as a building, any load applied to the buckled part of the structure beyond that which caused the member to buckle will be redistributed within the structure. Some aircraft are designed for thin skin panels to continue carrying load even in the buckled state.
## Forms of buckling
### Columns
A column under a concentric axial load exhibiting the characteristic deformation of buckling
The eccentricity of the axial force results in a bending moment acting on the beam element.
The ratio of the effective length of a column to the least radius of gyration of its cross section is called the slenderness ratio (sometimes expressed with the Greek letter lambda, λ). This ratio affords a means of classifying columns and their failure mode. The slenderness ratio is important for design considerations. All the following are approximate values used for convenience.
If the load on a column is applied through the center of gravity (centroid) of its cross section, it is called an axial load. A load at any other point in the cross section is known as an eccentric load. A short column under the action of an axial load will fail by direct compression before it buckles, but a long column loaded in the same manner will fail by springing suddenly outward laterally (buckling) in a bending mode. The buckling mode of deflection is considered a failure mode, and it generally occurs before the axial compression stresses (direct compression) can cause failure of the material by yielding or fracture of that compression member. However, intermediate-length columns will fail by a combination of direct compressive stress and bending.
In particular:
• A short steel column is one whose slenderness ratio does not exceed 50; an intermediate length steel column has a slenderness ratio ranging from about 50 to 200, and its behavior is dominated by the strength limit of the material, while a long steel column may be assumed to have a slenderness ratio greater than 200 and its behavior is dominated by the modulus of elasticity of the material.
• A short concrete column is one having a ratio of unsupported length to least dimension of the cross section equal to or less than 10. If the ratio is greater than 10, it is considered a long column (sometimes referred to as a slender column).
• Timber columns may be classified as short columns if the ratio of the length to least dimension of the cross section is equal to or less than 10. The dividing line between intermediate and long timber columns cannot be readily evaluated. One way of defining the lower limit of long timber columns would be to set it as the smallest value of the ratio of length to least dimension of the cross section that would just exceed a certain constant K of the material. Since K depends on the modulus of elasticity and the allowable compressive stress parallel to the grain, it can be seen that this arbitrary limit would vary with the species of the timber. The value of K is given in most structural handbooks.
The theory of the behavior of columns was investigated in 1757 by mathematician Leonhard Euler. He derived the formula, the Euler formula, that gives the maximum axial load that a long, slender, ideal column can carry without buckling. An ideal column is one that is perfectly straight, made of a homogeneous material, and free from initial stress. When the applied load reaches the Euler load, sometimes called the critical load, the column comes to be in a state of unstable equilibrium. At that load, the introduction of the slightest lateral force will cause the column to fail by suddenly "jumping" to a new configuration, and the column is said to have buckled. This is what happens when a person stands on an empty aluminum can and then taps the sides briefly, causing it to become instantly crushed (the vertical sides of the can acting as an infinite series of extremely thin columns). The formula derived by Euler for long slender columns is given below.
${\displaystyle F={\frac {\pi ^{2}EI}{(KL)^{2}}}}$
where
${\displaystyle F}$, maximum or critical force (vertical load on column),
${\displaystyle E}$, modulus of elasticity,
${\displaystyle I}$, smallest area moment of inertia (second moment of area) of the cross section of the column,
${\displaystyle L}$, unsupported length of column,
${\displaystyle K}$, column effective length factor, whose value depends on the conditions of end support of the column, as follows.
For both ends pinned (hinged, free to rotate), ${\displaystyle K=1.0}$.
For both ends fixed, ${\displaystyle K=0.50}$.
For one end fixed and the other end pinned, ${\displaystyle K={\sqrt {2}}/2=0.7071}$
For one end fixed and the other end free to move laterally, ${\displaystyle K=2.0}$.
${\displaystyle KL}$ is the effective length of the column.
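The formula and effective-length factors translate directly into a short calculation. The following sketch compares the four standard end conditions (the rod dimensions and material values are illustrative assumptions):

```python
import numpy as np

def euler_critical_load(E, I, L, end_condition="pinned-pinned"):
    """Euler buckling load F = pi**2 * E * I / (K*L)**2 for an ideal
    slender column. E in Pa, I in m**4, L in m; returns F in N."""
    K = {"pinned-pinned": 1.0,
         "fixed-fixed": 0.5,
         "fixed-pinned": np.sqrt(2) / 2,
         "fixed-free": 2.0}[end_condition]
    return np.pi**2 * E * I / (K * L) ** 2

# Example: 2 m steel rod (E = 200 GPa), 20 mm diameter circular section.
E = 200e9
d = 0.02
I = np.pi * d**4 / 64        # second moment of area of a solid circle
for bc in ("pinned-pinned", "fixed-fixed", "fixed-pinned", "fixed-free"):
    print(f"{bc:14s} F_crit = {euler_critical_load(E, I, 2.0, bc)/1e3:.2f} kN")
```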
Examination of this formula reveals the following facts with regard to the load-bearing ability of slender columns.
• The elasticity of the material of the column and not the compressive strength of the material of the column determines the column's buckling load.
• The buckling load is directly proportional to the second moment of area of the cross section.
• The boundary conditions have a considerable effect on the critical load of slender columns. The boundary conditions determine the mode of bending of the column and the distance between inflection points on the displacement curve of the deflected column. The inflection points in the deflection shape of the column are the points at which the curvature of the column changes sign and are also the points at which the column's internal bending moments of the column are zero. The closer the inflection points are, the greater the resulting axial load capacity (bucking load) of the column.
A demonstration model illustrating the different "Euler" buckling modes. The model shows how the boundary conditions affect the critical load of a slender column. Notice that the columns are identical, apart from the boundary conditions.
A conclusion from the above is that the buckling load of a column may be increased by changing its material to one with a higher modulus of elasticity (E), or changing the design of the column's cross section so as to increase its moment of inertia. The latter can be done without increasing the weight of the column by distributing the material as far from the principal axis of the column's cross section as possible. For most purposes, the most effective use of the material of a column is that of a tubular section.
Another insight that may be gleaned from this equation is the effect of length on critical load. Doubling the unsupported length of the column quarters the allowable load. The restraint offered by the end connections of a column also affects its critical load. If the connections are perfectly rigid (not allowing rotation of the ends), the critical load will be four times that for a similar column where the ends are pinned (allowing rotation of the ends).
Since the radius of gyration is defined as the square root of the ratio of the column's moment of inertia about an axis to its cross sectional area, the above Euler formula may be reformatted by substituting the radius of gyration ${\displaystyle Ar^{2}}$ for ${\displaystyle I}$:
${\displaystyle \sigma ={\frac {F}{A}}={\frac {\pi ^{2}E}{(l/r)^{2}}}}$
where ${\displaystyle \sigma =F/A}$ is the stress that causes buckling of the column, and ${\displaystyle l/r}$ is the slenderness ratio.
Since structural columns are commonly of intermediate length, the Euler formula has little practical application for ordinary design. Issues that cause deviation from the pure Euler column behaviour include imperfections in geometry of the column in combination with plasticity/non-linear stress-strain behaviour of the column's material. Consequently, a number of empirical column formulae have been developed that agree with test data, all of which embody the slenderness ratio. Due to the uncertainty in the behavior of columns, for design, appropriate safety factors are introduced into these formulae. One such formula is the Perry Robertson formula which estimates the critical buckling load based on an assumed small initial curvature, hence an eccentricity of the axial load. The Rankine Gordon formula (named after William John Macquorn Rankine and Perry Hugesworth Gordon (1899–1966)) is also based on experimental results and suggests that a column will buckle at a load Fmax given by:
${\displaystyle {\frac {1}{F_{\max }}}={\frac {1}{F_{e}}}+{\frac {1}{F_{c}}}}$
where ${\displaystyle F_{e}}$ is the Euler maximum load and ${\displaystyle F_{c}}$ is the maximum compressive load. This formula typically produces a conservative estimate of ${\displaystyle F_{\max }}$.
#### Self-buckling
For the mathematical derivation, see Self-buckling.
A free-standing, vertical column, with density ${\displaystyle \rho }$, Young's modulus ${\displaystyle E}$, and cross-sectional area ${\displaystyle A}$, will buckle under its own weight if its height exceeds a certain critical value:[1][2][3]
${\displaystyle h_{\text{crit}}=\left({\frac {9B^{2}}{4}}\,{\frac {EI}{\rho gA}}\right)^{\frac {1}{3}}}$
where ${\displaystyle g}$ is the acceleration due to gravity, ${\displaystyle I}$ is the second moment of area of the beam cross section, and ${\displaystyle B}$ is the first zero of the Bessel function of the first kind of order −1/3, which is equal to 1.86635086…
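A quick numerical check of the critical self-buckling height, with illustrative material values assumed for a steel rod:

```python
import numpy as np

def critical_height(E, I, rho, A, g=9.81):
    """Tallest free-standing column that does not buckle under its own
    weight: h_crit = (9*B**2/4 * E*I / (rho*g*A))**(1/3)."""
    B = 1.86635086  # first zero of the Bessel function of order -1/3
    return (9 * B**2 / 4 * E * I / (rho * g * A)) ** (1 / 3)

# Illustrative 20 mm diameter steel rod.
E, rho, d = 200e9, 7850.0, 0.02
A = np.pi * d**2 / 4
I = np.pi * d**4 / 64
print(f"h_crit = {critical_height(E, I, rho, A):.2f} m")  # ~8 m
```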
### Plate buckling
A plate is a 3-dimensional structure defined as having a width of comparable size to its length, with a thickness that is very small in comparison to its other two dimensions. Similar to columns, thin plates experience out-of-plane buckling deformations when subjected to critical loads; however, in contrast to column buckling, plates under buckling loads can continue to carry load, a behaviour called local buckling. This phenomenon is useful in numerous systems, as it allows structures to be engineered to provide load-carrying capacity beyond the initial onset of buckling.
For a rectangular plate, supported along every edge, loaded with a uniform compressive force per unit length, the derived governing equation can be stated as:[4]
${\displaystyle {\frac {\partial ^{4}w}{\partial x^{4}}}+2{\frac {\partial ^{4}w}{\partial x^{2}\partial y^{2}}}+{\frac {\partial ^{4}w}{\partial y^{4}}}={\frac {12\left(1-\nu ^{2}\right)}{Et^{3}}}\left(-N_{x}{\frac {\partial ^{2}w}{\partial x^{2}}}\right)}$
where
${\displaystyle w}$, out-of-plane deflection
${\displaystyle N_{x}}$, uniformly distributed compressive load
${\displaystyle \nu }$, Poisson's ratio
${\displaystyle E}$, modulus of elasticity
${\displaystyle t}$, thickness
The solution to the deflection can be expanded into two harmonic functions shown:[4]
${\displaystyle w=\sum _{m=1}^{\infty }\sum _{n=1}^{\infty }w_{mn}\sin \left({\frac {m\pi x}{a}}\right)\sin \left({\frac {n\pi y}{b}}\right)}$
where
${\displaystyle m}$, number of half sine curvatures that occur lengthwise
${\displaystyle n}$, number of half sine curvatures that occur widthwise
${\displaystyle a}$, length of specimen
${\displaystyle b}$, width of specimen
The previous equation can be substituted into the earlier differential equation where ${\displaystyle n}$ equals 1. ${\displaystyle N_{x}}$ can be separated providing the equation for the critical compressive loading of a plate:[4]
${\displaystyle N_{x,cr}=k_{cr}{\frac {\pi ^{2}Et^{3}}{12\left(1-\nu ^{2}\right)b}}}$
where
${\displaystyle k_{cr}}$, buckling coefficient, given by:[4]
${\displaystyle k_{cr}=\left({\frac {mb}{a}}+{\frac {a}{mb}}\right)^{2}}$
The buckling coefficient is influenced by the aspect ratio of the specimen, ${\displaystyle a}$ / ${\displaystyle {b}}$, and the number of lengthwise curvatures. For an increasing number of such curvatures, the aspect ratio produces a varying buckling coefficient; but each relation provides a minimum value for each ${\displaystyle m}$. This minimum value can then be used as a constant, independent of both the aspect ratio and ${\displaystyle m}$.[4]
Given stress is found by the load per unit area, the following expression is found for the critical stress:
${\displaystyle \sigma _{cr}=k_{cr}{\frac {\pi ^{2}E}{12\left(1-\nu ^{2}\right)\left({\frac {b}{t}}\right)^{2}}}}$
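The minimization over ${\displaystyle m}$ described above is easy to carry out numerically. The sketch below computes ${\displaystyle k_{cr}}$ and the critical stress for a simply supported plate (the plate dimensions and material values are illustrative assumptions):

```python
import numpy as np

def k_cr(aspect_ratio, m_max=10):
    """Minimum buckling coefficient k_cr = min_m (m*b/a + a/(m*b))**2
    over the first m_max half-wave numbers, for a simply supported plate.
    aspect_ratio is a/b."""
    m = np.arange(1, m_max + 1)
    return np.min((m / aspect_ratio + aspect_ratio / m) ** 2)

def sigma_cr(E, nu, b, t, aspect_ratio):
    """Critical compressive stress of a simply supported rectangular plate."""
    return k_cr(aspect_ratio) * np.pi**2 * E / (12 * (1 - nu**2) * (b / t) ** 2)

# Illustrative aluminium plate: a = 400 mm, b = 200 mm, t = 2 mm.
E, nu = 70e9, 0.33
print(f"k_cr = {k_cr(2.0):.3f}")                          # -> 4.0 for a/b = 2
print(f"sigma_cr = {sigma_cr(E, nu, 0.2, 0.002, 2.0)/1e6:.1f} MPa")
```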
From the derived equations, the close similarity between the critical stress for a column and for a plate can be seen. As the width ${\displaystyle b}$ shrinks, the plate acts more like a column, as it increases the resistance to buckling along the plate’s width. The increase of ${\displaystyle a}$ allows for an increase of the number of sine waves produced by buckling along the length, but also increases the resistance from the buckling along the width.[4] This creates a preference for the plate to buckle with an equal number of curvatures along the width and the length. Due to boundary conditions, when a plate is loaded with a critical stress and buckles, the edges perpendicular to the load cannot deform out-of-plane and will therefore continue to carry the stresses. This creates a non-uniform compressive loading along the ends, where the stresses are imposed on half of the effective width on either side of the specimen, given by the following:[4]
${\displaystyle {\frac {b_{\text{eff}}}{b}}\approx {\sqrt {{\frac {\sigma _{cr}}{\sigma _{y}}}\left(1-1.022{\sqrt {\frac {\sigma _{cr}}{\sigma _{y}}}}\right)}}}$
where
${\displaystyle b_{\text{eff}}}$, effective width
${\displaystyle \sigma _{y}}$, yielding stress
As the loaded stress increase, the effective width continues to shrink; if the stresses on the ends ever reaches the yield stress, the plate will fail. This is what allows the buckled structure to continue supporting loadings. When the axial load over the critical load is plotted against the displacement, the fundamental path is shown. It demonstrates the plate's similarity to a column under buckling; however, past the buckling load, the fundamental path bifurcates into a secondary path that curves upward, providing the ability to be subjected to higher loads past the critical load.
### Flexural-torsional buckling
Flexural-torsional buckling can be described as a combination of bending and twisting response of a member in compression. Such a deflection mode must be considered for design purposes. This mostly occurs in columns with "open" cross-sections and hence have a low torsional stiffness, such as channels, structural tees, double-angle shapes, and equal-leg single angles. Circular cross sections do not experience such a mode of buckling.
### Lateral-torsional buckling
Lateral-torsional buckling of an I-beam with vertical force in center: a) longitudinal view, b) cross section near support, c) cross section in center with lateral-torsional buckling
When a simply supported beam is loaded in flexure, the top side is in compression, and the bottom side is in tension. If the beam is not supported in the lateral direction (i.e., perpendicular to the plane of bending), and the flexural load increases to a critical limit, the beam will experience a lateral deflection of the compression flange as it buckles locally. The lateral deflection of the compression flange is restrained by the beam web and tension flange, but for an open section the twisting mode is more flexible, hence the beam both twists and deflects laterally in a failure mode known as lateral-torsional buckling. In wide-flange sections (with high lateral bending stiffness), the deflection mode will be mostly twisting in torsion. In narrow-flange sections, the bending stiffness is lower and the beam's deflection will be closer to that of the lateral buckling deflection mode.
The use of closed sections such as square hollow section will mitigate the effects of lateral-torsional buckling by virtue of their high torsional rigidity.
Cb is a modification factor used in the equation for nominal flexural strength when determining lateral-torsional buckling. The reason for this factor is to allow for non-uniform moment diagrams when the ends of a beam segment are braced. The conservative value for Cb can be taken as 1, regardless of beam configuration or loading, but in some cases it may be excessively conservative. Cb is always equal to or greater than 1, never less. For cantilevers or overhangs where the free end is unbraced, Cb is equal to 1. Tables of values of Cb for simply supported beams exist.
If an appropriate value of Cb is not given in tables, it can be obtained via the following formula:
${\displaystyle C_{b}={\frac {12.5M_{\max }}{2.5M_{\max }+3M_{A}+4M_{B}+3M_{C}}}}$
where
${\displaystyle M_{\max }}$, absolute value of maximum moment in the unbraced segment,
${\displaystyle M_{A}}$, absolute value of maximum moment at quarter point of the unbraced segment,
${\displaystyle M_{B}}$, absolute value of maximum moment at centerline of the unbraced segment,
${\displaystyle M_{C}}$, absolute value of maximum moment at three-quarter point of the unbraced segment,
The result is the same for all unit systems.
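As a quick sanity check, the formula can be evaluated for a simply supported beam under uniform load, a standard case whose value of about 1.14 is commonly tabulated:

```python
def c_b(M_max, M_A, M_B, M_C):
    """Lateral-torsional buckling modification factor
    Cb = 12.5*Mmax / (2.5*Mmax + 3*MA + 4*MB + 3*MC), using absolute
    values of the moments at the quarter, mid, and three-quarter points."""
    M_max, M_A, M_B, M_C = map(abs, (M_max, M_A, M_B, M_C))
    return 12.5 * M_max / (2.5 * M_max + 3 * M_A + 4 * M_B + 3 * M_C)

# Uniformly loaded simply supported beam: M(x) ~ x(L - x),
# so M_A = M_C = 0.75*M_max and M_B = M_max.
print(round(c_b(1.0, 0.75, 1.0, 0.75), 3))  # -> 1.136
```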
### Plastic buckling
The buckling strength of a member is less than the elastic buckling strength of a structure if the material of the member is stressed beyond the elastic material range and into the non-linear (plastic) material behavior range. When the compression load is near the buckling load, the structure will bend significantly and the material of the column will diverge from a linear stress-strain behavior. The stress-strain behavior of materials is not strictly linear even below the yield point, hence the modulus of elasticity decreases as stress increases, and significantly so as the stresses approach the material's yield strength. This reduced material rigidity reduces the buckling strength of the structure and results in a buckling load less than that predicted by the assumption of linear elastic behavior.
A more accurate approximation of the buckling load can be obtained by the use of the tangent modulus of elasticity, Et, which is less than the elastic modulus, in place of the elastic modulus. The tangent modulus is equal to the elastic modulus below the proportional limit and then decreases beyond it. The tangent modulus is the slope of a line drawn tangent to the stress-strain curve at a particular value of strain (in the elastic section of the stress-strain curve, the tangent modulus is equal to the elastic modulus). Plots of the tangent modulus of elasticity for a variety of materials are available in standard references.
### Crippling
Sections that are made up of flanged plates such as a channel, can still carry load in the corners after the flanges have locally buckled. Crippling is failure of the complete section.[5]
### Diagonal tension
Sheets under diagonal tension are supported by stiffeners that as a result of sheet buckling carry a distributed load along their length, and may in turn result in these structural members failing under buckling.
Thicker plates may only partially form a diagonal tension field and may continue to carry some of the load through shear. This is known as incomplete diagonal tension (IDT). This behavior was studied by Wagner, and these beams are sometimes known as Wagner beams.[5]
Diagonal tension may also result in a pulling force on any fasteners such as rivets that are used to fasten the web to the supporting members. Fasteners and sheets must be designed to resist being pulled off their supports.
### Dynamic buckling
If a column is loaded suddenly and then the load released, the column can sustain a much higher load than its static (slowly applied) buckling load. This can happen in a long, unsupported column used as a drop hammer. The duration of compression at the impact end is the time required for a stress wave to travel along the column to the other (free) end and back down as a relief wave. Maximum buckling occurs near the impact end at a wavelength much shorter than the length of the rod, and at a stress many times the buckling stress of a statically-loaded column. The critical condition for buckling amplitude to remain less than about 25 times the effective rod straightness imperfection at the buckle wavelength is
${\displaystyle \sigma L=\rho c^{2}h}$
where ${\displaystyle \sigma }$ is the impact stress, ${\displaystyle L}$ is the length of the rod, ${\displaystyle c}$ is the elastic wave speed, and ${\displaystyle h}$ is the smaller lateral dimension of a rectangular rod. Because the buckle wavelength depends only on ${\displaystyle \sigma }$ and ${\displaystyle h}$, this same formula holds for thin cylindrical shells of thickness ${\displaystyle h}$.[6]
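Rearranged for the rod length, the condition gives ${\displaystyle L=\rho c^{2}h/\sigma }$. A minimal numerical sketch (the steel values below are illustrative assumptions):

```python
def critical_rod_length(sigma, rho, c, h):
    """Longest rod for which impact buckling amplitude stays below about
    25x the straightness imperfection: L = rho * c**2 * h / sigma."""
    return rho * c**2 * h / sigma

# Steel rod struck at a stress of 250 MPa (illustrative numbers).
rho, c, h, sigma = 7850.0, 5000.0, 0.01, 250e6
print(f"L = {critical_rod_length(sigma, rho, c, h):.2f} m")  # ~7.9 m
```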
## Theory
### Energy method
Often it is very difficult to determine the exact buckling load in complex structures using the Euler formula, due to the difficulty in determining the constant K. Therefore, the maximum buckling load is often approximated using energy conservation; this approach is referred to as the energy method in structural analysis.
The first step in this method is to assume a displacement mode and a function that represents that displacement. This function must satisfy the most important boundary conditions, such as displacement and rotation. The more accurate the displacement function, the more accurate the result.
The method assumes that the system (the column) is a conservative system in which energy is not dissipated as heat, hence the energy added to the column by the applied external forces is stored in the column in the form of strain energy.
${\displaystyle U_{\text{applied}}=U_{\text{strain}}}$
In this method, there are two equations used (for small deformations) to approximate the "strain" energy (the potential energy stored as elastic deformation of the structure) and "applied" energy (the work done on the system by external forces).
{\displaystyle {\begin{aligned}U_{\text{strain}}&={\frac {E}{2}}\int I(x)(w_{xx}(x))^{2}\,dx\\U_{\text{applied}}&={\frac {P_{\text{crit}}}{2}}\int (w_{x}(x))^{2}\,dx\end{aligned}}}
where ${\displaystyle w(x)}$ is the displacement function and the subscripts ${\displaystyle x}$ and ${\displaystyle xx}$ refer to the first and second derivatives of the displacement.
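As a worked example of the method, take the pinned–pinned trial shape ${\displaystyle w(x)=\sin(\pi x/L)}$ (which happens to be the exact buckling mode, so the estimate is exact) with constant ${\displaystyle EI}$:

${\displaystyle U_{\text{strain}}={\frac {EI}{2}}\int _{0}^{L}\left({\frac {\pi ^{2}}{L^{2}}}\sin {\frac {\pi x}{L}}\right)^{2}dx={\frac {EI\pi ^{4}}{4L^{3}}},\qquad U_{\text{applied}}={\frac {P_{\text{crit}}}{2}}\int _{0}^{L}\left({\frac {\pi }{L}}\cos {\frac {\pi x}{L}}\right)^{2}dx={\frac {P_{\text{crit}}\pi ^{2}}{4L}}}$

Equating the two recovers the Euler load for pinned ends, ${\displaystyle P_{\text{crit}}=\pi ^{2}EI/L^{2}}$.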
### Single-degree-of-freedom models
Using the concept of total potential energy, ${\displaystyle V}$, it is possible to identify four fundamental forms of buckling found in structural models with one degree of freedom. We start by expressing
${\displaystyle V=U-P\Delta }$
where ${\displaystyle U}$ is the strain energy stored in the structure, ${\displaystyle P}$ is the applied conservative load and ${\displaystyle \Delta }$ is the distance moved by ${\displaystyle P}$ in its direction. Elastic instability theory rests on the axioms that equilibrium occurs at any point where ${\displaystyle V}$ is stationary with respect to the coordinate measuring the degree(s) of freedom, and that such a point is stable only if ${\displaystyle V}$ is a local minimum there and unstable otherwise (e.g. a maximum or a point of inflection).[7]
These four forms of elastic buckling are the saddle-node bifurcation or limit point; the supercritical or stable-symmetric bifurcation; the subcritical or unstable-symmetric bifurcation; and the transcritical or asymmetric bifurcation. All but the first of these examples are forms of pitchfork bifurcation. Simple models for each of these types of buckling behaviour are shown in the figures below, along with the associated bifurcation diagrams.
Single-degree-of-freedom (SDoF) rigid link models depicting four distinct types of buckling phenomena. The spring in each model is unstressed when ${\displaystyle q=0}$.
The four models (left to right): limit point; stable-symmetric bifurcation; unstable-symmetric bifurcation; asymmetric bifurcation. One of the models is a tied truss with inclined links and a horizontal spring.
Bifurcation diagrams (blue) for the above models with the energy function (red) animated at different values of load, ${\displaystyle P}$ (black). Note, the load is on the vertical axis. All graphs are in non-dimensional form. The critical loads recoverable from the captions are ${\displaystyle P^{C}=c/(2L)}$ and ${\displaystyle P^{C}=kL/2}$.
## Engineering examples
### Bicycle wheels
A conventional bicycle wheel consists of a thin rim kept under high compressive stress by the (roughly normal) inward pull of a large number of spokes. It can be considered as a loaded column that has been bent into a circle. If spoke tension is increased beyond a safe level or if part of the rim is subject to a certain lateral force, the wheel spontaneously fails into a characteristic saddle shape (sometimes called a "taco" or a "pringle") like a three-dimensional Euler column. If this is a purely elastic deformation the rim will resume its proper plane shape if spoke tension is reduced or a lateral force from the opposite direction is applied.
### Roads

Buckling is also a failure mode in pavement materials, primarily with concrete, since asphalt is more flexible. Radiant heat from the sun is absorbed in the road surface, causing it to expand, forcing adjacent pieces to push against each other. If the stress is great enough, the pavement can lift up and crack without warning. Going over a buckled section can be very jarring to automobile drivers, described as running over a speed hump at highway speeds.
### Rail tracks
Railway tracks in the Netherlands affected by Sun kink.
Similarly, rail tracks also expand when heated, and can fail by buckling, a phenomenon called sun kink. It is more common for rails to move laterally, often pulling the underlying ties (sleepers) along.
Several accidents have been deemed to be sun kink related; more information is available at List of rail accidents (2000–2009).
### Pipes and pressure vessels
Pipes and pressure vessels subject to external overpressure, caused for example by steam cooling within the pipe and condensing into water with subsequent massive pressure drop, risk buckling due to compressive hoop stresses. Design rules for calculation of the required wall thickness or reinforcement rings are given in various piping and pressure vessel codes.
## References
1. ^ Kato, K. (1915). "Mathematical Investigation on the Mechanical Problems of Transmission Line". Journal of the Japan Society of Mechanical Engineers. 19: 41.
2. ^ Ratzersdorfer, Julius (1936). Die Knickfestigkeit von Stäben und Stabwerken [The buckling resistance of members and frames] (in German). Wien, Austria: J. Springer. pp. 107–109. ISBN 978-3-662-24075-5.
3. ^ Cox, Steven J.; C. Maeve McCarthy (1998). "The Shape of the Tallest Column". SIAM Journal on Mathematical Analysis. 29 (3): 547–554. doi:10.1137/s0036141097314537.
4. Bulson, P. S. (1970). Theory of Flat Plates. Chatto and Windus, London.
5. ^ a b c Bruhn, E. F. (1973). Analysis and Design of Flight Vehicle Structures. Indianapolis: Jacobs.
6. ^ Lindberg, H. E.; Florence, A. L. (1987). Dynamic Pulse Buckling. Martinus Nijhoff Publishers. pp. 11–56, 297–298.
7. ^ Thompson, J.M.T.; Hunt, G.W. (1973). A general theory of elastic stability. London: John Wiley. ISBN 9780471859918.
8. ^ Lucero, Kat (2012-07-07). "Misaligned Track from Heat 'Probable Cause' In Green Line Derailment". DCist. American University Radio. Archived from the original on 2018-02-04. Retrieved 2019-01-21. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 84, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738422989845276, "perplexity": 958.1054577318387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00550.warc.gz"} |
https://www.khanacademy.org/math/integral-calculus/area-and-arc-length-ic | # Area & arc length using calculus
Become a professional area-under-curve finder! You will also learn here how integrals can be used to find lengths of curves. The tools of calculus are so versatile!
### Area between curves
By integrating the difference of two functions, you can find the area between them.
### Arc length
Integral calculus isn't only useful for finding area. For example, it can also be used to find lengths of one-dimensional curves. Learn all about it here.
### Area defined by polar graphs
We're used to finding the area under curves in the Cartesian plane, but integration can be used to find area defined by polar curves too.
### Arc length of polar graphs
You may already be familiar with finding arc length of graphs that are defined in terms of rectangular coordinates. We'll now extend our knowledge of arc length to include polar graphs. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9025678634643555, "perplexity": 620.857541111707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607849.21/warc/CC-MAIN-20170524173007-20170524193007-00404.warc.gz"} |
https://deepai.org/publication/multi-source-deep-gaussian-process-kernel-learning | # Multi-source Deep Gaussian Process Kernel Learning
For many problems, relevant data are plentiful but explicit knowledge is not. Predictions about target variables may be informed by data sources that are noisy but plentiful, or data which the target variable is merely some function of. Interpretable and flexible machine learning methods capable of fusing data across sources are lacking. We generalize the Deep Gaussian Processes so that GPs in intermediate layers can represent the posterior distribution summarizing the data from a related source. We model the prior-posterior stacking DGP with a single GP. The exact second moment of the DGP is calculated analytically, and is taken as the kernel function for the GP. The result is a kernel that captures effective correlation through function composition, reflects the structure of the observations from other data sources, and can be used to inform prediction based on limited direct observations. Therefore, the approximation of the prior-posterior DGP can be considered a novel kernel composition which blends the kernels in different layers and has explicit dependence on the data. We consider two synthetic multi-source prediction problems: a) predicting a target variable that is merely a function of the source data and b) predicting noise-free data using a kernel trained on noisy data. Our method produces better prediction and tighter uncertainty on the synthetic data when compared with standard GP and another DGP method, suggesting that our data-informed approximate DGPs are a powerful tool for integrating data across sources.
## 1 Introduction
Gaussian Process (GP) (Rasmussen and Williams, 2006) is a flexible model that imposes a prior distribution over continuous functions in the Bayesian framework. For regression tasks, the ability to estimate uncertainty is the main advantage of GP over deterministic models such as deep neural networks. This advantage originates in the assumption that the finite set of latent random variables relating to both observed and unobserved data is collectively subject to a multivariate Gaussian distribution. As such, one can elegantly formulate prediction and calibrate uncertainty within the Bayesian framework.
Two main weaknesses of GP are the need to choose a kernel and the lack of explicit dependence on the observations in the predictive covariance. The kernel function encodes the similarity between data $x$ and $x'$, and hence the prior distribution over functions. Typically, this must be chosen by the modeler, presumably based on knowledge of the domain. But explicit knowledge is sometimes not available, and even when it is, it can be challenging to translate knowledge into an appropriate choice of kernel. The second weakness is the lack of explicit dependence on the observations, the $y$'s, in the predictive covariance. In standard GP, uncertainty about a predicted value is completely determined by closeness of prior observations to the test input $x_*$ in the input space, with no influence of the actual predicted values for those prior observations. This is left unjustified in most research on GPs, presumably because there are no ready methods to incorporate such a dependence.
Research has attempted to ameliorate limitations of existing kernels through composition. For example, basic operations including addition, multiplication, and convolution allow composite kernels that are more flexible than the kernels they combine (Rasmussen and Williams, 2006; Duvenaud et al., 2013). More recently, research has investigated composition of kernels. For example, one may pass the input through some deterministic transformation, $g(x)$, which then generates a new kernel, $k(g(x), g(x'))$, with new properties within the standard GP framework (Hinton and Salakhutdinov, 2008). More generally, one may pass the input through a probabilistic transformation, for example taking the output of one GP as input into another, which gives rise to Deep GPs. Through the connection to kernel methods and deep neural networks (Wilson et al., 2016; Raissi and Karniadakis, 2016), Deep Gaussian Processes (DGPs) (Damianou and Lawrence, 2013) and warped GPs (Snelson et al., 2004; Lázaro-Gredilla, 2012) can be regarded as a probabilistic generalization of kernel composition. Indeed, the Deep GP prior is non-Gaussian, consistent with increased expressiveness as compared to GPs, which leads to challenges for inference. Although increased expressiveness is useful in allowing application across domains, it is not the same as incorporating knowledge about a domain, which enables strong inference from limited observations.
A less obvious benefit of DGP is that the stacking structure is suitable for fusing the collection of data coming from different sources before making predictions (Perdikaris et al., 2017; Cutajar et al., 2019). For example, one may consider a regression task in which the accurate observations are rare but very close to the ground truth while there is a large amount of less precise data available. Ideally, the plentiful data could be used to produce a more informative prior while the more precise data could help reduce the uncertainty. More generally, one may consider a regression task in which the desired observations are rare, and one has access to a large amount of relevant data. Relevant data may inform or constrain inference without being sampled from the target function itself. By training on this relevant data, one could construct a domain-specific kernel that leverages very general knowledge about the domain (that two variables are related), and the kernel would then encode this knowledge in a way that could inform predictions of new variables based on limited observations.
In this paper, we propose a novel DGP structure in which the hidden layer connected to the input emits random functions sampled from the posterior distribution summarizing the data from the source variable. The next hidden layer then emits random functions sampled from a standard GP, which connects with the output. To tackle the intractability of inference, we follow the method of moments: we approximate our prior-posterior DGP with a GP (Lu et al., 2019). We shall show that the present approach can address the two weaknesses of standard GP mentioned above. The effective kernel captures long-range and multi-scale correlation via marginalized composition, and it also obtains explicit dependence on the observations through the posterior mean. In comparison with directly applying GP to the union of noisy and noise-free data, our multi-source GP shows reduced uncertainty and greater accuracy when predicting based on limited data. Experiments on synthetic multi-fidelity data also show better accuracy and tighter uncertainty in comparison with another DGP model and the direct application of an RBF-GP (GPy, 2012) to the rare data.
## 2 Related Works
The machine learning tasks with data from multiple sources have been studied in the contexts of multi-fidelity learning (Kennedy and O’Hagan, 2000; Raissi and Karniadakis, 2016; Perdikaris et al., 2017; Cutajar et al., 2019) and multi-output learning (Álvarez and Lawrence, 2011; Moreno-Muñoz et al., 2018; Kaiser et al., 2018; Kazlauskaite et al., 2019; Janati et al., 2019). The kernel learning appeared in the literature of deep neural networks (Neal, 2012; Williams, 1997; Cho and Saul, 2009; Poole et al., 2016; Lee et al., 2017; Daniely et al., 2016; Agrawal et al., 2020) and neural tangent kernel (Jacot et al., 2018; Karakida et al., 2019; Yang, 2019). Hinton and Salakhutdinov (2008); Wilson et al. (2016); Raissi and Karniadakis (2016) demonstrated learning of the composition in kernel function through deep models.
The approach in our paper is different from the multi-fidelity DGP (Cutajar et al., 2019), which relies on inducing points (Snelson and Ghahramani, 2006; Titsias, 2009; Titsias and Lawrence, 2010) and variational inference (Salimbeni and Deisenroth, 2017). In the prediction stage, Cutajar et al. (2019) relies on Monte Carlo sampling, but our method does not.
The Student-t process is, to our knowledge, the only stochastic model capable of producing a predictive covariance that depends on the observations (Shah et al., 2014).
## 3 Gaussian Process and Deep Gaussian Process
Gaussian Process (GP) is the continuum generalization of the multivariate Gaussian distribution over a set of random variables denoted by $f=(f_1,\dots,f_N)$. As the joint distribution is specified by the mean vector $\mu$ of size $N$ and kernel matrix $K$ of size $N\times N$, the distribution over the function is labeled as $f\sim\mathcal{GP}(m,k)$ where, due to the marginal property of the Gaussian distribution, the mean function $m(x)$ and kernel function $k(x,x')$ generalize their counterparts in the discrete case. GP regression is related to kernel ridge regression: one can regard the set of kernel functions $k(\cdot,x_i)$ evaluated at the training input $X$ as a basis in standard linear regression with regularization (Steinke and Schölkopf, 2008). For the zero-mean case, the conditional distribution associated with the function value at a new input $x_*$ is still a Gaussian, $\mathcal{N}(\mu_*,\sigma_*^2)$, with the predictive mean and variance,
$$\mu_* = k(x_*, X)\,K^{-1}(X,X)\,f \qquad (1)$$
and
$$\sigma_*^2 = k(x_*,x_*) - k(x_*,X)\,K^{-1}(X,X)\,k(X,x_*), \qquad (2)$$
respectively. The above can be easily adapted to connect with noisy observations $y$, and the hyperparameters in the kernel are determined by optimizing the marginal likelihood of the training observations.
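A minimal sketch of these predictive equations (an SE kernel and a small noise jitter are assumed for illustration; the names are not from the paper):

```python
import numpy as np

def rbf(A, B, ell=1.0, sigma=1.0):
    """Squared exponential kernel k(a, b) = sigma^2 exp(-(a-b)^2 / (2 ell^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return sigma**2 * np.exp(-d2 / (2 * ell**2))

def gp_predict(X, y, X_star, noise=1e-4):
    """Zero-mean GP posterior mean and variance at test inputs X_star,
    implementing Equations (1) and (2)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X_star, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = rbf(X_star, X_star).diagonal() - np.sum(
        Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, var

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mu, var = gp_predict(X, y, np.array([1.5]))
print(mu, var)
```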
The kernel function is key to the properties of functions sampled from a GP. The squared exponential kernel is a common model for smooth functions, whereas the non-stationary Brownian motion kernel can generate stochastic non-differentiable continuous functions. New kernels can be generated by a simple trick in which one composes a deterministic transformation with a standard GP kernel. For example, passing the data through a nonlinear warping $g(x)$ before an SE kernel allows functions with multiple length scales.
Deep Gaussian Process (DGP) (Damianou and Lawrence, 2013) generalizes GP via the composition trick by passing the input through a GP instead of a deterministic transformation. Marginalizing the random transformation in the latent layers, the DGP prior distribution reads,
$$p(f|X)=\int dh\, p(f|h)\, p(h|X), \qquad (3)$$
where the intractable marginalization arises because the random variables $h$ appear in the inverse of the kernel matrix in the GP connecting to the final layer $f$. Deeper models can be straightforwardly obtained by inserting more latent layers in Equation 3.
The variational inference method (Hensman et al., 2013; Salimbeni and Deisenroth, 2017) tackles the intractable marginalization by introducing inducing inputs in each layer and assuming the corresponding function values come from some tractable distribution. Although the approximate inference is effective, the approximate posterior distributions over these latent functions do not capture some non-Gaussian properties such as multi-modality (Havasi et al., 2018; Lu et al., 2019).
## 4 From DGP to Kernel Learning
Although the form of the distribution in Equation 3 is not known exactly due to the intractable marginalization over latent function layers, Lu et al. (2019) studied the second and fourth moments of the DGP models where squared exponential and squared cosine covariance functions are employed in the final layer connecting to the observations. Their study suggested that the two-layer DGP distribution is heavier-tailed and that the joint distribution is invariant under the sign flip $f\to-f$ if both conditional distributions in Equation 3 are zero-mean. Such a zero-mean assumption is justified because the latent function variables, e.g. $h$, are not connected with any observation.
In the context of multi-source data regression, GPs are very useful for modeling the general relation between mutually dependent observations from different sources (Kaiser et al., 2018; Cutajar et al., 2019). Instead of selecting some predetermined mean function for the intermediate GP in the warped GP approach (Snelson et al., 2004; Kaiser et al., 2018), we follow the DGP structure and direct part of the observations to connect with the latent function (Cutajar et al., 2019) so that the conditional distribution in Equation 3 can be a posterior distribution given the partial data. More explicitly, referring to Fig. 1, we consider the collection of plentiful and rare data denoted by $\mathcal{D}_p$ and $\mathcal{D}_r$, and extend the DGP model to take into account that the latent layer represents the random functions from the posterior distribution given $\mathcal{D}_p$,
$$p(f|X,\mathcal{D}_p)=\int dh\, p(f|h)\, p(h|X,\mathcal{D}_p). \qquad (4)$$
The DGP distribution in Equation 4 can be considered as a prior over composite functions that takes into account the partial data. For convenience, we extend the notation in Lu et al. (2019) to represent the two-layer DGP. For instance, SE[NN] stands for the structure associated with Equation 4 where the second layer connecting with the output is a zero-mean GP with SE kernel function, while the first layer connecting with the input is the posterior distribution given $\mathcal{D}_p$ whose kernel function is the neural network (NN) covariance function (Williams, 1997). In the following, we shall extract the second and fourth moments of two families of DGP models, SE[$\cdot|\mathcal{D}$] and SC[$\cdot|\mathcal{D}$], where the first-layer GP can use an arbitrary kernel function.
### 4.1 Squared Exponential Composition: SE[⋅|D]
Here we shall evaluate the expectation values E[fifj] and E[fifjfmfl] associated with the DGP distribution in Equation 4. For the second moment, the marginal property of the Gaussian and the integration with respect to the exposed terms hi, hj allow us to avoid confronting the h's that appear inside the inverse of a covariance matrix. Instead, the only integral one needs to perform, over (hi, hj), is tractable. It is convenient to write the square (hi−hj)² in the quadratic form [h]ᵀij J2 [h]ij, where [h]ij denotes the two-entry column vector (hi, hj)ᵀ and J2 the two-by-two matrix with ones on the diagonal and minus ones off the diagonal. Therefore, the SE covariance function in the layer connecting with the output now reads,
k2(hi,hj) = σ2² exp[ −[h]ᵀij J2 [h]ij / (2ℓ2²) ]. (5)
The marginal property of the Gaussian again simplifies the integration into
E[fifj]=∫d[h]ijk2(hi,hj)N([h]ij|V2,K2), (6)
where the two-entry posterior mean vector
V2 = (μi, μj)ᵀ,
and the two-by-two covariance matrix,
K2 = [ kii, kij ; kij, kjj ],
are obtained from the posterior mean and covariance matrix, respectively, given the partial data Dp.
###### Lemma 1.
The second moment of SE[⋅|D] in Equation 6 can be shown to have the following form,
E[fifj] = (σ2²/√Dij) exp[ −(μi−μj)² / (2ℓ2²Dij) ], (7)
where the symbol Dij represents the determinant of the matrix I2 + K2J2/ℓ2².
###### Proof.
Without loss of generality, we set the hyperparameters in the output layer to unity, σ2 = ℓ2 = 1. The analytic form for the second moment is given in (Lu et al., 2019),
E[fifj] = exp(−½ V2ᵀA2V2) / √|I2 + K2J2|,
where the matrix A2 = J2(I2 + K2J2)⁻¹. It can be shown that the involved 2-by-2 matrices satisfy the following identity,
I2 − (I2 + K2J2)⁻¹ = K2J2/Dij, (8)
with which we can proceed to show V2ᵀA2V2 = (μi−μj)²/Dij, so the exponent above reads −(μi−μj)²/(2Dij). ∎
The second moment captures the effective covariance generated through the marginalized composition in the DGP distribution. Unlike the discussion in (Lu et al., 2019), where the GPs in the latent layers are zero-mean, the presence of the Gaussian posterior in the DGP allows the effective covariance function in Equation 7 to depend on the posterior mean given the partial data, which is a novel method of data-driven composition of deep multi-scale kernels. The factor σ2²/√Dij reproduces the kernel of the zero-mean case, in which the length scales of the two layers give rise to multi-scale correlation. Another novelty of this kernel is that the difference μi−μj between posterior means from the first GP appears in the second factor, which allows the partial training observations to enter the covariance function and the ensuing predictive covariance. Standard GPs do not have the capacity to provide observation-dependent predictive covariance (Shah et al., 2014). In addition, the exponent of this factor is scaled by the input-dependent length scale ℓ2²Dij. Lastly, one can collapse the first GP into a deterministic mapping, i.e. h(x) = μ(x), by setting the signal magnitude of the first layer to zero, which leads to a transformed SE kernel function k2(μ(xi), μ(xj)).
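For concreteness, here is a transcription of Equation 7 into code (author's sketch; the expression for Dij uses the determinant identity worked out in the proof above):

```python
def se_effective_kernel(mu, K, sigma2=1.0, ell2=1.0):
    """Effective SE[.|D] kernel of Eq. (7); mu and K are the posterior mean
    vector and covariance matrix of the latent layer given the partial data."""
    kd = np.diag(K)
    # D_ij = det(I2 + K2 J2 / ell2^2) = 1 + (k_ii + k_jj - 2 k_ij) / ell2^2
    D = 1.0 + (kd[:, None] + kd[None, :] - 2.0 * K) / ell2**2
    dmu = mu[:, None] - mu[None, :]
    return sigma2**2 / np.sqrt(D) * np.exp(-dmu**2 / (2.0 * ell2**2 * D))
```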
The statistics of a DGP are not solely determined by the first and second moments, however. We can show that the fourth moment of interest can be derived in a similar manner. The calculation of higher moments of a multivariate Gaussian distribution is facilitated by the theorem in (Isserlis, 1918), with which the fourth moment reads E[fifjfmfl] = Σ E_{p(h)}[k2(ha,hb)k2(hc,hd)], the summation being over all three distinct ways of decomposing the quartet into two doublets (for instance (i,j)(m,l) and (i,m)(j,l)).
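Isserlis' theorem itself is easy to sanity-check by Monte Carlo (author's illustration with an arbitrary positive-definite covariance):

```python
rng = np.random.default_rng(1)
K = np.full((4, 4), 0.2)
np.fill_diagonal(K, 1.0)                      # eigenvalues 1.6 and 0.8, so K > 0
z = rng.multivariate_normal(np.zeros(4), K, size=1_000_000)
lhs = np.mean(z[:, 0] * z[:, 1] * z[:, 2] * z[:, 3])
rhs = K[0, 1]*K[2, 3] + K[0, 2]*K[1, 3] + K[0, 3]*K[1, 2]   # the three pairings
print(lhs, rhs)                               # agree up to Monte Carlo error
```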
###### Lemma 2.
The fourth moment of SE[⋅|D] is given by the sum over distinct doublet decompositions and the corresponding expectation values,
E[fifjfmfl] = σ2⁴ Σ αab,cd αcd,ab βab,cd / √(DabDcd − V²ab,cd), (9)
with the cross term Vab,cd built from the posterior cross-covariances between the pairs (a,b) and (c,d) (cf. the Appendix), and the expressions,
αab,cd = exp[ −(μa−μb)² / (2ℓ2²(Dab − V²ab,cd/Dcd)) ],
and
βab,cd = exp[ (μa−μb)(μc−μd)Vab,cd / (ℓ2²(DabDcd − V²ab,cd)) ].
###### Proof.
We start by rewriting the product of covariance functions k2(ha,hb)k2(hc,hd) in the quadratic form σ2⁴ exp(−[h]ᵀ J4 [h] / 2ℓ2²), where [h] denotes the four-entry column vector (ha, hb, hc, hd)ᵀ and the matrix
J4 = [ J2, 0 ; 0, J2 ].
The zeros above stand for 2-by-2 zero matrices in the off-diagonal blocks. The procedure for obtaining the expectation value with respect to the 4-variable multivariate Gaussian distribution is similar to the previous one for the second moment. Namely,
E[k2(ha,hb)k2(hc,hd)] = exp(−½ V4ᵀA4V4) / √|I4 + K4J4|, with A4 = J4(I4 + K4J4)⁻¹,
in which the calculation of the inverse of the 4-by-4 matrix and its determinant is tedious but tractable. ∎
### 4.2 Squared Cosine Composition: SC[⋅|D]
In the present case, the second GP in Figure 1 is also zero-mean but employs the squared cosine kernel function, i.e. k2(hi,hj) = σ2² cos²[(hi−hj)/2]. After writing the squared cosine in exponential form (see Equation 16 in the Appendix), the same trick can be applied to obtain the second moment of the SC[⋅|D] DGP.
###### Lemma 3.
The second moment of the SC[⋅|D] DGP distribution is
E[fifj] = (σ2²/2) [1 + cos(μi−μj) exp(Vij)], (10)
where the symbol Vij = kij − (kii + kjj)/2 is built from the posterior covariance of the first-layer GP.
###### Proof.
The above follows from the Gaussian expectation identity E[exp(iλᵀh)] = exp(iλᵀV2 − λᵀK2λ/2), along with the identification of the vector λ = (1, −1)ᵀ. ∎
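The analogous transcription of Equation 10 into code (author's sketch):

```python
def sc_effective_kernel(mu, K, sigma2=1.0):
    """Effective SC[.|D] kernel of Eq. (10), with V_ij = k_ij - (k_ii + k_jj)/2."""
    kd = np.diag(K)
    V = K - 0.5 * (kd[:, None] + kd[None, :])
    dmu = mu[:, None] - mu[None, :]
    return 0.5 * sigma2**2 * (1.0 + np.cos(dmu) * np.exp(V))
```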
Similarly, the fourth moment of SC[⋅|D] also has a closed form, given in the following lemma.
###### Lemma 4.
The fourth moment is a sum over all three partitions of the quartet into pairs of doublets, and its relation with the second moments is given by,
E[fifjfmfl] = Σ { E[fafb]E[fcfd] + (e^{Vab+Vcd}/8) × [ e^{2Vab,cd} cos(μa−μb+μc−μd) + e^{−2Vab,cd} cos(μa−μb−μc+μd) − 2cos(μa−μb)cos(μc−μd) ] } (11)
The proof can be found in the Appendix.
### 4.3 Non-Gaussian statistics
One can determine whether a univariate distribution is heavy-tailed or light-tailed by the sign of its excess kurtosis; a Gaussian distribution corresponds to zero. The fourth moments, together with the second moments, of the zero-mean two-layer DGP studied in Lu et al. (2019) suggest that both the SE[⋅] and SC[⋅] compositions result in heavy-tailed statistics. A nonzero posterior mean in the present two-layer DGP enriches the statistical properties. First, in both the SE[⋅|D] and SC[⋅|D] compositions, the present DGP reduces to a GP if the signal magnitude in the first layer vanishes, which results in zero posterior covariance and a deterministic latent layer. Moreover, one can then show that the fourth moment factorizes as E[fifjfmfl] = E[fifj]E[fmfl] + E[fifm]E[fjfl] + E[fifl]E[fjfm], which is the signature of a multivariate Gaussian distribution.
However, the presence of the mean function in the second and fourth moments complicates the consideration of general cases. We first consider the SC composition and the simpler fourth moment E[fi²fj²]. Since we are interested in approximating the DGP with a GP, the fourth moment in the GP approximation shall read E[fi²]E[fj²] + 2E[fifj]². The true fourth moment, compared with the GP approximation, contains an additional contribution proportional to,
e^x cos2θ + e^{−x} − 2cos²θ ≥ 2sin²θ (1 − e^x),
where θ denotes the posterior-mean difference μi − μj and x the cross term appearing in the exponent. Therefore, this fourth moment is underestimated if one approximates this DGP with a GP. In general, one could either prove that the most general fourth moment is always underestimated given any posterior mean function, or prove the opposite by finding a posterior mean such that the true fourth moment is smaller than the one obtained from the approximate GP. As for the SE DGP, the situation is even more complicated, as the posterior mean and covariance both appear in the exponents of Equations 7 and 9, which makes the demonstration even more challenging.
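For the record, the inequality above is just the identity cos2θ = 1 − 2sin²θ in disguise:

e^x cos2θ + e^{−x} − 2cos²θ = (e^{x/2} − e^{−x/2})² + 2sin²θ (1 − e^x) ≥ 2sin²θ (1 − e^x),

since the squared term is non-negative.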
### 4.4 Pathology kernels
The previous derivations show the construction of effective kernel functions in the posterior-prior stacking DGPs SE[⋅|D] (Equation 7) and SC[⋅|D] (Equation 10). These kernel functions depend on the inputs x implicitly, through the posterior mean and covariance given the data Dp. One has the freedom to include explicit input dependence by multiplying with an SE kernel (Duvenaud et al., 2014),
kpath(xi,xj) = σ2² exp(−|xi−xj|² / (2ℓ²)) keff(xi,xj). (12)
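A direct transcription of Equation 12 (sketch; the signal magnitude is kept on the explicit SE factor and the effective kernel is used with unit magnitude):

```python
def pathology_kernel(x, mu, K, sigma2=1.0, ell=1.0, ell2=1.0):
    """Eq. (12): explicit SE dependence on x times the data-driven effective kernel."""
    dx = np.subtract.outer(x, x)
    keff = se_effective_kernel(mu, K, sigma2=1.0, ell2=ell2)
    return sigma2**2 * np.exp(-dx**2 / (2.0 * ell**2)) * keff
```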
## 5 Multi-source GP Regression
Given a set of rare observations Dr, the multi-source regression task is to infer the function f with the following functional dependence,
y = f[h(x), x] + ε, ε ∼ N(0, σn²), (13)
and the hidden function h can be inferred from the plentiful data Dp. For instance, h could represent the amount of sun exposure while y could represent the temperature at an array of locations. The hierarchy of two-level inference suits the prior-posterior DGP shown in Figure 1. Moreover, marginalizing over the hidden function h is in the Bayesian spirit, instead of selecting the most probable h. As shown in the previous section, our approach is to approximate the DGP with a GP in which the employed kernel function represents the nontrivial correlation, and the dependence on the plentiful data, arising from the intractable marginalization over the hidden function. In passing, we note that the non-Gaussian character not captured by this approximation is related to whether outlier samples should appear more (heavy-tailed) or less (light-tailed) frequently than a multivariate Gaussian distribution can generate. Below, we describe our approximate inference with the effective GP model shown in Figure 2.
The first objective is to infer the hidden function h from Dp. Namely, we are interested in hr and h∗ evaluated at Xr and X∗, respectively, which together with yp follow the joint multivariate normal distribution,
[ yp ; hr,∗ ] ∼ N( 0, [ K(Xp,Xp)+σn²I, K(Xp,Xr,∗) ; K(Xr,∗,Xp), K(Xr,∗,Xr,∗) ] ), (14)
given the information from the plentiful data. Consequently, the posterior distribution of h can be obtained by the standard GP procedure. The relevant information regarding h, contained in the posterior mean and covariance, leads to the effective kernel matrix in Figure 2 at the inputs Xr and X∗. Then optimization of the evidence for Dr yields the corresponding hyperparameters for the effective GP. Algorithm 1 summarizes the calculation procedure for multi-source DGP kernel learning with the rare data.
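A self-contained numpy sketch of this two-step procedure is given below (author's illustration: SE kernels with fixed hyperparameters in both layers and a fixed noise level stand in for the evidence optimization described above):

```python
def multi_source_gp(Xp, yp, Xr, yr, Xs, ell2=1.0, noise=1e-2):
    """Sketch of Algorithm 1: infer h from the plentiful data Dp, build the
    effective kernel, then run standard GP regression on the rare data Dr."""
    # Step 1: first-layer GP posterior of h at the rare and test inputs.
    Xa = np.concatenate([Xr, Xs])
    Kpp = se_kernel(Xp, Xp) + noise * np.eye(Xp.size)
    Kpa = se_kernel(Xp, Xa)
    mu = Kpa.T @ np.linalg.solve(Kpp, yp)                      # posterior mean
    K = se_kernel(Xa, Xa) - Kpa.T @ np.linalg.solve(Kpp, Kpa)  # posterior cov
    # Step 2: effective GP regression on the rare data.
    Keff = se_effective_kernel(mu, K, ell2=ell2)
    n = Xr.size
    Krr = Keff[:n, :n] + noise * np.eye(n)
    Krs = Keff[:n, n:]
    f_mean = Krs.T @ np.linalg.solve(Krr, yr)
    f_cov = Keff[n:, n:] - Krs.T @ np.linalg.solve(Krr, Krs)
    return f_mean, f_cov
```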
## 6 Experiments
Here we demonstrate the results of our multi-source GP inference with synthetic rare and plentiful data. There are two types of demonstration. In the first, the relation between the two sources of data is fixed but implicit; this type of problem is the same as that studied in multi-fidelity simulation (Perdikaris et al., 2017; Cutajar et al., 2019). In the second type of problem, on the other hand, the relation between noisy data and noise-free data is not through a fixed mapping. Thus, it is interesting to note that multi-source GP inference may be analogous to the denoising autoencoder (Alain and Bengio, 2014; Bengio et al., 2013).
### 6.1 Two-fidelity data
We first show an example with 30 plentiful data generated from a low-fidelity function and 10 rare data from the corresponding high-fidelity function. We apply the standard GPy regression package with RBF kernel (GPy, 2012) to infer the distribution over the hidden function from the plentiful data Dp. With the predictive mean and covariance matrix, we can obtain the effective kernel matrix via Equation 7. The left top panel in Figure 3 displays the heatmap of the corresponding pathology kernel in Equation 12, evaluated on a grid of 30 points over the input interval. The left bottom panel in Figure 3 shows sample functions generated from the effective GP as a prior, using the pathology kernel learned from the plentiful data. Here, we remark that sampling functions from DGP prior models is non-trivial, as we did not find many instances in the literature except (Duvenaud et al., 2014; Dunlop et al., 2018). In the end, the regression with the rare data is done by optimizing the evidence of Dr. In the present experiment, we fix ℓ in the pathology kernel and employ a grid search for the optimal values of σ2 and ℓ2. Panel (a) in Figure 4 shows the regression result, where the predictive uncertainty substantially improves upon the result from directly applying a GP to the rare data, which can be found in the Supplemental Material (Fig. 6, right panel). In particular, one may note that although part of the region contains no rare data, our predictive mean is still close to the truth and the uncertainty is small. The result from direct application of a standard GP has a similar predictive mean, but the uncertainty there is very large.
For comparison with the simulation results in (Cutajar et al., 2019), we follow the nonlinear-A example there and generate 30 plentiful data from the low-fidelity function and 10 rare data from the high-fidelity one. Learning from the plentiful data leads to the effective pathology kernel in the right top panel of Figure 3. Apparently, the learned covariance has a very different structure from that learned in the previous case, which shows the flexibility of our kernel learning. Two random functions sampled from the corresponding GP are displayed in the right bottom panel. It can be seen that the sampled functions possess the length scale learned from the low-fidelity function, and additional length scales, which may originate from the deep model, can be found too. Finally, we use the rare data for our effective GP regression, and the prediction is shown in panel (b) of Figure 4. In the present simulation, there are 30 random data as Dp and 10 as Dr, and the corresponding result using the code of Cutajar et al. (2019) can be found in panel (c) of Figure 4. The two methods show similar accuracy and tight uncertainty near the region where rare data is available. In the region where no rare data is seen, on the other hand, our method demonstrates a more accurate prediction (black curve) and a tighter uncertainty region between the blue dashed curves.
### 6.2 Noisy&noise-free data
Unlike the above cases, where the plentiful data has a fixed relation with the rare data, the noise-free data cannot be obtained by sending the noisy data through some predetermined transformation. Nevertheless, if we regard the marginalization over h in Equation 13 as a kind of model averaging (Goodfellow et al., 2016), we shall expect a tighter variance from the present regression algorithm.
In Figure 5, we demonstrate the regression result with 40 noisy random data from the high-fidelity function in the nonlinear-A example, together with 10 noise-free data from the same function. The black cross symbols represent the noisy data, while the red circles mark the noise-free ones. The light-blue curve and the shaded region represent the predictive mean and uncertainty from applying an RBF-GP to the union of both data sets. The relatively large noise and the lack of data in some regions result in significant uncertainty around that prediction. The predictive mean and uncertainty from our method are presented by the black curve and blue dashed curves, respectively. The predictions from both methods are similar, but the uncertainty in our method is much tighter, which is expected from the model-averaging perspective.
## 7 Conclusion
In this paper we propose a novel kernel learning method inspired by the multi-source Deep Gaussian Process structure. Our approach addresses two limitations of prior research on GPs: the need to choose a kernel, and the lack of explicit dependence on the observations in the predictive covariance. We resolve the reliance on experts to choose kernels by introducing new data-dependent kernels together with effective approximate inference; our results show that the method is effective, and we prove that our moment-matching approximation retains some of the multi-scale and long-ranged correlations characteristic of deep models. We resolve the lack of explicit dependence on observations in the predictive covariance by introducing the data-driven kernel; our results show the benefits of joint dependence on the input and the predicted variables, with reduced uncertainty in regions with sparse observations (e.g., Figure 4).
Central to the allure of Bayesian methods, including Gaussian Processes, is the ability to calibrate model uncertainty through marginalization over hidden variables. The power and promise of deep GPs lies in allowing rich composition of functions while maintaining the Bayesian character of inference over unobserved functions. Whereas most approaches are based on variational approximations for inference and Monte Carlo sampling in the prediction stage, our approach uses a moment-based approximation in which the deep GP is analytically approximated with a GP. For both, the full implications of these approximations are unknown. Through the analysis of higher moments, we show that our approach retains some of the important signatures of deep models, while avoiding the need for further optimization or sample-based approximation. Continued research is required to understand the full strengths and limitations of each approach.
## Appendix
From the true DGP distribution given in the main text, it can be shown with the theorem in Isserlis (1918) that the fourth moment is the sum,
E[fifjfmfl]=Ep(h)[k2(hi,hj)k2(hl,hm)]+Ep(h)[k2(hi,hl)k2(hj,hm)]+Ep(h)[k2(hi,hm)k2(hj,hl)], (15)
where the distribution of the quartet (hi, hj, hm, hl) is a multivariate Gaussian, specified by its mean vector V4 and covariance matrix K4,
V4 = (μi, μj, μm, μl)ᵀ,
and
K4 = [ kii, kij, kim, kil ; kij, kjj, kjm, kjl ; kim, kjm, kmm, kml ; kil, kjl, kml, kll ].
We shall focus on the first term in the sum,
k2(hi,hj)k2(hm,hl) = cos²[(hi−hj)/2] cos²[(hm−hl)/2],
which can be expressed in exponential form,
1/4 + (1/8)[e^{i(hi−hj)} + ⋯] + (1/16)[e^{i(hi−hj+hm−hl)} + ⋯], (16)
where there are a total of 4 similar terms, involving ±(hi−hj) and ±(hm−hl), in the first bracket. In the second bracket, there are also 4 similar terms with different sign combinations. Next we can apply the Gaussian identity E[exp(iλᵀh)] = exp(iλᵀV − λᵀKλ/2), valid for both the two-dimensional and four-dimensional cases. One can show that
E_{p(h)}[e^{i(hi−hj)}] = e^{i(μi−μj)} exp[(2kij − kii − kjj)/2], (17)
and
E_{p(h)}[e^{i(hi−hj±(hm−hl))}] = e^{i(μi−μj±(μm−μl))} exp[(2kij − kii − kjj)/2] exp[(2kml − kmm − kll)/2] exp(±Vij,ml), (18)
where we denote the exponential cross term by Vij,ml, built from the cross-covariances between the pairs (i,j) and (m,l); the plus/minus sign is associated with the sign of (hm−hl) in the exponent on the left-hand side. Now we can first collect the terms and obtain the second moment,
E[fifj] = [1 + cos(μi−μj) e^{Vij}]/2, (19)
where the two-indices symbol Vij = (2kij − kii − kjj)/2. Similarly, the fourth moment is given by,
E_{p(h)}[k2(hi,hj)k2(hl,hm)] = [1 + cos(μi−μj)e^{Vij} + cos(μm−μl)e^{Vml}]/4 + e^{Vij+Vml}[ cos(μi−μj+μm−μl) e^{Vij,ml}/8 + cos(μi−μj−μm+μl) e^{−Vij,ml}/8 ]. (20)
Comparing with the product of the corresponding second moments, the difference now reads,
e^{Vij+Vml}[ cos(μi−μj+μm−μl) e^{Vij,ml}/8 + cos(μi−μj−μm+μl) e^{−Vij,ml}/8 − cos(μi−μj)cos(μm−μl)/4 ].
Note that Vij,ml and the cosine factors can be positive or negative. The remaining two terms in the sum of Equation 15 follow an identical derivation, which completes the proof of Lemma 4.
## Supplemental Material
Here we present the comparison with the results obtained by directly applying an RBF-GP (GPy package) (GPy, 2012). We randomly generate 30 points as Dp and 10 points as Dr using the np.random.rand function in numpy with a random seed of 59. The data (black crosses) serve as relevant information for the target function (red line), which is partially known through the rare data (red circles). The predictive mean and variance are displayed with a black solid line and blue dashed lines, respectively. We also use the examples from the nonlinear-A (Fig. 7) and nonlinear-B (Fig. 8) cases in Cutajar et al. (2019).
## References
• Agrawal et al. (2020) Agrawal, D., Papamarkou, T., and Hinkle, J. (2020). Wide neural networks with bottlenecks are deep gaussian processes. arXiv preprint arXiv:2001.00921.
• Alain and Bengio (2014) Alain, G. and Bengio, Y. (2014). What regularized auto-encoders learn from the data-generating distribution. The Journal of Machine Learning Research, 15(1):3563–3593.
• Álvarez and Lawrence (2011) Álvarez, M. A. and Lawrence, N. D. (2011). Computationally efficient convolved multiple output gaussian processes. Journal of Machine Learning Research, 12(May):1459–1500.
• Bengio et al. (2013) Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013). Generalized denoising auto-encoders as generative models. In Advances in neural information processing systems, pages 899–907.
• Cho and Saul (2009) Cho, Y. and Saul, L. K. (2009). Kernel methods for deep learning. In Advances in neural information processing systems, pages 342–350.
• Cutajar et al. (2019) Cutajar, K., Pullin, M., Damianou, A., Lawrence, N., and González, J. (2019). Deep gaussian processes for multi-fidelity modeling. arXiv preprint arXiv:1903.07320.
• Damianou and Lawrence (2013) Damianou, A. and Lawrence, N. (2013). Deep gaussian processes. In Artificial Intelligence and Statistics, pages 207–215.
• Daniely et al. (2016) Daniely, A., Frostig, R., and Singer, Y. (2016). Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In NIPS.
• Dunlop et al. (2018) Dunlop, M. M., Girolami, M. A., Stuart, A. M., and Teckentrup, A. L. (2018). How deep are deep gaussian processes? The Journal of Machine Learning Research, 19(1):2100–2145.
• Duvenaud et al. (2013) Duvenaud, D., Lloyd, J. R., Grosse, R., Tenenbaum, J. B., and Ghahramani, Z. (2013). Structure discovery in nonparametric regression through compositional kernel search. arXiv preprint arXiv:1302.4922.
• Duvenaud et al. (2014) Duvenaud, D., Rippel, O., Adams, R., and Ghahramani, Z. (2014). Avoiding pathologies in very deep networks. In Artificial Intelligence and Statistics, pages 202–210.
• Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. MIT press.
• GPy (2012) GPy (since 2012). GPy: A gaussian process framework in python.
• Havasi et al. (2018) Havasi, M., Hernández-Lobato, J. M., and Murillo-Fuentes, J. J. (2018). Inference in deep gaussian processes using stochastic gradient hamiltonian monte carlo. In Advances in Neural Information Processing Systems, pages 7506–7516.
• Hensman et al. (2013) Hensman, J., Fusi, N., and Lawrence, N. D. (2013). Gaussian processes for big data. In Uncertainty in Artificial Intelligence, page 282. Citeseer.
• Hinton and Salakhutdinov (2008) Hinton, G. E. and Salakhutdinov, R. R. (2008). Using deep belief nets to learn covariance kernels for gaussian processes. In Advances in neural information processing systems, pages 1249–1256.
• Isserlis (1918) Isserlis, L. (1918). On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika, 12(1/2):134–139.
• Jacot et al. (2018) Jacot, A., Gabriel, F., and Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pages 8571–8580.
• Janati et al. (2019) Janati, H., Cuturi, M., and Gramfort, A. (2019). Wasserstein regularization for sparse multi-task regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1407–1416.
• Kaiser et al. (2018) Kaiser, M., Otte, C., Runkler, T., and Ek, C. H. (2018). Bayesian alignments of warped multi-output gaussian processes. In Advances in Neural Information Processing Systems, pages 6995–7004.
• Karakida et al. (2019) Karakida, R., Akaho, S., and Amari, S.-i. (2019). Universal statistics of fisher information in deep neural networks: Mean field approach. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1032–1041.
• Kazlauskaite et al. (2019) Kazlauskaite, I., Ek, C. H., and Campbell, N. (2019). Gaussian process latent variable alignment learning. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 748–757.
• Kennedy and O’Hagan (2000) Kennedy, M. C. and O’Hagan, A. (2000). Predicting the output from a complex computer code when fast approximations are available. Biometrika, 87(1):1–13.
• Lázaro-Gredilla (2012) Lázaro-Gredilla, M. (2012). Bayesian warped gaussian processes. In Advances in Neural Information Processing Systems, pages 1619–1627.
• Lee et al. (2017) Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., and Sohl-Dickstein, J. (2017). Deep neural networks as gaussian processes. arXiv preprint arXiv:1711.00165.
• Lu et al. (2019) Lu, C.-K., Yang, S. C.-H., Hao, X., and Shafto, P. (2019). Interpretable deep gaussian processes with moments. arXiv preprint arXiv:1905.10963.
• Moreno-Muñoz et al. (2018) Moreno-Muñoz, P., Artés-Rodríguez, A., and Álvarez, M. A. (2018). Heterogeneous multi-output Gaussian process prediction. In Advances in Neural Information Processing Systems (NeurIPS) 31.
• Neal (2012) Neal, R. M. (2012). Bayesian learning for neural networks, volume 118. Springer Science & Business Media.
• Perdikaris et al. (2017) Perdikaris, P., Raissi, M., Damianou, A., Lawrence, N., and Karniadakis, G. E. (2017). Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2198):20160751.
• Poole et al. (2016) Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J., and Ganguli, S. (2016). Exponential expressivity in deep neural networks through transient chaos. In NIPS.
• Raissi and Karniadakis (2016) Raissi, M. and Karniadakis, G. (2016). Deep multi-fidelity gaussian processes. arXiv preprint arXiv:1604.07484.
• Rasmussen and Williams (2006) Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Process for Machine Learning. MIT press, Cambridge, MA.
• Salimbeni and Deisenroth (2017) Salimbeni, H. and Deisenroth, M. (2017). Doubly stochastic variational inference for deep gaussian processes. In Advances in Neural Information Processing Systems.
• Shah et al. (2014) Shah, A., Wilson, A., and Ghahramani, Z. (2014). Student-t processes as alternatives to gaussian processes. In Artificial intelligence and statistics, pages 877–885.
• Snelson and Ghahramani (2006) Snelson, E. and Ghahramani, Z. (2006). Sparse gaussian processes using pseudo-inputs. In Advances in neural information processing systems, pages 1257–1264.
• Snelson et al. (2004) Snelson, E., Ghahramani, Z., and Rasmussen, C. E. (2004). Warped gaussian processes. In Advances in neural information processing systems, pages 337–344.
• Steinke and Schölkopf (2008) Steinke, F. and Schölkopf, B. (2008). Kernels, regularization and differential equations. Pattern Recognition, 41(11):3271–3286.
• Titsias (2009) Titsias, M. (2009). Variational learning of inducing variables in sparse gaussian processes. In Artificial Intelligence and Statistics, pages 567–574.
• Titsias and Lawrence (2010) Titsias, M. and Lawrence, N. (2010). Bayesian gaussian process latent variable model. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 844–851.
• Williams (1997) Williams, C. K. (1997). Computing with infinite networks. In Advances in neural information processing systems, pages 295–301.
• Wilson et al. (2016) Wilson, A. G., Hu, Z., Salakhutdinov, R., and Xing, E. P. (2016). Deep kernel learning. In Artificial Intelligence and Statistics, pages 370–378.
• Yang (2019) Yang, G. (2019). Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760.
https://www.lessonplanet.com/teachers/scotts-big-decision-a-look-at-the-decision-making-process | # Scott's Big Decision, A Look at the Decision Making Process
Students define the problem, list alternatives, state the criteria to consider, and evaluate the alternatives against those criteria in a chart, then interpret the chart to arrive at a decision.
http://math.stackexchange.com/questions/103576/the-systematic-method-for-the-explicit-construction-of-representations-of-su2 | # The systematic method for the explicit construction of representations of su(2) algebra? (e.g. pauli matrices)
How do I construct the su(2) representations of a given dimension?
There are many ways, but it's a rather lengthy derivation if you want to find all representations. I don't think people here will serve you the whole derivation. And I'm talking about finite dimensional representations here.
The usual way is to reformulate the algebra with raising and lowering operators and use these to construct the representation of the desired dimension. This is done by loads and loads of (computationally simple) eigenvector business.
You will find all the answers you're looking for in standard textbooks, for example An Elementary Introduction to Groups and Representations by Brian Hall.
Anyway, here is a direct path to construct a complex representation of any dimension:
You get the Lie Algebra $\text{su}(2)$ as the tangent space of the Lie Group $SU(2)$ at the unit element. How to get the $m$-dimensional, irreducible representations? You know the fundamental, two-dimensional representation acting on vectors $z=(z_1,z_2)$, i.e. the set of unitary matrices $U$ with complex entries and determinant $1$.
Now consider the polynomials of the form $$p_{m+1}(z)\equiv p_{m+1}(z_1,z_2)=a_0 z_1^m+a_1z_1^{m-1}z_2+a_2z_1^{m-2}z_2^2+ \cdots+ a_{m-1}z_1z_2^{m-1}+a_{m}z_2^{m},$$ viewed as vector space with elements $a=(a_0,a_1, \ldots, a_m)$, then $$\Pi_{m+1}(U):p_{m+1}(z)\longrightarrow p_{m+1}(U^{-1}z),$$ is an $m+1$-dimensional representation.
You can sit down with pen and paper, choose a small $m$ and watch how $U^{-1}$ messes up the coefficients of the polynomial (i.e. maps to another vector) for yourself. Consider a set of $a$-basis vectors and you have your $m+1$ dimensional $\Pi_{m+1}(U)$ matrix. Now express $U$ in terms of the three angles ($SU(2)$ is a three-dimensional manifold), compute the derivatives in all directions, set the angles to zero and you have your Lie algebra basis.
You also find the odd dimensional representations by considering representations of $SO(3)$, so you might want to study the behaviour of subsets of spherical harmonics under rotation.
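To make the ladder-operator route mentioned above concrete, here is a small numpy sketch (my illustration, not from the cited book) that builds the spin-$j$ matrices of dimension $2j+1$ and checks $[J_x, J_y] = iJ_z$; for $j=1/2$ you recover the Pauli matrices divided by two:

```python
import numpy as np

def spin_matrices(j):
    """Spin-j irreducible representation of su(2), dimension 2j+1."""
    m = np.arange(j, -j - 1, -1)                  # m = j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    # <j, m+1 | J_+ | j, m> = sqrt(j(j+1) - m(m+1)), placed on the superdiagonal
    Jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), k=1).astype(complex)
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

Jx, Jy, Jz = spin_matrices(1.5)                   # the 4-dimensional irrep
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)    # su(2) commutation relation
```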
https://math.stackexchange.com/questions/33558/two-weighted-coins-determining-which-has-a-higher-probability-of-landing-heads | # Two weighted coins, determining which has a higher probability of landing heads
A friend of mine asked me the following question, and I am not sure how to solve it:
You are given two weighted coins, $C_1$ and $C_2$. Coin $C_1$ has probability $p_1$ of landing heads and $C_2$ has probability $p_2$ of landing heads. The following experiment is preformed:
Coin $C_1$ is flipped 3 times, and lands heads 3 times.
Coin $C_2$ is flipped 10 times, and lands heads 7 times.
Based on this experiment, choose the coin which is more likely to have a higher probability of being heads. In other words, which is more likely: $p_1>p_2$ or $p_2>p_1$.
Intuition tells me coin $C_1$ is the better choice, but this could be wrong, and I am wondering how do you solve this in general. Consider the experiment, $C_1$ is flipped $n_1$ times and lands heads $m_1$ times, $C_2$ is flipped $n_2$ times and lands heads $m_2$ times.
Thanks for the help,
Edit: I think this might answer some questions: Suppose that the probabilities of the coins, $p_1$ and $p_2$ are chosen uniformly from $[0,1]$.
• You really need a prior distribution for how "weighted" the coins may be, and then use Bayesian techniques. – Henry Apr 18 '11 at 0:18
• I would be very interested in seeing a "prior-free", possibly frequentist, approach to this problem, if there is such a thing. I feel like Bayesian techniques get all the press these days. – Rahul Apr 18 '11 at 0:20
• I wonder if you can do this with nonparameteric statistics. The only test I can think of requires that the two sequences have the same length, i.e. $n_1=n_2$ and that's pretty dull here. – Carl Brannen Apr 18 '11 at 0:49
@Rahul: The question "which is more likely: $p_1 > p_2$ or $p_2 > p_1$" cries out for a solution which treats both as uncertain. – Henry Apr 18 '11 at 1:15
• @Eric: Yes - you get a figure of about 0.758 for the chance $p_1 > p_2$ – Henry Apr 18 '11 at 8:58
Given a specific probability $p$ of heads, the probability of getting $h$ heads and $t$ tails is just the binomial distribution: $P(H = h, T = t | n, p) = p^h (1-p)^t {n \choose h}$ (though we consider $p$ as varying, rather than $n$ and $h$).
With a uniform prior $g(p) = 1$, that probability is just the weight. The integral of this is $\frac{1}{1 + h + t}$, giving a probability density of $(1 + h + t) p^h (1-p)^t {n \choose h}$. The mean, the integral of $p$ times this, is $(1 + h + t) \int p^{(h+1)} (1-p)^t {n \choose h} dp$. Reusing the result from last time, we know that this must be $(1+h+t) {n \choose h}/{n+1 \choose h + 1}/(2+h+t) = (1+n)(h+1)/((n+1)(n+2)) = (h+1)/(n+2)$. This is slightly "hedged toward the center" from the naïve estimator $h/n$ (which is the peak of the distribution).
Another common prior that a Bayesian might use is the beta distribution. It's handy because it is a conjugate prior for the binomial distribution. After collecting data generated by the binomial distribution, the probability is still in the form of a beta distribution. In fact, the uniform prior is just the beta distribution with $\alpha = \beta = 1$. Heads and tail each just add one to the parameters $\alpha$ and $\beta$ respectively. The integrals were essentially worked out above -- factorials generalize to $\Gamma(x+1)$. It's often considered that this case of $\alpha = \beta = 1$ is too conservative, and that $\alpha = \beta = 1/2$ "assumes less" and "lets the data speak more".
With the Uniform Prior (Beta(1,1)), $\overline{p} = (h+1)/(n+2)$:
$C_1$, 3 heads, 0 tails: $\overline{p} = 4/5$
$C_2$, 7 heads, 3 tails: $\overline{p} = 8/12 = 2/3$
$C_1$ is expected to do better.
With the Beta(1/2, 1/2) prior, $\overline{p} = (h+1/2)/(n+1) = (2h+1)/(2n+2)$:
$C_1$, 3 heads, 0 tails: $\overline{p} = 7/8$
$C_2$, 7 heads, 3 tails: $\overline{p} = 15/22$
$C_1$ is expected to do better.
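For example, a quick Python check of the numbers above (any language would do):

    from fractions import Fraction
    def post_mean(h, n, a, b):                 # mean of Beta(a + h, b + n - h)
        return Fraction(a + h, a + b + n)
    print(post_mean(3, 3, 1, 1), post_mean(7, 10, 1, 1))   # 4/5, 2/3
    # Beta(1/2, 1/2) prior: (h + 1/2)/(n + 1) = (2h + 1)/(2n + 2)
    print(Fraction(2*3 + 1, 2*3 + 2), Fraction(2*7 + 1, 2*10 + 2))  # 7/8, 15/22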
Actually calculating the chances of $C_1$ being better than $C_2$ involves a rather nasty integral, but the calculated $\overline{p}$ values are enough to tell you which is the better bet.
• How do you prove that it's ok to compare estimators for $p_1$ and $p_2$ as you did here, instead of actually testing (or finding the probability of) the hypothesis that $p_1 > p_2$? It's intuitive, but does it not require proof? – ShreevatsaR Jun 2 '11 at 3:26
Huh, you're right @ShreevatsaR. This only gets $E[p_1 - p_2] > 0$, not $P(p_1 > p_2) > 1/2$. For most purposes what you want is the first, as payoffs are linear in $p$. – wnoise Jun 2 '11 at 13:45
As Henry mentions, I think one needs some information about the prior distributions of the weights.
Denote by $r$ the weight of a coin. Suppose that the weight has some prior distribution $g(r)$. Let $f(r|H=h, T=t)$ be the posterior probability density function of $r$ having observed $h$ heads and $t$ tails tossed. Bayes' Theorem tells us that: $$f(r|H=h, T=t) = \frac{Pr(H=h|r, N = h+t)g(r)}{\int_0^1 Pr(H=h|r, N = h+t)g(r)\ dr}.$$
This should allow you to answer your question. Once you have the posterior pdf for each coin, just find their respective expected weights.
You could do the integration which wnoise is talking about approximately using the following R code, and it is easily adapted to other cases:
> n <- 1000000 # number of cases to simulate for integration
> prior <- c(1,1) # Beta(1,1) is uniform prior
> coin_1 <- c(3,0) # number of heads and tails observed
> coin_2 <- c(7,3) # number of heads and tails observed
> p_1 <- rbeta(n, prior[1]+coin_1[1], prior[2]+coin_1[2])
> p_2 <- rbeta(n, prior[1]+coin_2[1], prior[2]+coin_2[2])
> p_diff <- p_1 - p_2
> length(p_diff[p_diff > 0]) / n # proportion with p_1 > p_2
[1] 0.758118
I agree with the beta approach, but given the question, I think it makes more sense to plot out the results and compare visually:
    x <- seq(0, 1, length = 1000)   # x from 0 to 1, since we're looking at probabilities
    y_1 <- dbeta(x, 4, 1)           # density from a prior of Beta(1,1) and data (3 heads, 0 tails)
    y_2 <- dbeta(x, 8, 4)           # density from a prior of Beta(1,1) and data (7 heads, 3 tails)
    plot(x, y_1, type = "l")        # plot density of coin 1 in black
    lines(x, y_2, col = 2)          # plot density of coin 2 in red
This yields the following:
Think of the densities as representing the "probability of probability".
• Coin 1 (represented by the black line) has a higher probability of having a higher probability of showing heads.
• Coin 2 (represented by the red line) has a lower probability of having a higher probability of showing heads (or a higher probability of having a lower probability of showing heads...).
http://math.stackexchange.com/questions/286009/proving-that-this-function-must-be-even-ii | # Proving that this function must be even (II)
Suppose $g:\mathbb{R}^d\rightarrow\mathbb{R}$ is continuous. Also let $\mathbf{x}=(x_1,\ldots,x_d)\in\mathbb{R}^d$.
I'd like to prove the following:
If
$$\int_{\mathbb{R}^d}x_k\exp(-\mathbf{x}^T\mathbf{x})f(g(x))\,d\mathbf{x}=0$$
for all bounded continuous functions $f$, then $g(\mathbf{x})=g(-\mathbf{x})$ for all $\mathbf{x}\in\mathbb{R}^d$, that is $g$ is an even function.
Something is missing here, I think. Right now you are integrating a vector-valued function - perhaps your integrand is supposed to be $\vert\textbf{x}\vert\exp(-\textbf{x}^\intercal\textbf{x})f(g(\textbf{x}))$? – icurays1 Jan 24 '13 at 19:06
Presumably the integral is the vector valued integral, ie, $[\int_{\mathbb{R}^d}\mathbf{x}\exp(-\mathbf{x}^T\mathbf{x})f(g(x))\,d\mathbf{x}]_k = \int_{\mathbb{R}^d}\mathbf{x_k}\exp(-\mathbf{x}^T\mathbf{x})f(g(x))\,d\mathbf{x}$. – copper.hat Jan 24 '13 at 19:31
Sorry I meant what copper.hat has written. Thanks for spotting the mistake! – red271 Jan 25 '13 at 8:24
https://www.hackmath.net/en/math-problem/1225 | # Vector - basic operations
There are given points A [-9; -2] B [2; 16] C [16; -2] and D [12; 18]
a. Determine the coordinates of the vectors u=AB v=CD s=DB
b. Calculate the sum of the vectors u + v
c. Calculate difference of vectors u-v
d. Determine the coordinates of the vector w = -7.u
ux = 11
uy = 18
vx = -4
vy = 20
sx = 10
sy = 2
(u+v)x = 7
(u+v)y = 38
(u-v)x = 15
(u-v)y = -2
wx = -77
wy = -126
### Step-by-step explanation:
u = AB = B − A = (2−(−9); 16−(−2)) = (11; 18), i.e. ux = 11, uy = 18

v = CD = D − C = (12−16; 18−(−2)) = (−4; 20), i.e. vx = −4, vy = 20

s = (12−2; 18−16) = (10; 2), i.e. sx = 10, sy = 2

u + v = (11+(−4); 18+20) = (7; 38)

u − v = (11−(−4); 18−20) = (15; −2)

w = −7·u = (−7·11; −7·18) = (−77; −126)
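The same arithmetic can be checked with a few lines of Python (illustration only):

```python
import numpy as np
A, B, C, D = map(np.array, ([-9, -2], [2, 16], [16, -2], [12, 18]))
u, v = B - A, D - C
print(u, v, u + v, u - v, -7 * u)   # [11 18] [-4 20] [7 38] [15 -2] [-77 -126]
```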
## Related math problems and questions:
• Vectors
Vector a has coordinates (8; 10) and vector b has coordinates (0; 17). If the vector c = b - a, what is the magnitude of the vector c?
• Vectors
For vector w is true: w = 2u-5v. Determine coordinates of vector w if u=(3, -1), v=(12, -10)
• Coordinates of vector
Determine the coordinate of a vector u=CD if C(19;-7) and D(-16;-5)
• Coordinates of a centroind
Let’s A = [3, 2, 0], B = [1, -2, 4] and C = [1, 1, 1] be 3 points in space. Calculate the coordinates of the centroid of △ABC (the intersection of the medians).
• Vector
Determine coordinates of the vector u=CD if C[19;-7], D[-16,-5].
• Vector sum
The magnitude of the vector u is 12 and the magnitude of the vector v is 8. Angle between vectors is 61°. What is the magnitude of the vector u + v?
• Three points 2
The three points A(3, 8), B(6, 2) and C(10, 2). The point D is such that the line DA is perpendicular to AB, and DC is parallel to AB. Calculate the coordinates of D.
• Vector v4
Find the vector v4 perpendicular to vectors v1 = (1, 1, 1, -1), v2 = (1, 1, -1, 1) and v3 = (0, 0, 1, 1)
• Two forces
The two forces F1 = 580N and F2 = 630N, have an angle of 59 degrees. Calculate their resultant force, F.
• Vectors abs sum diff
The vectors a = (4,2), b = (- 2,1) are given. Calculate: a) |a+b|, b) |a|+|b|, c) |a-b|, d) |a|-|b|.
• Unit vector 2D
Determine coordinates of unit vector to vector AB if A[-6; 8], B[-18; 10].
• Linear independence
Determine if vectors u=(-4; -10) and v=(-2; -7) are linear dependent.
• Three points
Three points K (-3; 2), L (-1; 4), M (3, -4) are given. Find out: (a) whether the triangle KLM is right b) calculate the length of the line to the k side c) write the coordinates of the vector LM d) write the directional form of the KM side e) write the d
• Space vectors 3D
The vectors u = (1; 3; -4), v = (0; 1; 1) are given. Find the size of these vectors, calculate the angle of the vectors, the distance between the vectors.
• Line
Straight-line passing through points A [-3; 22] and B [33; -2]. Determine the total number of points of the line in which both coordinates are positive integers.
• Cuboids
Two separate cuboids with different orientation in space. Determine the angle between them, knowing the direction cosine matrix for each separate cuboid. u1=(0.62955056, 0.094432584, 0.77119944) u2=(0.14484653, 0.9208101, 0.36211633)
• Points on circle
In the Cartesian coordinate system with the origin O is a sketched circle k /O; r=2 cm/. Write all the points that lie on a circle k and whose coordinates are integers. Write all the points that lie on the circle I / O; r=5 cm / and whose coordinates are
https://experts.mcmaster.ca/display/publication1492041 | # Counting Lyndon Factors Academic Article
•
• Overview
•
• Research
•
• Identity
•
In this paper, we determine the maximum number of distinct Lyndon factors that a word of length $n$ can contain. We also derive formulas for the expected total number of Lyndon factors in a word of length $n$ on an alphabet of size $\sigma$, as well as the expected number of distinct Lyndon factors in such a word. The minimum number of distinct Lyndon factors in a word of length $n$ is $1$ and the minimum total number is $n$, with both bounds being achieved by $x^n$ where $x$ is a letter. A more interesting question to ask is what is the minimum number of distinct Lyndon factors in a Lyndon word of length $n$? In this direction, it is known (Saari, 2014) that a lower bound for the number of distinct Lyndon factors in a Lyndon word of length $n$ is $\lceil\log_{\phi}(n) + 1\rceil$, where $\phi$ denotes the golden ratio $(1 + \sqrt{5})/2$. Moreover, this lower bound is sharp when $n$ is a Fibonacci number and is attained by the so-called finite Fibonacci Lyndon words, which are precisely the Lyndon factors of the well-known infinite Fibonacci word $\boldsymbol{f}$ (a special example of an infinite Sturmian word). Saari (2014) conjectured that if $w$ is a Lyndon word of length $n$, $n\ne 6$, containing the least number of distinct Lyndon factors over all Lyndon words of the same length, then $w$ is a Christoffel word (i.e., a Lyndon factor of an infinite Sturmian word). We give a counterexample to this conjecture. Furthermore, we generalise Saari's result on the number of distinct Lyndon factors of a Fibonacci Lyndon word by determining the number of distinct Lyndon factors of a given Christoffel word. We end with two open problems.
http://physics.stackexchange.com/questions/27023/unitarity-of-s-matrix-in-qft | Unitarity of S-matrix in QFT
I am a beginner in QFT, and my question is probably very basic.
As far as I understand, usually in QFT, in particular in QED, one postulates the existence of IN and OUT states. Unitarity of the S-matrix is also essentially postulated. On the other hand, in the more classical and better-understood non-relativistic scattering theory, unitarity of the S-matrix is a non-trivial theorem, proved under some assumptions on the scattering potential which are not satisfied automatically in general. For example, unitarity of the S-matrix may be violated if the potential is too strongly attractive at small distances: in that case a particle (or two particles interacting with each other) may approach each other from infinity and form a bound state. (However, the Coulomb potential is not attractive enough for this phenomenon.)
The first question is why this cannot happen in the relativistic situation, say in QED. Why can an electron and a positron (or better, an anti-muon) not approach each other from infinity and form a bound state?
As far as I understand, this would contradict the unitarity of the S-matrix. On the other hand, in principle the S-matrix can be computed, using Feynman rules, to any order of approximation in the coupling constants. Thus, in principle, unitarity of the S-matrix could probably be checked in this sense to any order.
The second question is whether such a proof, for QED or any other theory, was done anywhere? Is it written somewhere?
Why do you say that two particles can't form a bound state in QFT? I'm pretty sure there are two-dimensional integrable field theories with scattering $A+B \to C$ and where $A$, $B$ and $C$ are perfectly stable particle states. – Sidious Lord Feb 26 '12 at 15:36
@Sidious Lord: Can I read somewhere about such examples? Can it happen in QED? (As far as I heard, the 2d case is somewhat exceptional in QED: in the Schwinger model polarization of vacuum has an effect of creation of a bound state of electron-positron pair which is a free boson. But I might be wrong about this, I do not really know this.) – MKO Feb 26 '12 at 18:56
In principle, bound states are possible in a QFT. In this case, their states must be part of the S-matrix in- and out- state space in order that the S-matrix is unitary. (Weinberg, QFT I, p.110)
However, for QED proper (i.e., without any other species of particles apart from photon, electron, and positron) it happens that there are no bound states; electron and positron only form positronium, which is unstable, and decays quickly into two photons. http://en.wikipedia.org/wiki/Positronium
[Edit: Positronium is unstable: http://arxiv.org/abs/hep-ph/0310099 - muonium is stable electromagnetically (i.e., in QED + muon without weak force), but decays via the weak interaction, hence is unstable, too: http://arxiv.org/abs/nucl-ex/0404013. About how to make muonium, see page 3 of this article, or the paper discovering muonium, Phys. Rev. Lett. 5, 63–65 (1960). There is no obstacle in forming the bound state; due to the attraction of unlike charges, an electron is easily captured by an antimuon.]
Note that the current techniques for relativistic QFT do not handle bound states well. Bound states of two particles are (in the simplest approximation) described by Bethe-Salpeter equations. The situation is technically difficult because such bound states always have multiparticle contributions.
Unitarity of the S-matrix can be checked perturbatively. Bound states tend to be non-perturbative effects, so they may not show up in naive perturbative calculations. Unfortunately, the detailed proof is not discussed in many places. One book that has it is Scharf's book on QED. When looking through other books you should look for keywords like the optical theorem and Cutkosky rules. Bound states are usefully discussed in the last chapter of vol. 1 of Weinberg's treatise on QFT.
In and Out states are not necessarily free states; they can be bound states too, so transitions from free to bound states are also possible. In the case of QED with an electron–antimuon bound state, its formation is accompanied by photon emission present in the final system state. It does not contradict unitarity.
Problems with proofs in QED and other QFTs are due to the wrong coupling term like $jA$, which is not correct alone and is corrected with counterterms. In addition, these counterterms cannot be treated exactly but only perturbatively, so the true interaction of the true constituents is not seen.
Thanks for the comment. I realize that In and Out states may be bound states in principle. However, in QED bound states are not taken into account. That means that free electrons, muons etc. cannot become a bound state (am I wrong?). Also, I realize that when one uses Feynman rules to compute the S-matrix, one should include all counterterms. So I think it does not really answer the question. – MKO Feb 26 '12 at 12:40
There is a cross section for producing bound states when two opposite-charge particles collide. All that is necessary is to emit the excess energy-momentum as photons, which is quite possible. Also it is possible to create a pair in the final state that is in a bound state, not a free electron and positron. In QED there is no problem with unitarity in this respect. Renormalized and infra-red fixed QED is an adequate theory. Feynman rules can include bound states in the In and Out states, as a matter of fact. – Vladimir Kalitvianski Feb 26 '12 at 16:47
If I understand correctly, in QED in 4d space-time there is no cross-section of producing bound states when two opposite-charge particles collide. Definitely in the non-relativistic setting two particle coming from infinity and interacting according to Coulomb law (at short distances) cannot collide. – MKO Feb 26 '12 at 18:50
A pair of non-interacting electron and positron is described by a product of two plane waves. A bound state is described by a product of a plane wave (center of mass) and a wave function of the bound state, easy. – Vladimir Kalitvianski Feb 26 '12 at 19:53
http://michaelnielsen.org/polymath1/index.php?title=Ergodic-inspired_methods&diff=prev&oldid=1329 | # Difference between revisions of "Ergodic-inspired methods"
These methods are inspired by the Furstenberg-Katznelson argument and the ergodic perspective.
## Idea #1: extreme localisation
Let $A \subset [3]^n$ be line-free with density $\delta$. Let $m = m(\delta)$ be a medium-sized integer independent of n. We embed $[3]^m$ inside $[3]^n$ to create a random set $A_m \subset [3]^m$ which enjoys stationarity properties. We then look at the events $E_{i,j}$ for $1 \leq i \leq j \leq m$, which are the events that $1^i 0^{j-i} 2^{m-j}$ lies in $A_m$. As A is line-free, we observe that $E_{i,i}, E_{i,j}, E_{j,j}$ cannot simultaneously occur for any $1 \leq i \lt j \leq m$. Also, each of the $E_{i,j}$ has probability about $\delta$.
On the other hand, by the first moment method, many of the $E_{i,i}$ hold with positive probability. Some Cauchy-Schwarz then tells us that there exist $1 \leq i \lt i' \lt j \lt j' \leq m$ such that $E_{i,j} \wedge E_{i',j} \wedge E_{i,j'} \wedge E_{i',j'}$ has probability significantly larger than $\delta^4$.
One can view the events $E_{i,j}$ as an $(i+m-j)$-uniform hypergraph, by fixing a base point x and viewing the random subspace $[3]^m$ as formed by modifying x on m random indices. The above correlation would mean some significant irregularity in this hypergraph; the hope is that this implies some sort of usable structure on A that can be used, for instance, to locate a density increment.
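For intuition, here is a small Python sketch of the embedding and the events $E_{i,j}$ (the set A below is an arbitrary placeholder rather than a genuinely line-free set, and the embedding ignores the stationarity requirements; it only illustrates the combinatorics):

```python
import itertools, random

n, m = 8, 4  # toy sizes; the argument needs m = m(delta) and n much larger

# Placeholder set A in [3]^n (NOT actually line-free in general).
A = {w for w in itertools.product((0, 1, 2), repeat=n) if w.count(1) != w.count(2)}

def random_embedding(A, n, m):
    """Embed [3]^m into [3]^n: pick m wildcard indices of a base point x,
    and return the induced subset A_m of [3]^m."""
    x = [random.choice((0, 1, 2)) for _ in range(n)]
    idx = random.sample(range(n), m)
    A_m = set()
    for w in itertools.product((0, 1, 2), repeat=m):
        y = list(x)
        for pos, val in zip(idx, w):
            y[pos] = val
        if tuple(y) in A:
            A_m.add(w)
    return A_m

def event_E(A_m, i, j, m):
    """E_{i,j}: the string 1^i 0^(j-i) 2^(m-j) lies in A_m."""
    return tuple([1] * i + [0] * (j - i) + [2] * (m - j)) in A_m

# Estimate P(E_{i,j}) over many random embeddings.
trials = 2000
counts = {(i, j): 0 for i in range(1, m + 1) for j in range(i, m + 1)}
for _ in range(trials):
    A_m = random_embedding(A, n, m)
    for (i, j) in counts:
        counts[(i, j)] += event_E(A_m, i, j, m)
for (i, j), c in sorted(counts.items()):
    print(f"P(E_{{{i},{j}}}) ~ {c / trials:.3f}")
```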
## Idea #2: IP Roth first
McCutcheon.508 (revised 2-17): I will give my general idea for a proof. I'm pretty sure it's sound, though it may not be feasible in practice. On the other hand, I may be badly mistaken about something. I will throw it out there for someone else to attempt, or say why it's nonsense, or perhaps ignore. I won't formulate it as a strategy to prove DHJ, but rather one for what I've called IP Roth. If successful, one could possibly adapt it to the DHJ, k=3 situation, but there would be complications that would obscure what was going on.
We work in $X=[n]^{[n]}\times [n]^{[n]}.$ For a real valued function $f$ defined on $X$, define $||f||_1=(\mathrm{IP-lim}_a\mathrm{IP-lim}_b {1\over |X|}\sum_{(x,y)\in X} f((x,y))f((x+a,y))f((x+b,y-b))f((x+a+b,y-b)))^{1/4},$
$||f||_2=(\mathrm{IP-lim}_a\mathrm{IP-lim}_b {1\over |X|}\sum_{(x,y)\in X} f((x,y))f((x,y+a))f((x+b,y-b))f((x+b,y+a-b)))^{1/4}.$ Now, let me explain what this means. $a$ and $b$ are subsets of $[n]$, and we identify $a$ with the characteristic function of $a$, which is a member of $[n]^{[n]}$. (That is how we can add $a$ to $x$ inside, etc.) Since $[n]$ is a finite set, you can't really take limits, but if $n$ is large, we can do something almost as good, namely ensure that whenever $\max a\lt\min b$, the expression we are taking the limit of is close to something (Milliken–Taylor ensures this, I think). Of course, you have to restrict $a$ and $b$ to a subspace. What is a subspace? You take a sequence $a_i$ of subsets of $[n]$ with $\max a_i\lt\min a_{i+1}$ and then restrict to unions of the $a_i.$
Now here is the idea. Take a subset $E$ of $X$ and let $f$ be its balanced indicator function. You first want to show that if either of the above-defined norms of $f$ is small, then $E$ contains about the right number of corners $\{ (x,y), (x+a,y), (x,y+a)\}$, restricted to the subspace of course. What does that mean? Well, you treat each of the $a_i$ as a single coordinate, moving them together. The other coordinates I'm not sure about. Maybe you can just fix them in the right way and have the norm that was small summing over all of $X$ still come out small. At any rate, the real trick is to show that if both coordinate norms are big, you get a density increment on a subspace. Here a subspace surely means that you find some $a_i$s, treat them as single coordinates, and fix the values on the other coordinates. (If the analogy with Shkredov's proof of the Szemerédi corners theorem holds, you probably only need one of these norms to be big....)
https://projecteuclid.org/euclid.aoms/1177704584 | ## The Annals of Mathematical Statistics
### On the Order Structure of the Set of Sufficient Subfields
D. L. Burkholder
#### Abstract
In [5], the concept of statistical sufficiency is studied within a general probability setting. The study is continued here. The notation and definitions of [5] are used. Here we give an example of sufficient statistics $t_1$ and $t_2$ such that the pair $(t_1, t_2)$ is not sufficient. The example also has the property that, in a sense to be made precise, no smallest sufficient statistic containing $t_1$ and $t_2$ exists. In Example 4 of [5], sufficient subfields $\mathbf{A}_1$ and $\mathbf{A}_2$ are exhibited such that $\mathbf{A}_1 \vee \mathbf{A}_2$, the smallest subfield containing $\mathbf{A}_1$ and $\mathbf{A}_2$, is not sufficient. Such an example is given here with the even stronger property that no smallest sufficient subfield containing $\mathbf{A}_1$ and $\mathbf{A}_2$ exists. Let $(X, \mathbf{A}, P)$ be the probability structure under consideration. Here $X$ is a set, $\mathbf{A}$ is a $\sigma$-field of subsets of $X$, and $P$ is a family of probability measures $p$ on $\mathbf{A}$. Let $\mathbf{N}$ be the smallest $\sigma$-field containing the $P$-null sets and let $\mathbf{K}$ be the collection of sufficient subfields of $\mathbf{A}$ containing $\mathbf{N}$. (Restricting attention to sufficient subfields containing $\mathbf{N}$ is technically convenient. Note that any sufficient subfield is equivalent, in the usual sense, to one containing $\mathbf{N}$.) Some of the properties of $\mathbf{K}$ can be described in the language of lattice theory as follows. Let $\mathbf{L}$ be the set of subfields (= sub-$\sigma$-fields) of $\mathbf{A}$. Then $\mathbf{L}$, partially ordered by inclusion, is a complete lattice. (Our terminology is essentially that of Birkhoff [4].) Example 4 of [5], mentioned above, shows that $\mathbf{K}$ is not always a sublattice of $\mathbf{L}$. The example given below shows more: the set $\mathbf{K}$, partially ordered by inclusion, is not always a lattice in its own right. Note, however, that if $\mathbf{H}$ is a finite, or even countable, subset of $\mathbf{K}$, then the greatest lower bound of $\mathbf{H}$ relative to $\mathbf{L}$ exists and is in $\mathbf{K}$ ([5], Corollary 2). The difficulty is with the least upper bound. There is less difficulty if $\mathbf{A}$ is separable. Corollaries 2 and 4 of [5] indicate that if $\mathbf{A}$ is separable, then $\mathbf{K}$ is a $\sigma$-complete sublattice of $\mathbf{L}$. This is about as strong a result as could be expected here. For even if $\mathbf{A}$ is separable, $\mathbf{K}$ is sometimes neither complete nor conditionally complete: each of the nonsufficient subfields exhibited in Example 1 of [5] is easily seen to be both the greatest lower bound of a subset of $\mathbf{K}$ and the least upper bound of a subset of $\mathbf{K}$. There is no difficulty if $P$ is dominated. If $P$ is dominated, then $\mathbf{K}$ is a complete sublattice of $\mathbf{L}$. This follows easily from the existence in this case (Bahadur [2], Theorems 6.2 and 6.4; Loève [6], Section 24.4) of a subfield $\mathbf{A}_0$ in $\mathbf{K}$ such that $\mathbf{K} = \{\mathbf{B} \mid \mathbf{B} \in \mathbf{L},\ \mathbf{A}_0 \subset \mathbf{B}\}$.
#### Article information
Source
Ann. Math. Statist., Volume 33, Number 2 (1962), 596-599.
Dates
First available in Project Euclid: 27 April 2007
https://projecteuclid.org/euclid.aoms/1177704584
Digital Object Identifier
doi:10.1214/aoms/1177704584
Mathematical Reviews number (MathSciNet)
MR137227
Zentralblatt MATH identifier
0127.34808
JSTOR | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478095769882202, "perplexity": 169.3523409504003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795403.76/warc/CC-MAIN-20191022004128-20191022031628-00019.warc.gz"} |
http://fabien.galerio.org/drivingrain/fjrvanmook2002/node26.htm | # 6.1 Wind calculation method
Numerical simulations of the wind around the Main Building were performed with the commercially available CFD package Fluent. We used versions 4.4 and 4.5; the differences between these two versions are not relevant for our purpose. The turbulence model used is a standard k–ε model ([Fluent 1995]; see e.g. [Launder and Spalding 1974] and [Rodi 1980]).
Model constants Except for one constant, the standard values of the k–ε model have been applied (the model and the constants are described in [Launder and Spalding 1974]). That constant was adapted according to the findings of [Bottema 1993b], who also compared the results of his simulations favourably with wind tunnel measurements.
Grid The applied grid is a so-called structured grid. The size of the three-dimensional computational domain was determined by the rules of thumb given by [Bottema 1993a], based on general estimates of the influence zone in which wind speeds deviate by more than 10% due to the presence of the building. The reader is also referred to section 2.1.4. For the description of this zone a characteristic dimension L, the smaller of two building dimensions, is used. The upstream influence zone, its downstream counterpart, and the influence zones in the lateral and vertical directions all extend to fixed multiples of L. The boundaries of the computational domain should be outside the influence zone, although for the downstream influence zone one may make an exception and put the boundary somewhat closer [Bottema 1993a].
As L is 90 m in our case, the computational domain should be about 13 L (= 1170 m) long along the wind direction. To be able to simulate wind flow around the Main Building at oblique angles, the chosen computational domain is rectangular, with the Main Building situated in the south-west corner at equal distances from the south and the west domain boundaries.
The actual three-dimensional computational domain is 1190 m long in east-west direction, 1477 m long in north-south direction and 225 m high. It consists of 95, 96 and 47 cells in these directions, respectively. Figure 6.1 shows the computational grid with the three buildings (Auditorium, Main Building and Building T) in it. Two variants of the shown configuration will be applied: with and without Building T. There are two reasons for this. Firstly, the influence of Building T on winds from south-west to south can be presented. Secondly, the grid without Building T is symmetric with respect to the east-west plane. This implies that winds from directions (270 − φ)° and (270 + φ)° (where φ is a value from 0 to 90) can be studied using one simulation.
The grid cells become progressively smaller near building boundaries. The first grid cells on the façade of the Main Building have a thickness of 0.25 m. Care is taken to keep the grid expansion factor of two successive grid lines between 0.7 and 1.3. However, since the grid is structured, undesired large expansion factors and cell aspect ratios are inevitable in some parts of the grid.
Wind profile The profile of the wind entering the domain is described by:

$u(z) = \frac{u_{*1}}{\kappa} \ln\left(\frac{z}{z_{01}}\right) \quad (z \le 20\ \mathrm{m})$  (6.1)

and:

$u(z) = \frac{u_{*2}}{\kappa} \ln\left(\frac{z-d}{z_{02}}\right) \quad (z > 20\ \mathrm{m})$  (6.2)

with $u$ = longitudinal wind velocity [m s⁻¹] at height $z$ [m] above ground level, $z_{01}$ = roughness length [m] for $z \le$ 20 m, $z_{02}$ = roughness length [m] for $z >$ 20 m, $u_{*1}$ = friction velocity [m s⁻¹], $u_{*2}$ = friction velocity [m s⁻¹] resulting from the requirement that $u$ is continuous at 20 m, $\kappa$ = von Kármán constant (0.4), and $d$ = displacement height [m].
The applied values of $z_{02}$ = 1.0 m and $d$ = 10 m were taken from results of measurements at the site by [Geurts 1997]. The division of the wind profile into two parts is necessary to account for the displacement height of 10 m; otherwise, the wind profile below 10 m height would be undetermined. Moreover, the fetch consists of a park up to a distance of 400 m from the Main Building (hence an estimated $z_{01}$ of 0.1 m) and buildings west of the park with a height of 20 m. The choice of the roughness lengths and the choice that the boundary between the two parts of the profile is at 20 m height are relatively arbitrary, but the simulation results are not very sensitive to the precise values of these parameters.
The friction velocity is based on the wind speed at Eindhoven Airport (7.5 km westward from the Main Building, with a measurement height of 10 m, a roughness length of 0.03 m and a displacement height of 0 m). See section 5.2.1 for a discussion of these measurements.
Turbulent kinetic energy The profiles of the turbulent kinetic energy and its dissipation rate for wind coming into the domain are described by:

$k = \frac{u_*^2}{\sqrt{C_\mu}}$  (6.3)

and:

$\varepsilon(z) = \frac{u_*^3}{\kappa z}$  (6.4)

respectively, with $k$ = turbulent kinetic energy per unit of mass [m² s⁻²], and $\varepsilon$ = energy dissipation rate [m² s⁻³].
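A short numerical sketch of these inlet profiles follows (the log-law and equilibrium k–ε forms as given above; the value of the lower friction velocity U_STAR1 is purely illustrative, and the continuity condition at 20 m is used to fix u*2 as described):

```python
import numpy as np

KAPPA = 0.4          # von Karman constant
C_MU = 0.09          # k-epsilon model constant (standard value)
Z01, Z02 = 0.1, 1.0  # roughness lengths [m] below / above 20 m
D = 10.0             # displacement height [m]
Z_MATCH = 20.0       # height [m] where the two profile parts meet
U_STAR1 = 0.3        # lower friction velocity [m/s]; illustrative value

# Continuity of u at z = 20 m fixes the upper friction velocity u*2.
U_STAR2 = U_STAR1 * np.log(Z_MATCH / Z01) / np.log((Z_MATCH - D) / Z02)

def u(z):
    """Two-part logarithmic inlet wind profile (eqs. 6.1 and 6.2)."""
    z = np.asarray(z, dtype=float)
    return np.piecewise(
        z, [z <= Z_MATCH, z > Z_MATCH],
        [lambda s: (U_STAR1 / KAPPA) * np.log(s / Z01),
         lambda s: (U_STAR2 / KAPPA) * np.log((s - D) / Z02)])

def k_inlet():
    """Equilibrium turbulent kinetic energy (eq. 6.3), upper layer."""
    return U_STAR2**2 / np.sqrt(C_MU)

def eps_inlet(z):
    """Equilibrium dissipation rate (eq. 6.4), upper layer."""
    return U_STAR2**3 / (KAPPA * np.asarray(z, dtype=float))

for z in (2.0, 10.0, 20.0, 50.0, 100.0, 225.0):
    print(f"z = {z:6.1f} m : u = {u(z):5.2f} m/s")
print(f"k = {k_inlet():.3f} m2/s2, eps(50 m) = {eps_inlet(50.0):.5f} m2/s3")
```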
Terrain roughness In Fluent the roughness of surfaces is modelled by the following formula:
$\frac{u}{u_*} = \frac{1}{\kappa} \ln\left(E\,\frac{u_* z}{\nu}\right)$  (6.5)
where $E$ is a roughness parameter [-] and $\nu$ is the kinematic viscosity [m² s⁻¹] of air.
The roughness parameter is empirically determined. Its value is 9.8 for a smooth wall. Equation 6.5 corresponds to the wind profile (eqs. 6.1 and 6.2) if:
$E = \frac{\nu}{u_*\, z_0}$  (6.6)
Façade roughness Equations 6.5 and 6.6 are also applied to model the surface roughness of the building façades. As the façade consists of a smooth glass cladding, a value of 0.0005 m is assumed for its roughness length.
Separation modelling Separation of the airflow at corners has been modelled by so-called "link-cuts" (i.e. a feature of Fluent by which the wall function in a computational cell is disabled).
Chosen wind speeds and directions The following reference wind speeds and directions at the mast on the Auditorium were chosen for the wind calculations:
• 3.5, 5.7 and 11.2 m s⁻¹,
• 210°, 240°, 270°, 300° and 330°.
Not every combination was simulated; see table 6.2 for the simulations which were actually performed. Recall that we use two geometrical models, namely with and without the inclusion of Building T, and that, because of symmetry, the simulations with 240° will also be used to represent 300°. The validity of these "double simulations" will be discussed in section 6.3.
© 2002 Fabien J.R. van Mook
ISBN 90-6814-569-X
Published as issue 69 in the Bouwstenen series of the Faculty of Architecture, Planning and Building of the Eindhoven University of Technology. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8933719396591187, "perplexity": 921.6053951210046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803848.60/warc/CC-MAIN-20171117170336-20171117190336-00731.warc.gz"} |
https://brilliant.org/problems/integration-of-the-greatest-integer-function-2/ | Integration of the greatest integer function #2
Calculus Level 3
$\large \int\limits_0^{\pi} \left\lfloor x\right\rfloor \, dx = \ ?$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.916448712348938, "perplexity": 2074.5402716561225}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361510.12/warc/CC-MAIN-20210228145113-20210228175113-00127.warc.gz"} |
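Since the floor function is piecewise constant, the integral can be evaluated by splitting $[0,\pi]$ at the integers:

$\int_0^{\pi} \lfloor x \rfloor \, dx = \int_0^1 0\,dx + \int_1^2 1\,dx + \int_2^3 2\,dx + \int_3^{\pi} 3\,dx = 0 + 1 + 2 + 3(\pi - 3) = 3\pi - 6 \approx 3.42$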
http://mathhelpforum.com/calculus/210478-integral.html | # Math Help - Integral
1. ## Integral
Hello! How would you integrate 2^x / ln2? Thanks already!
2. ## Re: Integral
$\displaystyle 2^x=e^{\ln 2^x}=e^{x \ln 2}$
3. ## Re: Integral
note that:
$\frac{1}{\ln(2)} = \frac{\ln(2)}{(\ln(2))^2}$
you can take the denominator (which is just a constant) outside the integral, so:
$\int \frac{2^x}{\ln(2)}\ dx = \frac{1}{(\ln(2))^2}\int \ln(2)2^x\ dx = \frac{1}{(\ln(2))^2} \int e^{\ln(2)x} \ln(2)\ dx$
if you use the substitution $u = \ln(2)x$ what would $du$ be? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990562796592712, "perplexity": 4574.930002726203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300031.56/warc/CC-MAIN-20150323172140-00056-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://docs.precisely.com/docs/sftw/spectrum/22.1/en/webhelp/Spatial/ERM/source/Rest/ClosestArcSnapping/impact_on_route_and_matrix_calculations.html | # Impact on Route and Matrix Calculations
In some cases, the user receives the error "Path could not be calculated." This error is caused by restricted closest arcs. The improved snapping logic helps reduce such errors by avoiding restricted arcs.
The improved logic takes the closest arc into consideration and attempts to ignore restricted arcs, thus reducing errors. For example, if there is a restricted road near the start or end point, the new snapping logic ignores the restricted path and finds another way to calculate the route matrix.
With fewer errors caused by restricted arcs, the performance of route and matrix calculations increases, because the routing algorithm has fewer path complexities to handle.
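The idea behind the improved snapping can be sketched in a few lines of Python (an illustrative outline only; the `Arc` representation and the distances here are hypothetical and are not the product's actual API):

```python
from dataclasses import dataclass

@dataclass
class Arc:
    id: int
    restricted: bool
    distance_m: float  # distance from the point to this arc

def snap_to_arc(candidate_arcs):
    """Snap a start/end point to the closest arc, preferring
    non-restricted arcs so routing does not fail on them."""
    usable = [a for a in candidate_arcs if not a.restricted]
    pool = usable if usable else candidate_arcs  # fall back if all restricted
    return min(pool, key=lambda a: a.distance_m)

arcs = [Arc(1, True, 3.0), Arc(2, False, 7.5), Arc(3, False, 12.0)]
print(snap_to_arc(arcs))  # picks Arc 2: the closest *non-restricted* arc
```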
startPoint: -73.5661, 45.5077; endPoint: -73.576048, 45.496936
Table 1. Comparing Boundaries
Before: "Path could not be calculated" error.
After: the route is calculated; the restricted arc near the start/end point is avoided.
startPoint: -80.146276, 26.707754 endPoint: -81.483591, 28.583825
Table 2. Comparing Boundaries
Before: "Path could not be calculated" error.
After: the route is calculated; the restricted arc near the start/end point is avoided.
https://pdglive.lbl.gov/Particle.action?init=0&node=M142&home=MXXX005 | LIGHT UNFLAVORED MESONS($\boldsymbol S$ = $\boldsymbol C$ = $\boldsymbol B$ = 0) For $\mathit I = 1$ (${{\mathit \pi}}$, ${{\mathit b}}$, ${{\mathit \rho}}$, ${{\mathit a}}$): ${\mathit {\mathit u}}$ ${\mathit {\overline{\mathit d}}}$, ( ${\mathit {\mathit u}}$ ${\mathit {\overline{\mathit u}}}−$ ${\mathit {\mathit d}}$ ${\mathit {\overline{\mathit d}}})/\sqrt {2 }$, ${\mathit {\mathit d}}$ ${\mathit {\overline{\mathit u}}}$;for $\mathit I = 0$ (${{\mathit \eta}}$, ${{\mathit \eta}^{\,'}}$, ${{\mathit h}}$, ${{\mathit h}^{\,'}}$, ${{\mathit \omega}}$, ${{\mathit \phi}}$, ${{\mathit f}}$, ${{\mathit f}^{\,'}}$): ${\mathit {\mathit c}}_{{\mathrm {1}}}$( ${{\mathit u}}{{\overline{\mathit u}}}$ $+$ ${{\mathit d}}{{\overline{\mathit d}}}$ ) $+$ ${\mathit {\mathit c}}_{{\mathrm {2}}}$( ${{\mathit s}}{{\overline{\mathit s}}}$ ) INSPIRE search
# ${{\boldsymbol f}_{{2}}{(1910)}}$ $I^G(J^{PC})$ = $0^+(2^{+ +})$
We list here three different peaks with close masses and widths seen in the mass distributions of ${{\mathit \omega}}{{\mathit \omega}}$ , ${{\mathit \eta}}{{\mathit \eta}^{\,'}}$ , and ${{\mathit K}^{+}}{{\mathit K}^{-}}$ final states. ALDE 1991B argues that they are of different nature.
${{\boldsymbol f}_{{2}}{(1910)}}$ MASS
${{\mathit f}_{{2}}{(1910)}}$ ${{\mathit \omega}}{{\mathit \omega}}$ MODE $1900 \pm9$ MeV (S = 1.4)
${{\mathit f}_{{2}}{(1910)}}$ ${{\mathit \eta}}{{\mathit \eta}^{\,'}}$ MODE $1934 \pm16$ MeV
${{\mathit f}_{{2}}{(1910)}}$ ${{\mathit K}^{+}}{{\mathit K}^{-}}$ MODE
${{\boldsymbol f}_{{2}}{(1910)}}$ WIDTH
Full width $\Gamma$
${{\mathit f}_{{2}}{(1910)}}$ ${{\mathit \omega}}{{\mathit \omega}}$ MODE $167 \pm21$ MeV (S = 1.3)
${{\mathit f}_{{2}}{(1910)}}$ ${{\mathit \eta}}{{\mathit \eta}^{\,'}}$ MODE $141 \pm40$ MeV
${{\mathit f}_{{2}}{(1910)}}$ ${{\mathit K}^{+}}{{\mathit K}^{-}}$ MODE | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9757540225982666, "perplexity": 857.1907863323511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00351.warc.gz"} |
https://www.physicsforums.com/threads/velocity-vector.117213/ | # Velocity Vector
1. Apr 11, 2006
### danago
Ok, I've got a question. It's probably something really stupid that I'm overlooking, but I'll ask anyway.
Say i have the question:
if a plane can travel 75 m/s north (75j, I assume?) under normal conditions, what velocity will the pilot need to set if he is to travel directly north when there is a wind of 21i+8j m/s blowing.
Since I need to find the velocity which will make the plane travel at 75 m/s north, 75j is therefore the resultant. If ai+bj is the velocity the pilot needs to set his engines at, I can say:
75j=ai+bj+21i+8j
I then equate the components, to get:
i: 0 = a + 21
j: 75 = b + 8
From that, i can solve for a and say that a=-21, and if i then substitute that into the equation:
$$75^2=(-21)^2+b^2$$
solve for b, and that gives me the velocity vector -21i+72j, which is the correct answer. What I'm wondering is: when I equated the components, I got two equations. When I use the first equation (0=a+21) and use the value of a to find the final vector, I get the correct answer, but when I use the value of b from solving 75=b+8, I get the wrong answer.
Thanks,
Dan.
2. Apr 11, 2006
### HallsofIvy
Staff Emeritus
What was the exact wording of the question? You originally set it up so that the airplane's velocity relative to the ground, taking into account both the velocity relative to the air and the air velocity, was 75 m/s due north. If that is the case then 21 + a = 0 and 8 + b = 75, so the velocity relative to the air is -21i + 67j.
However, if the airplane's speed relative to the air is to be 75 m/s, then we must have 21 + a = 0 (so that the velocity vector is due north) and a^2 + b^2 = 75^2 (so that the airspeed is 75 m/s). Those are different problems. Is it the plane's speed relative to the air or relative to the ground that is to be 75 m/s?
3. Apr 11, 2006
### danago
I'm a bit confused, but here's exactly, word for word, what the question asks:
a helicopter can fly at 75 m/s in still air. The pilot wishes to fly from airport A to a second airport B, 300 km due north of A. If i is a unit vector due east, and j a unit vector due north, find the velocity vector that the pilot should set and the time the journey will take if there is a wind of 21i+8j blowing?
4. Apr 11, 2006
### HallsofIvy
Staff Emeritus
I would interpret this as: the helicopter has a maximum airspeed of 75 m/s. Flying as fast as he can (i.e. at 75 m/s relative to the air), what velocity vector should the pilot take to go due north? There is no requirement that the actual "speed made good" (i.e. relative to the ground) be 75 m/s. In that case, in order to go due north, the helicopter must be angled so that the net "east-west" (i.e. j) component is 0. That's why, with velocity vector ai + bj, you have b + 8 = 0 (NOT 21 + a = 0 as we both incorrectly said before). The requirement that the airspeed be 75 m/s gives a^2 + b^2 = 75^2.
5. Apr 11, 2006
### danago
but with b+8=0, that gives a value of -8 for b. And according to the answers page, the answer is -21i+72j.
6. Apr 12, 2006
### HallsofIvy
Staff Emeritus
Sorry, I got my "east and west" confused with my "north and south"!
If his velocity vector relative to the air is ai + bj, then his actual velocity relative to the ground is ai + bj + 21i + 8j. That must have no east-west (i) component, so we must have a + 21 = 0 and a = -21. NOW do what you were talking about before: a^2 + b^2 = 75^2 to solve for b. That will give you -21i + 72j.
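As a quick numeric check (this also answers the journey-time part of the question):

```python
import math

wind = (21.0, 8.0)   # wind velocity (i, j) in m/s
airspeed = 75.0      # helicopter speed relative to the air, m/s

a = -wind[0]                         # cancel the east-west wind component
b = math.sqrt(airspeed**2 - a**2)    # keep |(a, b)| = 75
ground_speed = b + wind[1]           # northward speed over the ground

print(f"velocity to set: {a:.0f}i + {b:.0f}j")             # -21i + 72j
print(f"time for 300 km: {300_000 / ground_speed / 60:.1f} minutes")  # 62.5
```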
7. Apr 12, 2006
### danago
I'm a bit lost when you say velocity relative to the air and ground. But today I went to a study session with my teacher, and from what she said, and what you said, I understand the question a lot better now. Thanks a lot for the help. Greatly appreciated.
Dan.
8. Apr 13, 2006
### HallsofIvy
Staff Emeritus
An airplane flies "on the wind". That is, it is supported by the air and necessarily goes wherever the air goes! Imagine a toy car moving on a table while you are carrying the table to the side. We can calculate the velocity of the car "relative to the table" but have to add to that the motion of the table itself "relative to the floor" in order to find the motion of the car "relative to the floor".
9. Apr 14, 2006
### danago
ohhhh, I see now. Makes sense :)
Thanks for that.
Similar Discussions: Velocity Vector | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9279814958572388, "perplexity": 1229.4162749724308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188924.7/warc/CC-MAIN-20170322212948-00267-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://nceax.co.nz/Y13/Y13%20Differentiation/Rates%201.html | Rates 1
Rates are an important application of calculus. This series of 10 questions provides practice in this.
Question Number Actual Question Answer Right/Wrong 1 2 3 4 5 > 6 7 8 9 10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424246311187744, "perplexity": 584.1890177106636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812913.37/warc/CC-MAIN-20180220070423-20180220090423-00407.warc.gz"} |
http://mathhelpforum.com/calculus/81588-finding-time-takes-hit-ground.html | # Math Help - Finding the time it takes to hit the ground
1. ## Finding the time it takes to hit the ground
The ball is tossed up in the air from a tower 50 m above the ground. If the ball has an initial velocity of 30 m/s, how long will it take to hit the ground if a = -9.8 m/s^2?
Now I've got my velocity function v = -10t + 30 and my height function h = -5t^2 + 30t + 50 (rounding a to -10 m/s^2); how can I find the total time?
Thank You!
First let us see what information we have:
s - Displacement - 50 metres
u - Initial velocity - -30 m/s (upward, i.e. against the chosen positive direction)
v - Final velocity - N/A
a - Acceleration - 9.8 m/s^2
t - Time - ?
We will take downward as the positive direction; that is why we have acceleration as a positive value and the initial velocity as a negative one.
From the above information you should notice that we can use the following formula
$s = ut + \frac{1}{2}at^2$
Subbing in our values gives us,
$50 = -30t + 4.9t^2$
Which is just a normal quadratic.
Hope this helps
Craig
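For reference, solving this quadratic numerically (a quick check of the corrected signs):

```python
import math

# 4.9 t^2 - 30 t - 50 = 0   (downward positive: 50 = -30t + 4.9t^2)
a, b, c = 4.9, -30.0, -50.0
disc = b**2 - 4*a*c
t = (-b + math.sqrt(disc)) / (2*a)   # the positive root is the physical one
print(f"time to hit the ground: {t:.2f} s")   # about 7.49 s
```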
3. Originally Posted by Meeklo Braca
$h = -4.9t^2 + 30t + 50$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8722030520439148, "perplexity": 990.8070578673087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.92/warc/CC-MAIN-20150728002310-00129-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://www.planetmath.org/limitofgeometricsequence | # limit of geometric sequence
As mentioned in the geometric sequence entry,
$\displaystyle\lim_{n\to\infty}ar^{n}=0$ (1)
for $|r|<1$. We will prove this for real or complex values of $r$.
We first remark, that for the values $s>1$ we have $\displaystyle\lim_{n\to\infty}s^{n}=\infty$ (cf. limit of real number sequence). In fact, if $M$ is an arbitrary positive number, the binomial theorem (or Bernoulli’s inequality) implies that
$s^{n}=(1+s-1)^{n}>1^{n}+\binom{n}{1}(s-1)=1+n(s-1)>n(s-1)>M$
as soon as $\displaystyle n>\frac{M}{s-1}$.
Let now $0<|r|<1$ (the case $r=0$ being trivial) and let $\varepsilon$ be an arbitrarily small positive number. Then $\displaystyle|r|=\frac{1}{s}$ with $s>1$. By the above remark,
$|r^{n}|=|r|^{n}=\frac{1}{s^{n}}<\frac{1}{n(s-1)}<\varepsilon$
when $\displaystyle n>\frac{1}{(s-1)\varepsilon}$. Hence,
$\lim_{n\to\infty}r^{n}=0,$
which easily implies (1) for any real number $a$.
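A quick numeric illustration of how crude the bound $\displaystyle n>\frac{1}{(s-1)\varepsilon}$ is, with arbitrary sample values:

```python
r, eps = 0.8, 1e-3
s = 1 / abs(r)                     # here s = 1.25
n_bound = 1 / ((s - 1) * eps)      # the proof's threshold: 4000
n_actual = next(n for n in range(1, 10**6) if abs(r)**n < eps)  # 31
print(f"bound: n > {n_bound:.0f}; smallest n in practice: {n_actual}")
```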
Title limit of geometric sequence LimitOfGeometricSequence 2013-03-22 18:32:43 2013-03-22 18:32:43 pahio (2872) pahio (2872) 6 pahio (2872) Proof msc 40-00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 16, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99529629945755, "perplexity": 3976.029874254837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865145.53/warc/CC-MAIN-20180623171526-20180623191526-00353.warc.gz"} |
http://mathoverflow.net/questions/138629/haless-fan-associated-with-a-polyhedron/138631 | # Hales's fan associated with a polyhedron
In Hales's book (cited below), he associates what he calls a fan with any convex polyhedron in $\mathbb{R}^3$. I will not define his notion of fan here, but let his figure (p. 137) serve as a definition.
My question is: Has this natural object been defined and used previously in other contexts and perhaps under another name?
Thomas Hales, Dense Sphere Packings: A Blueprint for Formal Proofs. 2012. (Cambridge link)
-
The fan Hales is using is called the "face fan" of the polytope.
In toric varieties, one mainly considers the outer normal fan of a polytope, which has a ray for each facet (perpendicular to it). The face fan, on the other hand, has a ray for each vertex. There is an inclusion-reversing map between the poset of faces of these two fans (if we omit the zero-faces). So that is probably what Hales means by saying that the two notions bear some resemblance but are not the same.
-
Thank you, Hugh! This likely explains his comments about the notation. – Joseph O'Rourke Aug 9 '13 at 1:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8124828338623047, "perplexity": 1110.7695141195015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999653402/warc/CC-MAIN-20140305060733-00043-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://learncheme.com/quiz-yourself/interactive-self-study-modules/fugacities-of-mixtures/fugacities-of-mixtures-screencast/ | #### Fugacities of Mixtures: Screencasts
Explains why fugacity is important for mixtures and explains how it is used.
We suggest you list the important points in this screencast as a way to increase retention.
Describes how the fugacities of each component in a binary mixture liquid change as the temperature increases until all the liquid vaporizes.
We suggest you list the important points in this screencast as a way to increase retention.
##### Important Equations:
Fugacity of component $$i$$, $$\hat{f_i}$$, in an ideal solution:
$\hat{f_i} \; = x_iP^{sat} _i$
where $$x_i$$ is liquid mole fraction of component $$i$$
$$P^{sat} _i$$ is the saturation pressure of component $$i$$
Fugacity of component $$i$$ in a non-ideal liquid solution:
$\hat{f_i} = x_i\gamma _i P^{sat} _i$
where $$\gamma _i$$ is the activity coefficient of component $$i$$.
Fugacity of component $$i$$ in a liquid solution at elevated pressure (Poynting correction):
$\hat{f_i} = x_i\gamma _i \phi ^{sat} _i P^{sat} _i exp \left( \frac{V^L(P-P^{sat} _i)}{RT} \right)$
where $$V^L$$ is the molar volume of the liquid
$$\phi ^{sat} _i$$ is the fugacity coefficient at saturation pressure for pure component $$i$$
$$R$$ is the ideal gas constant
$$T$$ is the absolute temperature
Antoine equation for component $$i$$:
$\log_{10}(P^{sat}_i) = A_i - \frac{B_i}{C_i + T}$
where $$P^{sat} _i$$ is the saturation pressure
$$T$$ is the temperature (most often in °C)
$$A_i, B_i,$$ and $$C_i$$ are constants for a given component $$i$$
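To make the chain of equations above concrete, here is a short Python sketch that evaluates the liquid-phase fugacity of one component from Antoine constants, an activity coefficient, and the Poynting correction (all numerical values below are illustrative placeholders, not data from the screencasts):

```python
import math

R = 83.14  # ideal gas constant, cm^3 bar / (mol K)

def p_sat(T_C, A, B, C):
    """Antoine equation: log10(Psat) = A - B / (C + T), T in deg C."""
    return 10 ** (A - B / (C + T_C))

def fugacity_liquid(x, gamma, T_C, antoine, phi_sat=1.0, P=None, V_L=None):
    """Liquid-phase fugacity f_i = x * gamma * phi_sat * Psat * Poynting."""
    Ps = p_sat(T_C, *antoine)
    f = x * gamma * phi_sat * Ps
    if P is not None and V_L is not None:  # Poynting correction at elevated P
        T_K = T_C + 273.15
        f *= math.exp(V_L * (P - Ps) / (R * T_K))
    return f

# Illustrative numbers only (units: bar, cm^3/mol, deg C):
antoine = (4.0, 1200.0, 230.0)
f = fugacity_liquid(x=0.4, gamma=1.3, T_C=80.0, antoine=antoine,
                    phi_sat=0.98, P=50.0, V_L=90.0)
print(f"liquid-phase fugacity ~ {f:.3f} bar")
```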
Vapor-liquid phase equilibrium for component $$i$$:
$\hat{\,f^V _i} = \hat{\,f^L _i}$
where $$\hat{\,f^V _i}$$ is the fugacity of component $$i$$ in the vapor phase
$$\hat{\,f^L _i}$$ is the fugacity of component $$i$$ in the liquid phase | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698508381843567, "perplexity": 1176.2051950620114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00676.warc.gz"} |
https://jp.maplesoft.com/support/help/view.aspx?path=examples%2FCalculus1Tangents | Calculus1 Tangents - Maple Help
Calculus 1: Tangents, Inverses, and Sampling
The Student[Calculus1] package contains three routines that can be used to both work with and visualize the concepts of tangents, the inverses of functions, and the errors of plotting a function by sampling. This worksheet demonstrates this functionality.
For further information about any command in the Calculus1 package, see the corresponding help page. For a general overview, see Calculus1.
Getting Started
While any command in the package can be referred to using the long form, for example, Student[Calculus1][Tangent], it is easier, and often clearer, to load the package, and then use the short form command names.
> $\mathrm{restart}$
> $\mathrm{with}\left(\mathrm{Student}\left[\mathrm{Calculus1}\right]\right):$
The following sections show how the routines work.
Main: Visualization
Next: Derivatives | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 21, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9922307133674622, "perplexity": 2589.489243439026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710968.29/warc/CC-MAIN-20221204072040-20221204102040-00872.warc.gz"} |